Palo Alto, California
One Concern is a startup that specializes in artificial intelligence for natural disasters. With our team of global disaster scientists and engineers, we have developed a platform that helps emergency responders prepare for crises and make the most critical decisions during the first moments of a disaster, with the main goal of saving lives.
We are looking for a Data Engineer to join our team at One Concern. You will be responsible for creating, improving, and maintaining the complete data pipeline, from ingestion to access, and will support our data science, engineering, and customer success teams. You should have a keen eye for data and experience working with multiple databases and datasets. Experience with geospatial datasets is preferred, as you will work with them extensively in this role.
Our ideal candidate is a critical thinker with strong attention to detail who can juggle multiple priorities and stay cool under pressure. You aren’t afraid to roll up your sleeves when needed, can communicate thoughts and ideas clearly and concisely, both in writing and verbally, and think beyond your direct sphere of responsibility. The work you do will shape the foremost collection of global geographic information used to support critical decisions.
We work in a highly collaborative, challenging, and exciting environment. Our engineering challenges are unique, so you should be comfortable stepping into uncharted territory and excited to create systems that can scale to all disasters and geographies. If you are a problem solver who thinks outside the box, loves a challenge, and wants to work for a cause, we would love to have you at One Concern.
We are committed to a workplace that reflects the community we serve. We especially encourage women, people of color, and others who are underrepresented in the tech industry to apply.
What you will do:
- Lead the development of data pipeline architecture
- Develop, improve and maintain data pipelines to ingest real-time data from multiple sources
- Work closely with the data science and the engineering teams to develop, improve and maintain data pipelines to provide access to relevant data streams
- Build the architecture required for optimal extraction, transformation, and loading of data from a wide variety of data sources using SQL and AWS technologies
- Ensure data security and reliability as per product and customer needs
What we’re looking for:
- 3+ years of experience building and optimizing data pipelines, architectures, and datasets
- Advanced SQL skills, including query authoring, and working familiarity with a variety of relational databases
- Experience writing production code in Python
- Strong professional and technical communication skills: writing technical documentation, status reports, and algorithm description documents, authoring presentation decks, and presenting to an audience
- MS or BS in Computer Science, Statistics or related engineering fields
- Experience working with machine learning pipelines
- Experience with big data tools: Hadoop, Spark, Kafka, etc.
- Working knowledge of programming languages such as R
- Domain knowledge of the industry
Perks and benefits:
- Market-competitive salary plus equity
- Comprehensive medical, dental, and vision insurance
- Daily lunches and a fully stocked kitchen
- Generous PTO policy
- Team off-sites
- Flexible working hours