Data Infrastructure Engineer
San Francisco, CA
We are a San Francisco-based team building self-driving semi trucks. We are working to prevent the 4,000 fatalities caused by truck accidents each year, ease the nationwide shortage of 50,000 truck drivers, and make moving goods across the country twice as fast and half as expensive, improving the efficiency of the entire economy.
In June we raised a $30 million Series B led by Sequoia, crossed 100,000 autonomous miles driven, and are moving freight daily between LA and Phoenix. Our engineering team is made up of experienced, highly talented people from world-class companies like Mercedes, Volkswagen, Ford, Uber ATG, and Apple. We are looking for an experienced software developer with both back-end and front-end skills to take on a number of projects.
About the role: The data infrastructure team builds the infrastructure that supports all of our engineering and operations. Our responsibilities include ingesting and processing the massive amounts of data generated by our autonomous fleet across the country, building and maintaining machine learning pipelines, developing infrastructure for simulation, enabling real-time vehicle communication, and much more.
What you'll do:
- Build, deploy, and maintain the infrastructure responsible for ingesting data from our vehicles at centers across the country. This includes the hardware and operational processes as well as the software that powers the system.
- Develop advanced telemetry systems that allow our vehicles to stream video and data on demand over LTE in real time.
- Maintain the on-vehicle code responsible for data collection, monitoring, and real-time communication.
- Build scalable data pipelines that operate on autonomous vehicle data to extract useful features, enable advanced queries, and power machine learning pipelines and simulation environments.
- Maintain the software execution environment for our vehicles, including the host operating system, containerized environments, and the deployment procedure for both.
What we're looking for:
- Experience with big data systems and/or infrastructure engineering
- BS, MS or PhD in Computer Science, Engineering, or equivalent real-world experience
- Significant experience with Python, C++, Go, or similar
- Experience working with relational and NoSQL databases
- Experience with Kafka, Hadoop, Spark, or other data processing tools
- Experience building scalable data pipelines
- Significant experience working with AWS and/or GCP
- Experience with Docker and Kubernetes or other container orchestration frameworks
- Proven ability to independently execute projects from concept to implementation
- Attention to detail and a passion for correctness
What we offer:
- Help revolutionize transportation as we know it! 🚚🤖
- Work in a small team that operates at an incredibly fast pace
- Lunch, dinner and snacks
- Competitive salary, equity and benefits including medical, dental & vision
When you apply, address the application to Adrianna and let her know why you want to join our team.