Research Engineer - Computer Vision, Deep Learning

San Francisco, CA
Full-Time
Imagine taking autonomous vehicle technology from ‘100 vehicles in a couple of geo-fenced regions’ to ‘1 million vehicles across 100+ cities’. A major bottleneck to realizing this is large-scale multi-modal mapping. Maps are a composite of relevant prior information: high-resolution scans of the ground surface, geometric data accurate to the centimeter, driving logs, and accurately labelled environmental elements such as lane markers, crosswalks, and signs.

We are a young company, started in August 2017, and have very quickly begun working with 10+ autonomous vehicle teams on their mapping challenges. We are looking for really smart software engineers who want to enable the future of autonomous vehicles across the world. You will implement perception models to build and update maps that power self-driving systems.


What you will do:

    • Build robust perception models to automatically detect and annotate environmental elements.
    • Enhance perception models by leveraging multiple modalities of sensor data.
    • Train neural network models at very large scale across varying environments and contexts.
    • Design and implement multi-vehicle collaborative inference and map creation.

Skills we are looking for:

    • Experience with deep learning for segmentation, robust scene analysis, and object detection.
    • Experience with cross-modal transfer learning.
    • Familiarity with computer vision on hybrid hardware architectures and parallel programming.
    • Passion for Visual SLAM, SfM, and autonomous vehicle data.
    • MS/PhD in Computer Science, Electrical Engineering, Robotics, or an equivalent field.
The data is clear that diversity leads to higher-quality innovation. Consequently, we actively encourage people of all genders, backgrounds, and experiences to apply.