Machine Learning Engineer, Perception

Munich, Germany / Full-time / Hybrid
Plus is a global provider of highly automated driving and fully autonomous driving solutions with headquarters in Silicon Valley, California. Named by Forbes as one of America’s Best Startup Employers and by Fast Company as one of the World’s Most Innovative Companies, Plus offers an open autonomy technology platform that is already powering vehicles in commercial use today. Working with one of the largest companies in the U.S., vehicle manufacturers, and others globally, Plus is helping to make driving safer, more comfortable, and more sustainable. Plus has received a number of industry awards and distinctions for its transformative technology and business momentum from Fast Company, Forbes, Insider, the Consumer Electronics Show, AUVSI, and others. If you’re ready to make a huge impact and drive the future of autonomy, Plus is looking for talented individuals to join its fast-growing teams.

We are seeking a highly skilled Machine Learning Engineer with deep expertise in developing Bird’s Eye View (BEV) fusion models using multimodal sensor inputs, particularly LiDAR. You will play a central role in designing scalable perception algorithms that integrate data from camera, LiDAR, and radar sensors to support autonomous driving and 3D scene understanding.
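
To give a rough sense of what BEV-level fusion involves (a generic sketch, not Plus’s actual architecture; the module name, channel sizes, and the simple concatenate-and-convolve fusion step are assumptions for illustration), a minimal PyTorch-style example might look like this:

    # Minimal sketch of a BEV fusion head. Per-modality BEV feature maps
    # (e.g. from a camera view-transform branch and a LiDAR voxel/pillar
    # backbone) are assumed to already be aligned on the same top-down grid.
    import torch
    import torch.nn as nn

    class BEVFusionHead(nn.Module):
        def __init__(self, cam_channels=80, lidar_channels=128, out_channels=256):
            super().__init__()
            # Concatenate along the channel axis, then mix with a small conv block.
            self.fuse = nn.Sequential(
                nn.Conv2d(cam_channels + lidar_channels, out_channels, kernel_size=3, padding=1),
                nn.BatchNorm2d(out_channels),
                nn.ReLU(inplace=True),
            )

        def forward(self, cam_bev, lidar_bev):
            # Both inputs: (batch, channels, H, W) on a shared BEV grid.
            return self.fuse(torch.cat([cam_bev, lidar_bev], dim=1))

    # Dummy usage on a 200x200 BEV grid.
    cam_bev = torch.randn(2, 80, 200, 200)
    lidar_bev = torch.randn(2, 128, 200, 200)
    fused = BEVFusionHead()(cam_bev, lidar_bev)  # -> (2, 256, 200, 200)

Downstream detection or segmentation heads would then operate on the fused grid.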

Responsibilities:

    • Design, implement, and optimize BEV-based perception models that fuse camera, LiDAR, and radar inputs.
    • Benchmark perception models using large-scale datasets and well-defined quantitative metrics.
    • Collaborate cross-functionally with research, data, and deployment engineers to refine models and support real-world applications.
    • Maintain a strong focus on performance, robustness, and scalability for deployment in production systems.

Required Skills:

    • Master’s or Ph.D. in AI, Computer Science, Electrical Engineering, Robotics, or a related field.
    • Proficiency in Python and experience building deep learning pipelines.
    • Strong expertise in PyTorch, TensorFlow, or JAX.
    • Proven experience with LiDAR-based 3D perception and BEV representation models.
    • Deep understanding of multimodal sensor fusion architectures and techniques.
    • Familiarity with camera, LiDAR, and radar modalities and their synchronization, calibration, and integration in perception pipelines (a simple projection example follows this list).
    • Solid foundation in computer vision, deep learning, and 3D geometry.
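
As a purely illustrative example of the calibration and 3D geometry skills above, consider the routine step of projecting LiDAR points into a camera image using an extrinsic LiDAR-to-camera transform and a pinhole intrinsic matrix (the function name, conventions, and all matrix values below are placeholders, not Plus’s tooling):

    import numpy as np

    def project_lidar_to_image(points_lidar, T_cam_from_lidar, K):
        # points_lidar: (N, 3) xyz in the LiDAR frame.
        # T_cam_from_lidar: (4, 4) homogeneous extrinsic transform.
        # K: (3, 3) pinhole camera intrinsic matrix.
        n = points_lidar.shape[0]
        homo = np.hstack([points_lidar, np.ones((n, 1))])   # (N, 4) homogeneous points
        pts_cam = (T_cam_from_lidar @ homo.T).T[:, :3]      # (N, 3) in the camera frame
        in_front = pts_cam[:, 2] > 0                         # keep points with positive depth
        uvw = (K @ pts_cam.T).T                              # (N, 3)
        uv = uvw[:, :2] / uvw[:, 2:3]                        # perspective divide -> pixel coords
        return uv, in_front

    # Placeholder calibration: identity extrinsics, simple pinhole intrinsics.
    pts = np.array([[1.0, 2.0, 10.0], [-0.5, 0.2, 5.0]])
    T = np.eye(4)
    K = np.array([[1000.0, 0.0, 960.0], [0.0, 1000.0, 540.0], [0.0, 0.0, 1.0]])
    uv, valid = project_lidar_to_image(pts, T, K)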

Preferred Skills:

    • Industry or academic experience in autonomous vehicle perception, robotics, or related areas.
    • Hands-on experience developing deep learning models in real-world or production environments.
    • Experience with distributed training, high-performance computing, or GPU acceleration.

Your Opportunities at Plus:

    • Work, learn, and grow in a highly future-oriented, innovative, and dynamic field.
    • A wide range of opportunities for personal and professional development.