Localization & Odometry Engineer, Vehicle Perception

Tokyo
Product & Technology – AD/ADAS /
Employee /
Hybrid
About Woven by Toyota
Woven by Toyota is enabling Toyota’s once-in-a-century transformation into a mobility company. Inspired by a legacy of innovating for the benefit of others, our mission is to challenge the current state of mobility through human-centric innovation — expanding what “mobility” means and how it serves society.

Our work centers on four pillars: AD/ADAS, our autonomous driving and advanced driver assist technologies; Arene, our software development platform for software-defined vehicles; Woven City, a test course for mobility; and Cloud & AI, the digital infrastructure powering our collaborative foundation. Business-critical functions empower these teams to execute, and together, we’re working toward one bold goal: a world with zero accidents and enhanced well-being for all.

=========================================================================

TEAM
At Woven by Toyota, the vehicle perception odometry and localization team develops algorithms that power production-ready autonomous driving (AD) and advanced driver assistance (ADAS) systems. Our team is responsible for building algorithms that provide cornerstone information about vehicle state and location within a sparse, low-fidelity map. We leverage the state of the art in probabilistic inference and machine learning to build novel sensor fusion methods that will expand the capabilities of Toyota vehicles.

WHO ARE WE LOOKING FOR?
We are looking for a skilled developer to help build state-of-the-art sensor fusion algorithms that provide odometry and localization information critical to an autonomous driving stack. A strong candidate will be familiar with modern filtering and estimation techniques as well as ML modeling in order to extract the desired state from vision, inertial, and GNSS sensors.

Our vehicle perception odometry and localization team has a collaborative culture. We’re looking for a strong team player with a passion for innovating and delivering a product to the customer.

We recognize the unique capabilities each team member brings and encourage applicants to reach out even if they do not match all of the characteristics described below.

RESPONSIBILITIES

    • Contribute to rule-based and ML model R&D by prototyping, validating, and iterating on existing and new model architectures for vehicle state estimation
    • Own end-to-end development of new ML models, from data strategy through initial development, optimization, production-platform validation, and fine-tuning based on metrics and on-road performance
    • Lead multi-person projects and influence the overall AD/ADAS architecture and technical direction
    • Help other engineers on the team become more effective through coaching and leading by example: writing high-quality code, providing thorough code and design-document reviews, and delivering rigorous reports from data-driven experiments
    • Work in a high-velocity environment and employ agile development practices
    • Bring a team-player, “get things done” mentality
    • Collaborate closely with stakeholders in downstream customer teams to define interfaces and requirements for the Perception stack

MINIMUM QUALIFICATIONS

    • MS or higher degree in CS/CE/EE, or equivalent industry experience
    • 3+ years of experience with sensor fusion problems such as state estimation, localization, and SLAM
    • Experience with ML frameworks such as PyTorch or TensorFlow
    • Experience with machine learning workflows: data sampling and curation, pre-processing, model training, ablation studies, evaluation, deployment, and inference optimization
    • Strong programming skills in C++ and Python
    • Passion for applying ML methodology to advance self-driving technology
    • Strong communication skills, with the ability to convey concepts clearly and precisely
    • Experience with production-level software development best practices

NICE TO HAVES

    • Experience in developing vision-first perception and mapping ML models
    • Hands-on experience building algorithms for autonomous systems
    • Experience in runtime optimization for mission-critical systems on Linux and UNIX-like real-time operating systems
    • Experience with automotive grade edge-compute platforms
    • Experience with building safety-critical software architectures
=========================================================================
Important Points
・All interviews will be arranged via Google Meet, unless otherwise stated.
・This job description is available in both English and Japanese; we kindly ask that you apply to only one version.
・We kindly request that you submit your resume in English, if possible; Japanese resumes are also acceptable. Please note that, depending on the English proficiency requirements of the role, we may request an English version of your resume later in the process.

WHAT WE OFFER
・Competitive Salary - Based on experience
・Work Hours - Flexible working time
・Paid Holiday - 20 days per year (prorated)
・Sick Leave - 6 days per year (prorated)
・Holiday - Sat & Sun, Japanese National Holidays, and other days defined by our company
・Japanese Social Insurance - Health Insurance, Pension, Workers’ Compensation, Unemployment Insurance, and Long-term Care Insurance
・Housing Allowance
・Retirement Benefits
・Rental Car Support
・In-house Training Program (software study/language study)

Our Commitment
・We are an equal opportunity employer and value diversity.
・Any information we receive from you will be used only in the hiring and onboarding process. Please see our privacy notice for more details.