ML Research Engineer
San Francisco, California
Engineering / Full-time / On-site
Transformers feel magical -- they understand text at near-human level. Yet traditional search engines (cough, Google) don't feel magical -- they don't understand text at near-human level. It doesn't have to be this way.
At Exa, we're training foundational models for search. Our goal is to build systems that can instantly filter the world's knowledge to exactly what you want, no matter how complex your query. Search is a unique research problem within generative AI -- instead of generating a single piece of text, search involves asking the same questions about billions of texts.
No other AI lab is exploring this direction. That means if we don't find novel methods for search, they won't be found. Our team has already made breakthroughs not seen in the literature, and we have more coming. Want to explore the search frontier with us?
Desired Experience
- You have graduate-level ML experience (or are an exceptionally strong undergrad)
- You can code up a transformer from scratch in PyTorch (see the sketch after this list)
- You're comfortable creating large-scale datasets
- You care about the problem of finding high-quality knowledge and recognize how important it is for the world
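To give a sense of what "from scratch" means here, the block below is a rough, illustrative sketch of a minimal pre-norm decoder block in PyTorch. It is not Exa's code; the module names and hyperparameters are placeholder assumptions, and a real model would add embeddings, positional information, and a language-model head on top.

```python
# Illustrative sketch only: a minimal decoder-style transformer block in PyTorch.
# All names and hyperparameters (d_model, n_heads, ...) are arbitrary placeholders.
import torch
import torch.nn as nn
import torch.nn.functional as F


class CausalSelfAttention(nn.Module):
    def __init__(self, d_model: int, n_heads: int):
        super().__init__()
        assert d_model % n_heads == 0
        self.n_heads = n_heads
        self.qkv = nn.Linear(d_model, 3 * d_model)   # fused Q, K, V projection
        self.proj = nn.Linear(d_model, d_model)      # output projection

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        B, T, C = x.shape
        q, k, v = self.qkv(x).chunk(3, dim=-1)
        # reshape each to (B, n_heads, T, head_dim)
        q, k, v = (t.view(B, T, self.n_heads, C // self.n_heads).transpose(1, 2)
                   for t in (q, k, v))
        # scaled dot-product attention with a causal mask
        out = F.scaled_dot_product_attention(q, k, v, is_causal=True)
        out = out.transpose(1, 2).reshape(B, T, C)
        return self.proj(out)


class TransformerBlock(nn.Module):
    def __init__(self, d_model: int = 256, n_heads: int = 4):
        super().__init__()
        self.ln1 = nn.LayerNorm(d_model)
        self.attn = CausalSelfAttention(d_model, n_heads)
        self.ln2 = nn.LayerNorm(d_model)
        self.mlp = nn.Sequential(
            nn.Linear(d_model, 4 * d_model),
            nn.GELU(),
            nn.Linear(4 * d_model, d_model),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        x = x + self.attn(self.ln1(x))   # pre-norm residual attention
        x = x + self.mlp(self.ln2(x))    # pre-norm residual feed-forward
        return x


if __name__ == "__main__":
    x = torch.randn(2, 16, 256)            # (batch, sequence, d_model)
    print(TransformerBlock()(x).shape)     # torch.Size([2, 16, 256])
```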
Example Projects
- Train a 70B parameter version of our current model
- Build an RLAIF pipeline for search
- Dream up a novel architecture for search in the shower, then code it up and beat our best model's top score
$130,000 - $300,000 a year
We are open to sponsoring international candidates (e.g. STEM OPT, OPT, H-1B, O-1, E-3)
This is an in-person opportunity in San Francisco. We're big believers in in-person culture!