Staff Research Engineer, Model Efficiency
Seattle / San Francisco / New York / Toronto
Tech – Modeling / Full-time, Remote / Hybrid
Who are we?
Our mission is to scale intelligence to serve humanity. We’re training and deploying frontier models for developers and enterprises who are building AI systems to power magical experiences like content generation, semantic search, RAG, and agents. We believe that our work is instrumental to the widespread adoption of AI.
We obsess over what we build. Each of us is responsible for increasing the capabilities of our models and the value they drive for our customers. We like to work hard and move fast to do what’s best for our customers.
Cohere is a team of researchers, engineers, designers, and more, who are passionate about their craft. Each person is one of the best in the world at what they do. We believe that a diverse range of perspectives is a requirement for building great products.
Join us on our mission and shape the future!
Why this role?
Large Language Models (LLMs) have demonstrated remarkable performance across a wide range of tasks. However, the substantial computational and memory requirements of LLM inference pose challenges for deployment. The mission of the model efficiency team is to push the limits of serving efficiency for our foundation models through techniques such as model architecture optimization, efficient algorithms, and software/hardware co-optimization.
As a Staff Research Engineer in the model efficiency team, you will develop innovative solutions to boost the performance of LLM inference.
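To give a sense of the scale involved, here is a rough, illustrative sketch only (the model dimensions below are assumptions for the example, not the specs of any Cohere model): the key-value cache alone can dominate accelerator memory during long-context serving.

```python
# Back-of-the-envelope KV-cache sizing for a hypothetical decoder-only LLM.
# All dimensions here are illustrative assumptions, not Cohere model specs.

def kv_cache_bytes(num_layers: int, num_kv_heads: int, head_dim: int,
                   seq_len: int, batch_size: int, bytes_per_elem: int = 2) -> int:
    """Memory needed to cache keys and values during autoregressive decoding."""
    # The leading 2 accounts for storing both K and V at every layer.
    return 2 * num_layers * num_kv_heads * head_dim * seq_len * batch_size * bytes_per_elem

# Example: 64 layers, 8 KV heads (grouped-query attention), head_dim of 128,
# a 128k-token context, a batch of 8 requests, fp16 storage (2 bytes/element).
size = kv_cache_bytes(num_layers=64, num_kv_heads=8, head_dim=128,
                      seq_len=128_000, batch_size=8)
print(f"KV cache: {size / 1e9:.1f} GB")  # ~268.4 GB for this configuration
```

Every factor in that product is a lever for efficiency work: quantization shrinks bytes_per_elem, grouped-query attention reduces num_kv_heads, and paged or compressed caches change how seq_len and batch_size translate into resident memory.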
Please Note: We have offices in Toronto, San Francisco, New York and London. We embrace a remote-friendly environment, and as part of this approach, we strategically distribute teams based on interests, expertise, and time zones to promote collaboration and flexibility. You'll find the Model Efficiency team concentrated in the EST and PST time zones.
You may be a good fit for the model efficiency team if you:
- Have a PhD in Machine Learning or a related field
- Understand LLM architectures and how to optimize LLM inference under resource constraints
- Have significant experience with one or more techniques that enhance model efficiency
- Have strong software engineering skills
- Have an appetite to work in a fast-paced, high-ambiguity start-up environment
- Have published and presented at top-tier conferences and venues (ICLR, ACL, NeurIPS)
- Are passionate about mentoring others
If some of the above doesn’t line up perfectly with your experience, we still encourage you to apply! If you consider yourself a thoughtful worker, a lifelong learner, and a kind and playful team member, Cohere is the place for you.
We value and celebrate diversity and strive to create an inclusive work environment for all. We welcome applicants from all backgrounds and are committed to providing equal opportunities. Should you require any accommodations during the recruitment process, please submit an Accommodations Request Form, and we will work together to meet your needs.
Our Perks:
🤝 An open and inclusive culture and work environment
🧑‍💻 Work closely with a team on the cutting edge of AI research
🍽 Weekly lunch stipend, in-office lunches & snacks
🦷 Full health and dental benefits, including a separate budget to take care of your mental health
🐣 100% Parental Leave top-up for 6 months for employees based in Canada, the US, and the UK
🎨 Personal enrichment benefits towards arts and culture, fitness and well-being, quality time, and workspace improvement
🏙 Remote-flexible, with offices in Toronto, New York, San Francisco, and London, plus a co-working stipend
✈️ 6 weeks of vacation
Note: This post is co-authored by both Cohere humans and Cohere technology.