Software Engineer - LLM Inference
We believe AI will fundamentally transform how people live and work. CentML's mission is to massively reduce the cost of developing and deploying ML models so we can enable anyone to harness the power of AI and everyone to benefit from its potential.
Our founding team is made up of experts in AI, compilers, and ML hardware and has led efforts at companies like Amazon, Google, Microsoft Research, Nvidia, Intel, Qualcomm, and IBM. Our co-founder and CEO, Gennady Pekhimenko, is a world-renowned expert in ML systems who holds multiple academic and industry research awards from Google, Amazon, Facebook, and VMware.
About the Position:
As a member of the LLM inference team, you will help build state-of-the-art software that makes LLM inference more efficient, scalable, and accessible. If you are interested in architecting and implementing best-in-class LLM inference stacks, you will collaborate with a diverse set of teams working on resource orchestration, distributed systems, inference engine optimization, and high-performance GPU kernels.
Come join our team and help democratize machine learning for the world!
- Write safe, scalable, modular, and high-quality C++ and Python code for our core backend software.
- Perform benchmarking, profiling, and system-level programming for GPU applications.
- Provide code reviews, design docs, and tutorials to facilitate collaboration among the team.
- Write unit tests and performance tests for each stage of the inference pipeline.
Who you are:
- Bachelor's degree in Computer Science, Computer Engineering, or a related technical field, or equivalent practical experience.
- Strong coding skills in Python and C/C++.
- 2+ years of industry experience in software engineering.
- Knowledgeable and passionate about machine learning and performance engineering.
Nice to have:
- Solid fundamentals in machine learning and deep learning.
- Solid fundamentals in operating systems, computer architecture, and parallel programming.
- Research experience in systems or machine learning.
- Industry experience in building enterprise-scale large distributed systems.
- Experience training, deploying, or optimizing the inference of LLMs in production.
- Experience with performance modeling, profiling, debugging, and code optimization, or architectural knowledge of CPUs and GPUs.
We strongly encourage you to include sample projects (e.g., on GitHub) that demonstrate the qualifications above.
Recent graduates may optionally submit unofficial transcripts.
Benefits & Perks
- An open and inclusive culture and work environment
- Fully stocked kitchen at the office
- Full health and dental benefits
- Parental leave top-up for 6 months
- Continuous education budget
- Generous vacation - we're not saying unlimited, but if you need extra time to recharge, just ask
At CentML, we celebrate our differences and value cultivating an inclusive environment for all. We welcome applications of all kinds and are committed to providing an equal opportunity process.