Software Engineering Intern - LLM Inference
Toronto, ON · Internship · Hybrid
About Us
We believe AI will fundamentally transform how people live and work. CentML's mission is to massively reduce the cost of developing and deploying ML models so we can enable anyone to harness the power of AI and everyone to benefit from its potential.
Our founding team is made up of experts in AI, compilers, and ML hardware and has led efforts at companies like Amazon, Google, Microsoft Research, Nvidia, Intel, Qualcomm, and IBM. Our co-founder and CEO, Gennady Pekhimenko, is a world-renowned expert in ML systems who holds multiple academic and industry research awards from Google, Amazon, Facebook, and VMware.
About the Position:
As a member of the LLM inference team, you will help build state-of-the-art software that makes LLM inference more efficient, scalable, and accessible. If you are interested in architecting and implementing the best inference stacks in the LLM world, you will work and collaborate with a diverse set of teams across resource orchestration, distributed systems, inference engine optimization, and high-performance GPU kernels.
Come join our team and contribute towards democratizing Machine Learning for the world!
This hybrid opportunity is based in CentML’s Downtown Toronto office, with a minimum of 3 days per week in the office. Business requirements may call for office attendance 5 days per week for extended periods.
Responsibilities:
- Write safe, scalable, modular, and high-quality (C++/Python) code for our core backend software.
- Perform benchmarking, profiling, and system-level programming for GPU applications.
- Provide code reviews, design docs, and tutorials to facilitate collaboration among the team.
- Conduct unit tests and performance tests for different stages of the inference pipeline.
Who you are:
- Strong coding skills in Python and C/C++.
- Knowledgeable and passionate about machine learning and performance engineering.
- Fundamentals in machine learning and deep learning.
- Fundamentals in operating systems, computer architecture, and parallel programming.
- Experience training, deploying, or optimizing LLM inference in production is a plus.
- Experience with performance modeling, profiling, debugging, and code optimization, or architectural knowledge of CPUs and GPUs, is a plus.
Benefits & Perks
- An open and inclusive work environment
- Employee stock options
- Best-in-class medical and dental benefits
- Parental Leave top-up
- Professional development budget
- Flexible vacation time to promote a healthy work-life blend
We are an equal opportunity employer and value diversity at our company. We do not discriminate on the basis of race, religion, color, national origin, gender, sexual orientation, age, marital status, veteran status, disability, or any other ground protected under applicable human rights legislation.
CentML strives to respect the dignity and independence of people with disabilities and is committed to giving them the same opportunity to succeed as all other employees.
Inclusiveness is core to our culture at CentML, and we strive to ensure you get the most from your interview experience. CentML makes reasonable accommodations for applicants with disabilities. If a reasonable accommodation is needed to participate in the job application or interview process, please reach out to the Talent team.