Research Scientist - Distributed Machine Learning

Sunnyvale, CA
Research / Full-time / On-site
About the Institute of Foundation Models
We are a research lab dedicated to building, understanding, applying, and managing the risks of foundation models. Our mandate is to advance research, nurture the next generation of AI builders, and drive transformative contributions to a knowledge-driven economy.

As part of our team, you'll work at the core of cutting-edge foundation model training alongside world-class researchers, data scientists, and engineers, tackling the most fundamental and impactful challenges in AI development. You will help develop groundbreaking AI solutions with the potential to reshape entire industries. Your strategic, innovative problem-solving will be instrumental in establishing MBZUAI as a global hub for high-performance computing in deep learning, driving impactful discoveries that inspire the next generation of AI pioneers.

Role Overview
Build and scale distributed pre-training frameworks
·      Set up DeepSpeed / FSDP / Megatron-LM across multi-node GPU clusters.
·      Create robust launch scripts, resilient checkpointing, and job monitoring (e.g., NCCL/GLOO health and GPU utilization); a minimal setup sketch follows below.
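To give a flavor of this work: a minimal multi-node FSDP setup sketch in PyTorch, assuming a torchrun-style launcher that sets RANK/LOCAL_RANK/WORLD_SIZE in the environment (the model is a toy stand-in, not our actual stack):

```python
# Minimal multi-node FSDP setup sketch (illustrative only).
# Assumes launch via, e.g.: torchrun --nnodes=4 --nproc-per-node=8 train.py
import os
import torch
import torch.nn as nn
import torch.distributed as dist
from torch.distributed.fsdp import FullyShardedDataParallel as FSDP, MixedPrecision

def main():
    dist.init_process_group(backend="nccl")  # NCCL collectives for GPUs
    local_rank = int(os.environ["LOCAL_RANK"])
    torch.cuda.set_device(local_rank)

    # Toy stand-in for a real transformer; shapes are arbitrary.
    model = nn.Sequential(nn.Linear(1024, 4096), nn.GELU(), nn.Linear(4096, 1024)).cuda()
    model = FSDP(
        model,
        mixed_precision=MixedPrecision(param_dtype=torch.bfloat16,
                                       reduce_dtype=torch.bfloat16),
    )

    opt = torch.optim.AdamW(model.parameters(), lr=1e-4)
    x = torch.randn(8, 1024, device="cuda")
    loss = model(x).pow(2).mean()  # toy objective
    loss.backward()
    opt.step()
    # A real run adds the training loop, periodic sharded checkpoints,
    # and NCCL/GPU health monitoring around this skeleton.
    dist.destroy_process_group()

if __name__ == "__main__":
    main()
```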
Turn mathematical ideas into fast production code
·      Prototype new optimizers or attention methods (e.g., in PyTorch, NumPy, or JAX); a prototype sketch follows below.
·      Convert them into efficient CUDA/Triton kernels with custom gradients and tests.
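For instance, a new update rule is typically prototyped against PyTorch's torch.optim.Optimizer interface before being ported to a fused CUDA/Triton kernel. A minimal sketch, using plain SGD-with-momentum purely as a stand-in for a novel rule:

```python
# Prototype of a custom optimizer against torch.optim.Optimizer.
# The update rule (SGD with momentum) is only a placeholder.
import torch

class PrototypeOptimizer(torch.optim.Optimizer):
    def __init__(self, params, lr=1e-3, momentum=0.9):
        super().__init__(params, dict(lr=lr, momentum=momentum))

    @torch.no_grad()
    def step(self):
        for group in self.param_groups:
            for p in group["params"]:
                if p.grad is None:
                    continue
                state = self.state[p]
                buf = state.setdefault("momentum_buffer", torch.zeros_like(p))
                buf.mul_(group["momentum"]).add_(p.grad)  # v = m*v + g
                p.add_(buf, alpha=-group["lr"])           # p = p - lr*v

# Smoke test: one step on a toy parameter.
w = torch.nn.Parameter(torch.randn(4))
opt = PrototypeOptimizer([w])
w.sum().backward()
opt.step()
```

Once the prototype's gradients and convergence are validated at small scale, the same update can be fused into a single Triton kernel and checked against this reference implementation in unit tests.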
Boost training efficiency and stability
·      Lead mixed-precision training: bring bf16, fp8, and similar low-precision formats into daily runs, track their accuracy-versus-speed gains, and analyze numerical stability (a minimal bf16 sketch follows this list).
·      Apply kernel fusion, communication tuning, and memory optimization to reach state-of-the-art throughput.
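As one concrete example of the precision work, a bf16 training step via torch.autocast (fp8 paths usually go through external libraries such as NVIDIA Transformer Engine and are omitted here):

```python
# Minimal bf16 mixed-precision training step with torch.autocast.
# bf16 keeps fp32's exponent range, so no loss scaler is needed
# (unlike fp16); parameters and gradients stay in fp32.
import torch

model = torch.nn.Linear(1024, 1024).cuda()
opt = torch.optim.AdamW(model.parameters(), lr=1e-4)
x = torch.randn(32, 1024, device="cuda")

with torch.autocast(device_type="cuda", dtype=torch.bfloat16):
    loss = model(x).float().pow(2).mean()  # toy loss, reduced in fp32

loss.backward()  # gradients land in the fp32 parameters
opt.step()
opt.zero_grad()
```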
Accelerate research velocity
·      Build logging, metrics, and other experiment-tracking tools for rapid iteration.
·      Design ablation studies and statistical tests that validate or refute new ideas (a sketch of such a test follows this list).
·      Mentor interns and junior engineers through clear async design docs and code reviews.
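On the validation side, even a simple paired t-test over seed-matched runs helps separate real gains from noise. A sketch, with hypothetical eval losses:

```python
# Paired t-test over seed-matched ablation runs (numbers are hypothetical).
# Pairing by seed controls for seed-to-seed variance when comparing
# a baseline configuration against a candidate change.
from scipy import stats

baseline  = [2.412, 2.398, 2.405, 2.421, 2.409]  # final eval loss per seed
candidate = [2.401, 2.390, 2.399, 2.410, 2.396]

res = stats.ttest_rel(baseline, candidate)
print(f"t={res.statistic:.2f}, p={res.pvalue:.4f}")  # small p -> unlikely to be noise
```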
You’ll work side-by-side with researchers, ship production code, and shape the future of large language models.
Why You’ll Love This Job
·      Frontier-scale impact – Train and ship cutting-edge models powering MBZUAI research and industry collaborations.
·      Research × Engineering blend – Move breakthrough papers into real systems and publish your own results.
·      End-to-end mastery – Touch everything from petabyte-scale data loaders to custom low-level kernels, experience that's rare elsewhere.
·      Open, mission-driven science – Join a transparent culture tackling problems that truly advance AI.
·      Founding-team growth – Help set direction for IFM U.S. and lead the next generation of AI development.
Key Responsibilities
·      Framework Ownership – Productionize a PyTorch/JAX pre-training stack and keep it reliable at scale.
·      Custom Optimizer Implementation – Code new algorithms in distributed frameworks directly from mathematical specs.
·      Experiment Infrastructure – Build reusable modules, logging, and metrics dashboards that speed up research cycles.
·      Performance Optimization – Apply kernel fusion, communication optimization, and memory management across jobs spanning thousands of GPUs.
·      Distributed Debugging – Rapidly diagnose gradient-synchronization, collective-communication, or fault-tolerance issues.
·      Collaboration – Document designs clearly, run post-mortems, and partner with global research teams.
Qualifications
Must-Haves
·      5+ years of combined industry or hands-on research experience with large-scale deep-learning training.
·      Led at least one large-scale transformer pre-training run.
·      Expert-level PyTorch or JAX/Flax, plus DeepSpeed, FSDP, Megatron-LM, or MosaicML Composer.
·      Experience with distributed training at scale (100+ GPUs).
·      Proven multi-node GPU work (Slurm, K8s, or Ray) and NCCL/GLOO debugging.
·      Strong software engineering skills on large ML codebases.
·      Ownership of mixed- or low-precision paths (bf16, fp8, 4-bit) with accuracy validation.
·      Clear written communication (design docs, RFCs, post-mortems).
Nice-to-Haves
·      NeurIPS / ICML / ICLR papers or open-source contributions to major ML frameworks.
·      Experience implementing optimization algorithms (e.g., SGD variants, Adam, second-order methods).
·      Background in numerical computing.
·      Ability to translate math into code and build high-performance CUDA/Triton kernels.
$300,000 - $600,000 a year
Total compensation target: $300K – $600K (base salary plus target bonus of up to 30%), commensurate with experience.
Visa Sponsorship
This position is eligible for visa sponsorship.

Benefits Include
*Comprehensive medical, dental, and vision benefits
*Bonus
*401(k) Plan
*Generous paid time off, sick leave, and holidays
*Paid Parental Leave
*Employee Assistance Program
*Life insurance and disability