Senior Quality Assurance Engineer
Abu Dhabi
Engineering / Full-time / On-site
About the Institute of Foundation Models
The Institute of Foundation Models (IFM) is a research lab dedicated to building, understanding, using, and managing the risks of foundation models. Our mandate is to advance research, nurture the next generation of AI builders, and drive transformative contributions to a knowledge-driven economy.
As part of our team, you will work at the core of cutting-edge foundation model training alongside world-class researchers, data scientists, and engineers, tackling the most fundamental and impactful challenges in AI development. You will help develop groundbreaking AI solutions with the potential to reshape entire industries. Your strategic and innovative problem-solving will be instrumental in establishing MBZUAI as a global hub for high-performance computing in deep learning, driving impactful discoveries that inspire the next generation of AI pioneers.
The Role
As a Senior Quality Assurance Engineer, you will play a critical role in ensuring the reliability, safety, and performance of our AI-driven products and software platforms. Unlike traditional QA roles, this position requires a deep understanding of both software testing methodologies and the unique challenges presented by AI systems, including model evaluation, fairness, robustness, and multi-modal feature testing. You will collaborate closely with cross-functional teams to develop comprehensive test strategies covering both AI components and conventional software systems.
Arabic language proficiency is preferred to support certain data and product validation tasks.
Key Responsibilities
- Design and implement comprehensive test plans and test cases for AI-powered applications, APIs, and platform features.
- Develop automated testing frameworks for AI models, data pipelines, and software features, primarily leveraging Python.
- Evaluate AI model outputs for correctness, fairness, robustness, and consistency across various tasks and datasets (see the illustrative sketch after this list).
- Collaborate with AI researchers, data scientists, and engineers to define testing metrics and acceptance criteria for AI models.
- Conduct manual and exploratory testing where automation is insufficient, especially for AI feature behavior.
- Perform end-to-end testing across distributed systems, APIs, databases, web applications, and model inference pipelines.
- Implement performance testing for both AI inference and software systems to ensure responsiveness and scalability.
- Develop continuous integration (CI) and continuous deployment (CD) pipelines to ensure reliable and repeatable testing processes.
- Track, analyze, and report defects and quality metrics to stakeholders, driving root cause analysis and resolution.
- Contribute to internal quality standards, best practices, and documentation to strengthen the QA function within IFM.
Academic Qualifications
- Bachelor’s degree in Computer Science, Software Engineering, AI, or a related technical field required.
- Master’s degree or equivalent experience in AI systems, software testing, or quality assurance preferred.
Professional Experience - Required
- Extensive experience in software quality assurance, test automation, and validation of complex software systems.
- Strong proficiency in Python, with hands-on experience in test automation frameworks such as pytest or similar.
- Understanding of AI/ML model testing challenges, including non-deterministic outputs, dataset biases, and model evaluation metrics.
- Experience testing distributed systems, RESTful APIs, cloud-native applications, and microservices architectures.
- Familiarity with CI/CD pipelines using tools such as Jenkins, GitLab CI, or GitHub Actions.
- Strong analytical skills, attention to detail, and ability to identify edge cases and failure modes.
- Excellent communication skills, with the ability to collaborate across multi-disciplinary teams.
Professional Experience - Preferred
- Experience testing AI models (LLMs, computer vision, multi-modal models) and AI-enhanced software features.
- Proficiency with AI/ML libraries and frameworks such as TensorFlow, PyTorch, or Hugging Face for model validation.
- Familiarity with data quality assurance, dataset validation, and annotation workflows.
- Knowledge of cloud infrastructure platforms (AWS, Azure, GCP) and containerized deployments (Docker, Kubernetes).
- Arabic language proficiency to assist in linguistic, data, or region-specific validation scenarios.
- Contributions to open-source QA tools or AI testing frameworks.