Evals Platform Engineer

London
Evals Team /
Full-time /
On-site
Application deadline: The final date for submissions is 25 April 2025. However, we review applications on a rolling basis and encourage early submissions.

ABOUT APOLLO RESEARCH

The capabilities of current AI systems are evolving at a rapid pace. While these advancements offer tremendous opportunities, they also present significant risks, such as the potential for deliberate misuse or the deployment of sophisticated yet misaligned models. At Apollo Research, our primary concern lies with deceptive alignment, a phenomenon where a model appears to be aligned but is, in fact, misaligned and capable of evading human oversight.

Our approach focuses on behavioral model evaluations, which we then use to audit real-world models. We also combine black-box approaches with applied interpretability. In our evaluations, we focus on LM agents, i.e. LLMs with agentic scaffolding similar to AIDE or SWE-agent. We also study model organisms in controlled environments (see our security policies), e.g. to better understand capabilities related to scheming.

At Apollo, we aim for a culture that emphasizes truth-seeking, being goal-oriented, giving and receiving constructive feedback, and being friendly and helpful. If you’re interested in what it’s like to work at Apollo, you can find more information here.

ABOUT THE TEAM

The current evals team consists of Mikita Balesni, Jérémy Scheurer, Alex Meinke, Rusheb Shah, Bronson Schoen, Andrei Matveiakin, Felix Hofstätter, and Axel Højmark. Marius Hobbhahn manages and advises the team, though team members lead individual projects. You would work closely with Rusheb and Andrei, who are the full-time software engineers on the evals team, but you would also interact a lot with everyone else. You can find our full team here.

ABOUT THE ROLE

We are looking for a Platform Engineer to build, scale, and maintain the infrastructure that supports our frontier AI evaluation research, with a strong emphasis on security. As the infrastructure specialist on a small team, you'll have broad decision-making authority on our infrastructure stack. You’ll design and provision the infrastructure that our researchers depend on daily and also contribute to the software platform we build on top of this infrastructure.

You will work as part of a small cross-functional team, collaborating with software engineers and research scientists to ensure our infrastructure is scalable, secure, and efficient.

We welcome applicants of all ethnicities, genders, sexes, ages, abilities, religions, and sexual orientations, regardless of pregnancy or maternity, marital status, or gender reassignment.

Responsibilities
- Design, implement, scale, and maintain infrastructure for running frontier LLM evals
- Work closely with software engineers and researchers to understand and address infrastructure needs
- Choose and integrate appropriate technologies for our infrastructure stack
- Administer and secure internal AWS accounts
- Enforce security best practices
- Manage IAM permissions and access control
- Manage CI/CD pipelines
- Design and build data storage systems for evaluation results
- Help set up and manage organisation-wide security processes
- Track and manage infrastructure spending
- Contribute to development of internal software tools that leverage our infrastructure

Required skills
- Experience leading infrastructure projects from start to finish
- Experience implementing security best practices for cloud and containerized environments
- Solid knowledge of AWS, including IAM and EKS
- Strong hands-on experience with Kubernetes
- Experience with Infrastructure as Code tools
- Strong software engineering skills, preferably in Python
- Ability to work well with researchers and understand their technical needs

Strong candidates may have some of the following
- Experience with Cilium, gVisor, or Karpenter
- Experience working with LLM evaluations
- Experience building and managing data storage systems
- Experience setting up and maintaining data collection, monitoring, and alerting systems
- Exposure to startup environments or early-stage engineering teams
- Track record of building and scaling infrastructure from scratch with fast turnaround
- Cybersecurity experience

Values: We’re looking for team members who thrive in a collaborative environment and are results-oriented. You can find out more about our culture here.

We want to emphasize that people who feel they don’t fulfill all of these characteristics but think they would be a good fit for the position nonetheless are strongly encouraged to apply. We believe that excellent candidates can come from a variety of backgrounds and are excited to give you opportunities to shine. 

Representative projects
- Build a job orchestration system to run frontier LLM evals on remote machines
- Design and implement a database of all historical eval runs
- Create a permissions structure for our repositories of tasks and results with fine-grained per-project access controls
- Provision and maintain secure remote dev machines for researchers
- Set up and maintain a centralised Docker container repository
- Provision and maintain an internal Kubernetes cluster
- Implement security best practices across all AWS accounts and resources

EVALS TEAM WORK. As a platform engineer, you would work closely with all of our research scientists, building high-quality infrastructure to support the team's efforts:

    • Conceptual work on safety cases for scheming, for example, our work on evaluation-based safety cases for scheming.
    • Building evaluations for scheming-related properties, such as situational awareness or deceptive reasoning.
    • Conducting evaluations on frontier models and publishing the results, either publicly or to a target audience such as AI developers or governments; see, for example, our work in OpenAI’s o1-preview system card.
    • Creating model organisms and demonstrations of behavior related to deceptive alignment, e.g. exploring the influence of goal-directedness on scheming.
    • Designing and evaluating AI control protocols. We have not started this work yet but intend to begin in Q2 2025.

LOGISTICS

    • Start Date: Target of 2-3 months after the first interview.
    • Time Allocation: Full-time.
    • Location: The office is in London, and the building is shared with the London Initiative for Safe AI (LISA) offices. This is an in-person role. In rare situations, we may consider partially remote arrangements on a case-by-case basis.
    • Work Visas: We can sponsor UK visas.

BENEFITS

    • Salary: a competitive UK-based salary.
    • Flexible work hours and schedule.
    • Unlimited vacation.
    • Unlimited sick leave.
    • Lunch, dinner, and snacks are provided for all employees on workdays.
    • Paid work trips, including staff retreats, business trips, and relevant conferences.
    • A yearly $1,000 (USD) professional development budget.

Equality Statement: Apollo Research is an Equal Opportunity Employer. We value diversity and are committed to providing equal opportunities to all, regardless of age, disability, gender reassignment, marriage and civil partnership, pregnancy and maternity, race, religion or belief, sex, or sexual orientation.

How to apply: Please complete the application form and include your CV. A cover letter is optional. Please also feel free to share links to relevant work samples.

About the interview process: Our multi-stage process includes a screening interview, a take-home test (approx. 2 hours), three technical interviews, and a final interview with Marius (CEO). The technical interviews are closely related to tasks you would do on the job; there are no LeetCode-style general coding interviews. If you want to prepare, we suggest working on hands-on LLM evals projects (e.g. as suggested in our starter guide), such as building LM agent evaluations in Inspect.


* This role is supported by AI Futures Grants, a UK Government program designed to help the next generation of AI leaders meet the costs of relocating to the UK. AI Futures Grants provide financial support to reimburse relocation costs such as work visa fees, immigration health surcharge and travel/subsistence expenses. Successful candidates for this role may be able to get up to £10,000 to meet associated relocation costs, subject to terms and conditions.