Research Engineer, Safety
San Francisco / Applied AI
The Applied Safety team is building safety specifications, processes, and measurement tools for general-purpose AI. We’re looking for safety-focused research engineers to work on measurement tools with us: we aim to develop a world-class toolkit for measuring the safety-relevant characteristics of our datasets, models, and algorithms. This is high-impact work that will help teams across OpenAI meet safety goals.
This is not about safety for narrow AI systems like autonomous vehicles: this is about safety for general-purpose AI systems that have large, uncharted surface areas of potential risk. Given that the field is quite young, your work may be foundational for future standards and professional duties.
In this role, you will:
- Take ambiguous, open-ended problems in measuring safety for general-purpose AI, make them tractable and well-posed, and build the solutions.
- Write clean, performant code for tools you build, with a focus on making them usable for researchers and engineers across the company.
- Engage with literature and experts across many different research domains (social sciences, physical sciences, economics, politics, etc.) to reason about impact from or interactions with general-purpose AI. Figure out how to measure safety concerns relevant for those domains.
- Build tools that help us proactively prevent a wide range of potential AI safety issues from the mundane to the speculative.
- Develop with a focus on scale and data. For example, if you want to measure whether a model has a certain qualitative behavior, you may need to build a dataset of hundreds or thousands of examples of that behavior to check against.
- Contribute to building a safety culture at OpenAI by shipping safety tools internally and helping everyone get the most out of them.
This role might be a good fit for you if you:
- Have strong programming skills.
- Are a fast learner who can quickly spin up on risks and impacts from bleeding-edge tech that will profoundly change the world in the near future.
- Are sincerely interested in reducing harms from AI.
Nice-to-haves:
- Experience working on large-scale natural language datasets or image datasets.
- Experience building and productionizing classifiers.
- Research experience in ML / AI.
We’re building safe Artificial General Intelligence (AGI), and ensuring it leads to a good outcome for humans. We believe that unreasonably great results are best delivered by a highly creative group working in concert. We are an equal opportunity employer and value diversity at our company. We do not discriminate on the basis of race, religion, color, national origin, gender, sexual orientation, age, marital status, veteran status, or disability status.
This position is subject to a background check for any convictions directly related to its duties and responsibilities. Only job-related convictions will be considered, and even those will not automatically disqualify a candidate. Pursuant to the San Francisco Fair Chance Ordinance, we will consider for employment qualified applicants with arrest and conviction records.
We will ensure that individuals with disabilities are provided reasonable accommodation to participate in the job application or interview process, to perform essential job functions, and to receive other benefits and privileges of employment. Please contact us to request accommodations via email@example.com.
Benefits:
- Health, dental, and vision insurance for you and your family
- Unlimited time off (we encourage 4+ weeks per year)
- Parental leave
- Flexible work hours
- Lunch and dinner each day
- 401(k) plan with matching