Senior Systems Safety Engineer

San Francisco / Applied AI / Full-time

We’re looking for someone with expertise in safety engineering and systems design to help us ensure that outcomes from increasingly advanced AI products and systems are safe and beneficial. In this role, you will develop safety features, tools, systems, and controls based on best practices in a bleeding-edge technical field, adapting lessons learned in other fields to open-ended Applied AI use cases where requirements are notoriously hard to specify, implement, and verify.

This is not about safety for narrow AI systems like autonomous vehicles: this is about safety for general-purpose AI systems that have large, uncharted surface areas of potential risk. The role blends AI product safety best practices with more traditional systems safety engineering. While the work most closely resembles traditional systems safety engineering, there are no existing best practices for this technology, and your work may help shape the foundations of future standards and professional duties.

In this role, you will...

    • Work with researchers, engineers, and project managers to specify safety or other risk-related requirements for projects within the Applied AI team.
    • In partnership with our Policy and Technical Safety teams, investigate incidents and unexpected behavior in Applied AI products, and develop concrete proposals to mitigate or prevent them.
    • Work in an exciting and fast-paced environment that blends foundational AI research with the design, development, and operation of novel AI products and systems.
    • In partnership with other technical Safety functions, participate in the development of safety lifecycle management workflows for advanced AI systems with open-ended use cases.
    • Develop approaches to accident modeling and risk analysis that are good fits for open-ended AI systems.
    • Engage in creative, out-of-the-box thinking about the ways open-ended AI products might have impacts, especially the unintentional and hard-to-predict side effects they may create.
    • Use strong consensus-building skills to align stakeholders on project safety goals, balancing the needs of the many parties involved, including considerations of safety, security, reliability, cost, and maintainability.
    • Be a key driver in connecting OpenAI’s bleeding-edge AI technical safety research with practical product development.
    • Contribute to developing and implementing safety norms and procedures in ways that scale gracefully with the increasingly powerful AI products and systems that may arise over time.
    • Help systematize how safety-related information is logged and aggregated in Applied AI, and how it is disseminated within Applied AI and to partner teams.
    • Help design and execute procedures for safety incident investigation and response within Applied AI.
    • Contribute to building a safety culture at OpenAI that helps everyone understand why and how to participate in Applied AI safety-related activities.
    • Participate in cross-functional efforts across many teams to help the organization set and meet safety goals.

Your background looks something like…

    • Experience developing practical tools, systems and auditable controls for real-world applications.
    • A track record of risk analysis and goal-setting for safety-critical systems, ideally including safety leadership on large multi-stakeholder projects.
    • Familiarity with systems safety and safety lifecycle management, ideally including standards for particular kinds of systems (such as ISO 26262 or similar).
    • Fluency in the concepts and terminology of modern ML / AI.
    • Direct experience working on ML / AI (e.g., as an engineer or hobbyist), or on safety for such systems, is a huge plus but not necessary.
    • Experience working at a fast-paced technology company or in a similar environment is a nice-to-have.
    • You’re a fast learner who’s open to quickly spinning up on risks associated with a category of bleeding-edge technology that has profound implications for the world.

About OpenAI

We’re building safe Artificial General Intelligence (AGI), and ensuring it leads to a good outcome for humans. We believe that unreasonably great results are best delivered by a highly creative group working in concert. We are an equal opportunity employer and value diversity at our company. We do not discriminate on the basis of race, religion, color, national origin, gender, sexual orientation, age, marital status, veteran status, or disability status.

This position is subject to a background check for any convictions directly related to its duties and responsibilities. Only job-related convictions will be considered and will not automatically disqualify the candidate. Pursuant to the San Francisco Fair Chance Ordinance, we will consider for employment qualified applicants with arrest and conviction records.

We will ensure that individuals with disabilities are provided reasonable accommodation to participate in the job application or interview process, to perform essential job functions, and to receive other benefits and privileges of employment. Please contact us to request accommodations via accommodation@openai.com.

Benefits

- Health, dental, and vision insurance for you and your family
- Unlimited time off (we encourage 4+ weeks per year)
- Parental leave
- Flexible work hours
- Lunch and dinner each day
- 401(k) plan with matching