AI Policy & Safety Researcher (San Francisco)

San Francisco, CA / Engineering

Generally Intelligent was founded with the vision of fostering a more abundant and equitable society through the deployment of safe, generally capable agents. We believe that practical methods, strategies, and policies for safety need to be part of the design process from the very beginning.

This is a greenfield role that will have significant influence over the direction of our AI policy & safety strategy. You will work on everything from envisioning what the world may look like with more agentic, generalizable AI to defining concrete policies and best practices. The role will involve considerable research and exploration as we collaboratively (both internally and externally) investigate actionable paths toward a safe version of the future with more capable AI systems.

Example projects

Further develop our internal safety principles and engineering best practices, solicit feedback from other researchers, and turn them into public frameworks that others can easily use and deploy.
Conduct comprehensive literature reviews to draw inspiration and insight from a variety of fields, especially the practical engineering safety best practices that have been developed over decades in other engineering disciplines.
Partner with our ML Research Engineers to evaluate state-of-the-art methods through a policy & safety lens, as well as experiment with improved techniques for making safe and robust agents.
Collaborate with other AI research organizations to develop sets of credible precommitments and binding agreements that ensure safety is never deprioritized due to competitive pressures.

You are

Deeply passionate about the topic of AI policy and safety and eager to contribute to the future of the field.
From any number of fields—significant prior technical AI/ML experience is not required. Prior work in policy, governance, or machine learning is all relevant, and we are also excited to engage with people from unconventional research backgrounds!
A very strong written and verbal communicator who can help distill and present our thoughts and findings to the wider community.
Comfortable working on an open-ended, ill-defined, and constantly evolving problem.


What we offer

Work directly on creating software with human-like intelligence.
Generous compensation, equity, and benefits.
$20K+ yearly budget for self-improvement: coaching, courses, conferences, etc.
Actively co-create and participate in a positive, intentional team culture.
Spend time learning, reading papers, and deeply understanding prior work.
Frequent team events, dinners, off-sites, and hanging out.

How to apply

All submissions are reviewed by a person, so we encourage you to include notes on why you're interested in working with us. If you have any other work you can showcase (open source code, side projects, etc.), please include it! We know that talent comes from many backgrounds, and we aim to build a team with diverse skill sets that spike strongly in different areas.

We try to reply either way within a week or two at most (usually much sooner).

Learn more about our full interview process here.

About us

We started Generally Intelligent because we believe that software with human-level intelligence will have a transformative impact on the world. We’re dedicated to ensuring that impact is a positive one.

We have enough funding to freely pursue our research goals over the next decade, and our backers include Y Combinator, researchers from OpenAI, Astera Institute, and a number of private individuals who care about effective altruism and scientific research.

Our research is focused primarily on self-supervised and generative video and audio models. We’re excited about opportunities to use simulated data, network architecture search, and good theoretical understanding of deep learning to make progress on these problems. We take a focused, engineering-driven approach to research.