Product Policy Lead

San Francisco, CA / Product / Hybrid
Anthropic’s mission is to create reliable, interpretable, and steerable AI systems. We want AI to be safe for our customers and for society as a whole.

As the Product Policy Lead, you will set the foundation for Anthropic’s approach to safe deployments. You will develop the policies that govern the use of our systems, oversee the technical approaches to identifying current and future risks, and build the organizational capacity to mitigate product safety risks at scale. You will work collaboratively with our Product, Societal Impacts, Policy, Legal, and leadership teams to develop policies and processes that protect Anthropic and our partners.

You’re a great fit for the role if you’ve served in leadership positions in the fields of Trust & Safety, product policy, or risk management at fast-growing technology companies, and you recognize that emerging technology such as generative AI systems will require creative approaches to mitigating complex threats.

Please note that in this role you may encounter sensitive material and subject matter, including policy issues that may be offensive or upsetting.

Representative projects

    • Set the strategy and define the build-out of Anthropic’s approach to product policy. You will determine the policies for how our systems can be used, oversee the development of risk identification and monitoring functionality, and build out our Product Policy function, including policy analysts, engineers, data scientists, and operations analysts 
    • Lead the development of Anthropic’s policies on how our systems can be used, from the identification and prioritization of needed policies, to research efforts ensuring those policies are informed by subject matter experts, to the testing and iteration of draft policies, to implementation
    • Build out and oversee the technical components of our product policy organization, including the engineers and data scientists who develop innovative methods for identifying and mitigating system abuse
    • Work collaboratively with the Product and Societal Impacts teams, as well as external partners, to deeply understand potential use cases for Anthropic systems and the requisite policies to govern them effectively
    • Own the end-to-end execution of product policy enforcement, including investigations of novel use cases and edge-case policy decisions, as well as the eventual buildout and scaling of a policy operations function
    • Communicate Anthropic’s policies externally and work collaboratively with other organizations to build strong community norms amongst AI developers

You might be a good fit if you:

    • Enjoy building programs from the ground up. You think holistically and can proactively identify the needs of an organization, making key hires or developing new programs as needed. You have demonstrated experience growing a dedicated function and scaling its impact.
    • Are an excellent communicator. You make ambiguous problems clear and identify core principles that can translate across scenarios. You advise leadership, internal teams, and customers on specific policy decisions, as well as industry trends more broadly.
    • Have strong people management skills. You’re an experienced manager with a track record for building high-functioning, cohesive teams. You recruit and mentor individual contributors and other managers across policy, technical, and operations teams. 
    • Have a passion for making powerful technology safe and societally beneficial. You anticipate unforeseen risks, model out scenarios, and provide actionable guidance to internal stakeholders.
    • Thrive on collaboration and build trust with teams across the organization. You handle sensitive and high-stakes policy decisions with professionalism and diplomacy. You respectfully influence stakeholders to act on data and insights from your team.
    • Think creatively about the risks and benefits of new technologies, and think beyond past checklists and playbooks. You stay up-to-date and informed by taking an active interest in emerging research and industry trends.
Compensation and Benefits*
Anthropic’s compensation package consists of three elements: salary, equity, and benefits. We are committed to pay fairness and aim for these three elements collectively to be highly competitive with market rates.

Salary - The expected salary range for this position is $250k - $295k.

Equity - Equity will be a major component of the total compensation for this position. We aim to offer higher-than-average equity compensation for a company of our size, and communicate equity amounts at the time of offer issuance.

Benefits - Benefits we offer include:
- Optional equity donation matching at a 3:1 ratio, up to 50% of your equity grant.
- Comprehensive health, dental, and vision insurance for you and all your dependents.
- 401(k) plan with 4% matching.
- 21 weeks of paid parental leave.
- Unlimited PTO – most staff take between 4 and 6 weeks each year, sometimes more!
- Stipends for education, home office improvements, commuting, and wellness.
- Fertility benefits via Carrot.
- Daily lunches and snacks in our office.
- Relocation support for those moving to the Bay Area.

* This compensation and benefits information is based on Anthropic’s good faith estimate for this position, in San Francisco, CA, as of the date of publication and may be modified in the future. The level of pay within the range will depend on a variety of job-related factors, including where you place on our internal performance ladders, which is based on factors including past work experience, relevant education, and performance on our interviews or in a work trial.


Deadline to apply: None. Applications will be reviewed on a rolling basis.

Hybrid policy: For this role, we prefer candidates who are able to be in our office more than 25% of the time, though we encourage you to apply even if you don’t think you will be able to do that.

How we're different
We believe that the highest-impact AI research will be big science. At Anthropic we work as a single cohesive team on just a few large-scale research efforts. And we value impact — advancing our long-term goals of steerable, trustworthy AI — rather than work on smaller and more specific puzzles. We view AI research as an empirical science, which has as much in common with physics and biology as with traditional efforts in computer science. We're an extremely collaborative group, and we host frequent research discussions to ensure that we are pursuing the highest-impact work at any given time. As such, we greatly value communication skills. 

The easiest way to understand our research directions is to read our recent research. This research continues many of the directions our team worked on prior to Anthropic, including: GPT-3, Circuit-Based Interpretability, Multimodal Neurons, Scaling Laws, AI & Compute, Concrete Problems in AI Safety, and Learning from Human Preferences.

Come work with us! Anthropic is a public benefit corporation based in San Francisco. We offer competitive compensation and benefits, optional equity donation matching, generous vacation and parental leave, flexible working hours, and a lovely office space in which to collaborate with colleagues.