AI Risk Management Researcher

Anywhere in U.S., Berkeley, CA or Cambridge, MA preferred
Full-Time
Job Summary
Lead research on a set of key gaps in the global risk management of advanced AI systems. By combining techniques from multiple disciplines, with creativity and grounding, you will develop and explain new analytical methodologies for adaptation by AI evaluators and auditors. We will hire two people into this role: one focusing on the offense-defense balances of systems, the other on broader risk quantification of systems, with each leading the respective project.

About the Organization
The Center for AI Risk Management & Alignment (CARMA) envisages humanity pulling itself together enough to steer successfully through the patch of unprecedented difficulties that transformative AI is about to bring, and on to better days. CARMA's mission is to lower the risks to humanity and the biosphere from transformative AI.
By better grounding AI risk management, pursuing policy research that squarely addresses AGI, showing a path to better tradeoffs in technical safety, and fostering global perspectives on durable safety, CARMA aims to provide critical support to society in managing the outsized risks from advanced AI.
CARMA is a fiscally-sponsored project of Social & Environmental Entrepreneurs, Inc., a California 501(c)(3) nonprofit public benefit corporation.

Responsibilities

* Apply a security mindset to analyzing prospective AI proliferation and usage dynamics
* Research aspects of AI R&D processes, AI safety, and prospective AGI capabilities
* Research and develop impact assessments covering global security, national security, public health, wellbeing, and other topics relevant to AI risk management
* Write papers explaining the groundings and processes behind developing and implementing the types of risk management analyses you create
* Draft methodologies for AI developers and auditors, suitable as contributions to AI standards working groups
* Focus on either a) comprehensive risk quantification of systems/models/datasets or b) questions of offense-defense balances of capabilities from systems/models/datasets

Qualifications

* An M.Sc. or higher, or a B.Sc. plus 6 years of experience, in Computer Science, Security Studies, Risk Management, AI Policy, Cybersecurity, or a closely related field
* A demonstrated focus on one or more of: machine learning, AI, safety engineering, AI policy, complex systems, operations research, operational risk management, or other relevant domains
* Experience in any two or more of the following: Security mindset, Security studies research, Cybersecurity, Safety engineering, AI governance, Operational risk management, Catastrophe risk management, Operations research, Industrial engineering, Futures studies, Foresight methods, Leading labs, Ontologies and knowledge bases, Incentive studies, Criminal psychology, or Technical standards development
* Relevant publications
* Skilled in developing informal, semi-formal, and formal models
* Skilled in technical and expository writing
* Experience working on, tracking, and successfully completing multiple concurrent tasks to meet deadlines with little supervision

Preferred Qualifications

* Fluency with the modern cognitive AI stack of LLMs, prompt engineering, and scaffolds
* Experience with scalable graph models
* Experience with probabilistic programming environments

CARMA/SEE is proud to be an Equal Opportunity Employer. We will not discriminate on the basis of race, ethnicity, sex, age, religion, gender reassignment, partnership status, maternity, or sexual orientation. We are, by policy and action, an inclusive organization and actively promote equal opportunities for everyone with the right mix of talent, knowledge, skills, attitude, and potential; hiring is based solely on individual merit for the job. Note that we are unable to sponsor visas at this time, but non-U.S. contractors will also be considered.
$150,000 - $180,000 a year
Salary plus good benefits