AI Risk Management Researcher - Offense/Defense Balances

Anywhere; Berkeley, CA or Cambridge, MA preferred
CARMA /
Full-Time /
Remote
Job Summary
Lead research on the offense/defense balances of advanced AI systems and their capabilities. Use methods from safety engineering, operational risk management, cybersecurity, and other disciplines to develop, critique, and make the case for new analytical methodologies, with a focus on modeling societal impacts. These analyses and methodologies are intended to be adapted and adopted by AI evaluators, auditors, standards bodies, and policymakers.

About the Organization
The Center for AI Risk Management & Alignment (CARMA) envisions humanity pulling itself together enough to successfully steer its perilous journey to transformative AI. CARMA's mission is to help accomplish that: to lower the risks to humanity and the biosphere from transformative AI. By better grounding AI risk management, by pursuing policy research that squarely addresses AGI, by showing a path to better tradeoffs in technical safety, and by fostering global perspectives on durable safety, CARMA aims to provide critical support to society in managing the outsized risks from advanced AI.
CARMA is a fiscally sponsored project of Social & Environmental Entrepreneurs, Inc., a California 501(c)(3) nonprofit public benefit corporation. We collaborate closely with a variety of NGOs in the space and are well connected to achieve real impact in the gaps we fill.

Responsibilities
* Apply a security mindset to analyzing prospective AI proliferation and usage dynamics
* Combine methods from safety engineering, operational risk management, cybersecurity, and a host of other disciplines to develop analyses and methodologies to model and weigh the offense/defense balances of differentially introduced capabilities
* Research and develop impact assessments related to global security, national security, public health, wellbeing, and other topics related to AI risk management
* Develop methods of aggregating modeled effects for AI models, AI systems, datasets, services, and other aggregate artifacts
* Write papers explaining the grounding of the risk-estimation methods you develop
* Draft guidance for AI developers and auditors, appropriate as contributions to AI standards working groups

Requirements
* An M.Sc. or higher, or a B.Sc. plus 4 years of experience, in Computer Science, Security Studies, Risk Management, AI Policy, Cybersecurity, or a closely related field
* A demonstrated focus on one or more of: machine learning, AI, safety engineering, AI policy, complex systems, operations research, operational risk management, or other relevant domains
* Experience in any of the following: security mindset, security studies research, cybersecurity, safety engineering, AI governance, operational risk management, catastrophe risk management, operations research, industrial engineering, futures studies, foresight methods, leading labs, ontologies and knowledge bases, incentive studies, criminal psychology, or technical standards development
* Skilled in developing informal, semi-formal, and formal models
* Relevant publications
* Skilled in technical and expository writing

Pluses
* Fluency with the modern cognitive AI stack of LLMs, prompt engineering, and scaffolds
* Experience with scalable graph models
* Experience in probabilistic programming environments

CARMA/SEE is proud to be an Equal Opportunity Employer. We will not discriminate on the basis of race, ethnicity, sex, age, religion, gender reassignment, partnership status, maternity, or sexual orientation. We are, by policy and action, an inclusive organization and actively promote equal opportunities for all people with the right mix of talent, knowledge, skills, attitude, and potential; hiring is based solely on individual merit for the job. Note that we are unable to sponsor visas at this time, but non-U.S. contractors will also be considered.
$150,000 - $180,000 a year
Salary plus good benefits