Safety Researcher

San Francisco
Member of Technical Staff
We are seeking deep learning experts with a strong interest in ensuring that AI systems, present and future, are beneficial to humanity and aligned with human values. We look for the following attributes in candidates:

- A track record of developing new ideas in deep learning (not necessarily safety-related) that achieve strong results, as demonstrated by one or more publications or major projects
- Strong implementation skills -- the ability to quickly implement new ideas at small scale, then help scale them up
- Strong interest in, and commitment to, developing the technology required to align increasingly powerful AI with human values

Prior work or publications in safety-relevant areas (such as safe exploration, robustness, or reward learning) are a plus but not required. Familiarity with debates and discussions about AI safety topics is also helpful but not necessary.