Engineer, Bias and Fairness
San Francisco / Applied AI
We’re looking for someone with expertise in bias and fairness in ML, preferably in natural language processing, to join our Applied AI team and help us ensure that our products are beneficial to humanity as a whole. You will be a key stakeholder in identifying and mitigating issues of bias and other harmful social impacts; advising on and implementing technical best practices for reducing harmful biases in language models and our products; developing scalable processes for incorporating feedback from people impacted by our systems; and helping us navigate ethical questions about which use cases for our technology are acceptable.
We recognize that getting to good outcomes involves continuous discussion with people impacted by our technology, responding to that discussion with continued refinement of our technical approach, and a willingness to choose and hold red lines about which use cases to support. A key element of our safety vision is to prevent our AI technology from causing systemic harm. We invite you to help us ensure we achieve this vision in practice.
In this role, you will…
- Recommend, adapt, and implement technical best practices for mitigating bias in ML language models intended for use in OpenAI products, particularly for advanced AI systems with open-ended use cases (for example, the OpenAI API).
- Work in an exciting and fast-paced environment that blends foundational AI research with the design, development, and operation of novel AI products and systems.
- Develop workflows to model and analyze bias- and fairness-related risks for open-ended AI systems, and train others to use those workflows.
- Work with researchers, engineers, and project managers to specify safety or other risk-related requirements for Applied AI projects.
- Engage in creative, out-of-the-box thinking about the ways open-ended AI systems might have impacts, especially unintentional and hard-to-predict side effects in social contexts (e.g., the incentive systems they create for people, and how those incentives lead to structural outcomes).
- Help create scalable norms and procedures for mitigating harmful social impacts from increasingly capable AI products and systems deployed in an increasingly wide range of social contexts.
- Contribute to the development of tools and processes for getting feedback from people impacted by AI systems we deploy, to ensure impacts are beneficial and not harmful.
- In partnership with our Safety and Policy teams, contribute to ongoing development of our policies for dataset construction and dataset use to figure out where and how we could apply dataset-centric bias mitigations.
- Surface and analyze concerns related to systemic social harms in discussions about acceptable and unacceptable product use cases.
Your background looks something like…
- A track record of excellent research or practical experience in bias and fairness in ML. (Note that while this work will intersect with research, it is not a research role.)
- You might be a top-tier researcher in this field, with multiple publications or other high-impact work.
- You might have a record of successful work on a team at a major company that routinely faces bias and fairness challenges in deploying ML products at scale.
- Bonus points if you have experience applying this work in language-related research areas.
- Direct work on technical mitigations for AI bias, along the lines of: (a) methods for quantifying, or otherwise giving rich and actionable descriptions of, biases in datasets, trained models, and systems involving models as components; (b) formulation of optimization problems (e.g., objective function design) that can reduce harmful biases.
- An unwavering commitment to ensure that AI impacts truly work for all of humanity and not a narrow segment of it.
- You’re a fast learner who’s open to quickly spinning up on risks associated with a category of bleeding-edge technology that has profound implications for the world.
- You’re passionate about interdisciplinary work and excited to engage with social sciences and philosophy in addition to AI.
We’re building safe Artificial General Intelligence (AGI), and ensuring it leads to a good outcome for humans. We believe that unreasonably great results are best delivered by a highly creative group working in concert. We are an equal opportunity employer and value diversity at our company. We do not discriminate on the basis of race, religion, color, national origin, gender, sexual orientation, age, marital status, veteran status, or disability status.
This position is subject to a background check for any convictions directly related to its duties and responsibilities. Only job-related convictions will be considered and will not automatically disqualify the candidate. Pursuant to the San Francisco Fair Chance Ordinance, we will consider for employment qualified applicants with arrest and conviction records.
We will ensure that individuals with disabilities are provided reasonable accommodation to participate in the job application or interview process, to perform essential job functions, and to receive other benefits and privileges of employment. Please contact us to request accommodations via firstname.lastname@example.org.
Benefits include…
- Health, dental, and vision insurance for you and your family
- Unlimited time off (we encourage 4+ weeks per year)
- Parental leave
- Flexible work hours
- Lunch and dinner each day
- 401(k) plan with matching