Research Engineer, Preparedness (Biology/CBRN)


About the team

The team is responsible for safety work that ensures our best models can be safely deployed to the real world to benefit society, and it is at the forefront of OpenAI's mission to build and deploy safe AGI, driving our commitment to AI safety and fostering a culture of trust and transparency.

Frontier AI models have the potential to benefit all of humanity, but they also pose increasingly severe risks. To ensure that AI promotes positive change, the Preparedness team helps us prepare for the development of increasingly capable frontier AI models. The team is tasked with identifying, tracking, and preparing for catastrophic risks related to frontier AI models. Specifically, the mission of the Preparedness team is to:

Closely monitor and predict the evolving capabilities of frontier AI systems, with an eye towards misuse risks whose impact could be catastrophic (not necessarily existential) to our society; and

Ensure we have concrete procedures, infrastructure, and partnerships to mitigate these risks and, more broadly, to safely handle the development of powerful AI systems.

Our team will tightly connect capability assessment, evaluations, and internal red teaming for frontier models, as well as overall coordination on AGI preparedness. The team's core goal is to ensure that we have the infrastructure needed for the safety of highly capable AI systems, from the models we develop in the near future to those with AGI-level capabilities.

About you

We are looking to hire exceptional research engineers who can push the boundaries of our frontier models. Specifically, we are looking for people who will help us shape our empirical grasp of the whole spectrum of AI safety concerns and who will own individual threads within this endeavor end-to-end.

In this role, you'll: Work on identifying emerging AI safety risks and new methodologies for exploring the impact of these risks
Build (and then continuously refine) evaluations of frontier AI models that assess the extent of identified risks
Design and build scalable systems and processes that can support these kinds of evaluations
Contribute to the refinement of risk management and the overall development of "best practice" guidelines for AI safety evaluations
We expect you to be: Passionate and knowledgeable about short-term and long-term AI safety risks
Able to think outside the box, with a robust “red-teaming mindset”
Experienced in ML research engineering, ML observability and monitoring, creating large language model-enabled applications, and/or another technical domain applicable to AI risk
Able to operate effectively in a dynamic and extremely fast-paced research environment as well as scope and deliver projects end-to-end
It would be great if you also have: First-hand experience in red-teaming systems—be it computer systems or otherwise
A good understanding of the nuances of the societal aspects of AI deployment
An ability to work cross-functionally
Excellent communication skills
This role may require access to technology or technical data controlled under the U.S. Export Administration Regulations or International Traffic in Arms Regulations. Therefore, this role is restricted to individuals described in paragraph (a) of the definition of “U.S. person” in the U.S. Export Administration Regulations, 15 C.F.R. § 772.1, and in the International Traffic in Arms Regulations, 22 C.F.R. § 120.62. U.S. persons are U.S. citizens, U.S. legal permanent residents, individuals granted asylum status in the United States, and individuals admitted to the United States as refugees.
Location:
San Francisco
