Jan Leike, one of the lead safety researchers at OpenAI who resigned from the artificial intelligence company earlier this month, said Tuesday that he has joined rival AI startup Anthropic.
Leike announced his resignation from OpenAI early on May 15, days before the company dissolved the superalignment team that he co-led. That team, formed in 2023, focused on long-term AI risks. OpenAI co-founder Ilya Sutskever announced his departure in a post on X on May 14.
“I’m excited to join @AnthropicAI to continue the superalignment mission,” Leike wrote on X on Tuesday. “My new team will work on scalable oversight, weak-to-strong generalization, and automated alignment research.”
Anthropic is backed by Amazon, which has committed up to $4 billion in funding for a minority stake in the company.
In a post following his departure from OpenAI, Leike wrote, “Stepping away from this job has been one of the hardest things I have ever done, because we urgently need to figure out how to steer and control AI systems much smarter than us.”
AI safety has gained rapid importance across the tech sector since OpenAI introduced ChatGPT in late 2022, ushering in a boom in generative AI products and investments. Some in the industry have expressed concern that companies are moving too quickly in releasing powerful AI products to the public without adequately considering potential societal harm.
Microsoft-backed OpenAI said Tuesday that it created a new safety and security committee led by senior executives, including CEO Sam Altman. The committee will recommend “safety and security decisions for OpenAI projects and operations” to the company’s board.
Anthropic, founded in 2021 by siblings Dario Amodei and Daniela Amodei and other ex-OpenAI executives, launched its ChatGPT rival Claude 3 in March. The company has received funding from Google, Salesforce and Zoom, in addition to its backing from Amazon.
