A group of top artificial intelligence experts and executives warned the technology poses a “risk of extinction” in an alarming joint statement released Tuesday.
OpenAI boss Sam Altman, whose firm created ChatGPT, and the “Godfather of AI” Geoffrey Hinton were among more than 350 prominent figures who see AI as an existential threat, according to the one-sentence open letter organized by the nonprofit Center for AI Safety.
“Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war,” the experts said in the 22-word statement.
The brief statement is the latest in a series of warnings from leading experts regarding AI’s potential to foment chaos in society – with potential risks including the spread of misinformation, major economic upheaval through job losses and even outright attacks on humanity.
Scrutiny has increased following the runaway popularity of OpenAI’s ChatGPT product.
The potential risks were on display as recently as last week, when a likely AI-generated photo of a fake explosion at the Pentagon triggered a selloff that briefly erased billions in value from the US stock market before it was debunked.
The Center for AI Safety said the brief statement was intended to “open up discussion” about the topic given the “broad spectrum of important and urgent risks from AI.”
Aside from Altman and Hinton, notable signatories included Google DeepMind boss Demis Hassabis and another prominent AI lab leader, Anthropic CEO Dario Amodei.
Altman, Hassabis and Amodei were part of a select group of experts who met with President Biden earlier this month to discuss potential AI risks and regulations.
Hinton and another signer, Yoshua Bengio, won the 2018 Turing Award, the computing world’s highest honor, for their work on advancements in neural networks that were described as “major breakthroughs in artificial intelligence.”
“As we grapple with immediate AI risks like malicious use, misinformation, and disempowerment, the AI industry and governments around the world must also seriously confront the risk that future AIs could pose a threat to human existence,” said Dan Hendrycks, director of the Center for AI Safety.
“Mitigating the risk of extinction from AI will require global action,” Hendrycks added. “The world has successfully cooperated to mitigate risks related to nuclear war. The same level of effort is needed to address the risks posed by future AI systems.”
Despite his leading role at OpenAI, Altman has been vocal about his concerns regarding the unrestrained development of advanced AI systems.
In testimony on Capitol Hill earlier this month, Altman came out in favor of government regulations for the technology, among other safety guardrails.
At the time, Altman admitted his worst fear is that AI could “cause significant harm to the world” without oversight.
Elsewhere, Hinton recently quit his part-time job as an AI researcher at Google so that he could speak more freely about his concerns.
Hinton said he now partly regrets his life’s work, which could allow “bad actors” to do “bad things” that will be difficult to stop.
The 22-word statement was noticeably briefer than a previous open letter that generated scrutiny in March.
Billionaire Elon Musk was among hundreds of experts who called for a six-month pause in advanced AI development so that leaders could consider how to safely proceed.
Their lengthy open letter – signed by some of the same experts who backed the Center for AI Safety’s statement – suggested that AI’s dangers included the possible “loss of control of our civilization.”
Musk was even more blunt during an appearance at a Wall Street Journal conference in London last week, stating that he saw a “non-zero probability” of AI “going Terminator” – a reference to the worst-case scenario from James Cameron’s 1984 sci-fi film.
Ex-Google CEO Eric Schmidt echoed Musk’s fears, arguing that AI was not far from becoming an “existential risk” to humanity that could result in “many, many, many, many people harmed or killed.”