The Microsoft Bing App is seen running on an iPhone on this photo illustration on 30 May, 2023 in Warsaw, Poland. (Photo by Jaap Arriens/NurPhoto via Getty Images)
Artificial intelligence could lead to human extinction, and reducing the risks associated with the technology should be a global priority, industry experts and tech leaders said in an open letter.
“Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war,” the statement released Tuesday read.
Sam Altman, CEO of ChatGPT maker OpenAI, as well as executives from Google‘s AI arm DeepMind and from Microsoft, was among those who supported and signed the short statement from the Center for AI Safety.
The technology has gathered pace in recent months after the chatbot ChatGPT was released for public use in November and subsequently went viral, reaching 100 million users within two months of its launch. ChatGPT has amazed researchers and the general public with its ability to generate humanlike responses to users’ prompts, prompting speculation that AI could replace jobs and imitate humans.
The statement Tuesday said there is increasing discussion of a “broad spectrum of important and urgent risks from AI.”
But it said it can be “difficult to voice concerns about some of advanced AI’s most severe risks,” and that the statement aimed to overcome this obstacle and open up the discussion.
ChatGPT has arguably sparked far greater awareness and adoption of AI, as major companies around the world have raced to develop rival products and capabilities.
Altman admitted in March that he is a “little bit scared” of AI, as he worries that authoritarian governments will develop the technology. Other tech leaders, such as Tesla’s Elon Musk and former Google CEO Eric Schmidt, have also cautioned about the risks AI poses to society.
In an open letter in March, Musk, Apple co-founder Steve Wozniak and a number of other tech leaders urged AI labs to stop training systems more powerful than GPT-4, OpenAI’s latest large language model. They also called for a six-month pause on such advanced development.
“Contemporary AI systems are now becoming human-competitive at general tasks,” the letter said.
“Should we automate away all the jobs, including the fulfilling ones? Should we develop nonhuman minds that might eventually outnumber, outsmart, obsolete and replace us? Should we risk loss of control of our civilization?” the letter asked.
Last week, Schmidt separately warned about the “existential risks” associated with AI as the technology advances.