Talk about a killer app.
Artificial intelligence models are vulnerable to hackers and could even be trained to kill humans if they fall into the wrong hands, ex-Google CEO Eric Schmidt warned.
The dire warning came Wednesday at a London conference in response to a question about whether AI could become more dangerous than nuclear weapons.
“There’s evidence that you can take models, closed or open, and you can hack them to remove their guardrails. So, in the course of their training, they learn a lot of things. A bad example would be that they learn how to kill someone,” Schmidt said at the Sifted Summit tech conference, according to CNBC.
“All of the major companies make it impossible for those models to answer that question,” he continued, appearing to refer to the possibility of a user asking an AI how to kill someone.
“Good decision. Everyone does this. They do it well, and they do it for the right reasons,” Schmidt added. “There’s evidence that they can be reverse-engineered, and there are many other examples of that nature.”
The predictions may not be so far-fetched.
In 2023, an altered version of OpenAI’s ChatGPT called DAN – an acronym for “Do Anything Now” – surfaced online, CNBC noted.
The DAN alter ego, which was created by “jailbreaking” ChatGPT, would bypass the chatbot’s safety instructions in its responses to users. In a bizarre twist, users first had to threaten the chatbot with death to get it to comply.
The tech industry still lacks an effective “non-proliferation regime” to ensure that increasingly powerful AI models can’t be taken over and misused by bad actors, said Schmidt, who led Google from 2001 to 2011.
He’s one of many Big Tech honchos who have warned of the potentially disastrous consequences of unchecked AI development, even as gurus tout its potential economic and technological benefits to society.
In November, Schmidt said the creation of AI-powered “perfect girlfriends” could worsen the loneliness and alienation of young men who prefer their company to that of humans.
The billionaire also said in May 2023 that AI poses an “existential risk” to humanity that could result in “many, many, many, many people harmed or killed” as it becomes more advanced.
Elon Musk, who has joined the AI and chatbot game with Grok and xAI, cautioned in 2023 that he saw “a non-zero probability of it going Terminator.”
“It’s not 0%,” Musk said. “It’s a small likelihood of annihilating humanity, but it’s not zero. We want that probability to be as close to zero as possible.”
Despite his warnings about the risks, Schmidt remains bullish on AI’s long-term benefits.
“I wrote two books with Henry Kissinger about this before he died, and we came to the view that the arrival of an alien intelligence that is not quite us and more or less under our control is a very big deal for humanity, because humans are used to being at the top of the chain,” he said.
“I think so far, that thesis is proving out that the level of ability of these systems is going to far exceed what humans can do over time,” Schmidt added.