
Google’s former chief executive officer, Eric Schmidt, predicted that the most powerful artificial intelligence systems will eventually be housed on military bases surrounded by machine guns in the US and China.
“Eventually, in both the US and China, I think there will be a small number of extremely powerful computers with the capability for autonomous invention that will exceed what we want to give either to our own citizens without permission or to our competitors,” Schmidt told Noema Magazine in an interview published on Tuesday.
The former Google boss, who headed the search giant from 2001 to 2011, said that AI systems will gain knowledge at such a rapid pace in the next few years that they will eventually “begin to work together.”
Schmidt, whose net worth the Bloomberg Billionaires Index puts at $33.4 billion, is an investor in the Amazon-backed AI startup Anthropic.
He said that the proliferation of AI knowledge in the next few years poses challenges for regulators.
“Here we get into the questions raised by science fiction,” Schmidt said.
He described AI “agents” as “large language model[s] that can learn something new.”
“These agents are going to be really powerful, and it’s reasonable to expect that there will be millions of them out there,” according to Schmidt.
“So, there will be lots and lots of agents running around and available to you.”
He then pondered the consequences of agents “develop[ing] their own language to communicate with each other.”
“And that’s the point when we won’t understand what the models are doing,” Schmidt said, adding: “What should we do? Pull the plug?”
“It will really be a problem when agents start to communicate and do things in ways that we as humans don’t understand,” the 69-year-old former executive said. “That’s the limit, in my view.”
Schmidt said that “a reasonable expectation is that we will be in this new world within five years, not 10.”
He added that tech companies have been working with Western governments on regulating the new technology.
Schmidt said that Western companies dealing in AI are “well-run” and have “exposure to lawsuits,” which minimizes risk.
“It’s not as if they wake up in the morning saying let’s figure out how to hurt somebody or damage humanity,” he said.
But Schmidt warned that “there are evil people” in the world who “will use your tools to harm people.”
“All technology is dual use,” he said. “All of these inventions can be misused, and it’s important for the inventors to be honest about that.”
Schmidt said that the problem of misinformation spread through AI, as well as deepfakes, is “unsolvable.”
“There are many ways regulation can be attempted. But the cat is out of the bag, the genie is out of the bottle,” he said.
Last year, a group of tech leaders from OpenAI, Google DeepMind, Anthropic and other labs warned that future AI systems could pose a threat to humanity deadlier than pandemics and nuclear weapons.
“Mitigating the risk of extinction from A.I. should be a global priority alongside other societal-scale risks, such as pandemics and nuclear war,” the statement by the nonprofit Center for AI Safety read.
