Google DeepMind co-founder and CEO Demis Hassabis speaks during the Mobile World Congress (MWC), the telecom industry's biggest annual gathering, in Barcelona on February 26, 2024.
Pau Barrena | Afp | Getty Images
PARIS — DeepSeek's AI model is "probably the best work" to come out of China, Google DeepMind CEO Demis Hassabis said on Sunday, though he added that the company hadn't demonstrated any new scientific advances.
Last month, China's DeepSeek released a research paper that rattled global markets after claiming its AI model was trained at a fraction of the cost of leading AI players and on less-advanced Nvidia chips.
DeepSeek's announcement triggered an aggressive stock sell-off and sparked considerable debate over whether large tech firms are overspending on AI infrastructure.
Hassabis praised DeepSeek's model as "an impressive piece of work."
"I think it's probably the best work I've seen come out of China," Hassabis said at a Google-hosted event in Paris ahead of the AI Action Summit being hosted by the city.
The DeepMind CEO said the AI model shows that DeepSeek can do "extremely good engineering" and that it "changes things on a geopolitical scale."
However, from a technology standpoint, Hassabis said it was not a big change.
"Despite the hype, there's no actual new scientific advance ... it's using known techniques [in AI]," he said, adding that the hype around DeepSeek has been "exaggerated a little bit."
The DeepMind CEO said that the company's Gemini 2.0 Flash models, which Google released to everyone this week, are more efficient than DeepSeek's model.
DeepSeek's claims about its low costs and the chips it uses have been questioned by experts, who believe the development cost of the Chinese firm's models is higher than stated.
AGI five years away
The AI world has been debating for years when artificial general intelligence, or AGI, will arrive. AGI broadly refers to AI that is smarter than humans.
Hassabis said that the AI industry is "on the path towards AGI," which he describes as "a system that exhibits all the cognitive capabilities humans have."
"I think we're close now, you know, maybe we're only, you know, maybe five years or something away from a system like that, which would be pretty extraordinary," Hassabis said.
"And I think society needs to prepare for that and what implications that will have. And, you know, make sure that we derive the benefits from that, and the whole of society benefits from that, but also that we mitigate some of the risks, too."
Hassabis' comments mirror those of others in the industry who have suggested that AGI could be closer to reality.
OpenAI CEO Sam Altman said this year that he's "confident we know how to build AGI as we have traditionally understood it."
Still, many in the industry have also flagged multiple risks related to AGI. One of the biggest concerns is that humans will lose control of the systems they create, a view shared by prominent AI scientists Max Tegmark and Yoshua Bengio, who recently shared their concerns with CNBC over this form of AI.