Italian regulators have temporarily blocked the AI-powered chatbot ChatGPT, citing privacy concerns following a reported data breach and raising concerns about children's safety.
The Italian Data Protection Authority said it was taking provisional action "until ChatGPT respects privacy," including temporarily barring the company from processing Italian users' data.
The agency said Microsoft-backed OpenAI, the Silicon Valley-based company behind ChatGPT, had "no legal basis" for harvesting user data that was being gathered "to train the algorithms that power the platform."
The Italian government said it launched an investigation into OpenAI, which has 20 days to demonstrate that it is abiding by European Union privacy rules.
Failure to do so could result in fines of either 4% of the company's global annual revenue or $21.8 million, whichever is higher.
The agency also flagged what it said was OpenAI's lack of a filter to verify that children under the age of 13 were not using ChatGPT, according to the Financial Times.
The regulator alleged that children were being exposed to content that was unfit for their "level of self-consciousness."
The Post has sought comment from OpenAI.
The rise of ChatGPT shook the tech world after the AI-powered bot demonstrated advanced conversational abilities that mimicked those of humans.
The technology has been shown to be capable of composing emails, essays, and software code, stoking fears that it could replace people who work in knowledge-based industries.
Some school districts have banned students from using ChatGPT due to concerns that it could be exploited to cheat on exams.
The rapid advancements in AI have led some prominent tech observers to urge caution.
Elon Musk, the Tesla mogul who co-founded OpenAI nearly a decade ago, and Apple co-founder Steve Wozniak are among scores of tech entrepreneurs who signed an open letter calling for a pause in AI development and research.
The letter warns that AI systems with "human-competitive intelligence can pose profound risks to society and humanity," from flooding the web with disinformation and automating away jobs to more catastrophic future risks out of the realm of science fiction.
It says “recent months have seen AI labs locked in an out-of-control race to develop and deploy ever more powerful digital minds that nobody – not even their creators – can understand, predict, or reliably control.”
"We call on all AI labs to immediately pause for at least 6 months the training of AI systems more powerful than GPT-4," the letter says.
"This pause should be public and verifiable, and include all key actors. If such a pause cannot be enacted quickly, governments should step in and institute a moratorium."