A photograph taken on November 23, 2023, shows the logo of the ChatGPT application developed by US artificial intelligence research organization OpenAI on a smartphone screen (left) and the letters AI on a laptop screen in Frankfurt am Main, western Germany.
Kirill Kudryavtsev | AFP | Getty Images
The European Union on Friday agreed to landmark rules for artificial intelligence, in what is likely to become the first major regulation governing the emerging technology in the Western world.
Major EU institutions spent the week hashing out proposals in an effort to reach an agreement. Sticking points included how to regulate generative AI models, used to create tools like ChatGPT, and the use of biometric identification tools, such as facial recognition and fingerprint scanning.
Germany, France and Italy have opposed directly regulating generative AI models, known as "foundation models," instead favoring self-regulation by the companies behind them through government-introduced codes of conduct.
Their concern is that excessive regulation could stifle Europe's ability to compete with Chinese and American tech leaders. Germany and France are home to some of Europe's most promising AI startups, including DeepL and Mistral AI.
The EU AI Act is the first of its kind specifically targeting AI and follows years of European efforts to regulate the technology. The law traces its origins to 2021, when the European Commission first proposed a common regulatory and legal framework for AI.
The law divides AI into categories of risk, from "unacceptable," meaning technologies that must be banned, to high-, medium- and low-risk types of AI.
Generative AI became a mainstream topic late last year following the public release of OpenAI's ChatGPT. That came after the initial 2021 EU proposals and pushed lawmakers to rethink their approach.
ChatGPT and other generative AI tools like Stable Diffusion, Google's Bard and Anthropic's Claude blindsided AI experts and regulators with their ability to generate sophisticated and humanlike output from simple queries using vast quantities of data. They have drawn criticism over concerns about their potential to displace jobs, produce discriminatory language and infringe on privacy.