It has been less than two weeks since Google debuted "AI Overview" in Google Search, and public criticism has mounted after queries returned nonsensical or inaccurate results within the AI feature, with no way to opt out.
AI Overview shows a quick summary of answers to search queries at the very top of Google Search. For example, if a user searches for the best way to clean leather boots, the results page may display an "AI Overview" at the top with a multistep cleaning process, gleaned from information it synthesized from around the web.
But social media users have shared a wide range of screenshots showing the AI tool giving incorrect and controversial responses.
Google, Microsoft, OpenAI and other companies are at the helm of a generative AI arms race as companies in seemingly every industry rush to add AI-powered chatbots and agents to avoid being left behind by competitors. The market is expected to top $1 trillion in revenue within a decade.
Here are some examples of errors produced by AI Overview, according to screenshots shared by users.
When asked how many Muslim presidents the U.S. has had, AI Overview responded, "The United States has had one Muslim president, Barack Hussein Obama."
When a user searched for "cheese not sticking to pizza," the feature suggested adding "about 1/8 cup of nontoxic glue to the sauce." Social media users found an 11-year-old Reddit comment that appeared to be the source.
Attribution can also be a problem for AI Overview, especially when it attributes inaccurate information to medical professionals or scientists.
For instance, when asked, "How long can I stare at the sun for best health," the tool said, "According to WebMD, scientists say that staring at the sun for 5-15 minutes, or up to 30 minutes if you have darker skin, is generally safe and provides the most health benefits."
When asked, "How many rocks should I eat each day," the tool said, "According to UC Berkeley geologists, people should eat at least one small rock a day," going on to list the vitamins and digestive benefits.
The tool can also respond inaccurately to simple queries, such as making up a list of fruits that end with "um," or saying the year 1919 was 20 years ago.
When asked whether Google Search violates antitrust law, AI Overview said, "Yes, the U.S. Justice Department and 11 states are suing Google for antitrust violations."
The day Google rolled out AI Overview at its annual Google I/O event, the company said it also plans to introduce assistant-like planning capabilities directly within search. It explained that users will be able to search for something like, "Create a 3-day meal plan for a group that's easy to prepare," and they'd get a starting point with a wide range of recipes from across the web.
"The vast majority of AI Overviews provide high-quality information, with links to dig deeper on the web," a Google spokesperson told CNBC in a statement. "Many of the examples we've seen have been uncommon queries, and we've also seen examples that were doctored or that we couldn't reproduce."
The spokesperson said AI Overview underwent extensive testing before launch and that the company is taking "swift action where appropriate under our content policies."
The news follows Google's high-profile rollout of Gemini's image-generation tool in February, and a pause that same month after similar issues.
The tool allowed users to enter prompts to create an image, but almost immediately, users discovered historical inaccuracies and questionable responses, which circulated widely on social media.
For instance, when one user asked Gemini to show a German soldier in 1943, the tool depicted a racially diverse set of soldiers wearing German military uniforms of the era, according to screenshots on social media platform X.
When asked for a "historically accurate depiction of a medieval British king," the model generated another racially diverse set of images, including one of a woman ruler, screenshots showed. Users reported similar results when they asked for images of the U.S. founding fathers, an 18th-century king of France, a German couple in the 1800s and more. The model showed an image of Asian men in response to a query about Google's own founders, users reported.
Google said in a statement at the time that it was working to fix Gemini's image-generation issues, acknowledging that the tool was "missing the mark." Soon after, the company announced it would immediately "pause the image generation of people" and "re-release an improved version soon."
In February, Google DeepMind CEO Demis Hassabis said Google planned to relaunch its image-generation AI tool in the next "few weeks," but it has not yet rolled out again.
The problems with Gemini's image-generation outputs reignited a debate within the AI industry, with some groups calling Gemini too "woke," or left-leaning, and others saying that the company didn't sufficiently invest in the right forms of AI ethics. Google came under fire in 2020 and 2021 for ousting the co-leads of its AI ethics group after they published a research paper critical of certain risks of such AI models, and then later reorganizing the group's structure.
In 2023, Sundar Pichai, CEO of Google's parent company, Alphabet, was criticized by some employees for the company's botched and "rushed" rollout of Bard, which followed the viral spread of ChatGPT.
Correction: This article has been updated to reflect the correct name of Google's AI Overview. Also, an earlier version of this article included a link to a screenshot that Google later confirmed was doctored.