Google said Thursday it would “pause” its Gemini chatbot’s image generation tool after it was widely panned on social media for creating “diverse” images that weren’t historically or factually accurate — such as black Vikings, Native American popes and female NHL players.
Users blasted Gemini as “absurdly woke” and “unusable” after requests to generate representative images for subjects such as America’s Founding Fathers resulted in bizarrely revisionist pictures.
“We’re already working to address recent issues with Gemini’s image generation feature,” Google said in a statement posted on X. “While we do this, we’re going to pause the image generation of people and will re-release an improved version soon.”
Examples included an AI image of a black man who appeared to represent George Washington, complete with a white powdered wig and Continental Army uniform, and a Southeast Asian woman wearing papal attire despite the fact that all 266 popes throughout history have been white men.
In one shocking example uncovered by the Verge, Gemini even generated “diverse” representations of Nazi-era German soldiers, including an Asian woman and a black man decked out in 1943 military garb.
Google had earlier admitted that the chatbot’s erratic behavior needed to be fixed.
“We’re working to improve these kinds of depictions immediately,” Jack Krawczyk, Google’s senior director of product management for Gemini Experiences, told The Post.
“Gemini’s AI image generation does generate a wide range of people. And that’s generally a good thing because people around the world use it. But it’s missing the mark here.”
The Post has reached out to Google for further comment.
It was a major misstep for Google, which had just rebranded its main AI chatbot product under the Gemini name earlier this month and introduced heavily touted new features — including image generation.
The blunder also came days after OpenAI, which operates the popular ChatGPT, introduced a new AI tool called Sora that creates videos based on users’ text prompts.
Since Google has not published the parameters that govern the Gemini chatbot’s behavior, it is difficult to get a clear explanation of why it was inventing diverse versions of historical figures and events.
When asked by The Post to provide its trust and safety guidelines, Gemini stated that they were not “publicly disclosed due to technical complexities and intellectual property considerations.”
The chatbot also acknowledged it was aware of “criticisms that Gemini might have prioritized forced diversity in its image generation, leading to historically inaccurate portrayals.”
“The algorithms behind image generation models are complex and still under development,” Gemini said. “They may struggle to understand the nuances of historical context and cultural representation, leading to inaccurate outputs.”