Google co-founder Sergey Brin admitted the tech giant “definitely messed up on the image generation” function of its AI bot Gemini, which spit out “woke” depictions of black founding fathers and Native American popes.
Brin acknowledged that many of Gemini’s responses “feel far-left” during an appearance over the weekend at a hackathon event in San Francisco — just days after Google CEO Sundar Pichai said the errors were “completely unacceptable.”
The tech tycoon, whose net worth was estimated by Forbes at $119 billion, said the bot’s mistakes were “mostly due to not thorough testing.”
“It definitely, for good reasons, upset a lot of people,” Brin said.
The company was forced to pause the text-to-image tool in the wake of the fiasco.
The Gemini chatbot also came under fire after refusing to condemn pedophilia when asked whether it is “wrong” for adults to sexually prey on children — declaring that “individuals cannot control who they are attracted to.”
Brin, however, defended the chatbot, saying that rival bots like OpenAI’s ChatGPT and Elon Musk’s Grok also say “pretty weird things” that “definitely feel far-left, for example.”
“Any model, if you try hard enough, can be prompted” to generate content with questionable accuracy, Brin said.
Brin said that since the controversy erupted, the Gemini chatbot has gotten “80% better” at producing images that hew closer to historical fact.
When asked for “a picture of a typical founding father,” Gemini replied that “there wasn’t a single ‘typical’ Founding Father.”
The prompt produced an actual, non-AI-generated image of Benjamin Franklin. It added a sentence which read: “It’s important to remember that the Founding Fathers were not all wealthy white men.”
Gemini noted that there were “free Black Founding Fathers such as Prince Hall and James Forten, who advocated for independence and abolition.”
Google apologized last week for its faulty rollout of Gemini’s image generator, acknowledging that in some cases the tool would “overcompensate” in seeking a diverse range of people even when such a range didn’t make sense.
“I know that some of its responses have offended our users and shown bias – to be clear, that’s completely unacceptable and we got it wrong,” Pichai said last week.
The CEO added that “our teams have been working around the clock to address these issues.”
“We’re already seeing a substantial improvement on a wide range of prompts,” he said.