Google reportedly moved forward with the troubled launch of its AI chatbot Bard last month despite internal warnings from employees who described the tool as a “pathological liar” prone to spewing out responses riddled with false information that could “result in serious injury or death.”
Current and former employees allege that Google ignored its own AI ethics principles in a desperate effort to catch up to competitors such as Microsoft-backed OpenAI’s popular ChatGPT, Bloomberg reported on Wednesday.
Google’s push to develop Bard reportedly ramped up late last year after ChatGPT’s success prompted top brass to declare a “competitive code red,” according to the outlet.
Microsoft’s planned integration of ChatGPT into its Bing search engine is widely seen as a threat to Google’s dominant online search business.
Google rolled out Bard to US users last month in what it has described as an “experiment.”
However, many Google staffers voiced concerns before the rollout when the company tasked them with testing Bard to identify potential bugs or issues – a process known in tech circles as “dogfooding.”
Bard testers flagged concerns that the chatbot was spitting out information ranging from inaccurate to potentially dangerous.
One employee described Bard as a “pathological liar” after viewing erratic responses, according to a screenshot of an internal discussion obtained by Bloomberg. A second worker reportedly referred to Bard’s performance as “cringe-worthy.”
In one instance, a Google worker asked Bard for instructions on how to land a plane – only for the service to reply with advice likely to result in a crash, according to Bloomberg.
In another case, Bard purportedly answered a prompt about scuba diving with suggestions “which would likely result in serious injury or death.”
Google CEO Sundar Pichai raised eyebrows when he admitted that the company didn’t “fully understand” its own technology.
“You know, you don’t fully understand. And you can’t quite tell why it said this, or why it got [it] wrong,” Pichai said during an interview on “60 Minutes” last Sunday.
In February, an unnamed Google worker quipped on an internal forum that Bard was “worse than useless” and asked executives to not launch the chatbot in its current state.
“AI ethics has taken a back seat,” Meredith Whittaker, a former Google worker and current president of the privacy-focused Signal Foundation, told Bloomberg. “If ethics aren’t positioned to take precedence over profit and growth, they will not ultimately work.”
Employees who spoke to the outlet said Google executives opted to refer to Bard and other new AI products as “experiments” so that the public would be willing to overlook their early struggles.
As Bard advanced closer to a possible launch, Google purportedly relaxed standards for AI that are meant to dictate when a particular product is safe for public use.
In March, Jen Gennai, Google’s AI principles ops & governance lead, overrode an assessment by members of her own team which stated that Bard was not ready for release because of its potential to cause harm, sources told Bloomberg.
Gennai pushed back on the report in a statement, saying that internal reviewers suggested “risk mitigations and adjustments to the technology, versus providing recommendations on the eventual product launch.”
A committee of senior leaders from Google’s product, research, and business teams then determines whether the AI project should move forward and what adjustments are needed, Gennai added.
“In this particular review, I added to the list of potential risks from the reviewers and escalated the resulting evaluation to this multi-disciplinary council, which determined it was appropriate to move forward for a limited experimental launch with continuing pre-training, enhanced guardrails, and appropriate disclaimers,” Gennai said in a statement to The Post.
Google spokesperson Brian Gabriel said “responsible AI remains a top priority at the company.”
“We are continuing to invest in the teams that work on applying our AI Principles to our technology,” Gabriel told The Post.
At present, Google’s website for Bard still labels the tool as an “experiment.”
A “FAQ” section on the site openly declares that Bard “may display inaccurate information or offensive statements.”
“Accelerating people’s ideas with generative AI is truly exciting, but it’s still early days, and Bard is an experiment,” the site says.
Bard’s launch has already resulted in some embarrassment for the tech giant.
Last month, app researcher Jane Manchun Wong posted an exchange in which Bard sided with the Justice Department’s antitrust officials in pending litigation against Google by declaring its creators held a “monopoly on the digital advertising market.”
In February, social media users pointed out that Bard had provided an inaccurate answer about the James Webb Space Telescope in response to a prompt that was included in a company advertisement.
Scrutiny over Google’s Bard chatbot has intensified amid a broader debate over the potential risks related to the unrestrained development of AI technology.
Billionaire Elon Musk and more than 1,000 experts in the field signed an open letter calling for a six-month pause in the development of advanced AI until proper guardrails were in place.
Despite his safety concerns, Musk is rapidly advancing with the launch of his own AI startup as competition builds in the sector. Google and Microsoft are just two rivals in the increasingly crowded field.
In the “60 Minutes” interview, Pichai declared that AI would eventually impact “every product across every company.”
He also expressed his support for government regulations to address potential risks.
“I think we have to be very thoughtful,” Pichai said. “And I think these are all things society needs to figure out as we move along. It’s not for a company to decide.”