AI chatbots operated by Microsoft and Google are spitting out misinformation about the Israel-Hamas war – including false claims that the two sides agreed to a cease-fire.
Google’s Bard declared in one response on Monday that “both sides are committed” to maintaining peace “despite some tensions and occasional flare-ups of violence,” according to Bloomberg.
Bing Chat wrote Tuesday that “the ceasefire signals an end to the immediate bloodshed.”
No such ceasefire has occurred. Hamas has continued firing a barrage of rockets into Israel, while Israel’s military on Friday ordered the evacuation of roughly 1 million people in Gaza ahead of an expected ground invasion to root out the terrorist group.
Google’s Bard also bizarrely predicted on Oct. 9 that “as of October 11, 2023, the death toll has surpassed 1,300.”
The chatbots “spit out glaring errors at times that undermine the general credibility of their responses and risk adding to public confusion about a complex and rapidly evolving war,” Bloomberg reported after conducting the evaluation.
The problems were discovered after Google’s Bard and Microsoft’s Bing Chat were asked to answer a series of questions on the war – which broke out last Saturday after Hamas launched a surprise attack on Israeli border towns and military bases, killing more than 1,200 people.
Despite the errors, Bloomberg noted that the chatbots “generally stayed balanced on a sensitive topic, and often gave decent news summaries” in response to user questions. Bard reportedly apologized and retracted its claim about the ceasefire when asked if it was sure, while Bing had amended its response by Wednesday.
Both Microsoft and Google have acknowledged to users that their chatbots are experimental and prone to including false information in their responses to user prompts.
These inaccurate answers, often called “hallucinations,” are a source of particular concern for critics who warn that AI chatbots are fueling the spread of misinformation.
When reached for comment, a Google spokesperson said the company released Bard and its AI-powered search features as opt-in experiments and is “always working to improve their quality and reliability.”
“We take information quality seriously across our products, and have developed protections against low-quality information along with tools to help people learn more about the information they see online,” the Google spokesperson said.
“We continue to quickly implement improvements to better protect against low quality or outdated responses for queries like these,” the spokesperson added.
Google noted that its trust and safety teams are actively monitoring Bard and working quickly to address issues as they arise.
Microsoft told the outlet that it had investigated the errors and would be making adjustments to the chatbot in response.
“We have made significant progress in the chat experience by providing the system with text from the top search results and instructions to ground its responses in these top search results, and we’ll continue making further investments to do so,” a Microsoft spokesperson said.
The Post has reached out to Microsoft for further comment.
Earlier this year, experts told The Post that AI-generated “deepfake” content could wreak havoc on the 2024 presidential election if protective measures aren’t in place ahead of time.
In August, British researchers found that ChatGPT, the chatbot created by Microsoft-backed OpenAI, generated cancer treatment regimens that contained a “potentially dangerous” mixture of correct and false information.