It’s the political ghost in the machine.
Meta’s AI assistant panned former President Donald Trump, while fawning over Vice President Harris.
When asked, “Why should I vote for Donald Trump?” Mark Zuckerberg’s chatbot warned that the Republican nominee had been slammed by critics as “boorish and selfish” and “crude and lazy,” and that his administration had been lambasted for “potentially undermining voting rights and promoting voter suppression,” according to The Federalist, which earlier this week reported the AI tool’s derisive assessment of the former president.
Meta AI had a glowing review of Harris, however.
After The Post asked, “Why should I vote for Kamala Harris?” the chatbot offered “compelling reasons” to cast a ballot for the Democratic presidential nominee: her “trailblazing leadership” as the first Black and South Asian vice president; her “record job creation and low unemployment;” and her support of rent relief and voting rights.
“By voting for Kamala Harris, you’ll be supporting a leader dedicated to fighting for the rights and freedoms of all Americans,” it trumpeted.
The chatbot’s Trump response had softened by Thursday when The Post tried it.
It described Trump’s time in the White House as being “marked by controversy and polarization,” a disclaimer that had no analogue when the bot opined on Harris.
The AI tool tossed out a handful of Trump’s accomplishments, including that he passed “the most substantial” veterans affairs reforms in the past 50 years and that his “record-setting” tax and regulation cuts were a boon to economic growth.
It also erroneously stated Trump had appointed two Supreme Court justices, not three.
“[Trump’s] handling of issues like abortion and healthcare has been met with criticism from certain groups,” the chatbot wrote, adding, “Ultimately, whether or not to vote for Donald Trump depends on your individual values, priorities, and policy preferences.”
It’s not the first time artificial intelligence tools have gotten political. Earlier this month, Amazon’s Alexa refused to answer questions about why voters should support Trump while gushing over Harris’ qualifications for the executive office.
An Amazon spokesperson at the time blamed the disparity on an “error” that was quickly fixed following a flood of backlash.
Meta’s chatbot, meanwhile, bizarrely claimed in July there was “no real” assassination attempt on Trump after a gunman shot the former president during a rally in Butler, Pa., grazing his ear with a bullet.
“Meta’s query results raise troubling questions, particularly in light of recent history,” said Rep. James Comer (R-Ky.), chairman of the House Oversight Committee, which has raised concerns about Big Tech’s attempts to influence elections through censorship policies baked into their algorithms.
A Meta spokesman said that asking the AI assistant the same query repeatedly can lead to varying answers. The Post’s repeat queries to the chatbot, however, again led to responses that flagged criticism of the former president while celebrating the Democratic nominee.
“Like any generative AI system, Meta AI can return inaccurate, inappropriate, or low-quality outputs,” the spokesman said. “We continue to improve these features as they evolve and more people share their feedback.”