An onslaught of high-quality, AI-generated political “deepfakes” has already begun ahead of the 2024 presidential election – and Big Tech firms aren’t prepared for the chaos, experts told The Post.
The rise of generative AI platforms such as ChatGPT and the photo-focused Midjourney has made it easy to create false or misleading posts, pictures and even videos – from doctored footage of politicians making controversial speeches to bogus images and videos of events that never actually occurred.
Striking examples of AI-generated misinformation have already circulated online – including a deepfake video of President Biden verbally attacking transgender people, false pictures of former President Donald Trump resisting arrest and viral photos of Pope Francis wearing a Balenciaga puffer jacket.
The result, according to experts, is uncharted territory for tech firms such as Facebook, Twitter, Google-owned YouTube and TikTok, which are set to face an unprecedented swell of high-quality deepfake content from US social media users and nefarious foreign actors alike.
So far, the companies have provided few details about their plans to protect users.
The Silicon Valley giants "are not prepared" to contend with election-related deepfakes because they have "no incentive" to deal with the issue, according to Bradley Tusk, a political consultant and CEO of Tusk Venture Partners.
Generative AI advances have prompted a wave of deepfake images. (Twitter / Eliot Higgins)
"In fact, the incentives are virtually reversed — if someone creates a deepfake of Trump or Biden that ends up going viral, that's more engagement and eyeballs on that social media platform," Tusk told The Post.
“The platforms have been unable, and unwilling, to stop human-generated harmful content from spreading. This problem gets exponentially worse with the proliferation of generative AI,” he added.
Candidates have also begun making use of generative AI. Last month, Trump shared a deepfake video that depicted CNN anchor Anderson Cooper claiming the former president had just finished "ripping" the network "a new a—hole."
GOP presidential contender and Florida Gov. Ron DeSantis' campaign team shared an ad with manipulated pictures depicting Trump hugging Dr. Anthony Fauci during the COVID-19 pandemic.
AI pictures of Pope Francis decked out in a Balenciaga jacket fooled millions of users. (TikTok/@vince19visuals)
Misleading AI-generated posts from political campaigns are just one part of the problem.
The larger issue, according to many experts, is the likelihood that foreign adversaries and rogue elements will use generative AI to manipulate voters or otherwise undermine the integrity of US elections.
In May, a likely AI-generated photo of a fake explosion at the Pentagon went viral on Twitter – where it was shared by the Kremlin-backed news outlet RT – and prompted a temporary stock market selloff.
The rapid advancements in generative AI mean the "rate of misinformation could increase dramatically" compared with recent elections, according to Center for AI Safety director Dan Hendrycks, whose nonprofit recently organized a letter comparing the threat of AI to nuclear weapons and pandemics.
One fake video showed President Biden ranting against transgender people.
"They were creating content without today's AI systems," Hendrycks said. "Imagine how much more efficient they will be once they have AI to help them generate stories, rewrite them to be more persuasive, and tailor them for specific audiences."
Some of the tech world's most prominent figures, including Elon Musk and OpenAI CEO Sam Altman, have flagged AI-generated misinformation as one of the most serious risks posed by the burgeoning technology.
In May, Altman told a Senate panel that he was "nervous" about the potential of AI disrupting elections and called it a "significant area of concern" that required federal regulation.
Other experts, including the "Godfather of AI" Geoffrey Hinton and Microsoft chief economist Michael Schwarz, have also publicly warned of bad actors using AI to manipulate voters during elections.
When reached for comment, a Google representative pointed to recent remarks from CEO Sundar Pichai, who touted the company's investments in tools to detect and label synthetic content.
An AI-generated photo of a fake Pentagon explosion triggered a temporary stock selloff in May. (Twitter/@KobeissiLetter)
Last month, the company said it would begin labeling AI-generated images with identifying metadata and watermarks.
YouTube's content policies ban the posting of content that has been doctored to mislead other users, and the platform removes offending posts through machine learning and human reviewers.
A TikTok spokesperson noted the ByteDance-owned app rolled out a synthetic media policy earlier this year, which requires any AI-generated or otherwise manipulated content that depicts a realistic scene to be clearly labeled.
"We're firmly committed to developing guardrails for the safe and transparent use of AI, which is why we announced a new synthetic media policy in March 2023," the TikTok spokesperson said in a statement. "Like most of our industry, we continue to work with experts, monitor the progression of this technology, and evolve our approach."
A representative for Snapchat said the company "continually evaluate[s] our policies to ensure our protections keep pace as technologies evolve, including AI."
Some of the fake photos showed Trump "resisting arrest." (Twitter / Eliot Higgins)
Representatives for other major tech platforms, including Twitter, Meta and Microsoft, didn’t return requests for comment.
Aside from the unprecedented technical difficulty of combating AI-generated content, tech firms must walk a fine line between blocking misinformation and veering into censorship, according to Sheldon Jacobson, a public policy consultant and professor of computer science at the University of Illinois at Urbana-Champaign.
Efforts to stop AI deepfakes could be construed as political bias against a particular party or candidate, Jacobson said.
Moreover, the tech firms have "very little control" over the actions of foreign adversaries who decide to misuse the technology for nefarious reasons.
"We aren't China where we're trying to control things," Jacobson said. "This is a free communication system – but with that come risks, and there is going to be misinformation communicated. And now that you bring in generative AI, this is a whole new level."
A whole set of AI-generated photos featuring Donald Trump circulated earlier this year. (Twitter / Eliot Higgins)
With the election still more than a year away, Jacobson said tech leaders at major firms are likely scrambling to develop a strategy to combat AI-generated deepfakes.
"I don't think they're saying anything because they don't know what they're going to do. That's the problem," he added.
In Tusk's view, Big Tech firms won't take decisive action to stop the flow of misinformation through AI-generated content unless lawmakers repeal Section 230 – the controversial clause that shields firms from liability for damaging content published on their platforms.
In May, the Supreme Court decided to leave Section 230 intact in a pair of cases that were considered the most significant challenges to the liability shield to date. However, lawmakers from both parties are still calling for Section 230 to be altered or repealed.
"If the financial repercussions of doing nothing are big enough, the platforms will actually act and help prevent harmful content that has a negative impact on our democracy," Tusk said.