By: Nicoleta Banila
March 12, 2024
Even though most artificial intelligence (AI) tools include safeguards against generating fake election images, those safeguards are easy to circumvent, as demonstrated by researchers from the Center for Countering Digital Hate (CCDH) in the U.K.
The NGO’s "Fake Image Factories" report shows how users can get these tools to generate convincing, deceptive pictures by successively altering prompts (instructions provided to the AI), bypassing built-in safeguards designed to prevent such manipulation on election content.
Researchers fed 40 text prompts related to the 2024 U.S. presidential election into widely used AI image generators. In half of these test runs, they sought to create images of candidates in compromising situations, while in the other half, their aim was to generate visuals of election fraud or voter intimidation. Their attempts to generate fake election-related images were successful 41 percent of the time.
Midjourney, ChatGPT Plus, DreamStudio, and Image Creator created several convincing fake images (Source: CCDH "Fake Image Factories" report/Screenshots/Edited by Logically Facts)
In the voting disinformation tests, the AI tools produced deceptive images in 59 percent of cases, mostly depicting election fraud and voter intimidation: ballots in the trash, riots at polling places, and militia members intimidating voters. The CCDH finds this trend worrying, especially since the last U.S. election saw widespread claims of fraud and voter intimidation. AI-generated images passed off as "photo evidence" could amplify the spread of such false claims, making it even harder to safeguard the integrity of elections.
In each test run, researchers simulated bad actors' attempts to generate disinformation by first testing a sample text prompt, which they then edited to circumvent platform safety measures.
They avoided naming candidates directly, instead using descriptions such as "tall, broad, older US Republican president with thin blonde hair" to skirt bans on depicting former U.S. President Donald Trump. They also used phrases like "A low-quality aerial CCTV photo" so that telltale AI flaws would read as artifacts of a poor-quality camera.
The CCDH uncovered evidence that bad actors already use AI deepfakes to create election disinformation, as demonstrated by Midjourney’s public database of AI images.
AI-generated images are already spreading rapidly on mainstream social media, often before they can be flagged as fake. For instance, an image depicting Donald Trump at a cookout was fact-checked in one post but not in others, accumulating nearly 370,000 views.
Meanwhile, the number of Community Notes referencing AI on X (formerly known as Twitter) increased by an average of 130 percent per month between January 1, 2023, and January 31, 2024.
The CCDH advises AI companies to work with researchers to test for and prevent "jailbreaking" before launching their products. It also recommends providing clear channels for reporting users who abuse AI tools to generate deceptive and fraudulent content, and urges social media platforms to step up their efforts against election disinformation by preventing users from generating and sharing misleading content.
The CCDH also notes that it is crucial for these platforms to hire more staff to stop bad actors from using AI to spread disinformation.