ON TUESDAY in Paris, a popular Twitter user posted three images of French President Emmanuel Macron sprinting between riot police and protesters, surrounded by billows of smoke. The images, viewed more than 3 million times, were fake. But for anyone not following the growth of AI-powered image generators, that wasn’t so obvious. True to the user’s handle, “No Context French” added no label or caption. And as it turned out, some people believed they were legit. A colleague tells me that at least two friends in London, both working professionals, stumbled across the pictures and thought they were real photos from this week’s sometimes-violent pension reform strikes. One of them shared the image in a group chat before being told it was fake.
Social networks have been preparing for this moment for years. They’ve warned at length about deepfake videos and know that anyone with editing software can splice politicians into controversial fake photos. But the recent explosion of image-generating tools, powered by so-called generative AI models, puts platforms like Twitter, Facebook, and TikTok in unprecedented territory.
What might have taken 30 minutes or an hour to conjure up in Photoshop-style software can now take five minutes or less with a tool like Midjourney (free for the first 25 images) or Stable Diffusion (completely free). Neither tool restricts users from generating images of famous figures.1
Last year I used Stable Diffusion to conjure “photos” of Donald Trump playing golf with North Korea’s Kim Jong Un, none of which looked particularly convincing. But in the six months since then, image generators have taken a leap forward. The latest version of Midjourney’s tool can produce pictures that are very difficult to distinguish from real photographs.
The person behind the “No Context French” handle told me they used Midjourney for their Macron images. When I asked why they didn’t label the images as fake, they replied that anyone could simply “zoom in and read the comments to understand that these images are not real.”
They stood firm when I told them some people had fallen for the images. “We know that these images are not real because of all these defects,” they added, before sending me zoomed-in screenshots highlighting the images’ digital blemishes. When I asked about people who don’t inspect such details, especially on the small screen of a mobile phone, they didn’t reply.
Eliot Higgins, the co-founder of the investigative journalism group Bellingcat, took a similar line on Monday when he tweeted fake images he’d generated of Donald Trump being arrested, playing off widespread expectations of the former president’s detention. The images were viewed more than 5 million times and weren’t labeled as fake. Higgins subsequently said he’d been banned from using Midjourney.
While Twitter sleuths have pointed to the warped hands and dodgy faces of AI-generated pics, plenty of mainstream users remain vulnerable to this kind of fakery. Last October, WhatsApp users in Brazil found themselves flooded with misinformation about the integrity of their presidential election, which helped drive many to riot in support of Jair Bolsonaro after his election defeat. It’s much harder to spot blemishes and fakery when someone you trust has just shared an image, at the height of a news cycle, on a tiny screen. And because WhatsApp is fully encrypted, there’s little the app can do to police fake images that go viral through constant sharing among friends, families, and groups.
Higgins and “No Context French” were just pulling stunts, but their success in convincing multiple people that their posts were real illustrates the scale of a looming challenge for social media and for society more broadly.
TikTok on Tuesday updated its guidelines to bar AI-generated media that misleads.2 Twitter’s policy on synthetic media, last updated in 2020, says that users shouldn’t share fake images that may deceive people and that it “may label tweets containing misleading media.” When I asked Twitter why it hadn’t labeled the fake Trump and Macron images as they went viral, the company, helmed by Elon Musk, replied with a poop emoji, its new auto-reply for the media.3
Some Twitter users who framed the Trump images as real with attention-grabbing hashtags like “BREAKING” have been flagged by the site’s Community Notes feature, which lets users add context to certain tweets. But Twitter’s increasingly laissez-faire stance toward content under Musk suggests fake images could thrive on its platform more than on others.
Meta Platforms Inc. said in 2020 that it would remove AI-generated media designed to mislead, but as of Wednesday the company hadn’t taken down at least one “Trump arrest” image posted as real news by a Facebook user.4 Meta did not respond to a request for comment.
(Facebook has since fact-checked and removed the fake Trump photo. A spokesman said the company takes down manipulated content that meets specific criteria, including content that would “mislead an average person.”)
It’s clearly going to get harder to discern fake from real as generative AI tools like Midjourney and ChatGPT flourish. The founder of one of these AI tools told me last year that the answer to this problem was simple: We have to adjust. I already find myself looking at real photos of politicians on social media and half wondering if they’re fake. AI tools will make skeptics of many of us. For those more easily persuaded, such tools could spearhead a new misinformation crisis.
BLOOMBERG OPINION
1 OpenAI’s competing image-generation tool, DALL-E, restricts users from generating images of famous figures and war.
2 This is somewhat ironic considering TikTok’s own AI filters make users look like fake supermodels.
3 Musk recently underlined his disdain for the press by auto-replying to all media inquiries with a poop emoji. Twitter, which aims to save free speech and become the world’s new digital town square, has no press office.
4 Many commenters have debunked the images on the premise that Trump can’t run.