[Photo: An asphalt road with the words "Fake News Circus" written in chalk. Getty Images]

If AI Wrecks Democracy, We May Never Know

Propaganda doesn’t need to go viral to sway elections anymore. That makes AI’s impact more insidious and harder to detect.

(Bloomberg Opinion/Parmy Olson) -- This year promises to be a whopper for electoral democracy, with billions of people, more than 40% of the world’s population, eligible to vote in an election. But nearly five months into 2024, some government officials are quietly wondering why the looming risk of AI hasn’t, apparently, played out. Even as voters in Indonesia and Pakistan have gone to the polls, officials are seeing little evidence of viral deepfakes skewing an electoral outcome, according to a recent article in Politico, which cited “national security officials, tech company executives and outside watchdog groups.” AI, they said, wasn’t having the “mass impact” they expected.

That is a painfully shortsighted view. The reason? AI may be disrupting elections right now and we just don’t know it.       

The problem is that officials are looking for a Machiavellian version of the Balenciaga Pope. Remember the AI-generated images of Pope Francis in a puffer jacket that went viral last year? That is what many now expect from generative AI tools, which can conjure humanlike text, images and videos en masse: content just as easy to spot as earlier persuasion campaigns, such as the Macedonian content farms that supported Donald Trump or the Russian accounts that spread divisive political posts on Twitter and Facebook. So-called astroturfing was easy to identify when an array of bots was saying the same thing thousands of times.

It is much harder, though, to catch someone saying the same thing, slightly differently, thousands of times. That, in a nutshell, is what makes AI-powered disinformation so much harder to detect, and it’s why tech companies need to shift their focus from “virality to variety,” says Josh Lawson, who was head of electoral risk at Meta Platforms Inc. and now advises social media firms as a director at the Aspen Institute, a think tank.

Don’t forget, he says, the subtle power of words. Much of the public discourse on AI has been about images and deepfakes, “when we could see the bulk of persuasion campaigns could be based on text. That’s how you can really scale an operation without getting caught.” 

Meta’s WhatsApp makes that possible thanks to its “Channels” feature, which can broadcast to thousands. You could, for instance, use an open-source language model to generate and send legions of different text posts to Arabic speakers in Michigan, or message people that their local polling station at a school is flooded and that voting will take six hours, Lawson adds. “Now something like an Arabic language operation is in reach for as low sophistication as the Proud Boys,” he says.

The other problem is that AI tools are now widely used, with more than half of Americans and a quarter of Brits having tried them. That means regular people — intentionally or not — can create and share disinformation too. In March, for example, fans of Donald Trump posted AI-generated fake photos of him surrounded by Black supporters, to paint him as a hero of the Black community. 

“It’s ordinary people creating fan content,” says Renee DiResta, a researcher with the Stanford Internet Observatory who specializes in election interference. “Do they mean to be deceptive? Who knows?” What matters is that with the cost of distribution already at zero, the cost of creation has come down too, for everyone.

What makes Meta’s job especially challenging is that, to tackle this, it can’t just try to limit certain images from getting lots of clicks and likes. AI spam doesn’t need engagement to be effective. It just needs to flood the zone.

Meta is trying to address the problem by applying “Made with AI” labels to videos, images and audio on Facebook and Instagram starting this month, an approach that could become counterproductive if people begin to assume that everything without a label is real.

Another approach would be for Meta to focus on a platform where text is prevalent: WhatsApp. Back in 2018, a flood of disinformation targeting Fernando Haddad of the Workers’ Party spread via the messaging platform in Brazil. Supporters of Jair Bolsonaro, who won the presidency, were reported to have funded the mass targeting.

Meta could better combat a repeat of that scenario, which AI would put on steroids, if it brought its WhatsApp policies in line with those of Instagram and Facebook by specifically banning content that interferes with the act of voting. WhatsApp’s rules only vaguely prohibit “content that purposefully deceives” and “illegal activity.”

A Meta spokesman said that this means the company “would enforce on voter or election suppression.” 

But clearer content policies would give Meta more authority to tackle AI spam on WhatsApp channels. You need that “for proactive enforcement,” says Lawson. If the company didn’t think that was the case, it wouldn’t have more specific policies against voter interference for Facebook and Instagram.

Smoking guns are rare with AI tools thanks to their more diffuse and nuanced effects. We should prepare ourselves for more noise than signal as synthetic content pours onto the internet. That means tech companies and officials shouldn’t be complacent about a lack of “mass impact” from AI on elections. Quite the opposite. 
