The Last of Us, 2023 (TV still)

Is AI really a threat to democracy?

As the world gears up for one of the most important election years in history, anxieties about deepfakes and voter manipulation are running high. But is this a serious cause for concern, or just the latest round in a moral panic about misinformation?

It’s 6 May 2024, the night before the UK general election. Labour is riding high in the polls, a Jessie Ware cover of “Things Can Only Get Better” has soared to the top of the charts, and it looks as though a landslide victory is guaranteed… but wait a second. At the eleventh hour, a video is posted online that shows Keir Starmer giving a speech at the Jimmy Savile Memorial Hall. Grinning wildly and rubbing his crotch, he announces a pair of new flagship policies: 1. “God Save the Queen” will be replaced as Britain’s national anthem with “Vossi Bop”, and 2. Everyone over the age of 65 who voted for Brexit will be euthanised – “as humanely as possible”, he assures a crowd of oat milk cappuccino-quaffing, blue-haired Hackney-dwellers. A crack squad of Labour’s best fact-checkers rush to prove that the video is AI-generated – just look at his hands, for God’s sake! He has 14 fingers! – but it’s too late. The deepfake spreads like wildfire, and there isn’t enough time to undo the damage. When outraged Britons head to the polls the next day, Labour’s 40-point lead evaporates, the Tories keep their majority and, very sadly, Keir Starmer dies of a broken heart.

This scenario, while far-fetched, is not a million miles away from some of the warnings being made about the dangers of AI misinformation. This is the spectre looming over 2024, the most significant election year in history, with almost half of the world’s population – in countries including the USA, Mexico, India, and (almost certainly) the UK – heading to the polls. As AI makes propaganda easier to produce and more sophisticated than ever, will politics be transformed beyond recognition? Or is the idea just the latest round of a mostly baseless panic about disinformation which has already been raging for years?

In what might be a taste of things to come, AI has already emerged as a force in politics, although the extent of its impact remains unclear. Bangladesh’s recent election was marred by a spate of deepfake videos, all of which favoured the incumbent prime minister, Sheikh Hasina – who went on to win her fourth consecutive term. A similar narrative played out in Slovakia last September. Just two days before its parliamentary election, an audio deepfake was posted online which appeared to show Michal Šimečka – the leader of the nation’s largest progressive party – plotting to rig the vote. As reported in Wired, this took place during a 48-hour moratorium on media reporting and political campaigning, which made it difficult to debunk. Šimečka lost the election. And while there is no evidence that deepfakes played a role in either outcome, in both cases the factions that stood to benefit the most from AI misinformation secured a victory.

In the context of the US or the UK, this kind of misinformation campaign might be more effective at the local level. If two days before the presidential election a deepfake Joe Biden used a racial slur or described Taylor Swift as “the epitome of basic white feminism”, it would be shared by many people who already hate him. But according to Zeve Sanderson, the executive director of NYU’s Center for Social Media and Politics, the national media would immediately leap into action and correct the record. “I’m more nervous about smaller elections with candidates who are less known, especially in the American context where we’ve seen the destruction of local media – there’s not going to be any incentives for the media ecosystem to fact-check a random state legislature race in a small state,” he tells Dazed. The implementation of AI misinformation could also be more localised and targeted: just last month, AI-generated robocalls went out to people in New Hampshire telling them – in a near-flawless imitation of Biden’s voice – that they shouldn’t vote in the forthcoming primary election.

AI could make misinformation campaigns more credible and persuasive, according to Ardi Janjeva, a research associate at the UK-based Centre for Emerging Technology and Security. “A deepfake video on its own might be easy to disprove, if fact-checkers can provide an article which states that the prime minister – or whoever – was somewhere else at the time. But if you’ve got the video plus a fake article saying that they were there, a voice recording of an interview he did at the same location, and some still images thrown in as well, you add to the picture of convincingness,” he tells Dazed.

“Misinformation constitutes a small minority of the average person’s media diet – most of what people consume online isn’t news or political at all” – Zeve Sanderson

The spectre of an information landscape awash with deepfakes, in which we can no longer believe our own lying eyes, is the most attention-grabbing danger posed by AI. But for the most part, the technology is likely to accelerate existing trends. Thanks to generative AI, “bots” – computer programmes with the primary goal of amplifying certain messages – will become cheaper to run on a mass scale, and potentially more persuasive. “In recent Taiwanese elections, we saw networks of AI-enabled chatbots spreading political messages, which is very much the kind of modus operandi we’ve seen in recent years. AI just allows you to be less reliant on humans to manage the process end to end,” Ardi says.

Whether it’s deepfakes or bot campaigns, AI is already making it easier to spread misinformation – not just for foreign states or domestic political actors, but for private groups and individuals, whether motivated by a desire to cause mischief or by a serious political agenda. The Centre for Emerging Technology and Security is currently exploring how these problems might be mitigated: some of the proposed solutions include watermarking technology, which would label content as AI-generated, and new legal frameworks for regulating its use. The recent Taiwan election is an example of what happens when countries tackle the problem head-on: while there were reportedly attempts by the CCP to influence voters, the impact was ultimately judged to be minimal, which, as Ardi suggests, could indicate that the effort to raise awareness and mitigate the risks was successful.

On the other hand, misinformation has been a locus of anxiety for years now, and there is still little evidence that it bears a causal relationship with electoral outcomes. There was Russian interference in the Brexit referendum and the 2016 US presidential election, for example, but it’s yet to be established that it played a significant role in either result – arguably, the people behind the Remain campaign and Hillary Clinton’s presidential run have leant heavily on this explanation as a way of absolving themselves of their own political failures. “Most research on voting behaviour suggests that the effects of campaigns, including disinformation campaigns, tend to be small and negligible unless an election is very close, so it is unlikely, albeit not impossible, that an upcoming election will be swayed by malicious use of AI tools,” Professor Cristian Vaccari, who researches the role of digital technology in news media and political communication, tells Dazed.

“Misinformation constitutes a small minority of the average person’s media diet – most of what people consume online isn’t news or political at all, and among that, an even smaller percentage constitutes misinformation,” says Zeve. Misinformation is heavily concentrated in a small percentage of the population, he adds, which means it may be a more significant factor in driving extremism than in influencing elections – if you are inclined to believe a deepfake image of Joe Biden sacrificing a child in a Satanic ritual, you probably weren’t going to lend him your vote in the first place. “Most people have made up their mind going into election day,” Zeve says. “In a highly polarised environment, it’s hard to imagine that there’s going to be that many people for whom a single misinformation story would switch their vote from candidate to candidate.”

But even if voters aren’t liable to change their minds, they could still be manipulated in other ways – deceived by deepfakes of the politicians they admire, rather than led to believe lies about those they already hate. Imagine an enthusiastic Republican receiving a personal phone call from Donald Trump, thanking them for their support and informing them at the last minute that the date of the election has been changed. This might only work on the most gullible, but the existence of a thriving scam phone call industry suggests that there are plenty of people who would be vulnerable to this kind of trickery.

Given that people are less susceptible to fake news than is typically assumed, is there any reason to believe that the introduction of AI will fundamentally alter this dynamic? For this to be the case, AI-generated misinformation would have to be substantially different to what we have seen before. According to Zeve, it is likely to increase the volume of misinformation. “The more misinformation that’s out there, the higher the probability that a misinformation narrative will enter into the mainstream information ecosystem,” he says. This is particularly concerning at a time when major social media companies like Meta, Twitter and TikTok are all scaling back their Trust and Safety departments, which are intended to tackle these problems.

“In a highly polarised environment, it’s hard to imagine that there’s going to be that many people for whom a single misinformation story would switch their vote” – Zeve Sanderson

If the internet does become awash with deepfakes, one potential side effect is what Zeve describes as “the liar’s dividend” – a scenario where politicians use the existence of AI to dismiss footage, photographs and recordings of events which really did take place. If Donald Trump’s notorious “grab ’em by the pussy” recording was leaked today, it’s not hard to imagine him – and his supporters – dismissing it as a deepfake. This would be easier to pull off with an audio recording, which comes with less context – such as location or time – than a video or an image (the reverse is also true: fake audio is harder to disprove than fake video). Even if they’re not entirely convincing, deepfakes could become a way of muddying the waters, casting doubt on the veracity of genuine indiscretions, crimes and atrocities, and allowing people to pick and choose which sources of information they judge to be legitimate. This is already a prevalent dynamic, but it could become a lot worse.

It’s doubtful whether AI will ever become an effective tool for manipulating public opinion or changing the outcome of elections, but its existence could still have a profound effect on our political landscape. “Emerging research, including some I have contributed to, points to a troubling scenario, where exposure to inauthentic, AI-generated content, or to alarmist news coverage about the effects of disinformation campaigns, may reduce trust in news, satisfaction with democracy, and confidence in electoral outcomes,” says Professor Vaccari. “Ultimately, the biggest damage disinformation campaigns may do to news and democracy may not be to change many people’s minds about who to vote for in a certain election, but to erode citizens’ confidence that truth can be discerned from falsehood, that other citizens can be trusted to make sensible electoral choices, and that the outcomes of elections can therefore be trusted and accepted.”

When it comes to AI misinformation, it could be that we have the most to fear from fear itself: the more panicky and alarmist we are about it, the more we play into the hands of people who want to sow chaos and distrust. Now if you’ll excuse me, I have to get back to editing a rather incriminating video of Keir Starmer describing the Barbie movie as “neoliberal rubbish”.
