Joe Biden sticks his tongue out in a deepfake image. Via Twitter

‘Digital wildfire’: how deepfakes became a new frontier for global conflict

US Special Forces are among those investing in the controversial technology to conduct psyops and spread ‘floods of falsehood’ – here, an expert tells us what this means for the future of post-truth warfare

Over the last few years, it’s often been said that we’re entering a “post-truth” society, but when the phrase was named word of the year by Oxford Dictionaries in 2016, deepfake videos hadn’t even been invented yet. Now, the truth (particularly online, where verified facts have always been hard to come by) is even more difficult to separate from fiction. World leaders can be made to deliver speeches announcing imminent alien invasions or the return of military conscription. People can be inserted into porn without their consent and without ever taking their clothes off. Whole alternate histories can be written and reproduced, with video footage to “prove” that they actually happened. Loved ones can even be “brought back to life”.

You may know deepfakes from fun novelty videos – Tom Cruise singing “Tiny Dancer” to Paris Hilton, Joe Biden reminiscing about smoking “dirt” weed – but, as they continue to improve, it’s undeniable that they’re changing the face of the internet and causing us to question our understanding of reality. For that reason, it should also be obvious why we don’t want them to fall into the wrong hands. Enter: the US Special Forces. More specifically, the United States Special Operations Command (or SOCOM), the organisation responsible for overseeing some of America’s most secretive clandestine operations.

In federal contracting documents analysed by the Intercept earlier this month, SOCOM expresses a desire to procure “next generation” deepfake technology in order to “generate messages and influence operations” – in other words, to conduct global propaganda campaigns and spread disinformation. This aim is detailed alongside other nefarious goals, like hacking into internet-enabled devices to listen in on foreign citizens, but it’s particularly significant because it signals the open adoption of deepfakes as a tool for psyops, which – as experts warn – could be turned on US citizens just as easily as they’re used against enemies outside the country’s borders.

Surely it’s illegal to use AI-powered video forgeries to influence the behaviour of citizens at home or abroad though, right? Well, not exactly. As with many new technologies, the legality of deepfakes is a grey area, especially when it comes to newly emerging uses, like... psychological warfare. Not that it would necessarily matter if there were established rules. As reported by the Washington Post, the Pentagon was forced to review how it performed “clandestine information warfare” in 2022, after Twitter and Facebook removed fake accounts it allegedly ran in violation of the platforms’ terms of service. This was just one example in an ongoing controversy over military and intelligence agencies’ rule-breaking online behaviour, which is said to be eroding the government’s credibility.

Let’s take a step back, though. Why has SOCOM chosen now to jump on the deepfake bandwagon in the first place, and what does it mean for the future of online (dis)information? Who else has our gullible, content-addled brains in the crosshairs? Can we learn to spot deepfake disinformation? We asked an expert to help us clear things up.

SOCOM’S INVESTMENT IN DEEPFAKES ISN’T A ONE-OFF

“The early hype around deepfakes was all about this scenario – disinformation by governments to destabilise and confuse,” explains Sam Gregory, executive director at the human rights organisation WITNESS, which has spent the last five years researching the potential harms of the technology. “The reality so far [is] that we haven’t seen as much use of deepfakes in disinformation as the hype suggested. But the pace of research and development of tools is rapid and we are in the final stages of being able to prepare adequately.”

SOCOM isn’t alone in adopting deepfakes for unconventional warfare. Already, we’ve seen the widespread deployment of simpler “synthetic media” tools for disinformation, says Gregory, such as AI avatars for political sock puppet accounts on social media. Venezuela, meanwhile, has already started rolling out deepfake news anchors to deliver AI-generated propaganda, with similar schemes popping up in support of governments in China and Burkina Faso. AI-generated audio and images are even more widespread, thanks to commercially distributed tools and the ease of generating convincing content from limited samples.

Maybe the most high-profile case of using deepfakes in a political context so far, though, comes out of the Russian invasion of Ukraine. Back in March 2022, a video of Volodymyr Zelensky was shared widely across social media and broadcast via a hacked TV channel – it appeared to show the Ukrainian president announcing his surrender to Russian forces. Luckily, there was plenty of warning, and the low-quality video was quickly debunked. But “we shouldn’t assume that most cases will be like this one, so easily and rapidly challenged”, says Gregory, adding that most governments, journalists, and media companies aren’t equipped with the tools to detect deepfakes.

‘MALICIOUS ACTORS’

In 2021, the FBI issued a warning that “malicious actors” would “almost certainly leverage synthetic content for cyber and foreign influence operations” in the coming months. Cue the Spider-Man-pointing-at-Spider-Man meme, in which the US and its “malicious” actors are revealed, to no one’s surprise, to be one and the same. Hypocritical? Yes. And that’s not to mention the ethical implications of deepfakes, which throw up a whole range of moral dilemmas even before they’re militarised.

Gregory agrees that SOCOM’s endorsement of deepfakes “sends the wrong message” to US citizens who want to manipulate the truth for themselves: “If the government can do it, why not us?” Naturally, it will also spark a global arms race to produce the most convincing forms of disinformation, with world governments targeting both foreign powers and their own populations.

For those who are already suspicious of the government (and honestly, who isn’t at this point?), the use of deepfakes also undermines any legitimate public communications that authorities share. “We know already that [their] increasing availability is used as an excuse to dismiss true accounts as potentially deepfaked,” he adds. This is known as the “liar’s dividend”, and it has a “corrosive effect” on societal trust.


HOW SPECIAL FORCES COULD ACTUALLY DEPLOY DEEPFAKES

Much of the rhetoric around adopting deepfakes is focused on an international war for informational dominance, but that doesn’t get to the heart of the most likely (and insidious) ways that they might be deployed. “The most important ways are in fact within [authorities’] own countries,” says Gregory. 

As part of its “Prepare, Don’t Panic” initiative, WITNESS has spent years talking to activists and journalists around the world – many of whom are already targeted by their own military and intelligence services – about their concerns surrounding deepfakes. “They pointed out how existing forms of manipulated media are used to accuse them of corruption, to make fake sexual images to compromise them, and also to undermine their credibility and their evidence,” he says. “So we should fear these tools in the hands of any military or intelligence service.”

Another approach to muddy the waters around inconvenient truths is to share “floods of falsehood” – ie “contradictory, multiple accounts of an event [that] force people to throw up their hands about finding the truth”. Tied in with the liar’s dividend, this could make real information (ranging from citizen journalism to official broadcasts) virtually impossible to locate amid an ocean of lies.

THINGS ARE ONLY GOING TO GET MORE CONFUSING

Deepfake videos were first introduced to the world in 2017, and while they’ve come on in leaps and bounds since then, the technology is still effectively in its infancy. “Because of the research developments – and now the increasing investment because of generative AI hype – they are likely to get more realistic, require less training data to make a convincing image of any real person, and increasingly become accessible via mobile,” says Gregory.

Basically, deepfakes are here to stay, and if we think they’re disruptive now, we need to realise that this is only the beginning. “The future of deepfakes is more of them, more integrated with other media creation,” he adds, keen to stress that this will include useful and creative media that will “seem perfectly normal to us within years”. All the more reason to prepare methods to separate the good from the bad.

HOW WE CAN RECOGNISE AND RESIST DEEPFAKE PROPAGANDA

Needless to say, spotting synthetic media – and recognising whether it’s made with malicious intent – isn’t always going to be easy. As the technology progresses, videos are going to crawl out of the uncanny valley and become indistinguishable from reality, and the sheer volume of fake media will pose problems of its own. Unchecked, disinformation could spread like “digital wildfire”.

We can’t put the onus on individuals to spot deepfakes, says Gregory. Not everyone has the time or resources to “parse out the pixels and spot the forensic clues of synthesis by themselves”. “[This] is why we need to make sure that they are as detectable as possible, and ensure the whole pipeline of producing AI-generated media works to counter malicious usages.”

Technologists, the creators of media, and legislators all play a vital role in shaping this ideal future of technological transparency. “We need to have structured ways that people can understand how media was made, embedded and durably travelling with media,” he explains. “This is often called ‘authenticity and provenance infrastructure’. These approaches show you the recipe for how media was made, and how it was edited.”
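To make the “recipe” idea concrete: real provenance standards (the C2PA specification is one prominent example) attach cryptographically signed manifests to media files. The Python sketch below is only a toy illustration of that concept – the function names, manifest fields, and hash-chaining scheme here are invented for this example, not drawn from any actual standard.

```python
import hashlib
import json

def sha256_hex(data: bytes) -> str:
    """Hex digest used to fingerprint media bytes and manifest states."""
    return hashlib.sha256(data).hexdigest()

def new_manifest(media: bytes, creator: str, tool: str) -> dict:
    """Start a provenance 'recipe' for a freshly created piece of media."""
    return {
        "content_hash": sha256_hex(media),
        "history": [{"action": "created", "by": creator, "tool": tool}],
    }

def record_edit(manifest: dict, edited_media: bytes, action: str, tool: str) -> dict:
    """Append an edit step, chained to the previous manifest state so that
    tampering with earlier entries invalidates everything that follows."""
    prev_state = sha256_hex(json.dumps(manifest, sort_keys=True).encode())
    updated = dict(manifest)
    updated["history"] = manifest["history"] + [
        {"action": action, "tool": tool, "prev_state": prev_state}
    ]
    updated["content_hash"] = sha256_hex(edited_media)
    return updated

def matches(manifest: dict, media: bytes) -> bool:
    """Check whether the media we received is the version the recipe describes."""
    return manifest["content_hash"] == sha256_hex(media)

# A (stand-in) image is created, cropped, and then verified.
original = b"...raw image bytes..."
manifest = new_manifest(original, creator="alice", tool="camera-app/1.0")
cropped = original + b" (cropped)"
manifest = record_edit(manifest, cropped, action="crop", tool="editor/2.3")
print(matches(manifest, cropped))   # True: media matches its declared history
print(matches(manifest, original))  # False: this copy diverges from the recipe
```

A real system would sign each entry with the creator’s or tool’s private key rather than rely on bare hashes, but even this skeleton shows the shape of the idea: the history travels with the file, and any copy that doesn’t match it can be flagged.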

That being said, individuals can learn to be more careful about how they interact with online media, and not take every video that crosses their TikTok feed at face value. How? A good place to start is the SIFT technique: Stop, Investigate the source, Find better coverage, Trace claims to the original context. To summarise, Gregory says: “Don’t let emotions get the better of you with a single source that’s too good to be true.”

WHAT LAWMAKERS ARE DOING ABOUT DEEPFAKES SO FAR

As people wake up to the dangers of unregulated deepfakes, new proposals for rules and regulations are constantly being introduced. The EU, for example, has included guidelines on synthetic media in its upcoming AI Act. The UK, alongside several US states, has introduced laws on specific AI issues such as generating non-consensual sexual images, or material that interferes with elections. In January, China’s “Deep Synthesis Provisions” were heralded as the strictest crackdown on deepfakes to date.

In many cases, these developments are a step in the right direction. The key issue to start with, Gregory notes, is “the biggest current malicious use-case”: targeting people (and mainly women) with faked sexual imagery. However, we do have to be careful that legislation doesn’t overstep the mark and infringe on freedom of expression. “We can see in the Chinese legislation how in fact the law is used to target legitimate political speech like satire, and to prevent people criticising the state,” says Gregory. “That’s the trend in many so-called ‘fake news’ laws globally and we have to avoid it with deepfake laws.”

There are a number of other grey areas that will need to be defined as “synthetic media and generative AI permeate our media and social media ecosystems in the future”. For one, we’ll have to draw a line between what synthetic content is acceptable and what’s not. We’ll also need to embed information in media to identify its provenance while navigating privacy protections and “fundamental human rights”. 

In the case of militarised deepfakes, of course, this all depends on governments – and their various, shady subsidiaries – playing by the rules they create. In a post-WikiLeaks world, where psyops are a commonly acknowledged undercurrent of our online lives, you’d be forgiven for thinking that this leaves us with little hope.
