Artists like Luke Nugent are using artificial intelligence to imagine hyperreal alternate histories, but what happens when the technology falls into the wrong hands?
It’s sometime in the early 2000s. You are standing in an anonymous American mall, and you are surrounded by teenagers dressed in black hoodies, skinny leather pants, and crucifix jewellery. Bright blue fringes, heavy eye make-up, chains dangling from their belts. Not a mobile phone in sight, just mall goths loitering in the moment (although, if you did tap into their iPods, the playlist would contain all the usual suspects: Slipknot, Korn, Evanescence).
But wait. Something feels... off. And then you begin to notice the signs – an eye out of place here or there, an ambiguous mass of flesh hovering off to one side of the frame, shopfronts with illegible writing in an inhuman script. That’s not Hot Topic, you think. In fact, this is not a real place, or a real time, at all. It’s an elaborate fiction, a false memory dreamed up by cryptic technology borrowing bits and pieces from the human imagination.
Artificial intelligence has come a long way since people began coaxing images out of deep learning models around 2017. In fact, that’s an understatement. Just a few years ago, a vaguely humanoid blob on a virtual canvas was enough to fetch six figures and be proclaimed “the future” of art at Christie’s, and today the average hobbyist can whip up something that looks practically photorealistic out of thin air (well, out of billions of data points scraped from the world wide web). Then, there are people like Luke Nugent, a British photographer whose work with AI goes beyond mere realism, capturing the distinctive atmosphere of scenes and subcultures that exist beyond the limits of space and time.
Speaking to Dazed about his AI renderings of mythical subcultures, Nugent suggests that they come from a place of nostalgia. “It’s a mixture of things,” he says. “I was there. I wasn’t there. I wanted to be there.” Take the mall goths, for example. These studded, choker-wearing teens were inspired by a (real) holiday to New Jersey in the 2000s; several years later, Nugent’s “very small experience” of this scene was fleshed out with the help of Midjourney, a leading text-to-image model that draws upon data gathered all across the internet. Essentially, Nugent’s memories were blended with everything the internet already knows about mall goths, resulting in the alternate history that now lives on the artist’s Instagram – images landing somewhere between personal snapshots and uncanny archetypes.
Some other highlights, inspired by the places Nugent was, and wasn’t, and wished he had been: a night out at New Romantic birthplace The Blitz; a Brixton housewarming party populated by gaunt, leather-clad models; a clowncore cruise ship; a misty festival campsite; a Gummo-inspired rave.
What’s so captivating about Nugent’s artworks is that they’re rooted in a super-specific reality. Scrolling through the slideshows on his Instagram can feel like moving through a real space, be it a house party, a Warhammer convention, or the streets of southeast London (“Legit thought this was ends for a second,” reads one comment on a recent post). The personality behind the camera, too, is remarkably consistent, inspired by subcultural chroniclers such as Derek Ridgers or Nan Goldin, and brought to life by a thorough process of prompt engineering and virtual directing. “Creating otherworldly images is certainly not beyond the realm of possibility,” says Nugent, and there’s no shortage of examples online, from anime-style portraits to haunting cryptids. “That’s just not interesting to me. Subcultural stuff is my interest... street portraiture, cultural history. Exploring that side of things in AI feels fun and interesting. It’s almost like [going] back in time, to something you have in your memory and you’re kind of nostalgic about, and then exploring that space, and looking at the kinds of people that might have been there.”
Like all experiments in the rapidly developing world of artificial intelligence, though, Nugent’s images also throw up a number of questions and ethical dilemmas. Nugent himself is sensitive to this: as a photographer, he notes, the exponential progress of AI image generators casts an uncertain shadow on the future of his industry. “I completely understand why people keep saying it’s scary,” he says. “I think there’s always going to be someone who orchestrates what’s happening within the imagery, and being the vessel for a brand to create something, but will lots of brands use [AI] and just think, ‘Oh, well, that’ll do’? Absolutely.” Then, of course, there are the ongoing concerns about AI’s built-in biases about race and gender, although exclusion is rarely a problem in Nugent’s images. “I have a bit of a warped perception of inclusivity and diversity,” he adds, “because that’s something I’ve always been very engaged in, from being a queer teenager constantly exposed to every type of person.”
What if exclusion wasn’t the only diversity issue raised by AI, though? What if inclusion could become a problem in itself? This might sound like a counterintuitive proposal – after all, how can inclusivity be a bad thing? – but to be properly understood it needs to be placed within the broader context of AI and its erosion of our faith in images, real or imagined.
Basically, the treachery of AI images is a two-part phenomenon. First, AI floods our lives with uncanny images that seem real at a glance but signify nothing (a very literal manifestation of the “hyperreality” described by philosopher Jean Baudrillard: “the generation by models of a real without origin or reality”). Then, this flood of false images erodes our belief in real ones, which we begin to dismiss as products of AI.
The second part of this process is plain to see in comment sections across social media. “Why does this feel AI-generated?” ask TikTok users, in response to a real pilgrim-themed Kim Kardashian photoshoot from 2018. “I legitimately can’t tell what is and isn’t real anymore,” comment Twitter users, under real images of Donald Trump’s indictment, or fake images of Julian Assange “leaked” from Belmarsh prison, or photos of the Pope in a Moncler-esque puffer jacket. Even Lizzo is getting worried. It’s clear that AI images are becoming indistinguishable from real photographs, and will only keep getting better as the technology works out the finer details, like rendering hands with the right number of fingers. Video will doubtless follow, with AI deepfakes already deployed for political misinformation campaigns.
What does this have to do with Luke Nugent’s AI-generated images of historic style tribes? Well, picture this: the year is now 2050, and you’re a young cultural historian, born around the time that AI images became practically impossible to distinguish from real photographs. You’re fascinated by the goth revival that emerged in North American malls in the early 2000s, and to learn more you seek out images and videos documenting the era. (You see where this is going, right?) You find millions of images that tell a number of contradictory narratives about the people that were there, what they looked like, what they wore, and what demographics they represented. Most of them are fake, but they’re so lifelike that there’s no way to tell which is which, the real and the AI-imaginary, the hyperreal. How do you unpick the genuine historical narrative from the rich tapestry of alternate histories woven by AI artists?
“Making pictures of Trump getting arrested while waiting for Trump’s arrest,” tweeted Eliot Higgins (@EliotHiggins) on March 20, 2023.
“Reality certainly appears to be cracking [under] the speed of generative AI expansion,” says Tim Stock, an associate teaching professor at Parsons School of Design, and founder of scenarioDNA, a consultancy that uses AI to map cultural trends. “We are engineering our future with very little attention to the cultural and sociological impact that might have.” To some extent, he adds, Nugent is contributing to this sense of confusion with his “idealised expression[s] of our collective imagination” – although, of course, portraits of punks and scene kids pose less of a threat to future historians than, say, false images of Trump resisting arrest.
Returning to 2050, it’s easy to imagine a world where historical narratives about systemic inequality and injustice have been erased, or at least muddied, by the onslaught of AI-generated media. Nugent himself admits that his own prejudices – or lack thereof – have shaped the renditions of past subcultures in his artworks. “Some of them [insert] people who probably wouldn’t have been there at the time,” he says, adding: “Club kids in Soho in the 80s... there wasn’t a huge amount of diversity.” Nugent’s AI photos tell a different story, populating the scene with a range of characters from different cultural backgrounds, alongside alien interlopers. You might see this as a positive representation, showing us what history could have been like if the political landscape was different. But the fact is, the political landscape wasn’t different – there were racial divides amid other harmful biases, and do we really want to lose sight of that? Several decades from now, would we want images to airbrush out the injustices of today?
Of course, Nugent isn’t trying to pull the wool over anyone’s eyes about historic inequality. “I wasn’t trying to trick anyone,” he says. “I was just making stuff.” He wasn’t even interested in editing out the inhuman quirks introduced by the AI, leaving telltale signs like extra fingers untouched to signal the photos’ artificiality. Ceci n’est pas une punk. Nevertheless, his images showcase how AI offers an unprecedented chance to sow doubt about the power dynamics of the past, especially if (or when) the tools fall into the hands of people who are looking to generate false narratives for nefarious ends.
Decades before the invention of the internet, the visionary philosopher Marshall McLuhan predicted that the evolution of modern technologies would “retribalize” mankind, and Stock proposes that the use of artificial intelligence to reinvent history is a part of this process. “The rise of nostalgia and ‘core’ as subculture narratives globally is interwoven with the rise in populist political movements,” he says. By “seeding” these narratives with “a tiny, distorted, out-of-context truth”, he adds, these movements are able to shape the broader cultural trends and behaviours of future generations. At the other end of the scale, rose-tinted alternate histories can also represent a failure to face up to the reality of the future. “The future is scary and ambiguous. It presents us with the angst of a change that we do not want to deal with, [so] instead we practise avoidance.” We find a false sense of security in rewriting history in accordance with our contemporary beliefs.
Another version of this avoidance can be seen playing out in real time. Earlier this month, a particularly dystopian turn in AI image-making saw fashion brands generate virtual characters to “supplement” their human models and introduce more diversity of size, skin tone, and age. While this scheme uses AI to cut the costs involved in expanding a company’s cast of models, it has an obvious flaw: it does absolutely nothing for the real models who have been historically overlooked. In other cases, the avoidance is more unconscious, or the product of biases built into the tools themselves – it’s probably no coincidence, for example, that in an AI landscape dominated by US innovations, cultures across the world are generated with a distinctly American smile.
So is this the future we face, in the age of AI: bad actors sowing division with fake cultural artefacts, while the rest of us are blinkered by a topsy-turvy optimism for a past that never existed? Or are we headed for an even greater sea-change, where our entire relationship with still and moving images – as representations of reality – comes into question? Perhaps our faith in images could be eroded so completely that they become a representational relic, like painting or poetry – a fun novelty, sure, but hardly a convenient way to communicate practical information about the world around us.
These questions about “aesthetic deception” and its entanglement with social politics aren’t entirely new, points out Stock. He traces them all the way back to the 16th-century surgeon Gaspare Tagliacozzi (AKA the “father of plastic surgery”) and, centuries later, the tendency of figures like the Kardashians to physically edit their own appearance. “Think about it in terms of unknowns,” he says. “Do we know how much of Kim Kardashian herself is real? How much does it matter if her photo is AI-generated?” A similar comparison can be made between the AI headshot companies that have cropped up on platforms like LinkedIn, and traditional headshots taken in IRL photography studios. Both are staged, using lighting, angles, and editing to change the subject’s appearance, to convey a certain message. AI just allows us to make the changes more drastic, and to implement them on a massive scale.
But therein lies one of the biggest problems with AI: scale. One fake image of a mid-2000s mall goth isn’t going to change much, but a tidal wave of realistic-looking fictions is a different story, and – amid an AI arms race fuelled by for-profit companies – the elaborate fakes are only going to come quicker and cheaper. One proposed solution is the labelling of all images created or edited using AI. Nugent compares this to the nutrition information on food packaging: “[This image] is gonna cause 30 per cent self-hate.” It’s a change that would probably be better for everyone’s mental health, he adds, but at the same time it risks stifling creativity. It’s also unclear how such rules would even be enforced, especially outside specific platforms like Instagram or TikTok.
At the very least, suggests Stock, we need to figure out what we even mean by the word “truth” in the age of AI. Only then can we move onto the more intricate questions: how do we locate this truth in the histories generated by artificial intelligence? How do we use it to shape a better future, and steer clear of a tribalised hellscape? Right now, it could go either way, he says: “AI-generated histories or dossiers could function across the spectrum from game-changing to dangerous. They could help us shift our perceptions, and not make mistakes a second time around. These same histories might also be folded into and blended with reality.”
For all his success generating semi-fictional subcultures, Nugent is similarly ambivalent toward the future of AI and its knock-on effects on our culture, past and present. “[I] just posted a couple of things without any real consideration about the implications,” he admits, adding: “I don’t imagine myself using [the technology] in the same way in a year’s time. It’s hard to imagine what it’s going to be in a year’s time.”
That being said, the kind of artworks that appear in Nugent’s alternate histories could be vital in shaping this imminent future. It’s safe to say that many of us wouldn’t even be thinking about the treachery of AI images if it wasn’t illustrated – consciously or unconsciously – in bizarre photos of anachronistic club kids, or surreal shots of the Pope in cutting-edge couture. By sparking this conversation, AI artists are providing a vital service: the questions about who gets to rewrite history cannot be left in the hands of technocrats and authorities looking to leverage technology for their own personal gain.