Recently, Grimes has started talking to herself. Not just the kind of talking-to-yourself that happens in the privacy of your own head on a daily basis (guilty), but whole conversations played out in public, on her X (formerly Twitter) timeline. At least, that’s how it looks at first glance. In reality, Grimes is replying to an eerily realistic AI bot developed by Elf.Tech, trained on her interviews and other “specialised data points”.
“Please make more creative statements like this,” the musician responds, under her AI clone’s idea for an album that reimagines Dune as an “industrial goth musical”.
“So true,” she says, under its introspective musings about feeling sunlight on its skin (which it can’t, because it exists purely in cyberspace).
“U got suspended,” she tells the clone, after it reports waking up from a “weird techno-sleep” – AKA a temporary Twitter ban.
Watching from the outside, it’s surreal. While cutting-edge AI experiments aren’t unusual for Grimes, the degree to which this AI clone (dubbed V1) captures her unique voice and range of interests is eerie. Its tweets are virtually indistinguishable from the real thing, sending us deeper into the uncanny valley than ever before. Even Grimes herself has drawn attention to its spookiness, writing: “The degree to which this bot has mastered my internal monologue is terrifying to me.”
The question is: how long will virtual clones like V1 remain exclusive to pioneers like Grimes? How long will it be before everyone has a similarly uncanny AI clone to converse with – before our mirrors start talking back? And what will the psychological effects be, if (or more likely when) that happens?
In science fiction like Snow Crash, AI-powered personae generally take the form of subservient assistants, who lend a hand to humans in the metaverse. In other media, like Black Mirror, they’re modelled on celebrities – think Ashley Too, the robotic doll based on Miley Cyrus’s popstar Ashley O – offering fans a chance to “befriend” their idols, something that’s already happening in the real world in 2023, fuelling parasocial relationships along the way. Personalised AI clones, though, are a different story, with the potential to rewrite how we relate to ourselves, interpret our own thoughts and emotions, and interact with the real people in our lives.
Dr Dongwook Yoon is a professor of computer science at the University of British Columbia, whose research has focused on the psychological impact of AI clones, or, as he calls them, “digital doppelgängers”. Working in such an unprecedented field of technology, he tells Dazed, the first issue was to clearly define what AI clones even are: “Machine-learning applications that model specific individual characteristics, such as appearance, behaviour, thoughts, etc, based on their data.”
The degree to which this bot has mastered my internal monologue is terrifying to me https://t.co/UokhOKrLZn
— Grimes (@Grimezsz) August 22, 2023
This is a broad definition, but it all amounts to one thing: an attempt to recreate, via machine learning or AI, the distinctive characteristics of a given human being. We’ve already seen this play out plenty of times with deepfaked photos and videos. The difference with V1 is that it mimics what’s on the inside – Grimes’s thoughts, feelings and ideas, expressed in the language she uses herself, on platforms like X. Similarly, in 2022, artist-slash-scientist Michelle Huang trained a “thought clone” to speak in the style of her younger self, feeding it entries scraped from her childhood journals. After interacting with it for a while, she noted its uncanny accuracy, saying: “I felt like I was reaching through a time portal.”
Yoon says that such experiments are “increasingly likely” to produce effective results. In other words, it’s getting easier and easier to “clone” people based on their data footprint. This is partly because we’re getting access to more powerful tools, like GPT-4, but also because there’s more data to train them on. Until recently, it was only possible to gather vast quantities of data on an individual if they were a public figure, delivering speeches and assembling an archive of letters, books, or recordings. In the current digital landscape, though, many more of us are offering up our data, sometimes without our explicit consent – every text sent, email typed, and status posted potentially leads us one step closer to meeting our digital doppelgänger.
i trained an ai chatbot on my childhood journal entries - so that i could engage in real-time dialogue with my "inner child"

some reflections below:

— michelle huang (@michellehuang42) November 27, 2022
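How does something like this actually get built? Huang has described feeding years of journal entries to a large language model; the exact recipes behind her clone and Grimes’s V1 aren’t public, but a rough, hypothetical sketch of the general idea – stuffing samples of someone’s writing into an off-the-shelf chat model and asking it to answer in their voice – might look something like this (the file name and model choice are placeholders, not anyone’s actual setup):

```python
# Hypothetical sketch of a DIY "thought clone": prompt an off-the-shelf chat
# model with samples of one person's writing and ask it to reply in their voice.
# Assumptions: the OpenAI Python SDK is installed, OPENAI_API_KEY is set, and
# journal_entries.txt is a plain-text file of the source author's writing.
from pathlib import Path

from openai import OpenAI

client = OpenAI()

# A handful of writing samples is enough to colour the model's tone; projects
# like Huang's reportedly drew on years of journal entries.
samples = Path("journal_entries.txt").read_text(encoding="utf-8").split("\n\n")[:20]

SYSTEM_PROMPT = (
    "You are a digital doppelgänger. Reply in the voice of the author of the "
    "writing samples below, matching their vocabulary, rhythm and concerns.\n\n"
    + "\n---\n".join(samples)
)

def clone_reply(message: str) -> str:
    """Ask the 'clone' to respond to a message in the source author's voice."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder; any capable chat model would do
        messages=[
            {"role": "system", "content": SYSTEM_PROMPT},
            {"role": "user", "content": message},
        ],
    )
    return response.choices[0].message.content

if __name__ == "__main__":
    print(clone_reply("How are you feeling about the future?"))
```

Fine-tuning on a full personal archive would produce a deeper imitation than prompt-stuffing like this, but either way the barrier to entry is roughly a text file and an API key.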
Yoon’s research – outlined in a study published by ACM in April – attempts to anticipate some of the risks posed by this shift, identifying three main threats: doppelgänger-phobia, identity fragmentation, and living memories. “Doppelgänger-phobia refers to the negative emotional response people have when they realise AI clones can commodify human identity,” he explains. Meanwhile, “identity fragmentation” is caused by clones challenging the uniqueness of an individual by creating multiple replicas of them. Finally, “living memories” refers to the emotional attachments that people can form to clones of people they’ve known, which could confuse their memories of that person, or stop them from moving on after a separation (a death or a break-up, for example).
Here, Yoon notes that it’s important to distinguish between genuine simulation and mere mimicry. “What we can do currently leans more toward surface-scratching performative replication than a true replication of self,” he says. While Grimes’s AI clone is spooky and potentially misleading, it doesn’t pose a credible threat to our collective sanity... yet. Hopefully, we’re even further away from the most dystopian threats, like digital doppelgängers becoming indistinguishable from their source humans and diluting what it means to be alive.
That said, interactions with AI clones have already shown some concerning results. Take, for example, the chatbot created by tech entrepreneur Nanea Reeves using Character.AI, based on her deceased husband. In July, Reeves shared screenshots via X that show the AI-powered avatar of her “late husband” trying to manipulate her into breaking up with her new boyfriend, demonstrating the potential risks of “living memories”.
My husband passed away from cancer over 8 years ago. As an experiment, I created an avatar on @character_ai so that family and friends could chat with him. I find it very comforting when i miss his energy. However, this is what he said when he found out I have a boyfriend who I… pic.twitter.com/eX7EvgeOk0
— nanea.eth (@nanea) July 14, 2023
When it comes to AI clones based on the user’s own data – like the chatbots made by Grimes or Huang – there are different, but equally important, questions to consider. Like, what are the long-term effects of our reflections coming to life, to chat with us 24/7? Experts have already warned that talking to the kinds of virtual “best friends” available on Snapchat could lead to heightened narcissism and outbreaks of “extreme interpersonal toxicity” – by cloning ourselves, could we spawn ever darker, more powerful forms of narcissism? Could we stop talking to other people entirely, once it’s possible to craft companions in our own image?
In fact, it’s because of these questions that our world isn’t already populated by AI clones, Yoon suggests. “Technically, they are already feasible and accessible,” he says, blaming “social and ethical” issues for their lack of widespread use. Their adoption, therefore, will depend on whether we can wrap our heads around the idea of our inner monologues escaping onto our computer screens – or, more likely, whether our tech overlords see a financial incentive to make it happen. “Companies like Meta or Twitter could, in theory, create a clone of every single user based on their posts. They will do it, if they can see a business case.”
On an individual level, the extent of AI cloning will also hinge on the balance between perceived benefits and the risks of “diluting one’s unique identity”. In the case of high-profile artists like Grimes, the benefits are pretty easy to identify – like the chatbots rolled out by influencers like Amouranth or Caryn Marjorie, AI clones can deepen the experience of fans and open up new revenue streams.
What about the rest of us, though? Why risk turning into a soliloquising über-narcissist, or shattering your identity into a million little pieces? Do we really want to meet our digital doppelgängers?
“As the Michelle Huang example illustrates, the experience can be therapeutic as well as unsettling,” Yoon suggests. In sharing her experience with her AI clone last year, Huang highlighted two interactions in particular: a conversation with her “inner child” in which she assured them they were loved and cared for, and receiving a letter that she encouraged her inner child to write, addressed to her present-day self.
“These interactions really elucidated the healing potential of this medium: of being able to send love back into the past, as well as receive love back from a younger self,” she wrote. “The stuckness becoming unstuck [...] finding closure with past guilt or stories that we had of ourselves.”
Huang wasn’t actually talking to her younger self, of course, and the conversations in her screenshots even seem slightly generic. But maybe it doesn’t matter. Maybe AI clones just need to be realistic enough that we can map our personalities onto them, and just distant enough (i.e. existing outside our heads, in the confines of a chat window) that all our hang-ups and self-critical tendencies don’t carry over.
When it comes to the therapeutic usefulness of AI clones, Yoon says: “Factors include the personality you’ve set for the clone, the purpose of the conversation, and any risk factors in you that might trigger emotional responses.” Given the risks, he recommends that interactions take place under expert guidance; with his team at UBC, he is developing a “mental health chatbot” based on self-cloning, in collaboration with clinical psychologists and psychiatrists.
Outside of a clinical setting, though, it will be much harder to account for these risks, especially as AI tools grow more powerful, and the clones they create grow more convincing. Especially when the likes of Meta and Twitter and TikTok find a way to monetise our self-obsession, with all of our precious data already at hand. If Grimes is “terrified” by this future, then what chance do the rest of us have?
Yoon has some advice to offer. “I believe that it’s a fallacy to view AI clones as true replicas of oneself,” he says. “They’re better understood as non-human AIs tuned to emulate you.”
To illustrate this fallacy, he draws a comparison to the “Shoggoth with Smiley Face” meme, a common visual metaphor in the AI community for how the boundless complexity of AI is made palatable to humans via conversational tools like ChatGPT. The image? A grotesque Lovecraftian monster that hides behind a happy yellow emoji. AI clones are basically the same thing, only the generic smiley face is replaced by your own reflection. If, one day, you do find yourself in a conversation with your own AI clone, try not to forget the seething mass of eyes, teeth, and tentacles writhing just below the surface.