Cillian Murphy as Oppenheimer in Christopher Nolan’s Oppenheimer (2023)

How to kill Moloch, the demon driving humanity to the brink of extinction

From Oppenheimer’s atom bomb to the AI arms race, the future of humanity might depend on overcoming the influence of an ancient, malevolent god

In the early morning of July 16, 1945, a group of scientists and military officials gathered in a stretch of desert named the Jornada del Muerto, or Dead Man’s Journey. For years, they had been living with their families in a “secret city” in Los Alamos, New Mexico, recruited by physicist J. Robert Oppenheimer to work on the Manhattan Project. Finally, they hoped, they were about to observe the world’s first detonation of a nuclear weapon, codenamed the Trinity test. At 5:29am, they were proven right. The blast from the “Gadget” was felt over 100 miles away, and lit up “every peak, crevasse and ridge of the nearby mountain range with a clarity and beauty that cannot be described”. The mushroom cloud reached seven and a half miles into the sky.

Images from the Trinity test have come to symbolise the dangerous pursuit of knowledge and power ever since. In film and TV, they appear everywhere from Akira, to Godzilla, to Twin Peaks, to Asteroid City. Christopher Nolan’s new film, Oppenheimer, out today, casts Cillian Murphy as the “father of the atomic bomb” himself. Humanity’s competitive drive, the melting skyscrapers and mushroom clouds remind us, might help us unravel the mysteries of the universe and secure our survival as a species, but it might equally end in our utter annihilation.

Oppenheimer’s reaction to the successful Trinity test is as famous as the images themselves. In a 1965 NBC documentary on the bombing of Hiroshima, he says: “We knew the world would not be the same. A few people laughed, a few people cried. Most people were silent.” Meanwhile, he was reminded of a line from the Bhagavad Gita. He recalls: “Vishnu is trying to persuade the Prince that he should do his duty and, to impress him, takes on his multi-armed form and says, ‘Now I am become Death, the destroyer of worlds’. I suppose we all thought that, one way or another.”

Oppenheimer didn’t just quote the Gita because it sounded cool. He had learned Sanskrit in 1933, and developed a deep connection with the Hindu scripture, later naming it as one of the most important influences on his life philosophy. Needless to say, he would have understood the context of the quote, which sees Krishna (an avatar of Vishnu) convincing the Prince of his duty as a warrior to fight in a war against his own kin. In this light, some see the lines Oppenheimer quoted as a justification of his work on technologies that killed an estimated 110,000 people in Hiroshima and Nagasaki, despite his moral misgivings. A different interpretation: Oppenheimer saw that he was just a pawn in a larger game, guided by forces beyond his control.

Could Oppenheimer have actually believed that he was carrying out the orders of a godlike figure when he worked on the Manhattan Project, or that the souls of those killed by the bombs would enter an eternal afterlife? It’s unlikely. However, when it comes to the development of nuclear weapons and the subsequent arms race (as well as other existential threats that were to follow) there’s another supernatural entity that serves as a fitting metaphor. His name is Moloch.

Like Vishnu, Moloch has ancient and poetic origins. Established in the Hebrew Bible as a god who rewards child sacrifice with victories in warfare, he arrived in the 21st century via films such as Fritz Lang’s Metropolis, and poems like Paradise Lost or Allen Ginsberg’s Howl. However, it’s a 2014 essay by the blogger Scott Alexander (which centres on a reading of Howl) that ties Moloch to some of today’s most pressing existential risks, including the development of superintelligent machines, the destruction of Earth’s ecosystems, and, yes, the ongoing threat of nuclear war. 

Titled Meditations on Moloch, the essay explains how the demon drives a number of humanity’s most harmful behaviours – specifically “multipolar traps”, i.e. scenarios where a group of competing individuals are incentivised to act against the long-term interests of the group. For example, think of the arms race that followed the invention of the atom bomb: after intelligence was leaked to the Soviet Union by spies working in Los Alamos, the US and the USSR essentially entered a competition to amass the largest and most deadly nuclear arsenal, in a display of military might. As a result, the number of nuclear warheads worldwide peaked in 1985, at an estimated 63,000. That’s more than enough to flatten every major city on Earth and plunge the rest of humanity into a nuclear winter.

Alexander describes this kind of scenario as a “race to the bottom”, where competitors have to make economic or ethical sacrifices in order to pull ahead of their opponents (or simply keep pace with them). The US and the USSR can’t use their nuclear weapons, because it would result in mutually assured destruction. At the same time, they’ve lost trillions of dollars and rubles – which could have been used for essential services, like education or healthcare – to weapons manufacturing, and we all have to live under the shadow of the nuclear bomb.
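
To see why the trap snaps shut, it helps to strip the arms race down to a payoff matrix – the classic prisoner’s dilemma from game theory. Here is a minimal sketch in Python, with purely illustrative numbers (our assumption, not figures from Alexander’s essay):

```python
# A minimal sketch of the arms race as a two-player "multipolar trap".
# Payoffs are illustrative utilities (higher is better), not real figures.

PAYOFFS = {
    # (US choice, USSR choice): (US payoff, USSR payoff)
    ("disarm", "disarm"): (3, 3),  # money saved, no shadow of the bomb
    ("disarm", "arm"):    (0, 4),  # one side is militarily dominated
    ("arm",    "disarm"): (4, 0),
    ("arm",    "arm"):    (1, 1),  # standoff: trillions spent, nobody safer
}

def best_response(opponent_choice: str) -> str:
    """Return the choice that maximises one player's own payoff,
    given what the opponent does."""
    return max(["disarm", "arm"],
               key=lambda me: PAYOFFS[(me, opponent_choice)][0])

# Whatever the opponent does, "arm" pays more for the individual player...
assert best_response("disarm") == "arm"
assert best_response("arm") == "arm"

# ...so both rational players arm, landing on (1, 1): worse for everyone
# than the cooperative (3, 3). The gap between the two is Moloch's cut.
print(PAYOFFS[("arm", "arm")], "vs", PAYOFFS[("disarm", "disarm")])
```

Neither player can step back to the cooperative outcome on their own – unilateral disarmament just hands the other side a win – which is why escape requires coordination, like a treaty, rather than individual virtue.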

The science communicator Liv Boeree is dedicated to teaching “as many people as possible” about Moloch, spreading the word via video essays and her podcast, Win-Win, drawing on the expertise in game theory she built in a former career as a professional poker player. So far, this effort has seen her team up with everyone from Grimes to the Future of Humanity Institute, a research centre established by philosopher Nick Bostrom at the University of Oxford.

“Moloch is the demon of competition gone wrong” – Liv Boeree

To summarise her own understanding of the metaphor, Boeree says that Moloch is “the demon of competition gone wrong”. Importantly, it’s not about the individuals playing the game, but the game itself, which is designed by Moloch to shape selfish human behaviour through bad incentives. Boeree gives the example of a concert, where everyone is assigned seats with a roughly equal view of the stage. “Then a few people down the front want a slightly better view, so they stand,” she says. “And, because of the crappy design of the stadium, it forces everybody else behind them to stand up. So now everyone’s standing, no one’s got a better view than before.” There’s no overall benefit. In fact, everyone is worse off: for the next few hours, they’ll have to stand if they want to see the stage. Plus, the concert is loud, so there’s no easy way for them to coordinate with each other and sit back down.
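
Boeree’s concert doubles as a tidy toy model. The hypothetical simulation below (our illustration, not something from her podcast) plays the cascade out: one row stands for a marginally better view, every blocked row behind follows suit, and the crowd lands in a new equilibrium where everyone pays the cost of standing and nobody’s view has improved:

```python
# Toy simulation of the concert trap: rows of fans, front to back.
# Our assumptions: a fan stands if anyone ahead of them is standing
# (otherwise their view is blocked), and standing is less comfortable.

ROWS = 10
standing = [False] * ROWS  # index 0 is the front row

standing[0] = True  # someone down the front wants a slightly better view

# The decision cascades backwards, row by row.
for row in range(1, ROWS):
    if any(standing[:row]):   # someone ahead is on their feet
        standing[row] = True  # stand, or see nothing at all

print(f"{sum(standing)}/{ROWS} rows standing")  # -> 10/10
# Relative views are exactly what they were when everyone sat, but now
# every row pays the standing cost -- and, mid-concert, there is no easy
# channel to coordinate sitting back down.
```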

This is just one example, with minimal consequences. Today, though, the “demon of competition gone wrong” may be seen as the force behind many societal ills – cutthroat capitalism, fossil fuel extraction, social media’s attention economy, digitally augmented beauty standards, and renewed nuclear tensions – all scenarios where the game is set up to encourage individual behaviours that harm the world as a whole.

Many of the experts who have warned about the apocalyptic potential of artificial intelligence have also adopted the demon as a kind of mascot for the dangerous race toward superintelligence, and the difficulty of aligning such intelligence with humanity’s best interests... like staying alive. Max Tegmark, a physicist and machine learning researcher, is one of these experts. Tegmark recently made the case that AI represents an existential risk to humanity in a high-profile public debate, and as co-founder of the Future of Life Institute (FLI), he was also one of the driving forces behind the March 2023 open letter that called for a moratorium on machine learning models more powerful than GPT-4, which was signed by the likes of Elon Musk, Apple co-founder Steve Wozniak, historian Yuval Noah Harari, and the head of the Bulletin of the Atomic Scientists, the organisation behind the Doomsday Clock.

In a March 24 essay for the New York Times, Harari – alongside Tristan Harris and Aza Raskin, founders of the Center for Humane Technology – suggested that humanity’s first encounter with AI came in the form of the algorithms that curate what we see on social media. “Humanity lost,” they wrote. “While very primitive, the AI behind social media was sufficient to create a curtain of illusions that increased societal polarisation, undermined our mental health and unravelled democracy.” Tegmark has since echoed this statement, suggesting it was Moloch that pitted the social media companies against each other, formulating a game where they had to adopt increasingly exploitative algorithms or else lose out on revenue and, eventually, disappear completely, dwarfed by their competition.

Even political polarisation and plummeting mental health don’t make AI an existential risk on par with an atom bomb, though. As defined by Nick Bostrom: “An existential risk is one that threatens the premature extinction of Earth-originating intelligent life or the permanent and drastic destruction of its potential for desirable future development.” Could AI ever rise to this level of influence? Could it drive humanity to extinction? Yes, according to Tegmark – alongside other prominent figures such as OpenAI’s Sam Altman and Geoffrey Hinton, the “Godfather of AI” – and it could happen in one of three ways. One: a sentient, malevolent AI could emerge from sufficiently complex neural networks and wipe out humanity of its own accord. Two: a malevolent human could harness a powerful AI to carry out their most destructive impulses. Three: a human could ask an AI to carry out a seemingly harmless task that accidentally results in humanity’s demise due to poor alignment (see: the “squiggle maximiser”, a thought experiment sparked by AI researcher Eliezer Yudkowsky, in which a powerful AI chases a goal that seems alien to its creators – such as producing paperclips – and consumes all the resources we need to survive in the process).

No one can agree on the likelihood that any of these scenarios will actually come true. For starters, we don’t even know how our own consciousness works, so we’ve no idea what it might take to conjure a conscious mind out of a machine, or what it might look like if that were to happen. What we do know is that the risk is not zero.

In Oppenheimer, Christopher Nolan takes us back to the moment of the first successful nuclear weapon test. The “Gadget” worked by detonating a shell of conventional explosives around its plutonium core, compressing it to criticality and triggering a chain reaction that released massive amounts of energy in the blink of an eye. Before the real Trinity test, the scientists were pretty sure that the chain reaction would stop before it ignited the atmosphere and brought the world to a fiery end – after doing the maths, they concluded that it was virtually impossible. But they couldn’t be 100 per cent certain. Nolan captures this uncertainty in a scene featuring Murphy’s Oppenheimer and Matt Damon’s Major General Leslie Groves, AKA the man who invited Oppenheimer to Los Alamos.

“Are we saying there’s a chance that when we push that button, we destroy the world?” asks Groves.

“The chances are near zero,” Oppenheimer replies.

“Near zero?”

“What do you want from theory alone?”

It seems eerily appropriate that this exchange is hitting IMAX screens amid a flurry of AI innovation and fears about emergent superintelligences with a non-zero chance of ending the human race. Nolan himself has acknowledged this fact. “When you talk to leaders in the field of AI, as I do from time to time, they see this moment right now as their Oppenheimer moment,” the director recently told the BBC. “They’re looking to his story to say, ‘What are our responsibilities? How can we deal with the potential unintended consequences?’ Sadly, for them, there are no easy answers.”

“Even if [we] think that this incredibly powerful technology is 100 years away, we should be putting some brain power into figuring out how to reduce that [risk]” – Liv Boeree

Generally, it’s agreed that the development of machines that surpass human capabilities should be done slowly and carefully, to avoid the kind of scenarios that Tegmark and others warn us about. Unfortunately, Moloch wrote the rules of the game. Regardless of their best intentions, tech companies are locked in a race to produce and share increasingly powerful technologies, edging ever closer to AI that can repeatedly improve itself and may kick off the singularity. If one company slows down, it will simply be overtaken by one of the others, unless everyone agrees to a pact like FLI’s six-month pause (which... they didn’t). Like Oppenheimer racing the Nazis to invent the bomb, or the US and the USSR building their nuclear stockpiles, each company might see itself as the good guy in the race, the only one capable of building a truly safe superintelligence, while sacrificing safety in the name of reaching the finish line first.

Even if we aren’t as close to an extinction scenario as many researchers predict, Boeree notes, we do need to be careful in our approach to AI. (For the record, she’s in favour of responsible AI development.) “Even if [we] think that this potential new life form, or incredibly powerful technology, is 100 years away, we should be putting some brain power into figuring out how to reduce that [risk],” she says. “If we looked up in the sky and there was a super powerful alien race travelling towards us, and they were going to land on Earth in 50 years, I would like to think anyone with a kid would be like: ‘We need to start thinking about this now.’ Right?” Molochian dynamics are certainly at play, she adds, “incentivising the whole world to throw out caution”, but this just makes the conscious effort to resist them all the more important.

Again, the sole responsibility isn’t on any individual AI engineer or CEO, as much as we might enjoy pinning the blame on Mark Zuckerberg or Elon Musk. Ultimately, it comes down to misaligned incentives – the kind symbolised by Moloch’s misrule – which steer technologies in the wrong direction. “The question is,” says Boeree: “How do we design an incentive structure so that all the energy goes into the stuff that we actually want, and not the stuff we don’t?” In the case of AI: how do we make sure that we can reap the benefits, in fields like drug discovery or climate science, while avoiding fatal pitfalls?

“The question is, how do we design an incentive structure so that all [our] energy goes into the stuff that we actually want, and not the stuff we don’t?” – Liv Boeree

Once you start spotting traces of Moloch, it’s hard to stop. He exists everywhere, from the media (where publications rely on clickbait headlines for their continued existence) to politics (where short-term, divisive discourse wins over voters) to social networks (where influencers have to turn themselves into hyperreal NPCs in order to keep viewers hypnotised). In many cases, these multipolar traps also feed into the greater risks. Heightened political posturing raises the possibility of a nuclear strike. Social media companies capture our attention by training AI on humanity’s temptations and weaknesses, and we reinforce its ability to exploit us every time we tap a like button. It can seem like the whole world is built on bad incentive structures, with gods and monsters like Moloch lurking around every corner. So what are we supposed to do? Is it possible to pull out of the race to the bottom, if not to call it off completely?

By the start of the 1980s, the world had been living under the shadow of the nuclear bomb for decades. Tests of new, more powerful weapons happened year after year. In school, children practised what to do if they saw a mushroom cloud on the horizon, while some families built fallout shelters in their backyards. Then, in 1983, a horrifying TV movie, The Day After, was screened to more than 100 million Americans (almost half the population at the time). According to ABC, the streets of New York emptied out as people gathered for mass viewings of the film, which dramatised a war that escalated into a full-blown nuclear exchange between the US and the USSR, and explored the devastating aftermath. Following The Day After, a panel of the country’s leading politicians and intellectuals featured in a televised debate on the pros and cons of nuclear proliferation. In 1987, US president Ronald Reagan – who wrote in his diary that The Day After was “very effective and left me greatly depressed” – met with Soviet leader Mikhail Gorbachev to sign the Intermediate-Range Nuclear Forces Treaty, which eliminated an entire class of nuclear missiles on both sides.

Some experts, like the founders of the Center for Humane Technology, suggest that we need a similar public discussion between the heads of AI labs, leading safety experts, and politicians in 2023, to give “this moment in history the weight that it deserves”. Others say this is alarmist, but some experts also criticised The Day After as sensationalist at the time. In the end, do we really care, if it lessened the risk of human extinction?

Boeree presents another possibility for escaping Moloch’s grasp. When she first started thinking deeply about Moloch, she saw the situation as hopeless. Then, she began considering the inverse of the “demon of competition gone wrong”, whose games leave everyone worse off. In the end, she came up with another kind of deity: WinWin. (Scott Alexander gives us a similar example, the Goddess of Everything Else.) To be clear, WinWin isn’t “anti-Moloch”, but something living on a higher plane, she explains. Like Moloch, WinWin rewards competition, but only up to a point where it works to everyone’s benefit – in other words, WinWin knows where to draw the line. Unlike Moloch, she also rewards communication and collaboration, cutting through the noise.

“In my dark moments, it sounds strange, but I call on WinWin [and ask] what to do,” Boeree says. “There’s value in believing in something higher. I do think to an extent that belief creates reality.” 

Asking “what would WinWin do?” has another positive effect: it encourages us to imagine better futures, instead of going straight to the worst-case scenario. “It’s cheap to do dystopian because it’s very easy to imagine,” Boeree points out. “Utopian [or protopian] is harder.” Protopian thinking – where we strive to make each day a bit better than yesterday – might sound overly optimistic, but looking back on the grand scheme of human history, it’s true: WinWin has, for the most part, prevailed over Moloch. After all, the human race isn’t driven solely by selfish competition. Otherwise, we’d have wiped ourselves out long ago, and we haven’t... yet.
