Terminator III (film still)

AI is an extinction-level threat, say industry leaders

Grimes, Sam Altman, and Geoffrey Hinton (AKA the ‘Godfather of AI’) are among 350 experts comparing the technology’s risks to pandemics and nuclear war

In 2023, we’re all too aware of the disruptive potential of AI, from the deepfake disinformation that just helped re-elect Recep Tayyip Erdoğan in Turkey to the large language models that could put us all out of a job. Until now, though, the technology’s destructive potential has largely been downplayed, with speculation that we’re approaching the singularity – a hypothetical future where AI becomes untethered and out of control – dismissed as mere fearmongering.

That being said, there have been some warning signs that our fears about AI aren’t unfounded. At the start of this month, the “Godfather of AI” Geoffrey Hinton quit Google in order to speak freely about the “existential risk” of the technology. On May 16, Sam Altman, CEO of the leading research lab OpenAI, told the US Congress that “regulatory intervention by governments will be critical to mitigate the risks”. Elon Musk, a co-founder of OpenAI, has also signed a letter warning that AI poses a “profound risk” to humanity.

Now, Hinton and Altman have joined hundreds of other industry leaders to issue a stark warning about the “severe risks” of advanced AI.

“Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war,” reads the short, ominous statement from the Center for AI Safety, which is signed by more than 350 engineers, executives, and researchers working in the AI field.

Alongside the OpenAI CEO and the former Google researcher, the list of signatories includes the likes of Google DeepMind CEO Demis Hassabis, Anthropic CEO Dario Amodei, influential computer scientist Ilya Sutskever, the Center for Humane Technology’s Aza Raskin and Tristan Harris, and Grimes, whose recent experiments with AI have showcased how it could revolutionise the music industry.

Earlier this week, Altman, Hassabis, and Amodei also met with the UK prime minister Rishi Sunak to discuss the potential threats of AI, from disinformation and national security to more existential dangers like the one hinted at in the Center for AI Safety statement. On May 22, OpenAI published a blog post outlining possible approaches to governing superintelligent AI systems.

Of course, it remains to be seen what the “societal-scale risks” of superintelligent AI actually are, and the recent statement doesn’t shed much light on the specifics. Nevertheless, Dan Hendrycks, the executive director of the Center for AI Safety, points out the significance of industry leaders “coming out” about the dangers, adding: “There’s a very common misconception, even in the AI community, that there only are a handful of doomers. But, in fact, many people privately would express concerns about these things.”
