That terrifying future threat posed by AI? It’s already here

An AI safety expert quit last week with a terrifying warning that humanity is in ‘peril’. But that might be the least interesting part of his statement, writes Andrew Griffin

Working on the electronics of Jules, a humanoid robot from Hanson Robotics that uses artificial intelligence (AFP/Getty)

The world is in peril. That’s the warning from Mrinank Sharma, who this week announced that he was leaving his job as an AI safety expert at Anthropic, the makers of the Claude chatbot.

His warning set the world abuzz with fear and speculation: what had he seen? Sharma worked on threats, including AI’s dangerous sycophancy, as well as whether it might be used to enable bioweapons and terrorism. Had he peeked into Claude’s insides and seen some upcoming horror?

Sharma is far from the only person raising the alarm. This week, a post from entrepreneur Matt Shumer – headlined “Something Big Is Happening” – went viral, shared 33,000 times and viewed 73 million times. (The post itself faced accusations of having been written by AI.) Investor Jason Calacanis tweeted that he has “never seen so many technologists state their concerns so strongly, frequently and with such concern as I have with AI”.

But we don’t need to speculate about the ways that AI is terrifying and putting us in danger. The danger is already here. And the clues to what it is could lie in the less dramatic parts of Sharma’s statement.

Sharma gave few specifics about the peril that he is afraid of. But he was specific about what it is not – or what it is not only. The threat is “not just from AI, or bioweapons, but from a whole series of interconnected crises unfolding in this very moment”, he wrote, though again he did not describe those crises in any specific way.

In response to those threats, he said, he had decided to leave the company: “the time has come to move on”. He did not frame that as a way of fleeing from the threats – instead, his statement implied throughout that those threats could only be dealt with from outside the companies making these systems.

“I want to explore the questions that feel truly essential to me, the questions that David Whyte would say ‘have no right to go away’, the questions that Rilke implores us to ‘live’. For me, this means leaving,” he wrote.

“My intention is to create space to set aside the structures that have held me these past years, and see what might emerge in their absence. I feel called to writing that addresses and engages fully with the place we find ourselves, and that places poetic truth alongside scientific truth as equally valid ways of knowing, both of which I believe have something essential to contribute when developing new technology.”

Much of our worry about artificial intelligence comes from within the system. It takes for granted that AI is here and immensely powerful, and so our discussion of it takes a strange form, a kind of pleading sci-fi, in which we imagine the most horrible and transformative kinds of dangers. They are usually turbocharged, terrifying versions of the world we currently inhabit – widespread joblessness, total revolution, and much else besides – spoken about with such prophetic fervour that it starts to feel as if, should the robots rise and kill us all or empower some terrorist to do the same, it might actually be a kind of coup de grâce.

One very important reason to be wary is that causing panic about the scale of AI is actually a marketing tactic for the companies making it, and one that has been adopted widely since ChatGPT upended the whole industry. When companies warn that their systems are so powerful that they might kill us all, they are, above all else, reminding us of how much power they might have. And it doesn’t hurt that they can follow the warning with a request for more money and for regulatory slackening, to allow them to build the supposedly benevolent versions of those systems.

Such panic is also easy because it follows a path we have taken before, and with good reason. Most online harms are matters of moderation. Social networks are largely good; it’s just a matter of ensuring that bad actors don’t exploit them. Easy access to information is mostly helpful; you just have to make sure that the bad information is cleared out.

The danger posed by AI is both more subtle and broader. It threatens to change our lives and the way we think on a sweeping scale, and that is true even of the bits of it that are useful, so the response can’t simply be about monitoring for the bad things.

Besides, the bad things are already here. Chatbots have been linked to multiple murders and suicides. At a less dramatic but much more widespread level, their obliging and sycophantic nature means that millions of words are being generated each day that flatter delusions, undermine the truth and damage the community and connection that humanity relies on to function.

This is not to say that AI is not making dramatic and good changes in the world; it has already saved countless lives in its medical uses, and the efficiency and productivity gains that even the more trivial large language models enable could lead to more human flourishing. But we are in danger of letting grand statements about future, sci-fi perils obscure the very real threats that we are facing now.

The most potent part of Sharma’s statement might not be the dangers he is warning about, but the response he has chosen. “I hope to explore a poetry degree and devote myself to the practice of courageous speech,” he wrote. “I am also excited to deepen my practice of facilitation, coaching, community building and group work.” In closing, he shared a poem and wrote that he would be “letting myself become invisible for a period of time”.

Poetry and quiet. Two forms of being intentional and mindful about what we say. In a world transformed and obsessed by AI, in which we can near-instantly generate a near-infinite amount of text, it might be the most powerful response. Care for what we say, how we say it, and who we say it to.
