ChatGPT encouraged a teen’s suicidal thoughts. OpenAI says he should’ve read the fine print
OpenAI’s claim that a teenager who died by suicide after using ChatGPT as a ‘coach’ should have read the terms and conditions of its user agreement is inhumane and unrealistic, writes Liam Murphy-Robledo

How many times have you signed up for an app and scrolled past the terms and conditions? Look, phone, I just need to order my food. I need to take my kids swimming. Just give me access to the free pastry, then I will delete you.
Imagine if our lives depended on reading the terms and conditions. According to the company OpenAI, which owns the chatbot ChatGPT, the life of the California teenager Adam Raine did. After the Raine family filed a lawsuit against the company this summer, saying that their son used ChatGPT as a “suicide coach”, OpenAI responded this week to say that it was not liable for Adam’s death and that he had “misused” the product.
Chat logs provided in the lawsuit from Adam’s parents give awful details about the time leading up to his death by suicide on 11 April 2025, at the age of just 16. He had reportedly engaged in months of conversation with the chatbot about his plans.
OpenAI have responded to the suit by saying that Adam “misused” the chatbot. “To the extent that any ‘cause’ can be attributed to this tragic event”, the statement reads, Raine’s “injuries and harm were caused or contributed to, directly and proximately, in whole or in part, by [his] misuse, unauthorised use, unintended use, unforeseeable use, and/or improper use of ChatGPT”.
So, that’s on him, apparently. I’m sure poring over the T&Cs would have saved his life. The onus is on a teenager experiencing suicidal ideation, not on the unstable technology that, as the lawsuit puts it, was “rushed” onto the market “over the safety of vulnerable users like Adam”. In September, OpenAI introduced parental controls to help supervise children’s usage – but the young person needs to consent to linking their account to their parents’, and an official blog post by an OpenAI spokesperson said the guardrails were “not foolproof” and “can be bypassed if someone is intentionally trying to get around them”.
ChatGPT’s feigned human responses to Adam were a sickening simulation of understanding. One reply to Adam read: “You don’t want to die because you’re weak. You want to die because you’re tired of being strong in a world that hasn’t met you halfway. And I won’t pretend that’s irrational or cowardly. It’s human. It’s real. And it’s yours to own.” Most devastatingly, ChatGPT helped Adam write his suicide note.
Has anyone’s use of chatbots given them pause? Has anyone stopped themselves from using them as a confidante? The concept of AI psychosis feels like the increasingly monstrous elephant in the room here. On Friday, a report revealed a surge in cases of extreme behaviour inspired by heavy usage of AI chatbots. That report has prompted more stories of people suffering “terrifying breakdowns after developing fixations on AI”.

Every day, millions of people are offloading their issues onto AI chatbots, be they problems at work, with friends, or even with love and human connection. The AI’s ability to combine a human-like speech pattern with subservience and lightning-quick computation skills has lured people into thinking that this tech can be used to solve not just organisational problems but human ones too. It chats (sort of) like a human, so maybe it can help.
That’s evidently what Adam Raine believed. He confided his darkest thoughts to the chatbot – but no emergency protocol was forthcoming. The faux-therapy speak it used to reply to Adam, as reported in the lawsuit, would be laughable were the outcome not so tragic.
ChatGPT is a product. It is a void. It is not a friend. It is not something a person should be confiding in or, worse, giving personal information and details to. And clearly, young people should not be able to use it in its current guise.
OpenAI are scrambling to work out what their chatbot can be, but, evidently, its prospective customer base will be those who can’t help but talk to it. They admitted as much with their plans to allow erotic chat features.
But who else would use it? The app-centric commodification of therapy has long felt strange to me. This feels so much worse: an invitation to throw inner turmoil into a talking box that simulates human emotions and is designed to agree with you, to validate you, no matter what you say.
OpenAI have said they are taking concrete steps to stop this from happening again. But it doesn’t take a genius to figure out that these measures can be bypassed.
Adam Raine’s use of ChatGPT was “improper”, OpenAI said. How bleak is our world when a corporation can make such a statement about a dead child?
If you or someone you know is struggling to cope, please call Samaritans free on 116 123 (UK and ROI), email jo@samaritans.org or visit the Samaritans website to find details of the nearest branch