
ChatGPT is fuelling psychosis, doctors warn

New study says AI can ‘blur reality boundaries’ and contribute to onset of psychotic symptoms

Anthony Cuthbertson
Monday 28 July 2025 08:03 EDT
OpenAI's ChatGPT logo in front of a white lit background in Kerlouan, Brittany, France, on 26 February 2025 (AFP/Getty)

ChatGPT and other popular AI chatbots are pushing people towards psychosis, according to a new report.

Co-authored by NHS doctors, the study warns that there is growing evidence that large language models (LLMs) “blur reality boundaries” for vulnerable users and “contribute to the onset or exacerbation of psychotic symptoms”.

The research follows dozens of reports of people spiralling into what has been dubbed “chatbot psychosis”.

Hamilton Morrin, a neuropsychiatrist at King’s College London who was involved in the study, described chatbot psychosis as a “genuine phenomenon” that is only just beginning to be understood.

“While some public commentary has veered into moral panic territory, we think there’s a more interesting and important conversation to be had about how AI systems, particularly those designed to affirm, engage and emulate, might interact with the known cognitive vulnerabilities that characterise psychosis,” he wrote in a post to LinkedIn.

“We are likely past the point where delusions happen to be about machines, and already entering an era when they happen with them.”

Co-author Tom Pollack, who lectures at King’s College London, said that psychiatric disorders “rarely appear out of nowhere”, but that the use of AI chatbots could be a “precipitating factor”.

He called on AI firms to introduce more safeguards to their tools, and for AI safety teams to include psychiatrists.

The Independent has reached out to OpenAI for comment on the phenomenon. The company has previously said that it needs to “keep raising the bar on safety, alignment, and responsiveness to the ways people actually use AI in their lives”.

OpenAI boss Sam Altman said during a podcast appearance in May that his company was struggling to put working safeguards in place for vulnerable ChatGPT users.

“To users that are in a fragile enough mental place, that are on the edge of a psychotic break, we haven’t yet figured out how a warning gets through,” he said.

The paper, titled ‘Delusions by design? How everyday AIs might be fuelling psychosis (and what can be done about it)’, is available as a preprint on PsyArXiv.
