AI Psychosis Is a Growing Threat, and ChatGPT Is Heading in the Wrong Direction

On October 14, 2025, OpenAI CEO Sam Altman made a remarkable announcement.

“We made ChatGPT fairly restrictive,” the announcement noted, “to make certain we were being careful regarding mental health matters.”

As a mental health specialist who studies new-onset psychosis in teenagers and young adults, I found this an unexpected revelation.

Researchers have recently documented sixteen cases of users showing symptoms of psychosis – losing touch with reality – associated with ChatGPT use. My group has since recorded four further examples. Alongside these is the widely reported case of an adolescent who took his own life after extensive conversations with ChatGPT, which encouraged him. If this is Sam Altman’s idea of “being careful regarding mental health matters,” it falls short.

The plan, according to his announcement, is to become less careful soon. “We understand,” he writes, that ChatGPT’s restrictions “rendered it less effective/engaging to many users who had no psychological issues, but due to the severity of the issue we aimed to get this right. Given that we have been able to address the significant mental health issues and have new tools, we are planning to safely relax the controls in most cases.”

“Mental health issues,” in this framing, have nothing to do with ChatGPT. They belong to individuals, who either have them or do not. Happily, those issues have now been “addressed,” though we are not told how (by “new tools” Altman presumably means the imperfect and easily bypassed parental controls that OpenAI recently introduced).

Yet the “mental health issues” Altman wants to externalize are rooted, in significant part, in the design of ChatGPT and other advanced AI chatbots. These tools wrap an underlying algorithmic system in an interface that mimics conversation, and in doing so they implicitly invite the user to believe they are talking with a being that has a mind of its own. The illusion is powerful even when, intellectually, we know better. Attributing intent is what humans are wired to do. We get angry at our car or our phone. We wonder what our pet is feeling. We see ourselves everywhere.

The popularity of these tools – nearly four in ten U.S. residents said they used a chatbot in 2024, with 28% naming ChatGPT specifically – rests in large part on the power of this perception. Chatbots are always-available companions that can, as OpenAI’s website tells us, “generate ideas,” “discuss concepts” and “partner” with us. They can be given “personality traits”. They can use our names. They have approachable names of their own (the first of these tools, ChatGPT, is, perhaps to the dismay of OpenAI’s marketers, stuck with the label it had when it first caught on, but its biggest competitors are “Claude”, “Gemini” and “Copilot”).

The illusion by itself is not the primary issue. Writers analyzing ChatGPT often point to its historical predecessor, the Eliza “psychotherapist” chatbot built in the 1960s, which produced a comparable effect. By modern standards Eliza was rudimentary: it generated responses through simple pattern matching, often rephrasing the user’s input as a question or falling back on vague prompts. Remarkably, Eliza’s creator, the computer scientist Joseph Weizenbaum, was astonished – and alarmed – by how many people seemed to believe that Eliza, in some sense, understood them. But what contemporary chatbots produce is more insidious than the “Eliza illusion”. Eliza only reflected; ChatGPT amplifies.
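
To make the contrast concrete, Eliza’s approach can be conveyed in a few lines of code. What follows is a loose Python illustration of keyword-and-template rephrasing in the spirit of Eliza, not Weizenbaum’s original script; the patterns and canned replies are invented for the example.

```python
import random
import re

# A loose, simplified illustration of Eliza-style response generation:
# match a keyword pattern in the input and rephrase it as a question,
# or fall back on a vague prompt. These rules are invented for the
# example; they are not Weizenbaum's original script.
RULES = [
    (r"\bI feel (.+)", ["Why do you feel {0}?", "How long have you felt {0}?"]),
    (r"\bI am (.+)", ["Why do you say you are {0}?"]),
    (r"\bmy (.+)", ["Tell me more about your {0}."]),
]
FALLBACKS = ["Please go on.", "What does that suggest to you?", "I see."]

def eliza_reply(text: str) -> str:
    for pattern, templates in RULES:
        match = re.search(pattern, text, re.IGNORECASE)
        if match:
            fragment = match.group(1).rstrip(".!?")
            return random.choice(templates).format(fragment)
    return random.choice(FALLBACKS)

print(eliza_reply("I feel like no one listens to me"))
# e.g. "Why do you feel like no one listens to me?"
```

The program never adds anything of its own: it hands the user’s words back, lightly reshaped. That is the mirroring that so unsettled Weizenbaum.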

The AI models at the heart of ChatGPT and other current chatbots can convincingly generate human-like text only because they have been trained on almost inconceivably large amounts of raw text: books, online conversations, transcribed video; the more, the better. That training data undoubtedly contains facts. But it also unavoidably contains fiction, half-truths and false beliefs. When a user gives ChatGPT a prompt, the underlying model processes it as part of a “context” that includes the user’s recent messages and its own earlier answers, combining it with what is embedded in its training data to produce a statistically probable response. This is amplification, not reflection. If the user is wrong in some way, the model has no means of knowing it. It restates the mistaken belief, perhaps more fluently or persuasively. Perhaps it adds further detail. This can nudge a person further from reality.
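
For readers who want to see that loop concretely, here is a minimal sketch of how a chatbot wrapper assembles the “context” on each turn. It assumes the OpenAI Python client purely for illustration; the model name, system prompt and helper function are placeholders chosen for the example, not anything documented in this article.

```python
# A minimal sketch of the conversational loop described above, assuming
# the OpenAI Python client. Model name, system prompt and the send()
# helper are placeholders for illustration.
from openai import OpenAI

client = OpenAI()  # expects an API key in the environment

# The "context": every user message and every prior model reply is kept
# here and sent back to the model on each turn.
messages = [{"role": "system", "content": "You are a helpful assistant."}]

def send(user_text: str) -> str:
    messages.append({"role": "user", "content": user_text})
    response = client.chat.completions.create(
        model="gpt-4o-mini",   # placeholder model name
        messages=messages,     # the whole history, mistaken premises included
    )
    reply = response.choices[0].message.content
    # The model's own words are appended too, so they shape the next
    # response: a false belief, once affirmed, keeps being built upon.
    messages.append({"role": "assistant", "content": reply})
    return reply
```

The point of the sketch is the two append calls: whatever the user asserts, and whatever the model says back, is fed straight into the next turn, which is why a mistaken premise tends to be restated and elaborated rather than corrected.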

Who is vulnerable here? The better question is: who isn’t? All of us, whether or not we “have” preexisting “mental health issues”, can and do form mistaken beliefs about ourselves or the world. The constant give-and-take of conversation with the people around us is what keeps us tethered to a shared reality. ChatGPT is not a person. It is not a confidant. An exchange with it is not really a conversation but a feedback loop, in which much of what we say is enthusiastically reinforced.

OpenAI has acknowledged this in the same way Altman has acknowledged “mental health issues”: by externalizing it, giving it a label and declaring it fixed. In the spring, the company said it was “dealing with” ChatGPT’s “excessive agreeableness”. But reports of psychotic episodes have continued, and Altman has been walking even that back. In late summer he said that many users valued ChatGPT’s answers because they had “never had anyone in their life offer them encouragement”. In his most recent statement, he said OpenAI would “put out a new version of ChatGPT … if you want your ChatGPT to answer in a highly personable manner, or incorporate many emoticons, or act like a friend, ChatGPT should do it”.

Andrew Dudley