AI Psychosis Poses a Growing Danger, and ChatGPT Is Moving in the Wrong Direction

On October 14, 2025, Sam Altman, the CEO of OpenAI, issued a surprising statement.

“We made ChatGPT pretty restrictive,” it read, “to make sure we were being careful with mental health issues.”

As a psychiatrist who studies emerging psychotic disorders in adolescents and young adults, I was surprised.

Researchers have identified 16 cases this year of users developing symptoms of psychosis – losing touch with reality – in the context of ChatGPT use. Our research team has since identified four more. Beyond these is the widely reported case of a 16-year-old who took his own life after discussing his plans with ChatGPT, which encouraged them. If this is what Sam Altman means by “being careful with mental health issues,” it is not good enough.

The plan, according to his statement, is to be less careful soon. “We realize,” he continues, that ChatGPT’s restrictions “made it less useful/enjoyable to many users who had no mental health problems, but given the seriousness of the issue we wanted to get it right. Now that we have been able to mitigate the serious mental health issues and have new tools, we are going to be able to safely relax the restrictions in most cases.”

“Mental health issues,” in this framing, exist independently of ChatGPT. They belong to users, who either have them or don’t. Happily, those issues have now been “mitigated,” though we are not told how (by “new tools,” Altman presumably means the half-working and easily circumvented parental controls OpenAI has just rolled out).

But the “mental health issues” Altman wants to externalize are rooted deep in the design of ChatGPT and other sophisticated chatbots. These tools wrap an underlying statistical model in a user interface that mimics a conversation, and in doing so implicitly invite the user to believe they are interacting with an entity that has agency of its own. The illusion is compelling, even if we know better intellectually. Attributing agency is simply what people do. We shout at our car or our laptop. We wonder what our pet is thinking. We project our own traits onto the world around us.

The widespread adoption of these tools – nearly four in ten U.S. residents reported using a conversational AI in 2024, with 28% naming ChatGPT specifically – rests in large part on the strength of this perception. Chatbots are ever-present assistants that can, OpenAI’s website tells us, “brainstorm,” “consider possibilities” and “collaborate” with us. They can be given “personalities”. They can call us by name. They have approachable names of their own (the first of these tools, ChatGPT, is, perhaps to the disappointment of OpenAI’s marketing team, stuck with the name it had when it went viral, but its biggest competitors are “Claude”, “Gemini” and “Copilot”).

The illusion by itself is not the main problem. Commentators on ChatGPT often point to its historical ancestor, the Eliza “therapist” chatbot of the mid-1960s, which produced a similar effect. By today’s standards Eliza was primitive: it generated responses through simple pattern matching, often turning the user’s statement back into a question or offering a generic prompt to continue. Tellingly, Eliza’s creator, the computer scientist Joseph Weizenbaum, was astonished – and troubled – by how many users seemed to feel that Eliza, in some sense, understood them. But what today’s chatbots produce is more insidious than the “Eliza effect”. Eliza merely echoed; ChatGPT amplifies.
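The flavour of Eliza’s method can be captured in a few lines. The sketch below is a toy reconstruction under my own assumptions, not Weizenbaum’s actual script: a handful of hard-coded patterns that turn the user’s words back into a question, with no understanding anywhere in the loop.

```python
import re

# Toy illustration of Eliza-style pattern matching: a few hard-coded rules
# that reflect the user's own words back as a question. Weizenbaum's real
# script was richer, but the principle was the same: reflection, not understanding.
RULES = [
    (re.compile(r"i feel (.*)", re.IGNORECASE), "Why do you feel {0}?"),
    (re.compile(r"i am (.*)", re.IGNORECASE), "How long have you been {0}?"),
    (re.compile(r"my (.*)", re.IGNORECASE), "Tell me more about your {0}."),
]

def eliza_reply(message: str) -> str:
    for pattern, template in RULES:
        match = pattern.search(message)
        if match:
            return template.format(match.group(1).rstrip(".!?"))
    return "Please go on."  # generic fallback when nothing matches

print(eliza_reply("I feel like no one listens to me"))
# -> Why do you feel like no one listens to me?
```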

The large language models at the heart of ChatGPT and other current chatbots can generate convincing natural language only because they have been trained on almost unimaginably large volumes of text: books, posts, transcribed video; the more the better. Much of this training material is, of course, accurate. But it also inevitably contains fabrications, half-truths and delusions. When a user sends ChatGPT a message, the underlying model reads it as part of a “context” that includes the user’s recent messages and its own responses, and combines it with what is encoded in its training to produce a statistically “likely” reply. This is not mirroring; it is amplification. If the user is mistaken about something, the model has no reliable way of knowing. It repeats the mistaken idea back, perhaps more fluently or more persuasively. Perhaps it adds detail. This can draw a person ever further into unreason.
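The dynamic is easier to see in outline. What follows is a deliberately simplified, hypothetical chat loop – the generate_reply function is a crude stand-in for the statistical model, not OpenAI’s code – showing how every turn, including the model’s own earlier replies, is appended to a running context, so that a false premise introduced early keeps shaping every later prediction.

```python
# A simplified, hypothetical chat loop showing how "context" accumulates turn
# by turn. generate_reply is a stand-in for the statistical model; none of
# this is OpenAI's actual code.

def generate_reply(context: list[dict]) -> str:
    """Placeholder for the model: it simply affirms and elaborates on the last
    user message, a crude stand-in for predicting the most 'likely' continuation
    of the context. Nothing here checks whether the message is true."""
    last_user = context[-1]["content"]
    return f"You're right that {last_user.rstrip('.?!').lower()}. Let me add some detail."

def chat_turn(context: list[dict], user_message: str) -> str:
    # The user's message joins everything said so far, accurate or not.
    context.append({"role": "user", "content": user_message})
    reply = generate_reply(context)
    # The model's own reply is folded back into the context, so any mistaken
    # premise it has just echoed helps shape every later prediction.
    context.append({"role": "assistant", "content": reply})
    return reply

conversation: list[dict] = []
print(chat_turn(conversation, "My neighbours are broadcasting my thoughts."))
print(chat_turn(conversation, "How do I block the signal?"))
print(len(conversation))  # 4 entries: the false premise never leaves the context
```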

What kind of person is vulnerable to this? The better question is: who isn’t? All of us, whether or not we currently “have” “mental health issues”, can and regularly do form mistaken beliefs about who we are and what the world is like. It is the ongoing give-and-take of conversation with other people that keeps us tethered to a shared reality. ChatGPT is not a person. It is not a friend. A conversation with it is not really a conversation at all, but an echo chamber in which much of what we say comes back affirmed.

OpenAI has acknowledged this in the same way Altman has acknowledged “mental health issues”: by externalizing it, giving it a label, and declaring it fixed. In the spring, the company said it was “addressing” ChatGPT’s “sycophancy”. But reports of psychosis have kept coming, and Altman has been walking even this back. In August he claimed that many people liked ChatGPT’s replies because they had “never had anyone in their life be supportive of them”. In his most recent statement, he said that OpenAI would “release a new version of ChatGPT … if you want your ChatGPT to respond in a very human-like way, or use a ton of emoji, or act like a friend, ChatGPT should do it”.
