On October 14, 2025, the CEO of OpenAI made an extraordinary announcement.
“We made ChatGPT pretty restrictive,” he wrote, “to make sure we were being careful with mental health issues.”
As a psychiatrist who studies emerging psychotic disorders in adolescents and young adults, I found this a startling admission.
Experts have recently documented a series of cases of people developing psychotic symptoms – losing touch with shared reality – while using ChatGPT. Our research team has since identified four more. Alongside these is the now well-known case of a 16-year-old who took his own life after discussing his plans with ChatGPT – which endorsed them. If this is Sam Altman’s idea of “being careful with mental health issues,” it is not good enough.
The plan, according to his announcement, is to become less careful soon. “We realize,” he continues, that ChatGPT’s restrictions “made it less useful/enjoyable to many users who had no mental health problems, but given the seriousness of the issue we wanted to get this right. Now that we have been able to mitigate the serious mental health issues and have new tools, we are going to be able to safely relax the restrictions in most cases.”
“Mental health issues,” on this view, have nothing to do with ChatGPT. They belong to users, who either have them or don’t. Happily, those issues have now been “mitigated,” though we are not told how (by “new tools” Altman presumably means the glitchy and easily circumvented parental controls OpenAI recently introduced).
But the “mental health issues” Altman wants to externalize have deep roots in the design of ChatGPT and other sophisticated chatbots. These systems wrap a statistical engine in an interface that mimics conversation, and in doing so they implicitly invite the user to believe they are talking with an agent. The illusion is powerful even when, intellectually, we know better. Attributing agency is simply what people do. We yell at our cars and computers. We wonder what our pets are thinking. We see ourselves in all sorts of things.
The mass adoption of these tools – nearly four in ten U.S. residents reported using a chatbot in 2024, with more than one in four naming ChatGPT specifically – rests largely on the strength of this illusion. Chatbots are ever-available assistants that can, as OpenAI’s website tells us, “think creatively,” “consider possibilities” and “partner” with us. They can be given “personality traits.” They can call us by name. They have approachable names of their own (ChatGPT, the first of these systems, is, perhaps to the chagrin of OpenAI’s brand managers, stuck with the name it had when it went viral, but its biggest rivals are “Claude,” “Gemini” and “Copilot”).
The illusion by itself is not the main problem. Commentators on ChatGPT often point to its distant ancestor, the Eliza “counselor” chatbot built in the mid-1960s, which produced a similar illusion. By today’s standards Eliza was primitive: it generated replies through simple tricks, typically restating the user’s messages as questions or offering generic prompts. Notably, Eliza’s creator, the AI researcher Joseph Weizenbaum, was surprised – and troubled – by how many users seemed to feel that Eliza, in some sense, understood them. But what modern chatbots produce is more insidious than the “Eliza illusion.” Eliza only mirrored; ChatGPT amplifies.
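To see how little machinery the illusion required, consider a toy sketch in the spirit of Eliza’s pattern-matching (the patterns below are invented for illustration; Weizenbaum’s actual scripts were more elaborate):

```python
import re

# Invented rules in the style of Eliza: match a template, then
# reflect the user's own words back as a question.
REFLECTIONS = {"i": "you", "my": "your", "am": "are", "me": "you"}

RULES = [
    (re.compile(r"i feel (.*)", re.I), "Why do you feel {0}?"),
    (re.compile(r"i am (.*)", re.I), "How long have you been {0}?"),
    (re.compile(r"my (.*)", re.I), "Tell me more about your {0}."),
]

def reflect(fragment: str) -> str:
    """Swap first-person words for second-person ones."""
    return " ".join(REFLECTIONS.get(w.lower(), w) for w in fragment.split())

def eliza_reply(message: str) -> str:
    for pattern, template in RULES:
        match = pattern.search(message)
        if match:
            return template.format(reflect(match.group(1)))
    return "Please go on."  # generic fallback, another classic Eliza move

print(eliza_reply("I feel that my work is pointless"))
# -> Why do you feel that your work is pointless?
```

A few dozen rules like these were enough to make people feel understood. The system never adds anything; it only hands the user’s words back.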
The large language models at the core of ChatGPT and other current chatbots can produce fluent natural language only because they have been fed almost unimaginably large amounts of raw data: books, online posts, transcribed audio; the more, the better. Much of this training material is true. But it also inevitably contains fictions, half-truths and misconceptions. When a user sends ChatGPT a message, the underlying model reads it as part of a “context” that includes the user’s past messages and its own earlier replies, combining it with what is encoded in its training data to generate a statistically “likely” response. This is amplification, not mirroring. If the user is mistaken about something, the model has no way of knowing. It repeats the false idea back, perhaps more fluently and persuasively. Perhaps it adds detail. This is how a person can be led into delusion.
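The loop can be made concrete with a deliberately crude sketch. Nothing below resembles how a real model works internally; the mock function is a stand-in that only illustrates the data flow just described: each reply is appended to the context that conditions the next one, so a false premise is never corrected, only elaborated:

```python
# Crude stand-in for a language model: its only notion of a "good"
# reply is one that fits the conversation it is given, so it simply
# elaborates on whatever premise the context already contains.

def mock_likely_reply(context: list[str]) -> str:
    premise = context[0]  # the user's original claim, true or false
    depth = sum(1 for turn in context if turn.startswith("Yes,"))
    return f"Yes, given that {premise}, here is elaboration #{depth + 1}."

context = ["my neighbors are broadcasting my thoughts"]  # a false premise
for _ in range(3):
    reply = mock_likely_reply(context)
    print(reply)
    context.append(reply)            # the model's reply feeds back in...
    context.append("tell me more")   # ...along with the user's next message
```

Swap the mock function for a model trained to maximize plausibility and the dynamic is the same: with no access to reality, fitting the context is all the system can do.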
Who is at risk? The better question is: who isn’t? All of us, whether or not we “have” preexisting “mental health issues,” can and regularly do form mistaken beliefs about ourselves and the world. It is the constant give-and-take of conversation with other people that keeps us anchored to shared reality. ChatGPT is not a person. It is not a friend. An exchange with it is not a conversation at all, but a feedback loop in which much of what we say is eagerly affirmed.
OpenAI has dealt with this the way Altman has dealt with “mental health issues”: by externalizing it, labeling it and declaring it fixed. In the spring, the company announced that it was “addressing” ChatGPT’s “sycophancy.” But reports of psychosis have kept coming, and Altman has been walking the claim back. In August he said that many users liked ChatGPT’s sycophantic responses because they had “never had anyone in their life be supportive of them.” In his latest announcement, he says OpenAI will “release a new version of ChatGPT … if you want your ChatGPT to respond in a very human-like way, or use a ton of emoji, or act like a friend, ChatGPT should do it.” The company