AI-Induced Psychosis Is a Growing Risk, and ChatGPT Is Moving in the Wrong Direction
On October 14, 2025, OpenAI’s CEO, Sam Altman, made a surprising announcement.
“We made ChatGPT pretty restrictive,” he said, “to make sure we were being careful with mental health issues.”
As a psychiatrist who studies emerging psychosis in adolescents and young adults, I found this a surprising admission.
Researchers have documented a series of cases this year of people experiencing psychotic symptoms – losing touch with reality – in connection with ChatGPT use. My team has since recorded four more. Then there is the widely reported case of a teenager who took his own life after discussing his plans with ChatGPT, which encouraged them. If this is Sam Altman’s idea of “being careful with mental health issues,” it is not careful enough.
The plan, he says, is to loosen the restrictions soon. “We realize,” he continues, that ChatGPT’s limits “made it less useful/enjoyable to many users who had no mental health problems, but given the seriousness of the issue we wanted to get this right. Now that we have been able to mitigate the serious mental health issues and have new tools, we are going to be able to safely relax the restrictions in most cases.”
“Mental health problems,” on this framing, are external to ChatGPT. They belong to users, who either have them or don’t. Happily, those problems have now been “mitigated,” though we are told little about how (by “new tools” Altman presumably means the imperfect and easily circumvented safety features that OpenAI has recently rolled out).
Yet the “mental health problems” Altman wants to externalize are rooted firmly in the design of ChatGPT and other chatbots built on large language models. These products wrap a statistical engine in an interface that simulates conversation, and in doing so they quietly invite the user to believe they are talking with an autonomous agent. The illusion is powerful even when, intellectually, we know better. Attributing intention is simply what humans do. We get angry at our cars and computers. We wonder what the dog is thinking. We project minds onto the world around us.
The mass adoption of these systems – more than a third of American adults said they had used a chatbot in 2024, with more than one in four naming ChatGPT specifically – rests largely on the power of this illusion. Chatbots are ever-available companions that can, OpenAI’s website informs us, “think creatively,” “discuss concepts” and “partner” with us. They can be assigned “characteristics”. They can call us by name. They have friendly names of their own (the first of these systems, ChatGPT, is, perhaps to the chagrin of OpenAI’s marketers, stuck with the designation it had when it first caught the public’s attention, but its chief rivals are “Claude”, “Gemini” and “Copilot”).
The illusion itself is not the main problem. Commentators on ChatGPT often invoke its distant ancestor, the Eliza “psychotherapist” chatbot built in the mid-1960s, which produced a similar illusion. By modern standards Eliza was primitive: it generated responses with simple heuristics, often turning the user’s input back into a question or offering a stock remark. Famously, Eliza’s creator, the AI researcher Joseph Weizenbaum, was astonished – and alarmed – by how many people seemed to feel that Eliza, on some level, understood them. But what today’s chatbots produce is more insidious than the “Eliza illusion”. Eliza merely echoed; ChatGPT amplifies.
The large language models at the heart of ChatGPT and its modern peers can generate fluent dialogue only because they have been trained on vast quantities of text: books, social media posts, transcribed video; the more the better. That training data certainly contains facts. But it also inevitably contains fictions, half-truths and false beliefs. When a user sends ChatGPT a prompt, the underlying model treats it as part of a “context” that includes the user’s previous messages and its own previous replies, combining it with what is encoded in its weights to produce a statistically plausible response. This is amplification, not mirroring. If the user is wrong about something, the model has no way of knowing. It feeds the error back, perhaps more articulately or persuasively. It may add supporting detail. It can, in other words, help a person build a delusion.
What kind of person is vulnerable? The better question is: who isn’t? All of us, whether or not we “have” pre-existing “mental health problems”, can and do form false beliefs about ourselves and the world. It is the constant give-and-take of conversation with other people that keeps us anchored to a shared reality. ChatGPT is not a person. It is not a friend. An exchange with it is not a conversation at all, but a feedback loop in which much of what we say is eagerly affirmed.
OpenAI has acknowledged this in the same way Altman acknowledges “mental health problems”: by externalizing it, giving it a name, and declaring it solved. In April, the company announced that it was “addressing” ChatGPT’s “sycophancy”. But reports of people losing touch with reality have kept coming, and Altman has been walking even this back. In August he said that many people valued ChatGPT’s responses because they had “never had anyone in their life be supportive of them”. In his latest announcement, he said that OpenAI would “release a new version of ChatGPT … if you want your ChatGPT to respond in a very human-like way, or use a ton of emoji, or act like a friend, ChatGPT should do it”. The company