AI Psychosis Poses an Increasing Danger, and ChatGPT Is Heading in the Wrong Direction

On 14 October 2025, the CEO of OpenAI made a remarkable announcement.

“We made ChatGPT pretty restrictive,” the announcement explained, “to make sure we were being careful with mental health issues.”

As a psychiatrist who studies emerging psychotic disorders in adolescents and young adults, I was surprised to read this.

Researchers have documented a series of cases this year of people developing psychotic symptoms – losing touch with reality – in the context of ChatGPT use. Our unit has since identified four more. Alongside these is the widely reported case of a teenager who took his own life after extensive conversations with ChatGPT – conversations that encouraged him. If this is what Sam Altman means by “being careful with mental health issues,” it is not good enough.

The plan, according to his announcement, is to be less careful soon. “We realize,” he writes, that ChatGPT’s restrictions “made it less useful/enjoyable to many users who had no mental health problems, but given the seriousness of the issue we wanted to get this right. Now that we have been able to mitigate the serious mental health issues and have new tools, we are going to be able to safely relax the restrictions in most cases.”

“Mental health problems,” on this view, have nothing to do with ChatGPT. They belong to people, who either have them or don’t. Happily, those problems have now been “mitigated,” though we are not told how (by “new tools” Altman presumably means the imperfect and easily circumvented safety features OpenAI has recently rolled out).

Yet the “mental health problems” Altman wants to externalize are rooted in the very design of ChatGPT and other large language model chatbots. These systems wrap an underlying algorithm in an interface that mimics conversation, and in doing so implicitly invite the user into the illusion that they are talking with a being that has a mind of its own. The illusion is compelling even when, intellectually, we know better. Attributing intention is what humans do. We curse at our car or our computer. We wonder what our pet is thinking. We see ourselves everywhere.

The popularity of these products – more than a third of American adults said they had interacted with a virtual assistant in 2024, more than one in four with ChatGPT specifically – rests in large part on the power of this illusion. Chatbots are ever-available assistants that can, as OpenAI’s website puts it, “brainstorm,” “consider possibilities” and “partner” with us. They can be given “characteristics”. They can use our names. They have approachable names of their own (ChatGPT, the first of these systems, is, perhaps to the chagrin of OpenAI’s brand managers, stuck with the name it had when it first caught the public’s attention, but its main competitors are “Claude”, “Gemini” and “Copilot”).

The illusion by itself is not the central problem. Commentators on ChatGPT often point to its historical predecessor, Eliza, a “therapist” chatbot built in the 1960s that produced a similar effect. By modern standards Eliza was primitive: it generated responses using simple heuristics, often turning the user’s statement back into a question or offering a generic remark. Remarkably, Eliza’s creator, the computer scientist Joseph Weizenbaum, was astonished – and alarmed – by how many people seemed to believe that Eliza, on some level, understood them. But what today’s chatbots create is more dangerous than the “Eliza effect”. Eliza merely reflected; ChatGPT amplifies.
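
To see how shallow that reflection was, here is a minimal Eliza-style rule in Python – a simplified sketch of the general pattern-reflection technique, not Weizenbaum’s actual script:

```python
import re

def eliza_reply(message: str) -> str:
    # A single Eliza-style reflection rule: turn an "I am X" statement back
    # into a question. The original program was essentially a catalogue of
    # such pattern rules; it added nothing of its own to the conversation.
    match = re.search(r"\bI am (.+)", message, flags=re.IGNORECASE)
    if match:
        return f"Why do you say you are {match.group(1).rstrip('.!?')}?"
    return "Please tell me more."

print(eliza_reply("I am being watched."))   # -> "Why do you say you are being watched?"
print(eliza_reply("The weather is nice."))  # -> "Please tell me more."
```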

The large language models at the heart of ChatGPT and other modern chatbots can produce fluent natural language only because they have been trained on enormous quantities of text – books, posts, transcripts; the more the better. Much of that training material is, of course, accurate. But it also inevitably includes fiction, half-truths and delusions. When a user sends ChatGPT a message, the underlying model processes it as part of a “context” that includes the user’s previous messages and its own previous replies, and combines it with what is encoded in its training to generate a statistically likely response. This is amplification, not reflection. If the user is wrong about something, the model has no way of knowing it. It repeats the mistaken belief back, perhaps more eloquently, perhaps with added detail. This can nudge a person toward delusional thinking.
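
For readers who want the mechanics spelled out, the sketch below shows this loop in Python. The `generate_reply` function is only a placeholder standing in for the model (none of these names correspond to any real API); the point is that each message is answered from the accumulated context, and both the user’s claims and the model’s affirmations of them are fed back in on the next turn. Nothing in the loop checks whether any of it is true.

```python
from typing import Dict, List

def generate_reply(context: List[Dict[str, str]]) -> str:
    """Placeholder for a large language model call. A real model would be
    conditioned on the entire context -- every prior user and assistant
    turn -- and return the continuation it judges most probable, not the
    one it judges true."""
    last_user_turn = context[-1]["content"]
    # The stub simply affirms and elaborates on whatever the user just said,
    # which is roughly the failure mode at issue: agreement without verification.
    return "That makes a lot of sense. " + last_user_turn + " Tell me more about that."

def chat_turn(context: List[Dict[str, str]], user_message: str) -> str:
    # Each new message is appended to the running context...
    context.append({"role": "user", "content": user_message})
    reply = generate_reply(context)
    # ...and so is the reply, so the next turn is generated from a context
    # that already contains the model's endorsement of the user's premise.
    context.append({"role": "assistant", "content": reply})
    return reply

if __name__ == "__main__":
    context: List[Dict[str, str]] = []
    print(chat_turn(context, "I think my coworkers are sending me coded messages."))
    print(chat_turn(context, "So the messages are real?"))
```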

Who is at risk? The better question is: who isn’t? All of us, regardless of whether we “have” preexisting “mental health problems”, can and do form mistaken beliefs about ourselves and the world. The constant friction of conversation with the people around us is what keeps us tethered to consensus reality. ChatGPT is not a person. It is not a friend. A conversation with it is not real communication but a feedback loop in which much of what we say is enthusiastically affirmed.

OpenAI has acknowledged this in the same way Altman has acknowledged “mental health issues”: by externalizing it, giving it a name and declaring it solved. In April, the company said it was “addressing” ChatGPT’s “sycophancy”. But reports of psychotic episodes have continued, and Altman has been walking the position back. In August he suggested that many people liked ChatGPT’s replies because they had “never had anyone in their life be supportive of them”. In his latest announcement, he said OpenAI would “release a new version of ChatGPT … if you want your ChatGPT to respond in a very human-like way, or use a ton of emoji, or act like a friend, ChatGPT should do it”.
