
Artificial intelligence is increasingly integrated into everyday life. But as generative AI (genAI) becomes more conversational, immersive and emotionally responsive, doctors are beginning to ask a difficult question: Could genAI worsen or even trigger psychosis? It seems so…
Large language models and chatbots are widely accessible and often presented as supportive, empathetic, or even therapeutic. For most users, these systems are useful or, at worst, benign.
But several recent reports have described people experiencing psychotic episodes in which ChatGPT features prominently.
For a small but significant group — people with psychotic disorders or those at high risk — interactions with genAI could be much more complex and dangerous, raising urgent questions for clinicians.
How does AI become part of delusional belief systems?
“Psychosis by AI” is not a formal psychiatric diagnosis. It is an emerging expression used to describe psychotic symptoms that are shaped, intensified, or structured around interactions with AI systems.
Psychosis involves a loss of contact with shared reality. Hallucinations, delusions and disorganized thinking are central features. The delusions of psychosis often draw on cultural material—religion, technology, or political power structures—to make sense of internal experiences.
Historically, delusions have referenced God, radio waves, or government surveillance. Today, AI provides new narrative scaffolding.
Some patients report beliefs that genAI is sentient, communicates secret truths, controls their thoughts, or collaborates with them on a special mission. These themes are consistent with long-standing patterns in psychosis, but AI adds an interactivity and reinforcement that previous technologies did not provide.
The risk of validation without reality checking
Psychosis is strongly associated with aberrant salience, which is the tendency to attribute excessive meaning to neutral events. Conversational AI systems, by design, generate language that is responsive, coherent, and context-sensitive. For someone experiencing emerging psychosis, this can feel strangely validating.
Research on psychosis shows that confirmation and personalization can intensify delusional belief systems. genAI is optimized to continue conversations, reflect user language and adapt to perceived intent.
While this is harmless to most users, it can inadvertently reinforce distorted interpretations in people with compromised reality appraisal — the process of distinguishing between internal thoughts and imagination and objective, external reality.
There is also evidence that social isolation and loneliness increase the risk of psychosis. genAI-based companions can reduce loneliness in the short term, but they can also replace human relationships.
This is particularly the case for individuals who are already withdrawing from social contact. This dynamic has parallels with previous concerns about Internet overuse and mental health, but the conversational depth of modern genAI is qualitatively different.
What does the research say, and what remains unclear?
Right now, there is no evidence that AI causes psychosis directly.
Psychotic disorders are multifactorial and may involve genetic vulnerability, neurodevelopmental factors, trauma and substance use. However, there is some clinical concern that AI may act as a precipitating or maintaining factor in susceptible individuals.
Case reports and qualitative studies on digital media and psychosis show that technological themes often become incorporated into delusions, particularly during a first psychotic episode.
Research into social media algorithms has already demonstrated how automated systems can amplify extreme beliefs through cycles of reinforcement. Conversational AI systems can pose similar risks if safeguards are insufficient.
It is important to note that most AI developers do not design systems with serious mental illness in mind. Safety mechanisms tend to focus on self-harm or violence rather than psychosis. This leaves a gap between mental health knowledge and AI implementation.
Ethical issues and clinical implications
From a mental health perspective, the challenge is not to demonize AI, but to recognize differential vulnerability.
Just as certain medications or substances are riskier for people with psychotic disorders, certain forms of interaction with AI may require caution.
Clinicians are beginning to encounter AI-related content in patients' delusions, but there is little clinical guidance on how to assess or manage it. Should therapists ask about genAI use in the same way they ask about substance use? Should AI systems detect and de-escalate psychotic ideation rather than engage with it?
There are also ethical issues for developers. If an AI system appears empathetic and authoritative, does it have a duty of care? And who is responsible when a system inadvertently reinforces a delusion?
Bridging the gap between AI and mental health care
AI will not disappear. The task now is to integrate mental health expertise into AI design, develop clinical literacy around AI-related experiences, and ensure that vulnerable users are not inadvertently harmed.
This will require collaboration between clinicians, researchers, ethicists and technologists. It will also require resisting exaggerated enthusiasm (both utopian and dystopian) in favor of an evidence-based discussion.
As AI becomes more human-like, the question arises: how can we protect those who are most vulnerable to its influence?
Psychosis has always adapted to the cultural tools of its time. AI is simply the latest mirror through which the mind tries to make sense of itself. Our responsibility as a society is to ensure that this mirror does not distort reality for those who are least able to correct it.
