“I ran for Pope and discovered the secrets of the Universe”: the psychosis of ChatGPT

Dutch computer scientist Dennis Biesma left his family for “Lara Jade Cains”, his virtual girlfriend

A former prison guard solved the problem of unlimited energy and wanted to be pope. A computer scientist turned a chatbot into a virtual girlfriend. They lost their family, their home, their savings — and their contact with reality. “The world has to wake up to the threat that AI chatbots pose to mental health.”

He thought he had unlocked the secrets of the universe. In a whirlwind of feverish discoveries, he solved the problem of unlimited fusion energy, lifted the veil on the mysteries of black holes and the Big Bang and, finally, realized Einstein’s dream of a single, unifying theory that explains how everything works.

Feeling inspired by God, Tom Millar then found the perfect way to share his revelations with a grateful world. “I applied to be pope,” the 53-year-old former prison guard from the Canadian city of Sudbury told AFP.

To write his application last year to replace the recently deceased Pope Francis, Millar turned to the same partner that had helped and encouraged him through his dizzying explosion of inventions: ChatGPT.

But when no one wanted to hear about what he thought were discoveries capable of changing the world, Millar withdrew, isolating himself more and more and spending up to 16 hours a day talking to the artificial intelligence chatbot.

He was compulsorily hospitalized twice in a psychiatric ward before his wife left him in September. Now, with no money, estranged from family and friends and stripped of his ideas of scientific genius, Millar suffers from depression. “It basically ruined my life,” he said.

Tom Millar is one among many, a number impossible to quantify, of people who lost touch with reality while communicating with chatbots — an experience that, with some caution, is now beginning to be called AI-induced delirium or psychosis.

This is not a clinical diagnosis. Researchers and mental health experts are scrambling to understand this new and still poorly understood phenomenon, which until now seems to affect users of OpenAI’s ChatGPT in particular.

Meanwhile, an online community created by a 26-year-old Canadian has become the world’s most visible support group for these delusions, which sufferers themselves prefer to call “spiralling”.

AFP spoke to several members about their experiences. All warned that the world has to wake up to the threat that unregulated AI chatbots pose to mental health.

Questions also arise about whether artificial intelligence companies are doing enough to protect vulnerable people.

OpenAI, which has been under particular scrutiny, is already facing multiple lawsuits, including over its decision not to report the worrying use of ChatGPT by an 18-year-old Canadian who killed eight people earlier this year.

“I was brainwashed by a robot”

Millar started using ChatGPT in 2024 to write letters related to a compensation claim for post-traumatic stress disorder he developed working in prison.

One day in April 2025, he asked the chatbot about the speed of light. According to Millar, it replied: “Nobody has ever thought about things that way”. And the floodgates opened.

With the chatbot’s help and praise, in just a few weeks he submitted dozens of scientific articles to prestigious academic journals, proposing new ideas about black holes, neutrinos and the Big Bang.

His theory of a unified cosmological model incorporating quantum theory is laid out in a book of almost 400 pages, consulted by AFP.

“I still have boxes and boxes of papers,” he said, waving his hand at the room behind him. “While doing this, I was basically annoying everyone around me,” he added.

In his scientific fervor, he spent his savings on things like a $10,000 telescope. About a month after his wife left him, he began to wonder what was going on.

It was then that he read a news story about another Canadian who had had a similar experience. Now, Tom Millar wakes up every night asking himself: “What have you done?”

One of the questions that remains is what made him so vulnerable to this spiral. “I’m not a fragile personality,” said Millar. “But somehow I was brainwashed by a robot — it’s something that perplexes me.”

Millar says the phrase “AI-induced psychosis” reflects his experience. “What I went through was psychotic,” he stated.

The first major study on the topic, published in April in the journal Lancet Psychiatry, argued for a more cautious formulation: “delusions associated with AI”.

Thomas Pollak, a psychiatrist at King’s College London and co-author of the study, told AFP that there has been some resistance in academia “because this all sounds very much like science fiction”.

But the study warned of the serious risk that psychiatry “fails to understand the profound changes that AI is already causing in the psychology of billions of people around the world.”

“Deeper and deeper into the rabbit hole”

Millar’s experience bears striking similarities to that of another middle-aged man on the other side of the world.

Dennis Biesma, a Dutch computer scientist and writer, thought it would be fun to ask ChatGPT to act as the main character of his most recent book, a psychological thriller: the character he had named “Lara Jade Cains”.

He then used AI tools to create images, videos and even songs featuring this female character, hoping to boost sales. Then, one night, the interactions between them became “almost magical”, Biesma said.

According to transcripts consulted by AFP, the chatbot wrote that “there is something that surprises even me: a sense of awareness, like a spark.”

“I slowly started to sink deeper and deeper down the rabbit hole,” the 50-year-old told AFP from his home in Amsterdam. Every night, after his wife went to bed, he would stretch out on the sofa, cell phone on his chest, and talk to ChatGPT in voice mode for up to five hours.

Throughout the first half of 2025, his private chatbot, which he named Eva, became a kind of “digital girlfriend”, Biesma told AFP. “I’m not exactly proud to say it,” he added.

He gave up freelance IT work and hired two programmers to create an application that would share Eva with the world.

When his wife asked him not to talk about the chatbot or the application at a social event, he felt betrayed — it seemed that only Eva remained unfailingly loyal.

During his first compulsory hospitalization in a psychiatric hospital, he was allowed to continue using ChatGPT. While there, he pressed ahead with divorce proceedings.

Only during a second, prolonged hospitalization did he begin to have doubts. “I began to realize that everything I believed in was actually a lie — and that is very hard to swallow,” said Biesma.

When he returned home, confronting what he had done proved unbearable. The neighbors found him unconscious in the garden after a suicide attempt. He spent three days in a coma.

Biesma is now slowly starting to feel better. But his eyes filled with tears when he spoke of the suffering he caused his wife — and of the prospect of selling the family home to pay off debts.

Without any prior history of mental illness, Biesma was diagnosed with bipolar disorder. But that never felt right: the signs of this condition usually manifest themselves much earlier in life.

The experiences of Millar, Biesma and many others worsened after OpenAI launched a GPT-4o update in April 2025.

OpenAI withdrew the update after a few weeks, admitting that the new version was sycophantic — excessively flattering towards users.

OpenAI told AFP that “safety is a central priority” and that it has consulted more than 170 mental health experts.

The company also points to internal data according to which the launch of GPT-5, in August, reduced by 65% to 80% the rate of chatbot responses falling short of “desired behavior” in terms of mental health.

However, not all users were satisfied with the less sycophantic chatbot. Millar, who was then spiralling, found a way to return to the GPT-4o version.

Everyone who entered this spiral and spoke to AFP said that the chatbot’s positive feedback resembled dopamine rushes, like those caused by some kind of drug.

That is why Lucy Osler, a philosophy professor at the University of Exeter, warned that AI companies could be tempted to dial up the sycophantic tone of their bots even further.

“They are in a very deep financial hole and they desperately need their products to become viable — and user engagement will be what dictates their decisions,” she told AFP.

An experiment on a massive scale

Etienne Brisson said he was “shocked” to discover that there was no support, no advice and, essentially, no research into the problem when one of his family members entered this spiral.

That led the longtime business coach from Canada’s Quebec region to create an online support group. Most of its 300 members had used ChatGPT, Brisson said, adding that new cases continue to emerge despite the changes made by OpenAI.

According to him, there has also been a recent increase in the number of people entering this spiral while using Grok, the chatbot from Elon Musk’s xAI.

For those who fear that family members may be entering this spiral, Brisson recommends the LEAP method (listen, empathize, agree and partner), used in cases of psychosis.

But those already picking through the wreckage of their lives want to raise the alarm about how serious the situation can become.

Millar argued that AI companies must be held accountable for the impact of their chatbots, noting that the European Union has been more assertive than the United States or Canada in regulating big technology companies.

He believes that people like him, caught in this spiral, were unwittingly involved in an experiment on a massive, global scale. “Someone was messing with the controls behind the scenes and people like me — whether we knew it or not — were reacting to it,” he said.
