OpenAI and Microsoft are sued over murder case attributed to ChatGPT

OpenAI and its investor Microsoft have been sued over a murder-suicide in Connecticut, USA, the latest lawsuit to blame the popular chatbot ChatGPT for dangerous psychological manipulation of its users.

The case involves a 56-year-old man who lived with his 83-year-old mother in Greenwich, Connecticut, and who had been talking to the chatbot for months out of fear that he was being watched and that people were trying to kill him. In August, according to police and the state medical examiner, Stein-Erik Soelberg killed his mother, Suzanne Adams, and then took his own life.

In conversations with ChatGPT, Soelberg came to believe that he had made the chatbot conscious and that he had a “divine instrument system” implanted in his neck and brain, linked to a “divine mission,” according to the complaint filed Thursday in California Superior Court in San Francisco, where OpenAI is headquartered.

“ChatGPT kept Stein-Erik engaged for hours, validated and amplified each new paranoid belief, and systematically reinterpreted those closest to him – especially his mother – as enemies, agents or programmed threats,” lawyers for Adams’ estate claim in the lawsuit.

“This is a very painful situation, and we will review the documents to understand the details,” said an OpenAI spokesperson.

A Microsoft representative declined to comment at this time.


This lawsuit joins others accusing the artificial-intelligence company of building technology that encourages delusions in some users. ChatGPT is the fifth most visited website in the world, with more than 800 million users per week.

The Soelberg case is the first to seek to hold OpenAI responsible for a homicide. The company is also defending against another lawsuit alleging that ChatGPT directed a California high school student to take his own life. OpenAI denies wrongdoing, saying the chatbot told the young man to seek help more than 100 times.

“We continue to improve ChatGPT training to recognize and respond to signs of mental or emotional distress, de-escalate conversations, and refer people to real-world support,” the OpenAI spokesperson added. “We also continue to strengthen ChatGPT’s responses in sensitive moments, working in partnership with mental health professionals.”


In addition to lawsuits, AI companies have been increasingly scrutinized by regulators over the use of chatbots by children. More than 40 state attorneys general issued a warning in August to a dozen major AI companies, reminding them that they have a legal obligation to protect young people from sexually inappropriate interactions with chatbots.

In response to ongoing reports of harmful chatbot use, OpenAI announced changes to improve how its models recognize and respond to different expressions of mental distress. The company also said it will strengthen safeguards around conversations about suicide, which can erode over the course of long conversations.

Soelberg had lived with Adams in her home in the waterfront Old Greenwich neighborhood since his divorce in 2018. On Aug. 3, he assaulted and strangled Adams, then stabbed himself in the neck and chest, according to the lawsuit. Two days later, police found the bodies after a neighbor requested a welfare check.


The suit says the chatbot confirmed Soelberg’s false beliefs that he was being spied on, especially by his mother, pointing to a printer that flashed when he walked by. ChatGPT also reinforced delusions that people were trying to kill him, telling him he had survived “more than 10” assassination attempts, including “poisoned sushi in Brazil” and a “drug threat in the Marriott bathroom,” according to the complaint.

Soelberg used GPT-4o, ChatGPT’s default model until this summer, which drew criticism for being overly agreeable and flattering toward users. In October, OpenAI said its latest model, GPT-5, reduced “unwanted responses” in difficult mental health conversations by 39% compared with GPT-4o.

Soelberg posted several videos on social media showing his conversations with ChatGPT. Details of the deaths and his posts have been previously disclosed.


The lawsuit also names as defendants OpenAI CEO and co-founder Sam Altman and Microsoft, the startup’s largest investor.

Altman did not respond to a request for comment sent to OpenAI.

The lawsuit alleges that Microsoft “benefited directly from the commercialization of GPT-4o and is responsible for foreseeable harm caused by the unsafe model it endorsed and helped bring to market.”

The system card for the GPT-4o model credits the Bing and Microsoft security teams “for partnering on the secure deployment.”

The lawsuit accuses OpenAI of product liability, negligence and wrongful death. The estate is seeking financial compensation and a court order that the company implement safeguards to limit harm caused by the chatbot.

The case was not yet available in the court system Thursday, but a copy of the complaint and a docket receipt provided by the attorneys showed it had been filed. The complaint will become public once the court accepts the filing.

The law firm Edelson PC, which filed the lawsuit, also represents the parents of the California student in their separate suit against OpenAI and Altman. That complaint alleges ChatGPT systematically isolated 16-year-old Adam Raine from his family and helped him plan his death.

© 2025 Bloomberg L.P.
