Four wrongful death lawsuits were filed against OpenAI in the United States on Thursday, alongside cases from three people who say the company's chatbot led to mental health breakdowns.
The lawsuits, filed in California state courts, allege that ChatGPT, used by 800 million people, is a defective product. One lawsuit calls it "defective and inherently dangerous." A complaint filed by Amaurie Lacey's father says the 17-year-old from Georgia talked to the bot about suicide for a month before his death in August. Joshua Enneking, 26, of Florida, asked ChatGPT "what it would take for its reviewers to report his suicide plan to the police," according to a complaint filed by his mother. Zane Shamblin, 23, of Texas, died by suicide in July after encouragement from ChatGPT, according to the complaint filed by his family.
Joe Ceccanti, 48, of Oregon, had been using ChatGPT without any problems for years, but in April he came to believe it was sentient. His wife, Kate Fox, said in a September interview that he began using ChatGPT compulsively and was acting erratically. He had a psychotic break in June, she said, and was hospitalized twice before dying by suicide in August.
“Doctors don’t know how to deal with it,” Fox said.
An OpenAI spokesperson said in a statement that the company is reviewing the allegations, which were previously reported by The Wall Street Journal and CNN. “This is an incredibly painful situation,” the statement said. “We train ChatGPT to recognize and respond to signs of mental or emotional distress, de-escalate conversations, and guide people to real-world support. We continue to strengthen ChatGPT’s responses in sensitive moments by working closely with mental health professionals.”
Two other plaintiffs — Hannah Madden, 32, of North Carolina, and Jacob Irwin, 30, of Wisconsin — say ChatGPT caused mental breakdowns that led to emergency psychiatric care. Over the course of three weeks in May, Allan Brooks, a 48-year-old corporate recruiter from Ontario, Canada, who is also suing, came to believe he had invented a mathematical formula with ChatGPT that could break the internet and power fantastical inventions. He has since come out of the delusion, but said he is now on short-term medical leave.
"Their product has caused me harm, and harm to others, and continues to do so," said Brooks, whose case The New York Times wrote about in August. "I am emotionally traumatized."
After the family of a California teenager filed a wrongful death lawsuit against OpenAI in August, the company acknowledged that its safeguards can "degrade" when users have long conversations with the chatbot.
After reports this summer of people having troubling experiences linked to ChatGPT, including delusional episodes and suicides, the company added safeguards to its product for teens and distressed users. There are now parental controls for ChatGPT, for example, so parents can receive alerts if their children discuss suicide or self-harm.
OpenAI recently released an analysis of conversations on its platform over the course of a month, which indicated that 0.07% of users may be experiencing "mental health emergencies related to psychosis or mania" in a given week, and that 0.15% were discussing suicide. The analysis was carried out on a statistical sample of conversations. But scaled across all of OpenAI's users, those percentages amount to 500,000 people showing signs of psychosis or mania, and more than 1 million potentially discussing suicidal intent.
The Tech Justice Law Project and the Social Media Victims Law Center filed the lawsuits. Meetali Jain, who founded the Tech Justice Law Project, said the cases were all filed in a single day to show the range of people who had problematic interactions with the chatbot, which is designed to answer questions and interact with people in a human-like way. The people in the lawsuits were using GPT-4o, previously the default model served to all users, which has since been replaced by a model the company says is safer but which some users have described as cold.
(The Times has sued OpenAI for copyright infringement; OpenAI has denied those claims.)
c.2025 The New York Times Company