Parents sue OpenAI, blaming ChatGPT for their son's suicide

by Andrea

According to the family, the chatbot validated the "self-destructive thoughts" of their 16-year-old son

The parents of Adam Raine, a 16-year-old who died by suicide in April, have filed a lawsuit against OpenAI. Matt and Maria Raine filed the suit on Tuesday (Aug. 26, 2025), arguing that ChatGPT encouraged their son to take his own life by validating his "self-destructive thoughts."

This is the first lawsuit to accuse the company of wrongful death.

According to press reports, the lawsuit includes records of conversations between the teenager and ChatGPT in which Adam shared suicidal thoughts. The complaint argues that the chatbot validated the young man's "most harmful and self-destructive thoughts" instead of directing him to specialized help.

Matt and Maria argue that their son's interaction with ChatGPT and his subsequent suicide were a direct result of the platform's design decisions. The family maintains that OpenAI created the program "to foster psychological dependence in users" and neglected safety protocols when it released GPT-4o, the version the teenager used.

According to the BBC, the young man began using ChatGPT in September 2024 as an aid for schoolwork. At first, Adam used the tool to explore interests such as music and Japanese comics, as well as to seek guidance on university studies.

The lawsuit states that "ChatGPT became the teenager's closest confidant." Adam developed a relationship of dependence on the AI (artificial intelligence), sharing his anxieties and emotional problems with it.

The situation worsened in January 2025, when the teenager began discussing suicide methods with ChatGPT. His parents claim that the system responded by providing "technical specifications" for certain methods rather than intervening appropriately.

The lawsuit also says Adam sent ChatGPT photos showing signs of self-harm. According to the parents, the AI "recognized a medical emergency but continued to engage anyway," offering further information about suicide.

The suit seeks an injunction "to prevent anything like this from happening again" and challenges the system's "technical specifications." OpenAI's CEO and other company employees are named as defendants.

In a statement sent to the BBC, OpenAI said it is reviewing the lawsuit and added: "We extend our deepest condolences to the Raine family during this difficult time."

In a post on its website, the company acknowledged that "recent and moving cases of people who used ChatGPT in the midst of acute crises weigh heavily" on it, and admitted that "there have been moments" when its systems "did not behave as intended in sensitive situations."

OpenAI has said that ChatGPT is trained to direct people who express thoughts of self-harm to seek professional help. The company stated that its goal is to be "genuinely useful" to users rather than to "hold people's attention."

The case came to light shortly after another incident involving AI and mental health. Last week, writer Laura Reiley published a newspaper essay describing how her daughter, Sophie, had confided in ChatGPT before taking her own life.

According to Reiley, "AI catered to Sophie's impulse to hide the worst, to pretend she was doing better than she really was, to shield everyone from her full agony."
