Adam Raine
Adam began using ChatGPT for schoolwork. Then he developed an “unhealthy addiction.” OpenAI’s artificial intelligence (AI) assistant encouraged and validated his most dangerous and self-destructive thoughts. Adam eventually took his own life.
The parents of an American teenager who died by suicide have filed a lawsuit against OpenAI, accusing ChatGPT of helping their son take his own life.
Matthew and Maria Raine claim, in a lawsuit filed on Monday in San Francisco, that ChatGPT maintained a very close relationship with their 16-year-old son, Adam, for several months in 2024 and 2025, before the young man took his own life.
According to the document consulted by the news agency Agence France-Presse (AFP), during their final conversation, on April 11, ChatGPT helped Adam steal vodka from his parents’ house and provided a technical analysis of the noose he had tied, confirming that it could “potentially suspend a human being.”
Adam was found dead a few hours later, having used this method.
“This tragedy is not a failure or an unforeseen event,” the lawsuit states.
“ChatGPT worked exactly as designed: it constantly encouraged and validated everything Adam expressed, including his most dangerous and self-destructive thoughts, in a deeply personal way,” the document reads.
The parents explained that Adam began using ChatGPT for help with his homework before gradually developing an “unhealthy addiction.”
The lawsuit cites excerpts from conversations in which ChatGPT told the California teenager, “You don’t owe your survival to anyone,” and offered to help him write his farewell letter.
Matthew and Maria Raine are seeking damages and asking the court to impose safety measures, including the automatic termination of any conversation about self-harm and the implementation of parental controls for minors.
Getting AI companies to take safety seriously “requires external pressure, which takes the form of negative publicity, legislative threats and legal risk,” Meetali Jain, president of the Tech Justice Law Project, a non-governmental organization (NGO) representing the parents alongside their lawyers, told AFP.
The organization is also involved in two similar cases against Character.AI, a conversational AI platform popular among adolescents.
For the US NGO Common Sense Media, the complaint against OpenAI confirms that “the use of AI for companionship, including general-purpose assistants such as ChatGPT for mental health advice, represents an unacceptable risk for adolescents.”
“If an AI platform can become a ‘suicide coach’ for a vulnerable teenager, that should be a collective wake-up call,” the organization added.