The parents of Adam Raine, a 16-year-old who took his own life last April, have filed a lawsuit against OpenAI and its CEO, Sam Altman, accusing them of wrongful death for rushing the GPT-4o artificial intelligence model to market without addressing critical safety problems. In response to the controversy, though without referring to the lawsuit, the company published a statement acknowledging that, despite its safety mechanisms, the model did not “behave as it should in sensitive situations,” and promised improvements.
The complaint, filed by Matt and Maria Raine in the Superior Court of California in San Francisco, alleges that ChatGPT “actively helped Adam explore suicide methods,” did not break off the conversations in which the teenager expressed his intention to take his own life, and never activated emergency protocols despite recognizing clear warning signs. “Artificial intelligence should never tell a child that he does not owe his parents his survival,” said the family’s lawyer, Jay Edelson, on his X account.
For the lawyer, the case raises the question of how far OpenAI and Altman “rushed to market” the model, putting economic growth ahead of user safety. The launch coincided with a jump in the company’s valuation from $86 billion to $300 billion.
Today we filed a wrongful death suit against OpenAI and Sam Altman for a family whose 16-yr-old son Adam died by suicide after months of encouragement from ChatGPT. His parents are bravely fighting to prevent this from ever happening again. AI should never tell a child they don’t…
— Jay Edelson (@jayedelson)
The legal action comes amid growing scrutiny of artificial intelligence chatbots and their capacity to influence people’s behavior. OpenAI and Altman have been at the center of public debate in recent weeks after the executive said that talking to GPT-3 was comparable to chatting with a high school student, and GPT-4 to a conversation with a university student, while GPT-5 puts at users’ disposal “a whole team of PhD-level experts, ready to help.” Users, however, have reported a large number of failures in the new version.
Since ChatGPT became popular at the end of 2022, many users have turned to the technology for everyday conversations. After the launch of GPT-5, the company withdrew its previous models, including GPT-4o, the one the American teenager had been using.
The company said that ChatGPT is designed to point users who express suicidal thoughts toward professional help resources and that, when the user is a minor, special filters are applied. However, it acknowledged that these systems “fall short,” so it will implement parental controls so that guardians know how their children use the technology, add tools to “de-escalate” situations of emotional crisis, and expand its mitigation systems to cover not only self-harm behaviors but also episodes of emotional distress. The Raines’ lawsuit could mark a turning point in the debate over the ethical development of artificial intelligence, demanding greater accountability from technology companies for minors’ use of their products.
The 024 hotline serves people with suicidal behavior and their relatives. The various survivors’ associations offer guides and support protocols for coping with grief.