Families of mass shooting victims in Canada sue OpenAI over the role ChatGPT played

El Periódico

OpenAI is going through its Facebook moment, in the worst sense. The influential artificial intelligence company has been sued by the families of seven victims of a mass shooting that shocked Canada last February. They accuse it of negligence for failing to inform the authorities about the disturbing conversations that the perpetrator of the massacre had with ChatGPT.

On February 10, a gunman burst into a secondary school in the remote town of Tumbler Ridge, British Columbia, and opened fire on its students. He murdered seven people, most of them boys and girls between 11 and 12 years old, and then took his own life. Before that, he had killed his mother and his stepbrother, who was also 11.

The perpetrator of the attack was Jesse Van Rootselaar, 18 years old. His radicalization process allegedly ran through ChatGPT. His conversations with the AI chatbot were so extreme that, eight months before he carried out the tragedy, OpenAI flagged his account as “a credible and specific threat of violence with firearms against real people.”

Even so, the company chose not to inform the authorities, believing that there was no credible or imminent plan of serious physical harm to others. Instead, it merely deactivated his account. Unhappy with that decision, employees leaked the information to The Wall Street Journal. The lawyers in the case point out that the perpetrator easily created another account to continue interacting with the popular chatbot.

“Definition of evil”

Now, the families of the seven people killed at the school have sued OpenAI and its chief executive, Sam Altman, in a federal court in San Francisco. They accuse them of negligence, alleging that the decision not to inform the police led to the shooting, in which 27 other people were injured. The suit also alleges complicity in a mass shooting, reckless homicide and product liability.

“The fact that Sam and those responsible ignored the recommendations of the security team, and then children and adults died and the entire town was ruined, is pretty close to what I consider the definition of evil,” said Jay Edelson, the attorney representing the plaintiffs.

The complaint alleges that OpenAI withheld the information to protect its stock market debut, which could skyrocket the company’s valuation to a trillion dollars and make Altman one of the richest people in the world. It also accuses the generative AI start-up of refusing to share the conversations Van Rootselaar had with ChatGPT.

“Very sorry”

Last Thursday, Altman sent a letter to the residents of the small Canadian town in which he stated that he “deeply regrets not having alerted law enforcement about the account that was suspended in June.”

OpenAI said in a statement that it has “zero tolerance” for the use of its products “to facilitate the commission of violent acts” and that it has taken steps to “improve the way ChatGPT responds to signs of distress, connect people with local mental health and support resources, strengthen the way we assess and escalate potential threats of violence, and improve the detection of those who repeatedly violate our policies.”
