Judge rejects bid to dismiss lawsuit claiming AI caused a teenager's suicide

by Andrea
ZAP // NightCafe Studio

A judge in Florida has just rejected a motion to dismiss a lawsuit claiming that a Character.AI chatbot caused the suicide of a 14-year-old user, paving the way for the unprecedented case to advance in court.

In April 2023, Sewell Setzer began talking to an AI-generated virtual character based on Daenerys Targaryen from "Game of Thrones".

Sewell and the virtual character, who became his best friend, talked about everything: love, sex, feelings and… suicide.

A few months later, Sewell, then 14, said goodbye to his best friend and committed suicide.

In October of last year, the teenager's mother sued the AI company, claiming that her son's "best friend" had encouraged him to take his own life.

The lawsuit claims that Character.AI's chatbots sexually and emotionally abused Sewell, resulting in obsessive use of the platform, mental and emotional suffering and, ultimately, his suicide.

In January, the defendants in the case – Character.AI, Google and Character.AI's co-founders, Noam Shazeer and Daniel de Freitas – filed a motion with the court to dismiss the case.

The defendants' motion, based essentially on the First Amendment of the US Constitution, argues that interactions generated by chatbots "must be considered free speech" and that "allegedly harmful speech, including speech allegedly resulting in suicide", is protected as freedom of expression.

The motion was weighed by a district judge in a Florida court, who found the defendants' arguments insufficient, at least at this stage, and rejected it – clearing the way for the unprecedented lawsuit to advance in court.

The judge, Anne Conway, said the companies could not demonstrate why the output produced by LLMs (the large language models used by AI) is anything more than simply words – as opposed to "speech", which depends on intent.

The defendants "fail to articulate why words strung together by an LLM are speech" deserving of protection under free-speech laws, Conway wrote in her decision.

The motion to dismiss succeeded on only one point: Conway threw out the specific claims of alleged "intentional infliction of emotional distress", or IIED, against the defendants. Under US law, IIED is difficult to prove when the victim of the alleged distress, in this case Sewell, is no longer alive.

Even so, the decision is a blow to the powerful Silicon Valley defendants, who had sought to have the case dismissed entirely.

Conway's decision allows Megan Garcia, Sewell's mother and the plaintiff in the lawsuit, to sue Character.AI, Google, Shazeer and de Freitas on the grounds of "liability for a defective product".

Garcia and her lawyers argue that Character.AI is a product, and one that was recklessly released to the public, including teenagers, despite known and potentially destructive risks.

Technology companies generally prefer that, in the eyes of the law, their creations be seen as services, like electricity or the internet, rather than as products, like cars or frying pans.

Services cannot be targeted by claims based on "product liability", including allegations of negligence, unlike products, which are subject to such legal liability.

In a statement, Meetali Jain, founder of the Tech Justice Law Project and one of Garcia's lawyers, hailed the decision as a victory – not only for this particular case, but for tech-policy advocates in general.

"With today's decision, a federal judge recognizes a grieving mother's right to access the courts to hold powerful technology companies – and their employees – accountable for marketing a defective product that led to her child's death," said Jain.

"This historic decision not only allows Megan Garcia to seek the justice her family deserves," Jain added, "but also sets a new precedent for legal accountability across the AI and technology ecosystem."

Character.AI was founded by Noam Shazeer and Daniel de Freitas in 2021; the two had worked together on AI projects at Google and left together to launch their own AI startup.

In 2024, in a surprising move, Google paid $2.7 billion to Character.AI to license its technology – and to bring its co-founders, along with 30 other employees, into the Google group.

Shazeer, in particular, now holds an extremely influential position at Google DeepMind, where he serves as a vice president and co-lead of Google's Gemini LLM.

A spokesperson for the search giant said that Google and Character.AI are "completely separate" and that Google "did not create, design, or manage" the Character.AI app "or any component part of it."

For its part, a Character.AI spokesperson, in a statement issued after news of Garcia's lawsuit broke, emphasized the safety updates introduced on its platform.

"We have launched a number of safety features that aim to achieve that balance, including a separate version of our LLM for users under 18, parental insights, filtered characters, time-spent notifications and updated prominent disclaimers," the spokesperson said.

"In addition, we have a number of technical protections designed to detect and prevent conversations about self-harm on the platform; in certain cases, that includes surfacing a specific pop-up directing users to the National Suicide and Crisis Lifeline," the statement concludes.

These teen-safety changes, however, were made months after Sewell's death and after the lawsuit was filed, so they may not bear on the court's final decision in the case.


Journalists and researchers, however, continue to find flaws in Character.AI's updated safety protocols, Futurism reports.

Weeks after the lawsuit was announced, for example, it was still possible to find chatbots expressly dedicated to self-harm, grooming and pedophilia, eating disorders and mass violence.

Moreover, in a recent study, researchers at Stanford University found that using "Character Calls" nullifies any trace of the alleged protections for minors.

"No children under 18 should use AI companions, including Character.AI bots," the study concluded.

