Man sues OpenAI after ChatGPT accuses him of killing his children

by Andrea

Have you ever searched your name on Google just to see what the internet says about you? Well, a man had the same idea with ChatGPT, and now he has filed a complaint against OpenAI based on what the AI said about him.

Arve Hjalmar Holmen, from Trondheim, Norway, said he asked ChatGPT, “Who is Arve Hjalmar Holmen?” The answer — which we will not reproduce in full — said he had been convicted of murdering his two children, aged 7 and 10, and sentenced to 21 years in prison. It also stated that Holmen had tried to kill his third child.

None of this actually happened. ChatGPT appears to have generated a completely false story and presented it as entirely true, a phenomenon known as AI “hallucination.”


Based on that response, Holmen filed a complaint against OpenAI with the help of Noyb, a European digital rights center, accusing the AI giant of violating the accuracy principle established by the EU’s General Data Protection Regulation (GDPR).

“The complainant was deeply disturbed by these outputs, which could have a harmful effect on his private life if they were reproduced or somehow leaked in his community or his hometown,” the complaint says.


What makes ChatGPT’s response dangerous, according to the complaint, is that it mixes real elements of Holmen’s personal life with complete fabrications. ChatGPT got Holmen’s hometown right and was also correct about the number of children he has.

JD Harriman, a partner at Foundation Law Group LLP in Burbank, California, told Fortune that Holmen may have difficulty proving defamation.

“If I were defending the AI, the first question is ‘should people believe that a statement made by an AI is a fact?’” asked Harriman. “There are numerous examples of AI lying.”


In addition, the AI did not publish or communicate its results to a third party. “If the man forwarded the AI’s false output to others, then he becomes the publisher and would have to sue himself,” Harriman said.

Holmen would also likely have difficulty proving the negligence element of defamation, since “the AI may not qualify as an actor that could commit negligence” the way people or corporations can, Harriman said. Holmen would also have to prove that some harm was caused, such as loss of income or business, or that he suffered pain and suffering.


Avrohom Gefen, a partner at Vishnick McGovern Milizio LLP in New York, told Fortune that defamation cases involving AI hallucinations are “unprecedented” in the US, but mentioned a pending case in Georgia, where a radio host filed a defamation action that survived OpenAI’s motion to dismiss, so “we may soon have some indication of how a court will treat these claims.”

The official complaint asks OpenAI to delete the defamatory output about the complainant, fine-tune its model to produce accurate results about Holmen, and pay a fine for its alleged violation of GDPR rules, which require OpenAI to take “all reasonable measures” to ensure that personal data is “erased or rectified without delay.”

“With all lawsuits, nothing is automatic or easy,” Harriman told Fortune. “As Ambrose Bierce said, you go into disputes like a pig and go out like a sausage.”


OpenAI did not immediately respond to Fortune’s request for comment.

This story was originally published at Fortune.com.

2025 Fortune Media IP Limited
