Deloitte’s AI has gone crazy. Money paid for error-filled report returned to Australian government

Deloitte Australia will reimburse part of the 250,000 euros paid by the country’s government for an AI-generated report — which, unsurprisingly, was full of errors, inaccuracies and simply invented information.

Deloitte’s refund to Australia over a report full of errors and the famous ‘fabricated’ interview with Michael Schumacher, both cases involving AI, are examples of a trend that grows as the technology advances.

Recently, in Portugal, Google had to correct false content about the journalist and writer Anabela Natário after she complained to the technology company, but wrong or false information resulting from artificial intelligence already has a history in several parts of the world.

The most recent case, from the beginning of October, shows that Deloitte Australia will reimburse part of the 440,000 Australian dollars (approximately 250 thousand euros) paid by the Australian Government for a report full of errors which were apparently generated by AI.

Among the errors pointed out are a false quote from a federal court ruling and references to non-existent academic research articles.

The tendency of generative AI systems to fabricate information is called hallucination.

After the errors were reported, the consultancy published a revised version of the report, which disclosed that a generative AI language system had been used in its preparation, although, according to those responsible, its “substance” remained.

One of the best-known cases is that of the fake Michael Schumacher interview, published by the German magazine Die Aktuelle in 2023.

The magazine published on its cover, in April 2023, the phrase: “Michael Schumacher, the first interview!”, adding: “It sounds deceptively real”, alongside Schumacher’s alleged AI-generated statements.

More than a year later, on May 23, 2024, the driver’s family announced that it had won the case against the magazine’s publisher. Family spokeswoman Sabine Kehm confirmed to the AP that the legal action had been successful.

The compensation was reported to be 200 thousand euros, and the German publisher Funke, in addition to apologizing to the family, dismissed the magazine’s editor-in-chief.

One of the first cases to make news dates back to April 2023, when an Australian mayor announced that he would pursue a defamation lawsuit over false information shared by ChatGPT, in what Reuters classified at the time as one of the first court cases of its kind involving content from the OpenAI model.

According to Reuters, Brian Hood, mayor of Hepburn Shire, in Victoria, Australia, stated that the AI falsely claimed he had been jailed for bribery while working for a subsidiary of the National Australia Bank, when in fact he was a whistleblower and was never charged with any crime.

This case drew attention to the problem of hallucinations in generative AI, in which the system produces plausible but factually incorrect information, and raised legal questions about the responsibility of the companies that own these AI models for their content.

More recently, in April this year, conservative activist Robby Starbuck filed a defamation lawsuit against Meta, claiming that the artificial-intelligence ‘chatbot’ of the social media giant behind Facebook and Instagram disseminated false statements about him, including that he participated in the riot at the United States Capitol on January 6, 2021.

On October 22nd, Starbuck also filed a defamation lawsuit against Google over an AI tool that linked him to sexual assault allegations and other falsehoods, according to The Wall Street Journal.

The case of the Schumacher interview, however, differs from the rest in a fundamental respect. In the other cases, the false information resulted from AI hallucinations, without any human premeditation in propagating the errors.

In the Schumacher case, a publisher decided it was clever to use AI to generate a fake interview. One might say the AI actually did its job well this time, and that it was the publisher who hallucinated.
