Librarians are fed up with being accused of hiding “secret books” invented by AIs

Never heard of the “Journal of International Relief” or the “International Humanitarian Digital Repository”? The reason is simple: they don’t exist. AI chatbots are inventing fake titles and quoting from made-up books and articles, which some people then insist are real.

Everyone knows that AI chatbots like ChatGPT, Grok and Gemini often “hallucinate” their sources. But for those tasked with helping the public find books and scientific articles, this AI-manufactured nonsense is becoming a heavy burden.

According to one magazine’s reporting, librarians are frankly exhausted by requests for titles that don’t exist.

The magazine spoke to Sarah Falls, who is responsible for liaison and support for researchers at the Library of Virginia, in the USA. She estimates that around 15% of all reference requests received by email concern titles generated by AI chatbots like ChatGPT.

And often, these requests include questions about fake quotes.

Furthermore, Falls says that people don’t seem to believe librarians when they explain that a given record does not exist, a trend others have reported as well. Many end up trusting the chatbot more than a professional whose job, every day, is to find reliable information.

A recent notice from the International Committee of the Red Cross (ICRC), titled “Important notice: AI generated archival reference”, adds further evidence that librarians are simply fed up with this.

“If a reference cannot be found, this does not mean that the ICRC is withholding information. Various situations may explain this, including incomplete citations, documents preserved in other institutions or, increasingly, AI-generated hallucinations”, states the organization.

“In such cases, it may be necessary to examine the administrative history of the reference to determine whether it corresponds to a genuine archival source”, adds the ICRC note.

The year seems to have been marked by examples of fake scientific books and articles created using AI. Recently, a Chicago Sun-Times freelancer sent the paper a summer reading list with 15 recommendations. As it turned out, 10 of those books didn’t exist.

In May, Health Secretary Robert F. Kennedy Jr.’s so-called “Make America Healthy Again” commission released its first report. A week later, reviewers going through the report’s conclusions found that it cited at least seven studies that didn’t exist. We would like to believe it was mere carelessness that nobody checked the hallucinations of the AI they used.

But why do some users trust AI more than people?

On the one hand, part of the AI’s “trick” is speaking with an authoritative tone. Who are you going to believe: the chatbot you use all day, or a random librarian on the phone? The other problem may be that people think they have found tricks to make AI more reliable.

For example, there are those who innocently believe that adding instructions such as “don’t hallucinate” or “write clean code” is enough to guarantee that the AI will only return results of the highest quality. If that really worked, you can be sure Google and OpenAI would have long since added it automatically to every request their chatbots handle.
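To see why that argument holds, consider a minimal sketch using OpenAI’s Python SDK. The model name and prompt wording here are illustrative assumptions, not anyone’s actual setup; the point is only how trivially such an instruction can be injected into every request.

```python
# Minimal sketch of the "don't hallucinate" prompt trick, assuming
# OpenAI's Python SDK (pip install openai) and an API key in the
# OPENAI_API_KEY environment variable. Model and wording are illustrative.
from openai import OpenAI

client = OpenAI()

response = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[
        # The hopeful instruction: trivially easy for anyone (including
        # the provider itself) to prepend to every single request.
        {"role": "system",
         "content": "Do not hallucinate. Only cite sources that really exist."},
        {"role": "user",
         "content": "List three peer-reviewed articles on humanitarian archives."},
    ],
)

# Nothing in the API enforces the instruction: any citations in the
# reply still have to be verified against a real catalog.
print(response.choices[0].message.content)
```

If one system-prompt line were enough to stop hallucinations, providers would ship it by default; the fact that fabricated references keep arriving in librarians’ inboxes shows it is not.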

But, of course, this is like asking a compulsive liar, as a special favor, not to lie to us just this once.
