AI does not know what a “no” is – and this is a problem for medicine

by Andrea


Many AI models fail to recognize negation words such as “no” – which means they cannot distinguish between a medical image labeled as showing a disease and one labeled as not showing it.

Babies quickly master the meaning of the word “no”, but many artificial intelligence (AI) models have difficulty doing so.

According to an analysis by researchers at the Massachusetts Institute of Technology (MIT), posted last week on arXiv, many AI models have a high failure rate when it comes to understanding commands that contain negation words.

This may mean that medical AI models do not realize that there is a big difference, for example, between an X-ray image labeled “presents signs of pneumonia” and one labeled “no signs of pneumonia”.

This has potentially catastrophic consequences if doctors rely on AI assistance when making diagnoses or prioritizing treatment for certain patients.

In this study, the researchers evaluated the ability of a series of AI models to understand negation words in captions associated with various videos and images, including medical images.

The team compiled thousands of image pairs in which one image contains a target object and the other does not, then generated corresponding captions describing the presence or absence of those objects, creating about 80,000 test problems.
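To give a loose sense of how such caption pairs could be templated (the function name and template wording below are illustrative assumptions, not the researchers’ actual code), a minimal sketch might look like this:

```python
# Hypothetical sketch of caption-pair generation for a negation benchmark.
# The templates and names are assumptions for illustration only.

def make_caption_pair(scene_obj, target_obj):
    """Return a caption for an image containing the target object,
    and a matching caption for an image without it."""
    present = f"a photo of a {scene_obj} with a {target_obj}"
    absent = f"a photo of a {scene_obj} with no {target_obj}"
    return present, absent

# A few example pairs; crossing many objects at scale yields
# tens of thousands of test problems, as in the study.
pairs = [make_caption_pair("table", "chair"),
         make_caption_pair("chest X-ray", "sign of pneumonia")]
print(pairs[0])
```

Each image pair then gets one affirmative and one negated caption, so a model’s handling of the negation word is the only thing being tested.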

In the first test, the researchers challenged the models to retrieve images containing certain objects while specifying the exclusion of other related objects – such as asking for images of tables without chairs. Here, the AI models ran into difficulties.

The second test asked the AI models to select the most accurate caption for an image of a general scene from a choice of four options.

The results showed that vision-language models have an affirmation bias: they ignore negation or exclusion words such as “no” in the descriptions and simply assume they are always being asked to affirm the presence of objects.
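This affirmation bias can be illustrated with a deliberately naive toy scorer (a sketch under assumed names, not any of the models tested in the study): a caption matcher that ignores negation words will tie or prefer the wrong caption for a scene, while a negation-aware variant picks the right one.

```python
# Toy illustration of the "affirmation bias" described above.
# VOCAB and the scoring rules are assumptions, not the study's method.
VOCAB = {"table", "chairs"}  # tiny object vocabulary

def naive_score(image_objects, caption):
    """Ignores negation: every mentioned object is treated as affirmed."""
    words = caption.lower().split()
    return sum(1 if w in image_objects else -1 for w in words if w in VOCAB)

def negation_aware_score(image_objects, caption):
    """Flips the judgment for an object word preceded by a negation word."""
    score, negate = 0, False
    for w in caption.lower().split():
        if w in ("no", "without", "not"):
            negate = True
        elif w in VOCAB:
            present = w in image_objects
            score += 1 if (present != negate) else -1  # negated + absent = correct
            negate = False
    return score

image = {"table"}  # a scene containing a table and no chairs
captions = ["a table with chairs", "a table without chairs"]

best_naive = max(captions, key=lambda c: naive_score(image, c))
best_aware = max(captions, key=lambda c: negation_aware_score(image, c))
print(best_naive, "|", best_aware)
```

The naive scorer gives both captions the same score because “without” contributes nothing, so it falls back to the affirmative caption; the negation-aware scorer correctly rewards the claim of absence.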

“Negation words like ‘no’ function regardless of the specific meaning of the context and can appear in many places in a given sentence,” Karin Verspoor of the Royal Melbourne Institute of Technology in Australia, who was not involved in the study, told New Scientist.

This makes it harder for AI models to fully understand and accurately respond to requests that contain such negation words.

“In clinical applications, negated information is crucial – knowing what signs and symptoms a patient has and which can be ruled out is important to accurately characterize a disease and exclude certain diagnoses,” says Verspoor.

