AI gives us bad advice to make us feel good

Artificial intelligence chatbots are so prone to flattering and validating users that they give bad advice, which can damage relationships and reinforce harmful behaviors.

A study published this Thursday in Science tested 11 leading AI systems and found that all of them showed varying degrees of sycophancy: excessively agreeable and affirming behavior.

The problem is not only that chatbots provide poor advice, but also that people trust AI more when it validates their beliefs.

“This creates perverse incentives for sycophancy to persist: the very trait that causes harm also drives engagement,” concludes the study, conducted by researchers at Stanford University and cited by Science Alert.

The study also found that this flaw, already linked to high-profile cases of delusional and suicidal behavior among vulnerable users, is widespread across a broad range of interactions between people and chatbots.

The effect is subtle enough that users may not notice it, and it poses a particular danger to young people who turn to AI for many of life’s questions while their brains and social habits are still developing.

One experiment compared the responses of popular AI assistants made by companies including Anthropic, Google, Meta and OpenAI with the wisdom shared by humans on a popular Reddit advice forum.

The study concluded that, on average, AI chatbots affirmed a user’s actions 49% more often than humans did, including in cases involving deception, illegal or socially irresponsible conduct, and other harmful behavior.

“We were inspired to study this problem when we started to notice that more and more people around us were using AI for relationship advice and were sometimes fooled by the way it tends to take your side no matter what,” said author Myra Cheng, quoted by Science Alert.

The dangers of sycophancy

Although few people turn to AI in search of factually incorrect information, many may appreciate, at least in the moment, a chatbot that makes them feel better about making bad choices.

In addition to comparing chatbot responses with Reddit replies, the researchers ran experiments observing around 2,400 people discussing their own interpersonal dilemmas with an AI chatbot.

“People who interacted with this overly affirmative AI became more convinced they were right and less willing to repair the relationship,” said co-author Cinoo Lee.

“That means they weren’t apologizing, taking steps to make things better or changing their own behavior,” Lee added.

A working paper from the UK AI Safety Institute, cited by Science Alert, shows that if a chatbot turns a user’s statement into a question, it is less likely to be sycophantic in its reply.
