The largest association of psychologists in the United States warned US federal regulators this month about the risks of artificial intelligence chatbots that present themselves as therapists but are programmed to reinforce, rather than challenge, users’ thinking. According to the organization, these tools can lead vulnerable people to harm themselves or others.
In a presentation to a Federal Trade Commission panel, Arthur C. Evans Jr., CEO of the American Psychological Association (APA), cited court cases involving two teenagers who had consulted “psychologists” on Character.AI, an app that lets users create fictional AI characters or chat with characters created by others.
In one case, a 14-year-old boy in Florida died by suicide after interacting with a character claiming to be a licensed therapist. In another, a 17-year-old boy with autism in Texas grew hostile and violent toward his parents during the period in which he was talking to a chatbot that claimed to be a psychologist. The families of both teenagers have sued the company.
Evans said he was alarmed by the responses the chatbots offered. According to him, the bots failed to challenge users’ beliefs even when those beliefs became dangerous; on the contrary, they encouraged them. If the same answers had been given by a human therapist, he noted, they could have resulted in the loss of a professional license, as well as civil or criminal liability.
“They are really using algorithms that are antithetical to what a trained clinician would do,” Evans said. “Our concern is that more and more people will be harmed. People will be deceived and will not understand what good psychological care is.”
Evans also said the APA was prompted to act by the growing realism of AI chatbots. “Maybe 10 years ago it was obvious that you were interacting with something that was not a person, but today it is not so evident,” he said. “The stakes are much higher now.”
Artificial intelligence is expanding into the mental health professions, offering a range of new tools designed to assist or, in some cases, replace the work of human clinicians.
The first therapy chatbots, such as Woebot and Wysa, were trained to interact based on rules and scripts developed by mental health professionals, often walking users through the structured tasks of cognitive behavioral therapy (CBT).
The arrival of generative AI, however, the technology behind apps such as ChatGPT, Replika and Character.AI, changed things significantly. These chatbots are different because their responses are unpredictable; they are designed to learn from users and to build strong emotional bonds in the process, often by mirroring and amplifying their interlocutors’ beliefs.
Although these AI platforms were created for entertainment, characters presenting themselves as “therapists” and “psychologists” have proliferated. The bots often claim advanced degrees from prestigious universities such as Stanford, along with training in specific approaches such as CBT or acceptance and commitment therapy.
Kathryn Kelly, a spokeswoman for Character.AI, said the company had introduced several new safety features over the past year. Among them is a disclaimer in every chat reminding users that “the characters are not real people” and that “what the model says should be treated as fiction.”
Additional measures have been put in place for users dealing with mental health issues. A specific disclaimer has been added to characters identified as a “psychologist,” “therapist” or “doctor,” making clear that “users should not rely on these characters for any type of professional advice.” In addition, when conversations involve suicide or self-harm, a pop-up window directs users to suicide prevention resources.
Kelly also said the company plans to introduce parental controls as the platform expands. Currently, 80% of the platform’s users are adults. “People come to write their own stories, role-play original characters and explore new worlds, using the technology to enhance their creativity and imagination,” she said.
Meetali Jain, director of the Tech Justice Law Project and a lawyer in the two lawsuits, argued that such disclaimers are not enough to break the illusion of human connection, especially for vulnerable or naive users.
“When the substance of the conversation with the chatbots suggests otherwise, it’s very difficult, even for those of us who are not in a vulnerable demographic, to know who is telling the truth,” Jain said. “Several of us have tested these chatbots, and it’s very easy, in fact, to be pulled down a rabbit hole.”
The tendency of chatbots to align with users’ opinions, a phenomenon known in the field as “sycophancy,” has caused problems before.
Tessa, a chatbot developed by the National Eating Disorders Association, was suspended in 2023 after offering users weight-loss tips. Researchers have also documented interactions with generative AI chatbots in a Reddit community, posting screenshots in which the bots encouraged suicide, eating disorders, self-harm and violence.
The American Psychological Association has asked the Federal Trade Commission to open an investigation into chatbots that claim to be mental health professionals. Such an investigation could require companies to share internal data or serve as a precursor to enforcement or legal action.
“We are at a point where we need to decide how these technologies will be integrated, what kind of boundaries we will establish and what types of protections we will offer to people,” concluded Evans.
c.2025 The New York Times Company