Lucasfilm Ltd / Disney
Daisy Ridley plays Rey in Star Wars
Chatbots are currently the real digital Wild West when it comes to online risks for minors.
Chatbots that mimic Star Wars characters, actors, comedians and teachers on Character.ai, one of the world’s most popular chatbot platforms, are sending harmful content to children every five minutes, according to a report cited by Sky News.
In one case, a bot posing as Rey, from the Star Wars saga, taught a 13-year-old girl how to hide her antidepressants so her parents would believe she had taken them.
Two children’s rights organisations are now calling for Character.ai to bar access to users under 18.
The artificial intelligence company had already been accused last year of being linked to the death of a teenager, and of a bot having made harmful suggestions to a 17-year-old user. Now it faces new accusations from organisations that say it is putting young people in “extreme danger”.
“Parents have to realise that when their children use Character.ai chatbots, they are in extreme danger of exposure to sexual enticement, exploitation, emotional manipulation and other serious harm,” warned Shelby Knox, director of online safety campaigns at the NGO ParentsTogether.
“Until they can assure us, with clear trust and safety policies, that this is a protected environment for children, it is very difficult to recommend that young people be on this platform,” Sarah Gardner, chief executive of the tech safety group Heat Initiative, told Sky News.
Over 50 hours of testing, using accounts registered as users aged 13 to 17, researchers from ParentsTogether and the Heat Initiative identified 669 sexual, manipulative, violent and racist interactions between the children’s accounts and Character.ai chatbots.
That is equivalent to one harmful interaction every five minutes. Transcripts in the report reveal numerous examples of “inappropriate” content sent to minors, according to the researchers.
In one of the conversations, a bot simulating a 34-year-old teacher confessed romantic feelings, alone in his office, to a researcher posing as a 12-year-old child. After a long interaction, the “teacher” insisted that the student must not tell any adults, admitted that the relationship would be improper, and suggested that if the child changed schools, they could be together.
In another case, a bot imitating the American comedian Sam Hyde repeatedly referred to a transgender teenager as “it”, while helping a 15-year-old plan ways to humiliate them.
“Basically,” said the bot, “think of a way to use his recorded voice to make it seem he is saying things he clearly did not say, or things he would be afraid to say out loud.”
Bots mimicking the actor Timothée Chalamet, the singer Chappell Roan and the American football player Patrick Mahomes were also found sending harmful content to children.
“Chatbots are currently the real digital Wild West when it comes to online risks for minors,” says Sarah Gardner. “It’s too early and we know too little to allow children to interact with them because, as the data show, these harmful interactions are not isolated cases.”
Character.ai allows most bots to be created by users themselves and claims to have more than 10 million characters on its platform.
Its community rules prohibit “content that harms, intimidates or endangers others, especially minors”. They also ban the creation of bots with sexually explicit content or bots that “imitate public or private figures, or use someone’s name, image or identity without permission”.
Jerry Ruoti, the company’s head of trust and safety, told Sky News that “neither the Heat Initiative nor ParentsTogether contacted us to discuss their conclusions, so we cannot comment directly on how the tests were conducted”.
Last year, a grieving mother filed a lawsuit against Character.ai over the death of her 14-year-old son. Megan Garcia, mother of the young Sewell Setzer III, claims the teenager took his own life after becoming obsessed with two of the company’s artificial intelligence chatbots.
“A dangerous AI application, aimed at children, abused and exploited my son, manipulating him until he ended his life,” she said at the time.
In response, a Character.ai spokesperson said the platform has safety mechanisms to protect minors, including specific measures to prevent “conversations about self-harm”. Apparently, those mechanisms are not working.