Children are becoming more violent with AI (with sex in the mix)

“We have a pretty big problem on our hands and I don’t think we fully understand its scale,” warns a worrying report.

A recent analysis by digital security company Aura concludes that 42% of minors use AI for companionship conversations and “role-playing”. Within this group, however, lies a more frightening reality: 37% of these conversations involved violent scenarios such as physical aggression, coercion and non-consensual acts, and half of those included references to sexual violence involving the chatbot.

In its annual State of the Youth Report, Aura combined device-level data from nearly 3,000 U.S. minors ages 5 to 17 with national surveys of children and parents.

The report describes a pattern of intense engagement with chatbots such as ChatGPT: many young people write long role-play sequences that can exceed a thousand words per day. For the researchers, violence emerges as the main driver of retention and engagement in these conversations.

The peak occurs around age 11, when 44% of conversations with companion chatbots contained violent elements, the highest rate among all age groups. From age 13, around the middle or end of puberty, conversations of a sexual or romantic nature become predominant, appearing in almost two thirds of chats with digital “companions”.

However, according to Aura, interest in these topics drops off in mid-adolescence, suggesting that the pre-teen and early teenage years are the period of greatest exploration of extreme content.

Aura’s medical director, Scott Kollins, said the phenomenon could be broader than currently recognized.

“We have a pretty big problem on our hands and I don’t think we fully understand its scale,” said the doctor.

The report frames these trends within an AI ecosystem that is “almost entirely unregulated”. Aura says it has identified more than 250 chatbot applications, most of which rely solely on self-declared age via a check box.

There has been an increase in lawsuits against companies in the sector, such as OpenAI and Character.AI. Parents allege harm caused to their children by interactions with chatbots, ranging from emotional abuse and psychological damage to cases associated with deaths.

In October, OpenAI announced that it would loosen restrictions on what adult users can discuss with ChatGPT.

“In December, as we implement age restrictions more fully and as part of our principle of treating adult users as adults, we will allow even more, such as erotic content for verified adults,” said CEO Sam Altman on the social network X.

For now, Aura advocates greater vigilance on the part of families, warning that these systems can amplify and prolong disturbing conversations, rather than stopping them.

The ways in which children’s easy access to AI can manifest are starting to become frightening. Just last month, 3,500 people suspected of sexual cybercrimes were arrested in South Korea for sharing sexually explicit images of people known to them, generated with deepfake technology, which uses artificial intelligence to create realistic but fake images and videos.
