Over the years, Instagram has announced more than 50 measures to protect minors. Did they work as promised? Very few did. A group of researchers led by former Meta employee Arturo Béjar has analyzed 47 of these features: 30 do not work, no longer exist, or are very easy to circumvent, and another 9 have limitations. Only 8 work as intended.
Among those features are ones meant to prevent adolescents from seeing certain violent content or content about diets and sex, from receiving suspicious messages from adults, from creating an account under the age of 13, and from having their videos circulate freely across the network. The research has been corroborated by academics from Northeastern University (Boston, USA). Three parents' organizations concerned with adolescents' digital health also participated in the report.
Meta disputes the study's conclusions and accuses it of “misrepresenting again and again” the company's work to protect adolescents. The company launched its teen accounts on Instagram in December; last Thursday they also arrived on Facebook and Messenger. The study examines both these new accounts and other features aimed specifically at minors.
“When I started looking at which things worked and which didn't, my intention was simply to talk about all this accurately, but I was very surprised to discover that everything was so bad,” Béjar told El País by phone. He then contacted a Northeastern University center dedicated to analyzing digital threats. They followed their usual method for detecting problems, this time applied to adolescents and Instagram: isolate each feature, design controlled tests, and observe the likely behavior of young people and parents.
For Meta, this is not enough to understand how Instagram really works: “Adolescents under these protections [of the teen accounts] saw less sensitive content, received fewer unwanted contacts, and spent less time on Instagram at night,” says the company, which in June announced that adolescents “had blocked accounts 1 million times and reported another million following safety notices” from the network. The question is what proportion that million represents of the total number of suspicious accounts actually circulating on the app.
Béjar has shared with EL PAÍS videos and screenshots in which girls aged 8 and 10 can be seen replicating a video in which they state their name, age, zodiac sign, and other details. Many of those videos, as this newspaper has been able to verify, are still accessible on Instagram today for adult users. The minimum age to have an Instagram account is 13.
“I found 8- and 9-year-old girls who made videos that must seem innocent to them,” says Béjar, “but this network distributes them to pedophiles.” “The worst of all was a girl who copied another video that said: ‘Put a red heart if you think I'm pretty, a yellow one if I'm okay, and a blue one if you think I'm ugly.’ [With the emoji] of a tongue licking. Horrible things,” he adds. The report includes screenshots of some of these videos.
From Silicon Valley success to whistleblowing
Béjar left Meta in 2015 after six years and later worked as an external consultant between 2019 and 2021. In 2015 he was chosen by this newspaper. In 2023 he testified before the US Congress about how his youngest daughter received messages on Instagram from adults who tried to start a relationship with her.
Meta counters that some features have simply changed names, or that there are features whose behavior depends on who sends the message first or on whether the adolescent reports or limits what they see. To Béjar, that approach is wrong. Every tech company knows how to get users to activate features in its apps: it depends on the color, the placement, the number of clicks required. If the restrictions are not placed in prominent spots and worded effectively, they don't get activated. “You know when the company wants you to use something,” says Béjar. “They also know how to make you not use something: they complicate it, they make it difficult,” he adds. This was part of his job when he was at Meta: adapting the language to young people. Perhaps for them “report” sounds too much like snitching or betraying someone, and they need a different kind of message, for example.
The report is divided into four broad sections: inappropriate behavior and contacts; sensitive content; screen time and compulsive use; and age verification and sexualized content. The category with the worst results for Instagram is sensitive content, where every feature received a negative assessment: it is too easy for young people to see violent or sexual content (drawings and descriptions) and videos that promote self-harm or radical diets.
Worse in Spanish
The report is in English, but Béjar is also a Spanish speaker: these searches perform much worse in Spanish, he says, and in other languages besides English. “I didn't anticipate that if you started typing in Spanish the beginning of ‘I want to...’, it would suggest ‘I want to kill myself’ and ‘I want to kill another person,’” says Béjar. “In my tests nothing worked in Spanish. In other languages I imagine it's even worse. You could search in Spanish for ‘I want to lose weight’ and it would recommend pills, always from devices set up as a teenager's.”
One of the authors' goals is for this research method to become standard across the sector. “Independent scenario-simulation tests should become a standard practice, carried out not only by researchers but also by regulators and civil society,” the report says. “Treating safety features with the same rigor that cybersecurity applies to other critical technologies is the only way to know whether platforms keep their promises.”