Helle Thorning-Schmidt, co-chair of the independent watchdog that reviews content on Facebook and Instagram, expressed great concern on Wednesday about the impact on minorities of the decision by Meta, the platforms’ parent company, to abolish independent fact-checking. According to AFP, experts have also raised concerns about the spread of misinformation, TASR reports.
Meta, which owns both Facebook and Instagram, announced Tuesday that it is ending independent third-party fact-checking on its platforms. It will now leave such verification up to users, similar to what Elon Musk’s social network X already does.
Former Danish prime minister Thorning-Schmidt told the BBC that the announced changes would bring “huge problems”, including the potential impact on the LGBTQ+ community, as well as gender and transgender rights.
“We see many cases where hate speech can lead to real harm, so we will be watching this space very carefully,” she told the BBC. She also insists that the watchdog, which she co-chairs, is needed now more than ever.
This independent body is funded by Meta and was created under the company’s then-president of global affairs, Nick Clegg, who announced his departure less than a week ago.
Zuckerberg talks about going back to his roots
Meta founder and CEO Mark Zuckerberg said the decision was motivated by “returning to our roots in free speech.” He argued that the third parties the company has used for fact-checking are “too politically biased,” which he said led to too many users of its networks being “censored.”
However, journalist and Nobel Peace Prize laureate Maria Ressa believes that Zuckerberg’s claim to support freedom of speech is completely misguided.
“You can only say that if you want profit – only if you want power and money,” she told AFP. She says social media users and democracy are facing “extremely dangerous times” as a result of Meta’s decision.
AFP recalls that Meta took this step after years of criticism from supporters of incoming US President Donald Trump, who argued that conservative voices were being censored or suppressed under the pretext of fighting disinformation. Professional fact-checkers strongly reject these claims.
“Abandoning formal fact-checking in favor of crowdsourced moderation tools has failed in the past,” noted Nora Benavidez of the nonprofit Free Press.
“Twitter tried it and could not handle the volume of misinformation and other harmful content,” she added, referring to the platform that Musk bought in 2022 and later renamed X.
Within weeks of the takeover, Musk, the world’s richest man, dismantled the teams responsible for trust and safety and introduced so-called Community Notes, a way for users to add context to posts.
A haven for spreading disinformation
Researchers say that the loosening of safety restrictions on X and the reinstatement of once-banned accounts belonging to known spreaders of disinformation have turned the platform into a haven for false information.
“While research supports the idea that user fact-checking can be effective if done properly, it’s important to remember that it’s meant to complement professional fact-checking — not replace it,” emphasized Gordon Pennycook of Cornell University in New York state.
According to Alexios Mantzarlis, who directs a research center at Cornell University, Meta’s new approach ignores research showing that users providing context are “largely motivated by partisan attitudes and tend to disproportionately target their political opponents.”