In a filing with the STF (Federal Supreme Court) in November last year, Meta used a tone opposite to the one now adopted by Mark Zuckerberg, the company’s CEO, to talk about its moderation activities.
Instead of speaking of “censorship” or claiming a high number of errors and excessive restrictions, Meta, in the filing it submitted, defended its proactive action.
With statements such as that the “content moderation carried out by Meta is effective” and that the application of its policies “encompasses a coherent and comprehensive approach”, the company sought to refute the idea that it had been inactive in combating harmful content.
The tone is quite different from Zuckerberg’s. “We built a lot of complex systems to moderate content. But the problem with complex systems is that they make mistakes,” he said on the 7th, when announcing a change in the company’s stance. “We’ve reached a point where it’s just a lot of mistakes and a lot of censorship.”
He also announced that the company would stop using automated filters for low-severity violations: “The problem is that the filters make mistakes and take down a lot of content that they shouldn’t,” said the executive.
In the document filed with the STF just two months earlier, the company highlighted that its moderation activity was based on “detection of violations based on user reports, technology (using artificial intelligence) and human analysis” and that “the results of these efforts are overwhelming”.
It also said that this “demonstrates that, for objective situations provided for in the terms of use, the tools exist and are effective in combating the dissemination of harmful content. It should be noted that 98.30% of this content was removed through proactive action”.
Folha asked Meta which facts led the company to change its view of its own moderation work in such a short interval. It also asked why estimates of moderation errors had not been disclosed by the company in previous filings.
Meta responded that it would not comment.
Facebook –which is part of Meta, along with Instagram, Threads and WhatsApp– is one of the parties in the case before the Supreme Court that discusses the Marco Civil da Internet.
Its article 19, the main point of the court’s discussion, says that networks can only be ordered to pay compensation for content posted by a third party if, after a court decision ordering its removal, they keep the content online.
At the time, the rule was approved out of concern for ensuring freedom of expression. One of the justifications was that, without it, networks would be encouraged to remove legitimate content for fear of being held liable.
Critics say the rule discourages companies from combating harmful content, and they want to expand the possibilities of holding platforms liable.
Meta defends the constitutionality of the current rule but, at the same time, seeks to shield itself from the criticism that it only acts to remove problematic posts after a court order.
“Article 19 of the MCI [Marco Civil da Internet] does not create an anarchic environment. As already mentioned, it does not prevent providers from acting proactively to mitigate the risk of the internet being used for illicit purposes”, the company says in the same filing.
The tone reserved for the Judiciary also shows relevant differences. In the action before the Supreme Court, when defending the importance of the Marco Civil da Internet model, the company describes the Judiciary as “the body constitutionally designated to carry out this balancing judgment, ensuring that conflicting fundamental rights are harmonized in a fair and balanced way”.
Zuckerberg, by contrast, complained of Latin American countries with “secret courts that can order companies to remove content silently”.
In a statement and a report released by Meta in December, the company responded to the Supreme Court trial, which until that point had been marked by sessions with strong criticism of the networks.
Under the title “Our proactive work to protect the integrity of municipal elections in Brazil in 2024”, the company stated that “there is no inertia on Meta’s part in detecting and acting on harmful content, contrary to what has been heard in the public debate”.
“We want to be clear: we do not tolerate violence and other harmful behavior on our services. And we do not act on this type of content only when we receive court orders; quite the contrary.”
“More than 95% of bullying content was identified by Meta itself and removed before any report was made. For the other types of violating content mentioned, this percentage was above 99%”, the note also said.
Furthermore, in the electoral report published at the time, the company presented, as a highlight of its action to combat false information, its fact-checking program.