Electoral disinformation, weapons and hate speech may remain outside Meta’s automated monitoring – 01/15/2025 – Power

by Andrea

Meta has included just one new category in the list of topics for which it says it will continue to use automated systems to detect possible violations of its own rules, one of the many points in the company’s recent announcement of changes to its content moderation policies.

In addition to “terrorism, child sexual exploitation, drugs, fraud and scams”, cited in a previous company statement as “highly serious” violations, the only topic added to the list was content that encourages suicide and self-harm.

The company, however, has so far not disclosed what it will consider “low severity” violations, which will therefore undergo moderation only after a user report.

Topics such as electoral and vaccine misinformation, hate speech, bullying and harassment, restrictions on weapons, and incitement to violence are among those that may end up left out of this automated monitoring.

Asked by Folha which of these categories will, in fact, no longer have automated detection, Meta said it would not comment.

While the policy on each topic defines what is and is not allowed on the platform, the way this moderation is carried out, whether proactively or only after a report, sets the tone for how much content will be removed.

Experts consulted for this report point out that the decision is both a shift in the company’s operations, which had been investing heavily in automation, and a problem, given the volume of content posted on Meta’s platforms (Instagram and Threads) at all times.

They also question how large a staff the company will have to analyze reports without automation.

On the 7th, the company’s founder, Mark Zuckerberg, released a video announcing changes to content moderation policies, in what was seen as a nod to US president-elect Donald Trump. The changes included the end of the fact-checking program.


The company does not disclose the size of its moderation teams for each language, which include outsourced workers.

One topic the company now indicates more clearly should no longer rely on automated detection is hate speech.

In a report released by Meta about its performance in Brazil during the 2024 election period, the company says it removed more than 2.9 million pieces of content that violated its policies on bullying and harassment, hate speech, and violence and incitement.

It also disclosed that, in the case of bullying, more than 95% of those posts were identified by the company itself and removed before any report was made. In the other categories, this percentage was above 99%.

The company does not specify whether it removed content based on its electoral misinformation policy. Another 8.2 million pieces of content were flagged as false information based on fact-checks carried out with partners.

Andressa Michelotti, a doctoral student at UFMG (Federal University of Minas Gerais) and a member of Governing the Digital Society at Utrecht University, in the Netherlands, notes that the change shifts the burden of identifying possible violations onto users.

For her, who has worked in the technology sector, an important question is how this approach may affect the circulation of content in non-English-speaking markets. “Are they going to adopt manual review for all languages? Historically they didn’t. There are many questions in the air,” she says.

Andressa also questions what would be considered terrorism in the case of Brazil and points out that, as disclosed by the company, it remains open whether, for example, neo-Nazi groups will still fall among the violations subject to automated detection.

In its response to the AGU (Attorney General’s Office) presented on Monday (13), Meta mentions only terrorism, a topic that is part of its policy on “dangerous organizations and individuals”, which also covers other aspects, such as “hateful ideologies”, including Nazism, white supremacy and white nationalism.

Clarice Tavares, research coordinator at InternetLab, assesses that, although automated moderation does make mistakes because it cannot grasp the full context, removing it from part of the policies is not the appropriate path.

For her, it is necessary to think about how to improve it and at the same time invest in robust human review systems.

“Not having this proactive detection by automated tools is really a challenge,” she says. “We’re talking about millions and millions of pieces of content, in many languages.”

Sources who have worked at the company, heard by Folha on condition of anonymity, point out that, beyond the lack of transparency toward the public about the extent of the announced changes, it is possible that Meta’s own teams do not yet have this well defined, since there is no sign that the subject went through internal development before being announced.


Meta Content Policies
Defines what is allowed or not allowed on the platform regarding each topic

Moderation process
Until now, Meta says it has used automated systems to track violations of all its policies, performing proactive moderation

What changes
The company will now focus the use of these systems only on detecting “high severity” breaches. “Low severity” violations will depend on user reporting

What will still be automated
To date, Meta has disclosed that terrorism, child sexual exploitation, drugs, fraud and scams, as well as encouraging suicide and self-harm, are on the list of serious violations

What will depend on the complaint
Meta has not yet disclosed what it considers to be “low severity”

