Big techs go on the offensive amid STF action on networks – 12/03/2024 – Power

by Andrea

On the eve of the resumption of the STF (Federal Supreme Court) trial, two of the main big techs, Meta and Google, released statements on the rule being debated in the court and defended the content moderation work they already carry out.

Each of the companies is a party to one of the two different actions that guide the trial, which began last week and will hold its third session this Wednesday (4).

On the one hand, the notes are an attempt to refute the argument that the companies do nothing, as claimed by the ministers and representatives who spoke during the first two days of sessions. Neither note, however, cites the judgment directly.

On the other hand, the notes indicate that the companies understand the scenario on the table to be one in which the Supreme Court will make some kind of change to the rule currently in force, even as they defend the importance of the current model established by article 19 of the Marco Civil da Internet (Brazil's Internet Civil Rights Framework).

This comes at a time when there is still no clarity about which way Minister Dias Toffoli, rapporteur of one of the actions discussing the civil liability of internet companies, will lean. He adopted a harsh tone against big techs throughout the beginning of his vote last Thursday (28).

Furthermore, there was the federal government, which stopped defending an intermediate path through the Supreme Court and came out in favor of overturning the current rule, something that would have a greater impact on the companies. This despite the fact that the line of interpretation proposed by the AGU (Attorney General of the Union) earlier in the process had already created broad exceptions to the regime in force today.

The two days of trial were marked by criticism of the platforms. Some of the most vocal came from ministers who linked the companies' conduct to the coup attacks of January 8, 2023.

The companies, in turn, defend the content moderation they carry out proactively and cite figures intended to show that they operate at scale.

Meta says there is no inertia on its part in detecting and acting on harmful content, "contrary to what has been heard in the public debate." The company also points out that it proactively removed 2.9 million pieces of content from its platforms during the election period for violating its policies.

Google states that it “removes, efficiently and on a large scale, content that violates the rules of each of its platforms” and that “hundreds of millions of pieces of content are removed per year by the company itself.”

Although it is a fact that the companies act to enforce their own rules, the effectiveness of moderation across the different platforms remains under scrutiny, notwithstanding the significant numbers presented in their various reports.

Under article 19 of the Marco Civil, described by Toffoli as an "immunity" for platforms, companies are only liable to pay compensation for content posted by third parties if, after a court decision ordering its removal, they keep the content up.

Google's note criticizes the more extreme outcome, saying that "abolishing rules that separate the civil liability of platforms and users will not contribute to the end of the circulation of unwanted content on the internet."

At the same time, the company says that the Marco Civil "can and should be improved, as long as procedural guarantees and criteria are established that avoid legal uncertainty and the encouragement of censorship."

Meta’s note defends the importance of the Marco Civil, while admitting that “the debate on updating internet rules is important, including regarding article 19”.

The current rule is meant to protect freedom of expression and avoid censorship, since it does not encourage companies to remove content for fear of being sued. It does not, on the other hand, prevent platforms from applying their own rules to remove content. Nor, however, does it create incentives for them to act.

While a decision declaring article 19 constitutional would keep the scenario as it is, a declaration of its unconstitutionality would overturn it, returning Brazil to the pre-2014 scenario. An intermediate line would be "interpretation in accordance with the Constitution," in which the article is maintained but receives a new interpretation from the Supreme Court.

On the companies' side, one of the main concerns, as the lawyers who made oral arguments on behalf of Google and Meta made clear, is to restrict the scope of any intermediate path.

Currently, the Marco Civil da Internet regime already contains exception mechanisms, namely for copyright infringement and non-consensual nudity content. An intermediate path through the Supreme Court that expands this list is defended as the route with the least legal uncertainty for big techs.

Both companies defended the constitutionality of article 19. They argued, however, in general terms, that any intermediate path should require prior notification before companies can be held liable.

And they also defended a more restrictive thematic list, providing for crimes such as child sexual exploitation, terrorism, racism, and the crimes of violent abolition of the democratic rule of law and coup d’état.

More open-ended concepts, such as disinformation and crimes against honor, on the other hand, are treated as red flags.

TikTok, in turn, organized an event together with the Vero Institute this Tuesday (3) in Brasília on digital safety for minors. According to information obtained by Folha, however, the date is not related to the trial. The issue of children online is one of the aspects raised in the Supreme Court trial.

Read Google's statement

Abolishing rules that separate the civil liability of platforms and users will not contribute to the end of the circulation of unwanted content on the internet. The Civil Rights Framework for the Internet can and should be improved, as long as procedural guarantees and criteria are established that avoid legal uncertainty and incentives for censorship.

Google removes, efficiently and on a large scale, content that violates the rules of each of its platforms. Hundreds of millions of pieces of content are removed each year by the company itself, in line with the public rules for each product.

However, good content moderation practices by private companies are unable to deal with all controversial content, in the variety and complexity with which it appears on the internet, reflecting the complexity of society itself. Judicial action in these cases is one of the most important points of the Marco Civil da Internet, which recognizes the role of the Judiciary to act in these situations and draw the line between illicit speech and legitimate criticism.

Read Meta's statement

As the numbers below on content moderation attest, there is no inertia from Meta in detecting and acting on harmful content, contrary to what has been heard in the public debate. Furthermore, our business model does not thrive in a toxic online environment: advertisers do not want to see their brands linked to harmful content. (…)

The work to guarantee the integrity of our platforms is ongoing. In the election period between August and October this year, we removed more than 2.9 million pieces of content on Facebook, Instagram and Threads in Brazil for violating our policies on bullying and harassment, hate speech and violence and incitement. (…)

The debate on updating internet rules is important, including regarding article 19 of the Marco Civil da Internet. The rule establishes a system for holding application providers responsible for content posted by third parties, prioritizing freedom of expression, while allowing platforms to moderate content posted on them. (…)

