Countries discuss big tech, and Brazil could be a ‘jabuticaba’ – 12/07/2024 – Power

by Andrea

Brazil is not the only country discussing proposals to increase the liability of internet platforms.

In the US, Republicans are trying to enact changes to punish platforms that moderate “too much” content, something they consider to be censorship.

In the European Union, in the United Kingdom, in India and in Brazil itself, the objective is to increase incentives for platforms to remove illegal content; the perception is that big tech does not do enough.

But if the country follows the position of the AGU (Attorney General of the Union), defended in the judgment under way at the STF (Supreme Federal Court), Brazil will have a “jabuticaba” regulation: like the fruit that grows only in Brazil, a model that exists in none of the world’s great democracies.

The discussion had been heading toward the STF changing article 19 of the Marco Civil to arrive at a model similar to that of the European Union, where the “notice and action” regime is in force.

Once an internet platform is notified by a user about potentially illegal content, it can be held civilly liable if it is sued and loses, unless it acted quickly to review the report and remove the content. No court order is necessary; an extrajudicial communication is enough.

Today, in Brazil, platforms can be held civilly liable only if they fail to remove content after a court order, except in cases of copyright infringement and non-consensual nude images, for which extrajudicial communications, such as user complaints, suffice. The expectation was that the Supreme Court would add crimes against the democratic rule of law, racism and homophobia to that list.

But Toffoli, in his thesis, and the government, in the AGU’s statement, went further, proposing a regime of objective liability for internet companies.

Under this regime, companies can be held liable for illegal content even before receiving a court order or an extrajudicial notification, such as a user report. As a result, they would have to actively monitor everything posted on their networks and remove potentially infringing posts.

The STF minister and the AGU proposed a list of prohibited content that could give rise to this objective liability, including crimes against the democratic rule of law, acts of terrorism or preparatory acts, inducement to suicide or self-mutilation, racism, violence against children and women, and opposition to health measures.

“The monitoring obligation is an excrescence, it does not exist in any of the major democratic jurisdictions,” says Beatriz Kira, law professor at the University of Sussex, in the United Kingdom.

Both European Union legislation and the United Kingdom’s Online Safety Act (which came into force in 2023 and is in its implementation phase) establish that there is no liability for content in the absence of extrajudicial notification, and neither presupposes general monitoring.

“There is a global trend toward regulating platform duties, because liability exemption models like Section 230 [in the US] assumed that these companies were neutral actors that did not interfere in content management,” says Laura Schertel Mendes, professor of digital law at UnB and IDP.

She considers the scenario today to be very different, because “it is clear to everyone that platforms intervene in the flow of information through algorithms that control the reach and prioritization of content.”

Mendes, like other experts, also defends a systemic approach to regulation. Under European law, platforms have to report on how they intend to deal with systemic risks such as child sexual abuse content, hate speech, and effects on democratic processes and on public and mental health.

In addition, they must publish transparency reports explaining what content they removed and what changes they made to their algorithms to mitigate risks. Auditors evaluate the reports and, in cases of widespread non-compliance, the networks can be fined. There is no punishment over individual pieces of content.

In the same vein, UK law requires the largest platforms to report on how they are applying their own terms of use and oversees their compliance, under the so-called “duty of care”.

“The pressure for social networks to do a better job of moderating illegal content has increased in several countries. But a ‘legislative’ decision by the STF, creating rules instead of merely delimiting immunity, could end up creating legal uncertainty,” says Ivar Hartmann, law professor at Insper.

In India, there is also a movement to restrict platform immunity. Local law exempts big tech from liability for content posted by third parties, as long as they comply with court or government orders to remove content.

But Prime Minister Narendra Modi’s government has been increasing companies’ responsibilities through amendments and announced last year that it is finalizing a new Digital India Act.

In the US, the debate follows ideological lines. Democrats push for exceptions to the immunity granted to platforms. Republicans, on the other hand, argue that big tech acts as a censor, especially of conservative voices.

The pressure has intensified since Donald Trump was banned from YouTube, Facebook and X (then Twitter), accused of inciting violence during the invasion of the Capitol on January 6, 2021. Several “anti-censorship” bills pending in the Senate and House are expected to be reintroduced next year; with a Republican majority in both chambers, the chances of approval increase.

Furthermore, Brendan Carr, appointed by the president-elect to chair the Federal Communications Commission (FCC), has already stated that the agency should work with Congress so that “internet companies no longer have carte blanche to censor legitimate speech.”
