Civil society organizations and parties point to what they see as gaps in this year's regulation of the use of social media and of the actions of influencers.
They sent recommendations for the resolution on electoral advertising, which will be published, with the accepted suggestions, by March 5th. Folha had access to ten of these contributions.
This year, the biggest challenges are expected to be networks of influencers, fake and rented profiles spreading negative or positive advertising at the margins of electoral law, and the use of AI, including chatbots and deepfakes, to influence the election in an illegitimate way.
One of the biggest concerns is a paragraph included by the rapporteur, Justice Kassio Nunes Marques: the sole paragraph of Article 3-B establishes that criticism of the public administration made by a natural person does not constitute negative early electoral advertising, even when there is a promotion contract.
Under current law, only parties and candidates can pay for boosting, contracting directly with the application provider, and only for positive advertising. Expenses must be declared.
In its recommendations, DataPrivacyBR states that the paragraph may open the door to the use of paid boosting as an indirect instrument of negative early electoral advertising. The organization cites, among other examples, the influencers who ran negative propaganda against the Central Bank in the Banco Master case.
UFRJ's NetLab was among the entities that sent suggestions to the TSE. The PL, for its part, celebrated the paragraph, saying that it "strengthens freedom of government criticism and reduces the risk of indirect censorship in the pre-campaign".
Most entities criticize the lack of regulation for the use of generative AI in campaigns.
According to Article 19, the resolution does cover deepfakes, with prohibitions and labeling obligations, but it does not regulate "the information generated by these models when used by voters as a source of information on political-electoral content." The organization notes that "reports of distorted, incorrect, fictitious or inaccurate information about candidacies and parties have become frequent."
NetLab and other entities call for a ban on chatbots recommending candidacies to users.
“Today there is no clear prohibition on the use of chatbots to guide voters, even though we already know that these tools influence perception, opinion and behavior”, says Andressa Michelotti, a researcher at UFMG.
The PT also asks for the prohibition of content that misleads voters by simulating journalistic content.
IDP professor Laura Schertel proposes a mandatory preventive system for artificial intelligence companies, with mechanisms for identifying and labeling synthetic content. She also recommends requiring safeguards that prevent the generation of realistic images of candidates or electoral authorities in contexts of violence, nudity or illicit acts, along the same lines as DataPrivacyBR.
“The damage to the electoral process is typically irreversible, so the regulatory architecture needs to be more preventive”, says Bruno Bioni, founding director of DataPrivacyBR.
