Concerns raised with the TSE (Superior Electoral Court) about the advancing use of AI (artificial intelligence) and its impact on the elections include the dissemination of fake nudes, liability for artificially created influencers, and the use of smart glasses in the voting booth.
Suggestions for improving the rules governing the October elections came from research centers, experts in digital law, members of the PGE (Office of the Electoral Attorney General) and even from ministers who are not part of the electoral court's main bench.
The TSE is also considering signing agreements with companies that develop and supply AI, and setting up a task force of experts to speed up the identification of manipulated content and give ministers' decisions more technical grounding during the campaign.
This recommendation was made by a substitute minister of the court in the session that opened the public hearings on the topic, on Tuesday. The dean of the STF (Federal Supreme Court) said that AI "poses new challenges for the integrity of [elections]" and that cooperation measures are essential to curb the abusive use of AI.
The TSE first regulated AI for the 2024 municipal elections, when rules were established for the propaganda of parties, coalitions, federations and candidates, and restrictions were set on the use of robots to contact voters.
Electoral court technicians, however, believe that these technological tools are becoming ever more sophisticated and that the TSE therefore needs to improve its surveillance actions.
Amid this, other observations on the subject began to reach the electoral court, still in 2024. The CNCiber (National Cybersecurity Committee), a body linked to the GSI (Institutional Security Office), for example, delivered to the president of the TSE a memorial warning about the actions of influencers created by AI.
Court technicians interviewed by Folha assess that there is a legislative vacuum, since these characters are neither natural nor legal persons. There are also doubts about whether liability for an irregularity committed by such an influencer, such as hate speech, would fall on the AI developer or only on the person who contracted the service.
The president of the TSE has been discussing with her team the impact of AI on the spread of disinformation, an issue dear to her administration. At last Thursday's hearing, the Brazilian Institute of Education, Development and Research addressed the use of deepfakes to discredit candidates through the propagation of fake nudes.
The TSE has also received questions from the public on related topics. Questions submitted through the ombudsman channels in recent weeks include how to identify a campaign jingle produced with AI and whether voters may enter the voting booth wearing smart glasses, which have a camera and other features.
The answer to the first was that the jingle must carry a warning that it was produced artificially. To the second, the TSE responded that smart glasses are banned in the voting booth, just as cell phones and other electronic devices are.
In the public hearings, more concrete suggestions for improving the draft resolutions were made. The PGE asked, for example, that the amount of the fine foreseen for those who use artificial intelligence to spread fake news (from R$5,000 to R$30,000) be made explicit.
The Ilaraine Acácio Arce Foundation of Criminalistics Experts asked that the resolutions for the 2026 elections make clear that using AI for mere technical improvements to media, such as sound quality, cannot be punished.
Aides to the president of the TSE and to the vice-president and rapporteur of the resolutions, Minister Kassio Nunes Marques, say that all observations will be evaluated. The court concluded the public hearing phase on Thursday. The draft resolutions will now be adjusted based on the suggestions and voted on in plenary by March 5.
