Until robust safeguards are in place, the public sector and, by extension, publicly funded election campaigns should be prohibited from broadcasting generatively produced audio and video. Today, this synthetic content confuses authorship, distorts facts and widens power asymmetries between the State, candidates and citizens. In election periods, this imbalance creates a serious systemic risk.
Why? Because the State is an institutional source of information and a benchmark of authenticity for citizens. When it starts adopting artificial spokespeople or cloning its authorities, the line between public communication and fiction dissolves, and the reputational damage falls on the entire administration. This occurs even when intentions are good. In 2024, Arizona's Secretary of State used an AI-generated video of himself to train election staff and warn them about deepfakes. Although clearly identified as a simulation, the initiative highlighted the problem: the authority himself became indistinguishable from a synthetic creation. If this already caused unease in a training environment, imagine the effect in mass communication.
There are two other, even more serious threats. The first is emotional and aesthetic manipulation. Generative AI facilitates the fabrication of authority and memory to influence political behavior. We saw this many times last year: in Indonesia, with the viral spread of a deepfake of the dictator Suharto, who had been dead for decades, asking for votes; in India, with fake images of Bollywood actors declaring electoral support; in the USA, with automated phone calls using President Joe Biden's cloned voice to discourage voters from going to the polls; and in Mexico, with fraudulent videos featuring a synthetic version of then-candidate Claudia Sheinbaum promoting a "financial opportunity" in the middle of the electoral cycle.
The second threat is unfair competition. How many will be tempted to use taxpayer money to micro-target voters and flood the internet with cheap propaganda, boosting the official machine (or party machines) and drowning out dissent? Who will control the volume, frequency and tone of these artificial messages? Is a right to reply in real time even viable against what amount to assembly lines of lies? How can we guarantee equal conditions, parity of arms and democratic balance?
Some may counter by pointing to the benefits of generative AI, such as efficiency, accessibility and cost containment. These are legitimate gains. But they do not depend on generative AI: they can be achieved with other AI tools and auditable human processes, at least until there are regulatory frameworks, oversight mechanisms and a civic culture capable of preventing large-scale abuse.
"But the public sector already uses actors and staging. What's the difference?", others may object. The difference is fundamental. Actors and performances follow consolidated cultural and legal codes (credits, unions, image rights), and there is a physical limit: no actor actually is the character or authority they portray. AI simulates identity, clones voices and recreates the faces of the authorities themselves, or invents "spokespeople" with an appearance of official status. On small screens, everything ends up looking true. Or worse: the ambiguity strengthens the "liar's dividend", and in the near future even authentic audio and video could be dismissed as fake. This prospect of a complete demoralization of public communication is terrifying. Taking such a risk is unacceptable, especially when it is financed with public funds.
This is a timely and urgent agenda for the electoral court, which has been acting with such firmness and diligence in defending our democracy. As political agents and institutions prepare for another electoral cycle, I invite the court, especially its current president (until August) and Kassio Nunes (from then on), to evaluate the following measures: 1) ban audiovisual content produced with generative AI and public resources, including party and electoral funds; 2) admit only assistive, non-generative functions (compression, subtitles, audio description that does not synthesize real people, format conversion); 3) require a declaration of authenticity, plus records and storage of source files, for all official audio or video pieces; 4) veto the use of generative AI in the creation and management of automated accounts (bots) for the dissemination of content; and 5) define sanctions and joint liability for suppliers and managers in case of non-compliance, and create an agile reporting channel with forensic expertise specialized in AI.
This is not technophobia. Private companies, organizations and citizens must remain free to experiment and innovate with generative tools. I myself helped found a movement (the True Artificial Intelligence Institute) to promote literacy in what may be the most important technology since electricity and to show people positive ways to use AI to transform their work and lives.
The proposal here is clear, technical and republican. The State needs to shield the trust that supports its public voice. The integrity of the electoral process must be preserved. Brazilian democracy cannot serve as a laboratory for synthetic communication financed by the taxpayer.
