How is artificial intelligence being regulated around the world? See the divergences and convergences

by Andrea

Read the Jovem Pan columnist's interview with the lawyer and master's student at Florida Christian University, Dr. Bruno de Almeida Vieira, on the subject

Reproduction/Pixabay
As algorithms influence our daily lives, the pressure grows for governments to establish clear rules

In the legal arena, the debate on artificial intelligence (AI) has become the point of convergence between technological innovation, fundamental rights, and market strategy. As algorithms increasingly influence our daily lives, pressure grows for governments to establish clear rules capable of protecting citizens without stifling creativity and development. It is precisely at this meeting point between law and digital innovation that today's interviewee works.

Dr. Bruno de Almeida Vieira is a lawyer, a Data Protection Officer certified by EXIN, a renowned Dutch institution and a world reference in the qualification of professionals in the field, and a master's student at Florida Christian University, where he researches the regulation of advanced digital technologies, with a particular focus on AI. Today, he talks to Jovem Pan columnist Dr. Davis Alves to map the convergences and divergences of AI legislation around the world and what this means for companies and citizens in Brazil and abroad.

Dr. Davis Alves: Dr. Bruno, why is regulating AI no longer just a trend but an urgent necessity?

Dr. Bruno Vieira: AI systems are no longer a laboratory curiosity; they now make decisions that change people's lives. They are complex, difficult to audit, and when they go wrong they can cause great damage. That is why countries are racing to put rules in place before the problems grow larger than the solutions.

Dr. Davis Alves: Among such varied legal systems, do you perceive any convergence in AI regulations?

Dr. Bruno Vieira: Undoubtedly. When we compare bills from places as different as the European Union, Canada, and Brazil, or even the guidelines published in the US, we find a true "central axis" running through all these proposals. First, there is the idea of classifying each use of AI according to its degree of danger: the higher the risk to society, the stricter the requirements become. Then comes transparency: the user needs to know that he is talking to a machine, and oversight bodies must be able to examine the system from the inside, just as one opens the hood of a car. Next is the requirement of a human eye in the process; that is, truly serious decisions cannot be made without a person's supervision. The package is completed by clear protection of fundamental rights, in order to prevent discrimination, invasion of privacy, and security risks, and, finally, by the identification of those responsible: whoever trained, operates, and profits from the model must be registered, so that there is someone to turn to if something goes wrong. In short, despite cultural differences, there is an international consensus that AI will only be welcome when it combines well-defined risk classification, transparency, human supervision, respect for fundamental rights, and clear accountability.

Dr. Davis Alves: If there is so much consensus in some respects, at what points do the proposals of each country really take opposite directions?

Dr. Bruno Vieira: When we compare the legislation, the divergences begin with institutional design. The European Union has opted for a single comprehensive law, the AI Act, which applies, without adaptation, to all 27 Member States and provides for sanctions that can reach 7% of a company's global revenue. In the United States, the picture is fragmented: there is no federal statute dedicated to AI, each agency regulates its own sector, and the government has published only an executive order with general recommendations, leaving gaps that are filled case by case. The United Kingdom, in turn, has taken an intermediate path, decentralizing supervision: each sector, such as health, transportation, or the financial market, has autonomy to apply the technical guidelines within its own area.

The second big difference is in the "high risk" label. Europe adopts a rigid list of prohibited or highly controlled uses, such as real-time facial recognition for mass surveillance. China does not prohibit this practice, but requires companies to register sensitive algorithms and open code excerpts to state audit, reinforcing government supervision. In Brazil, Bill 2338/2023 follows the European logic, but Congress is still discussing whether recruitment tools, for example, should automatically be classified as high-risk systems or receive more flexible treatment. Finally, there is territorial reach. The European model adopts the market-placement principle: it is enough to offer an AI system within the European Union for a company to have to follow the AI Act, wherever it is based. Canada, in the AIDA bill, applies the test of a "real and substantial connection" with the country, which gives foreign companies a larger margin to argue that their operations do not fall under Canadian law.

Dr. Davis Alves: Between the EU's single law and the sectoral guidelines of the United States, which points actually matter for those operating from Brazil?

Dr. Bruno Vieira: In the European Union, the mindset is preventive: before a company puts its system on the market, it needs to prove that it complies with a series of predefined requirements. There are explicit lists of what is prohibited or strongly limited, mandatory technical tests that simulate risk scenarios, and even a public register where these systems must be recorded so that any interested party can consult them. The tone is "authorize first, operate later." The United States, on the other hand, prefers to let innovation run and intervene only when concrete damage arises. If an algorithm deceives consumers or commits some form of discrimination, for example, federal agencies step in, sue the company, and demand redress. For those who want to work in both markets, the recommendation is to adopt the European standard, which is stricter from the start, while keeping dossiers and impact reports ready to deliver to the American authorities if someone questions the system after it is already in operation.

Dr. Davis Alves: Chinese regulation is said to combine strong state intervention with technical requirements of its own. How does this formula work in practice?

Dr. Bruno Vieira: In China, the logic guiding AI regulation starts from an essentially state-driven concern: ensuring national security and maintaining control over the flow of information. Since 2021, any company that provides algorithms considered "sensitive", that is, capable of influencing public opinion or affecting critical infrastructure, must register them in an official database, provide the relevant parts of the code for government audit, and incorporate a kind of "off switch" that allows the authorities to suspend the service if they see a risk to public order. In theory, there is a point of convergence with the West: holding the supplier accountable for the proper functioning of the technology. The difference lies in the final purpose. While Europe and the United States focus on protecting the individual against private-sector abuses, the Chinese approach mainly safeguards the state itself, ensuring that AI operates in tune with the government's security and social-stability objectives.

Dr. Davis Alves: When we talk about Bill 2338, what distinctive elements does it bring to the debate on artificial intelligence here in Brazil?

Dr. Bruno Vieira: Bill 2338/2023 adopts the same risk classification as Europe, but adds three elements of its own: controlled test environments in which companies can experiment with AI solutions; the creation of a "responsible AI" seal for those who adopt good practices; and reduced requirements for companies with annual revenues of up to R$ 16 million. It is worth noting that the bill is still vague about who will actually be responsible for enforcing these rules: the ANPD (National Data Protection Authority), the CGI (Internet Steering Committee), or an entirely new agency yet to be created.

Dr. Davis Alves: What is the best strategy for companies that operate in many countries to deal with the different AI rules?

Dr. Bruno Vieira: The safest strategy can be summarized in three chained movements. First, adopt the European AI Act as a baseline, because it contains the strictest requirements available today; if a system is born compliant with those rules, it will hardly fall below the standard anywhere else. Then, adjust the product to the local context, reviewing databases, translations, impact statements, and internal policies to reflect the legal, linguistic, and cultural particularities of each market where the AI will be offered, avoiding interpretation errors and one-off fines. Finally, maintain an active presence in technical forums and regulatory laboratories (ISO/IEC, for example).

Dr. Davis Alves: Can the Hiroshima principles, launched by the G7 in 2023, become the common rail that unifies AI laws?

Dr. Bruno Vieira: Although the Hiroshima principles have no force of law, they act as a summary of good practices already present in various regulatory proposals, such as risk assessment, transparency, human supervision, and accountability. The difference is that the document is not limited to measuring the danger inherent in an AI model; it also takes into account the purpose of its use. This "risk plus intention" criterion can serve as a bridge between different legislations, allowing countries to adopt requirements proportional to the context of application even without agreeing on every technical detail.

Dr. Davis Alves: To end our conversation today, what is the main message you would like to leave?

Dr. Bruno Vieira: Regulating does not mean braking innovation, but ensuring that technological progress brings benefits without generating invisible social costs. If Brazil establishes clear rules, stimulating research and preserving citizens' rights, it will create an environment of trust that favors users, companies, and the country.

Want to dig deeper into the subject, have a question or comment, or want to share your experience on this topic? Write to me on Instagram: .

*This text does not necessarily reflect the opinion of Jovem Pan.
