Algorithmic racism: when can technology reinforce prejudices?

Racism is a social problem present throughout the world, affecting black, Asian, and indigenous people, among other groups. This prejudice, rooted in societies, also appears in current technologies, such as artificial intelligence tools.

Algorithmic racism is the term used precisely to describe how software, artificial intelligence and other technologies can reproduce prejudices, even though no person is directly making the decisions.

Recognizing the flaws in technology and how it can reproduce prejudices becomes even more urgent as cities adopt facial recognition systems such as Smart Sampa, which already operates 40,000 smart cameras in the city of São Paulo.

What is algorithmic racism?

Algorithmic racism happens when automated systems, including algorithms and AI tools, reproduce or magnify racial inequalities in their output. This can happen because of the data used, development choices, system rules and the way the technology is applied.

Bias can also appear in candidate selection tools, risk analysis and other systems that make automated decisions.

A case revealed by g1 tells the story of a man who ended up being detained four times by mistake after being wrongly identified by the Smart Sampa cameras as a fugitive from justice in Mato Grosso. Despite clear differences from the wanted person, such as age and surname, he was stopped by police on all of these occasions. The fact that the victim is a black man raises a warning about how racism can be fueled by technology.

The main point is that algorithmic racism does not always come from a direct intention to discriminate. Most of the time, it arises indirectly.

For example, it can arise in the data used to train the system, in the choices made by developers, and in the objectives that the algorithm needs to fulfill. Therefore, even though it may seem technical or neutral, the system may end up encoding “prejudiced patterns”.

In other words, algorithmic racism does not depend on an explicit racist intention to happen: it is enough for the system to repeatedly produce unequal results.

Where does racism in algorithms come from?

There is no single answer, but a common origin lies in the data itself. If society has produced inequalities, historical databases tend to record these patterns.

For example, when an AI model learns from data that records this discrimination, it may end up treating inequality as a rule and injustice as a “normal” signal.
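To make this mechanism concrete, here is a minimal Python sketch using entirely synthetic data (the groups, the approval rates, and the scenario are assumptions made for the example, not any real system). A model that simply learns historical approval frequencies ends up imitating the discrimination recorded in them:

```python
# Minimal sketch with synthetic data: equally qualified candidates,
# but group B was historically approved 30 points less often.
import random
from collections import defaultdict

random.seed(42)

history = []
for _ in range(10_000):
    group = random.choice(["A", "B"])
    qualified = random.random() < 0.5            # identical in both groups
    base = 0.9 if qualified else 0.1             # merit-based chance
    penalty = 0.3 if group == "B" else 0.0       # historical discrimination
    approved = random.random() < max(base - penalty, 0.0)
    history.append((group, qualified, approved))

# "Training": estimate P(approved | group, qualified) from the records,
# which is effectively what many statistical models do.
counts = defaultdict(lambda: [0, 0])             # (approvals, total)
for group, qualified, approved in history:
    counts[(group, qualified)][0] += approved
    counts[(group, qualified)][1] += 1

# The learned rates replicate the bias: qualified B candidates are
# recommended far less often than equally qualified A candidates.
for key, (ok, total) in sorted(counts.items()):
    print(key, f"learned approval rate: {ok / total:.2f}")
```

Note that no line of this code mentions race or intends to discriminate; the inequality comes entirely from the historical records the model is asked to imitate.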

There is also a problem linked to the representativeness and quality of the data. Studies show that some databases and commercial tools fail more often for certain ethnic or minority groups, which increases the risk of errors with serious consequences.

When might algorithmic racism appear?

Algorithmic racism can arise in different ways, especially when systems are trained with biased data or use superficial criteria, among other factors.

Check out some of the ways in which algorithmic racism can appear:

  • Facial recognition: This is one of the best-known cases of algorithmic racism, as technologies of this type have accumulated episodes of misidentifying people from racial minorities.
  • Candidate selection and employment: As has already occurred in some cases, AI tools used in hiring, evaluation and dismissal processes can generate discrimination.
  • Old data with errors or inequalities: Biases can arise from the data used to train these AIs, reproducing inequalities that already exist in the real world.
  • Predictive policing: Though less widespread, biased systems that predict where crimes might happen can also reinforce inequalities.

Smart Sampa and cases of algorithmic racism

According to research released in February 2026 by the Public Policy and Internet Laboratory (Lapin), the Peregum Black Reference Institute and Rede Liberdade, Smart Sampa has already been linked to cases of false positives and wrongful arrests.

“Smart Sampa deepens racial and geographic inequalities, reinforcing a public security model that criminalizes certain bodies and territories,” said the director of Areas and Strategy at the Peregum Black Reference Institute, Beatriz Lourenço, in an interview with Agência Brasil.

According to the research, the data indicates a possible racial and territorial bias in the system. Among the people arrested, 25% were black and 16.01% were white, while 58.9% of the records did not even report race. The arrests analyzed were also geographically concentrated in the city center and in peripheral neighborhoods.

“These data suggest that Smart Sampa reinforces historical processes of racial segregation, unequal surveillance and selective policing, linked to racism and socioeconomic inequalities”, says an excerpt from the research.

The report also points to technical problems in facial recognition. Among the cases cited, at least 23 people were allegedly detained by mistake because of inconsistencies in the system, and another 82 were arrested but later released.

Is it possible to reduce bias in algorithms?

Studies show that bias can vary greatly from one system to another and can improve with more careful data choices. So, it is possible to reduce this problem.

NIST, the United States National Institute of Standards and Technology, points out that some algorithms perform better than others, both in accuracy and in balance across different groups. The institute also indicates that using more diverse training datasets can help reduce these differences.

The Gender Shades study on algorithmic bias, published in 2018, showed that facial recognition systems made more errors when analyzing women and people of color. The study helps reinforce the importance of testing these technologies on different groups.
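As a rough illustration of what testing these technologies on different groups means in practice, the sketch below reports error rates per group instead of a single overall number. The records and group labels are invented for the example, not taken from NIST or from the 2018 study:

```python
# Minimal sketch with made-up data: compute the false match rate
# separately for each group. A single overall accuracy figure can
# hide the fact that one group bears most of the errors.
records = [
    # (group, true_identity_match, system_said_match)
    ("group_1", False, False), ("group_1", False, False),
    ("group_1", True,  True),  ("group_1", False, False),
    ("group_2", False, True),  ("group_2", False, False),
    ("group_2", True,  True),  ("group_2", False, True),
]

def false_match_rate(rows):
    """Share of non-matches the system wrongly flagged as matches."""
    non_matches = [r for r in rows if not r[1]]
    wrong = sum(1 for r in non_matches if r[2])
    return wrong / len(non_matches) if non_matches else 0.0

for g in sorted({group for group, _, _ in records}):
    rows = [r for r in records if r[0] == g]
    print(g, f"false match rate: {false_match_rate(rows):.0%}")
```

A respectable overall accuracy can coexist with one group absorbing almost all of the false matches, which is exactly the kind of gap that per-group testing is meant to expose.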

In any case, it is important to highlight that reducing bias is neither simple nor automatic; even more so in a society that increasingly uses AI systems.

It is not enough to just adjust machine learning models or any other technology used. It is also necessary to create rules and processes to assess risks before and after use, in order to avoid perpetuating racism or any other prejudice.
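As one illustrative sketch of such a process (the metric and the 10-point threshold below are assumptions for the example, not an established legal or industry standard), a review pipeline could refuse to approve a system whose error rates diverge too much between groups:

```python
# Illustrative pre-deployment gate: block release when error rates
# diverge too much between groups. The 10-point limit is an assumption
# for this example, not a regulatory standard.
MAX_GAP = 0.10

def audit(error_rate_by_group: dict[str, float]) -> None:
    rates = error_rate_by_group.values()
    gap = max(rates) - min(rates)
    if gap > MAX_GAP:
        raise RuntimeError(
            f"error-rate gap between groups is {gap:.0%}; "
            "investigate the disparity before deploying"
        )
    print(f"gap of {gap:.0%} is within the {MAX_GAP:.0%} limit")

try:
    audit({"group_1": 0.04, "group_2": 0.18})    # 14-point gap -> blocked
except RuntimeError as err:
    print("audit failed:", err)
```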
