Some politicians and journalists in Brazil will now have access to a tool that tracks, based on a person’s appearance, content published on the platform in which their face appears and may have been altered or created with AI.
Although the aim is to combat deepfakes, which are increasingly realistic, YouTube’s own page about the tool states that, because the feature is still in the testing phase, it may also flag videos that show these people’s real faces.
From this initial scan carried out by the platform, the public figure will be able to review the listed content and request the removal of items generated or altered by AI. According to the company, removal will not be automatic and will depend on analysis against the criteria defined in its privacy guidelines.
According to a statement from YouTube, access will be given to a “pilot group of government officials, journalists and political candidates.” The list of people eligible to register for the tool, however, will not be published, for privacy reasons, according to the company.
In Brazil, the tool is to be presented to the TSE (Superior Electoral Court) as part of YouTube’s election-related efforts. From this Tuesday (10), it will be available to politicians contacted by the platform in the country. This group is expected to expand progressively, but no further details were given.
Those who agree to sign up will need to submit an official ID and record a short video of their face, such as a selfie. Participants must also have a registered YouTube channel, even if unused, to use the tool.
At a press conference, company representatives were asked, for example, whether the president of Brazil would have access to the feature, a question that was not answered.
Nor was it disclosed in which other countries politicians and journalists will have access to the tool in this first round of expansion. Besides Folha, only US journalists were present at this interview.
Last week, the TSE approved new rules for the elections. Among other items, it maintained the ban on deepfakes used either to harm or to favor candidates and prohibited the publication of AI-generated content in the 72 hours before the election.
The resolution on the topic also provides for joint liability of social networks if they fail to remove content considered “at risk”, one category of which is content generated or altered by AI in violation of electoral restrictions.
For now, the YouTube tool only includes images, not audio. It will work in a similar way to Content ID, known for tracking copyrighted content on the platform, but looking for images that appear to be the face of the person who registered.
“It’s a way for us to give more people control, in some way, over their image, more management, more autonomy to carry out this inspection,” says Alana Rizzo, public policy lead at YouTube Brazil. “This tool does this search and detection at scale and then notifies the owner of that channel, of that image.”
Only cases that qualify as violations of the privacy guidelines can be flagged through this mechanism; it does not cover other practices, such as disinformation.
According to the company, the factors YouTube considers when evaluating a removal request include whether the content is synthetic or altered, whether it is realistic, and whether the material shows a public figure or well-known person engaged in “sensitive activities, such as crimes, acts of violence and endorsements of products or political candidates”. Another factor is whether it presents parody, satire or another element of public interest.
“If a video of a world leader is clearly a parody, it’s likely to stay up,” said Leslie Miller, vice president of government affairs and public policy.
The reporters asked to what extent YouTube is able, based on its analysis, to determine that content is in fact synthetic or altered by AI, so as to prevent real videos from being removed under this claim. Alana Rizzo said that each video is examined case by case. She also added that the company is constantly improving its content analysis technologies and policies.
Launched last year, the likeness detection tool had until now been available only to YouTube channels that are part of its Partner Program, a group of around 4 million creators worldwide (among the criteria for joining this group, whose members can monetize their videos, are a minimum number of published videos and channel subscribers). It is therefore possible that politicians and journalists in Brazil who meet these criteria have already activated the detection feature.
According to Amjad Hanif, vice president of products for YouTube creators, the intention is to expand access beyond the pilot group, with a focus on government officials and journalists. He also noted that each country raises different issues to be resolved, linked to the processing of personal data.
The risk of using this biometric data to train AI systems was the subject of questions in the United States press last year, at the time of the launch for channel owners. The company denied such use and stated that it would make the wording clearer.
YouTube told Folha that the data provided for registration is used “strictly for identity verification purposes and for the image detection feature” and that “it is not used to train Google’s generative AI models.”
The page about the tool states that, by signing up, the person agrees to “allow YouTube to use their face and voice models to develop and improve detection models.” According to the company, the detection model does not involve generative AI.