European Union will ban AI services that create sexual deepfakes

Member States and the European Parliament reached an agreement this Wednesday to ban artificial intelligence services in the European Union that can “undress” people without their consent.

The initiative comes in the wake of the introduction, a few months ago, of a feature in Grok, the AI assistant of the social network X, which allowed users to request the creation of nude images of adults and children from real photographs, without their consent.

This functionality even prompted some countries, such as France, the United Kingdom, Indonesia, Malaysia and the Philippines, to take action against the service.

The agreement states that rules governing high-risk AI systems, including biometrics, critical infrastructure, education, employment, migration, asylum and border control, will apply from December 2, 2027.

With regard to systems integrated into products such as elevators or toys, the rules will apply from August 2, 2028. This sequence will help ensure that technical standards and other supporting tools are in place before the rules begin to be applied.

The European Union also agreed to ban AI systems that create child sexual abuse material, or that depict the private parts of an identifiable person or show that person engaged in sexually explicit activity, without that person's consent.

The ban applies both to the placing on the EU market of AI systems intended to create this type of content without safeguards preventing such creation, and to users who employ such systems for this purpose.

Companies will have until December 2 to adapt their systems.

According to the Lusa news agency, the agreement was reached at a time when concerns about the risks associated with AI have resurfaced in EU debates in recent weeks, prompted by Mythos, a new model from the US startup Anthropic.

Anthropic decided not to make Mythos available to the general public, releasing it only to a restricted group of US companies, because the model's ability to identify critical programming vulnerabilities could trigger a cybersecurity crisis.
