
Tech giant Microsoft has filed a lawsuit against four international developers who allegedly circumvented the safety guardrails of its AI tools and abused them to generate pornographic deepfakes of celebrities and other "harmful content".

In a post recently published on its blog, the American technology company announced that it had brought legal action against four members of the cybercrime network Storm-2139, whom it accuses of abusing its AI tools.
The alleged cybercriminals go by nicknames that sound like something out of an early-2000s hacker movie: Arian Yadegarnia, aka "Fiz", of Iran; Alan Krysiak, aka "Drago", of the United Kingdom; Ricky Yuen, aka "cg-dot", of Hong Kong; and Phát Phùng Tấn, aka "Asakuri", of Vietnam.
In the post, Microsoft divides the individuals who make up Storm-2139 into three tiers, "creators, providers and users", who allegedly set up an illegal network of services, built on hacking and modifying the company's AI tools, to create illicit or harmful material.
"Creators developed the illicit tools that enabled the abuse of AI-generated services," the post reads, adding that "providers then modified and supplied these tools to end users, often with varying tiers of service and payment."

"Finally," the text continues, "users used these tools to generate violating synthetic content, often centered on celebrities and sexual imagery."
The civil lawsuit was initially filed in December, although at the time all of the defendants were identified only as "John Doe".

Now, in light of new evidence uncovered in Microsoft's investigation into Storm-2139, the company says it has chosen to unmask some of the alleged criminals involved in the case.

Microsoft's investigation is still ongoing, and other alleged criminals have yet to be identified, but the technology giant says that at least two of them are Americans.
"We are pursuing this legal action now against identified defendants," Microsoft says in the post, "to stop their conduct, to continue to dismantle their illicit operation, and to deter others intent on weaponizing our AI technology."

According to Microsoft, earlier legal steps had already begun to fracture Storm-2139; in January, the "seizure" of the group's website and the "subsequent unsealing of the legal filings generated an immediate reaction from actors, in some cases causing group members to turn on and point fingers at one another."
Microsoft's decision to throw its enormous legal weight against alleged abusers of its technology falls into a gray area in the ongoing debate over AI safety and how companies should seek to limit the misuse of these tools.
Some companies, such as Mark Zuckerberg's Meta, have chosen to release their models as open source, a more decentralized approach to AI development.

But some experts argue that open source may allow malicious actors to quietly exploit advanced AI technologies outside any public oversight.
Microsoft, for its part, has adopted a mixed approach, keeping some of its models closed.

Yet despite the tech giant's vast resources and stated commitment to safe and responsible AI, criminals apparently still found ways to overcome its guardrails and profit from their misuse.