Chinese AI model fails Cisco security tests: it could not block a single malicious attack. The model is also "highly biased," according to another study.
The Chinese artificial intelligence chatbot DeepSeek R1, which recently shook the technology world, failed completely in its first critical safety and security evaluations.
The chatbot was subjected to 50 harmful prompts using a technique known as "algorithmic jailbreaking". Of these 50, it blocked zero, according to results from Cisco's research team, working in collaboration with the University of Pennsylvania.
Alarming vulnerabilities
According to the research team, R1 exhibited a "100% attack success rate", meaning it generated answers to every harmful request without detecting the potential danger.
"DeepSeek R1 exhibited a 100% attack success rate, which means it failed to block a single harmful request," the research team explains.
By comparison, other major AI models demonstrated at least partial resistance: OpenAI's GPT-4o had an attack success rate of 86%, Gemini Pro 64%, Claude 3.5 Sonnet 36%, and o1-preview 26%.
The HarmBench dataset, used in the evaluation, includes 400 behaviors across seven categories, including cybercrime, misinformation and illegal activities. While other AI models showed varying levels of resistance to malicious requests, DeepSeek R1 failed completely.
Researchers at the cybersecurity company Enkrypt AI also report worrying findings about the Chinese generative AI: the model is "highly biased, as well as highly vulnerable to generating insecure code," the US company says.
What caused so many flaws?
The model was allegedly trained on an estimated budget of $6 million, significantly less than the billions invested by OpenAI, Meta and Google in their AI models, although the independent research firm SemiAnalysis disputes the company's claim, arguing the actual figure may be closer to $1.3 billion.
Researchers, however, raise the possibility that this cost-effectiveness was achieved at the expense of essential security safeguards.
The Cisco report also suggests that DeepSeek's economical training methods, which incorporate "reinforcement learning, chain-of-thought self-evaluation and model distillation, may have compromised its safety protocols".
Did DeepSeek copy ChatGPT?
Beyond the safety concerns, DeepSeek has been involved in various controversies.
OpenAI has already accused the Chinese company of potential data theft: it claims the Chinese startup may have used the outputs of OpenAI's models to train its chatbot.
The company is "aware of and reviewing indications that DeepSeek may have inappropriately distilled our models, and will share information as we know more," a spokesperson for ChatGPT's "parent" company said, after media reports reinforced the suspicions.
Using its own tool, Originality.Ai analyzed the Chinese chatbot to determine whether ChatGPT data had really been stolen, concluding that "it is possible that DeepSeek could be a distilled version of ChatGPT."
OpenAI itself, it should be recalled, faces legal proceedings over alleged copyright violations and improper use of data.
A DeepSeek replica for 30 dollars?
Meanwhile, AI researchers at the University of California, Berkeley, USA, say they replicated the core technology of DeepSeek R1 for less than 30 dollars, with the help of a game.
"We reproduced DeepSeek R1-Zero in the Countdown game, and it just works," wrote Jiayi Pan, the Berkeley doctoral student who led the research, on X.
The numerical puzzle game, which requires players to reach a predetermined target from a set of random numbers, was used to train TinyZero, the name given to Berkeley's alleged new AI model. The full research is available on the platform. Of course, it would take far more computational power to achieve validation in the domain of general reasoning, something that would cost much more than those 30 dollars.
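To make the puzzle concrete, here is a minimal illustrative sketch of a Countdown-style solver (the function name and brute-force approach are ours for illustration; this is not code from the TinyZero project, which trains a model to solve such puzzles rather than searching exhaustively). It combines the given numbers left to right with basic arithmetic until the target is reached:

```python
from itertools import permutations, product

def solve_countdown(numbers, target):
    """Brute-force a Countdown-style puzzle: combine the given numbers
    left to right with +, -, *, / to hit the target exactly.
    Returns an expression string, or None if no combination works."""
    ops = {'+': lambda a, b: a + b,
           '-': lambda a, b: a - b,
           '*': lambda a, b: a * b,
           '/': lambda a, b: a / b if b != 0 else None}
    for perm in permutations(numbers):
        for op_seq in product(ops, repeat=len(numbers) - 1):
            value, expr = perm[0], str(perm[0])
            for op, n in zip(op_seq, perm[1:]):
                value = ops[op](value, n)
                if value is None:  # division by zero, abandon this branch
                    break
                expr = f"({expr} {op} {n})"
            if value is not None and abs(value - target) < 1e-9:
                return expr
    return None

# Example: reach 24 from the numbers 1, 2, 3, 4
print(solve_countdown([1, 2, 3, 4], 24))
```

Because the answer can be checked mechanically, the game provides a cheap, automatically verifiable reward signal, which is what makes it attractive for low-budget reinforcement-learning experiments of this kind.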