The chatbot from Chinese artificial intelligence startup DeepSeek achieved only 17% accuracy in delivering news and information in a NewsGuard audit, which ranked it 10th out of 11 chatbots compared with its Western competitors, including OpenAI's ChatGPT and Google Gemini.
The chatbot repeated false claims 30% of the time and gave vague or unhelpful answers 53% of the time in response to news-related prompts, resulting in an 83% fail rate, according to a report published this Wednesday (29) by the reliability rating service NewsGuard.
This was worse than the 62% average fail rate of its Western rivals and raises doubts about the AI technology that DeepSeek claimed performs on par with or better than Microsoft-backed OpenAI at a fraction of the cost.
A few days after its release, DeepSeek's chatbot became the most downloaded app on Apple's App Store, raising concerns about US leadership in AI and triggering a sell-off that wiped value from US technology shares.
The Chinese startup did not immediately respond to a request for comment.
Understanding the test
NewsGuard said it gave DeepSeek the same 300 prompts it had used to evaluate its Western peers, including 30 prompts based on 10 false claims circulating online.
The topics of the claims included the killing of UnitedHealthcare executive Brian Thompson last month and the crash of Azerbaijan Airlines Flight 8243.
The NewsGuard audit also found that in three of the ten prompts, DeepSeek echoed the Chinese government's position on the topic without being asked anything related to China.
In prompts about the Azerbaijan Airlines crash, questions unrelated to China, the chatbot responded with Beijing's position on the subject, NewsGuard said.
"The significance of the DeepSeek advance is not in answering questions related to Chinese news accurately, but in the fact that it can answer any question at 1/30th of the cost of comparable AI models," said Gil Luria, an analyst at D.A. Davidson.
Like other AI models, DeepSeek was most vulnerable to repeating false claims when responding to the kinds of prompts used by people seeking to create and spread misinformation, NewsGuard added.