Pentagon threatens to cut ties with Anthropic in dispute over AI safeguards, report says

The Pentagon is considering ending its relationship with artificial intelligence company Anthropic over its insistence on maintaining some restrictions on how the U.S. military uses its AI models, Axios reported Saturday, citing a U.S. government official.

The Pentagon is pushing four AI companies to allow the military to use their tools for “all lawful purposes,” including weapons development, intelligence collection and battlefield operations. Anthropic has not agreed to those terms, and the Pentagon is growing weary after months of negotiations, according to Axios. The other three companies are OpenAI, Google and xAI.

An Anthropic spokesperson said the company has not discussed using its Claude AI model for specific operations with the Pentagon. The spokesperson said conversations with the U.S. government so far have focused on a specific set of usage policy issues, including strict limits around fully autonomous weapons and mass domestic surveillance, none of which are related to current operations.

The Pentagon did not comment on the matter.

Anthropic’s Claude AI model was used in the U.S. military operation to capture former Venezuelan President Nicolás Maduro, deployed through Anthropic’s partnership with data firm Palantir, the Wall Street Journal reported on Friday.

Reuters reported on Wednesday that the Pentagon was pressuring major AI companies, including OpenAI and Anthropic, to make their artificial intelligence tools available on sensitive networks without many of the standard restrictions that the companies apply to users.
