WASHINGTON, March 3 (Reuters) – A senior Pentagon official said on Tuesday that commercial artificial intelligence contracts signed during Joe Biden’s administration contained sweeping operational restrictions that could paralyze real-time U.S. military missions, including the planning and execution of combat operations.
Emil Michael, Under Secretary of Defense for Research and Engineering, described a moment of alarm when he reviewed the terms governing AI models already embedded in some of the U.S. military’s most sensitive commands. He did not name the AI vendor whose contracts he was reviewing.
The comments were made at the American Dynamism Summit in Washington, a gathering of technology companies working in space and national security. The summit came just days after a dispute over how the Pentagon could use Anthropic’s powerful and widely used AI tools, which led President Donald Trump to ban the startup from doing business with the U.S. government and to classify it as a national security risk.
“I had a moment of ‘wow, what a surprise’,” Michael said at the summit. “There are things… you can’t plan an operation… if it could lead to kinetic impacts,” or explosions. He described dozens of restrictions built into the agreements covering commands responsible for air operations over Iran, China and South America.
Michael said the contracts were structured so that if an operator violated the terms of service, the AI model could theoretically “just stop in the middle of an operation.” Anthropic’s Claude was the only AI model available to the U.S. Department of Defense on its classified systems at the time Michael conducted the analysis.
His concerns intensified after a senior executive at an unnamed AI company raised questions about whether its software had been used in what Michael called one of the most successful military operations in recent history. Anthropic’s Claude was reportedly used to help plan the U.S. government operation that captured former Venezuelan President Nicolás Maduro in January.
“What we’re not going to do is allow any company to dictate a new set of policies beyond what Congress has approved,” Michael said.
The revelations may help explain the dispute between Anthropic and the U.S. Department of Defense. Defense Secretary Pete Hegseth declared the company a “supply chain risk” for refusing to budge in negotiations over restrictions on autonomous weapons and mass surveillance.
Hours later, rival OpenAI reached its own deal with the Pentagon. A statement from OpenAI chief executive Sam Altman suggested that the Department had agreed to similar restrictions on OpenAI’s models.