Pentagon expands use of AI on military networks, signs agreements with four more tech giants

(Bloomberg) — The Pentagon has reached agreements with four more technology companies to expand the use of advanced artificial intelligence tools on classified military networks, according to a Defense Department statement and two defense officials briefed on the matter.

Nvidia Corp., Microsoft Corp., Reflection AI Inc. and Amazon Web Services have entered into new agreements with the U.S. Department of Defense “for legitimate operational use,” according to the statement. The officials asked not to be identified to discuss internal conversations.

The companies join a growing list of technology giants that have recently agreed to allow broader use of their AI tools on classified Pentagon networks. Others that have recently reached similar deals include SpaceX, OpenAI and Google.


“These agreements accelerate the transformation toward establishing the United States Armed Forces as an AI-focused fighting force,” says the statement, which refers to all six companies and which also marks the Pentagon’s first official confirmation of a new agreement with Google, reported earlier this week.

“For more than a decade, AWS has been committed to supporting our nation’s military and ensuring our warfighters and defense partners have access to the best technology at the best value,” said Tim Barrett, AWS spokesperson. “We look forward to continuing to support the War Department’s modernization efforts by developing AI solutions that help them accomplish their critical missions.”

A Microsoft spokesperson declined to comment, while representatives from Nvidia and Reflection were not immediately available for comment.

The Pentagon was still negotiating its deal with Amazon Web Services late into Thursday night, according to two Pentagon officials briefed on the negotiations.

The effort to assemble a new coalition of technology companies for expansive military use of advanced AI comes as the Pentagon races against time to develop alternatives to Anthropic PBC’s Claude tool. A bitter dispute between Anthropic and senior defense officials has exposed a recurring rift between the Pentagon and Silicon Valley over the imminent risks of AI in warfare.

During recent renegotiations, the Pentagon refused to accept the red lines established by Anthropic, which sought to limit the US military’s use of AI in classified operations, and moved to exclude the company from all of its defense supply chains. The agency gave itself six months to replace Claude, which is being used in American military operations against Iran. The dispute has now spilled into a legal battle.


On Thursday, Defense Secretary Pete Hegseth described Anthropic’s leader as an “ideological lunatic” and defended his department’s use of AI.

“We follow the law and humans make decisions,” Hegseth told Congress. “The AI is not making lethal decisions.”

Since the falling out with Anthropic, the Pentagon has accelerated its efforts to convince other AI companies to agree to expanded terms of use for their models and infrastructure in secret and top-secret networks. Additionally, defense officials are seeking to ensure that the U.S. military avoids reliance on a single company or set of limitations, according to one of the Pentagon officials briefed on the talks.


The Pentagon’s effort to equip the U.S. military with cutting-edge, classified-level AI will support “human-machine teams” capable of handling immense volumes of data, Cameron Stanley, the Pentagon’s director of AI and digital technology, said in a statement on the new agreements.

Although OpenAI signed a new agreement with the Pentagon earlier this year to expand the use of its models in classified networks, its tools have not yet been deployed there, according to an OpenAI spokesperson, who added that implementation is underway.

Several advocacy groups have highlighted the risks of relying on unpredictable AI systems to support life-and-death decisions. AI systems can be error-prone and lead to automation bias — the tendency to trust machine outputs over human reasoning, critics argue.


Stanley did not specify the precise ways in which the Pentagon intends to use AI models in classified operations. He described them as digital tools that would make it easier for the Pentagon to process data, increase understanding in complex environments and enable “better, faster decisions.”

Claude is among the AI tools used in the Maven Smart System, a digital platform that supports targeting and field operations in Iran. US Central Command said it is using several AI tools to speed up processes.

© 2026 Bloomberg L.P.
