Trump orders cancellation of all public contracts with Anthropic after fight over AI safety | International

Donald Trump, president of the United States, has ordered the cancellation of all Federal Administration contracts with Anthropic, the company that develops the artificial intelligence tool Claude and is locked in a dispute with the Pentagon over the limits of AI safety.

“THE UNITED STATES OF AMERICA WILL NEVER ALLOW A RADICAL LEFT-WING AND ‘WOKE’ COMPANY TO DECIDE HOW OUR GREAT ARMY FIGHTS AND WIN WARS! That decision falls to ITS COMMANDER IN CHIEF and the extraordinary leaders I appoint to lead our Armed Forces,” the Republican president wrote this Friday on his social network, Truth Social. “I am directing ALL federal agencies of the United States Government to IMMEDIATELY CEASE use of Anthropic’s technology. We don’t need it, we don’t want it, and we will not do business with them again! There will be a six-month phase-out period for agencies like the War Department that use Anthropic products, at various levels.”

The reaction of the Republican president, who has described Anthropic executives as “unhinged Anthropic leftists,” was prompted by the AI company’s refusal to allow the indiscriminate use of its tool for military tasks.

The future limits of artificial intelligence are being decided these days in the United States. The Pentagon is demanding that Anthropic, the company behind Claude, grant unlimited use of all its functionalities for military purposes. Defense Secretary Pete Hegseth is threatening extraordinary sanctions if the company does not accept his conditions. Whatever happens in this case will set a precedent for a technology destined to transform the world as we know it.

The standoff escalated after a meeting at the Pentagon between Hegseth and Dario Amodei, CEO of Anthropic, last Tuesday. The Secretary of Defense urged the executive to eliminate the restrictions the company maintains on military use of its AI tool and gave him until this Friday to reconsider his position. Otherwise, harsh sanctions would be imposed.

The threats from the Department of Defense are extraordinary. It is willing to declare Anthropic a risk to the military supply chain, a move that would prevent the startup from contracting with any company that works with Defense. Furthermore, Hegseth has threatened to go over the company’s head: he says he will invoke the Cold War-era Defense Production Act of 1950 to use Anthropic’s software despite the technology company’s refusal. Measures of this kind have been applied in the past to companies such as Huawei, over its links with Beijing, and to the Russian firm Kaspersky.

On Thursday, two days after the meeting between Hegseth and Amodei, the Pentagon sent a new contract with revised conditions for using Claude. But the San Francisco-based company rejected the document because it failed to meet the two requirements the company considers red lines: its technology cannot be used for mass surveillance of citizens, and it must not be used in autonomous lethal attacks without human intervention. “The threats do not change our position: we cannot in good conscience accede to your request,” Amodei said in a statement.

In the artificial intelligence business, things move as quickly as the technology itself. Anthropic and Defense have gone from romance to disagreement in a matter of weeks. Last July, the startup and the Pentagon signed a $200 million (€169 million) contract to use the Claude tool on classified military files in the cloud. It was the Department of Defense’s first agreement to use AI in secure environments. The signing was a landmark for military uses of AI, coming as armies around the world enter a dangerous race to bring this technology into their operations.

A few weeks ago, a conversation was leaked in which an Anthropic employee asked a worker at Palantir, a leading security group in data management, whether Claude had been used in the military operation to arrest the former Venezuelan president, Nicolás Maduro, in Caracas. The conversation reached senior Pentagon officials, who interpreted it as a limit on the army’s ability to use AI as it wishes, according to The Wall Street Journal. Since then, concerns about the extent of permitted military use of Anthropic’s technology have soared in the Defense Department, renamed the War Department by Hegseth.

The stakes are such that a break between the AI company and the Pentagon would be a failure for both, with a $200 million contract up in the air. It would open a high-voltage legal battle and set a precedent that would call into question the role of Defense contractors. For this reason, Administration sources say they are open to prolonging the talks with Anthropic beyond this Friday, according to Bloomberg.

Undersecretary of Defense for Research and Engineering Emil Michael said the Pentagon remains willing to continue its talks with Anthropic, despite what he called the company’s “unpredictable” behavior in a bitter standoff over AI safeguards. “As long as they are in good faith, we are open to negotiating,” Michael said.

The battle between the Pentagon and Anthropic fuels the global debate over the limits of, and safeguards on, the wartime use of AI, a technology that could enable mass surveillance of citizens and other practices currently unimaginable.

“The War Department has no interest in using AI for mass surveillance of Americans (which is illegal), nor do we want to use AI to develop autonomous weapons that operate without human intervention. This narrative is false and is spread by leftists in the media,” said Pentagon spokesman Sean Parnell.

Although Defense insists it will not use the technology for purposes that are not strictly military and approved by law, it is reluctant to put that in writing. Officials argue it is a matter of principle: a private contractor cannot decide how its tools will be used, any more than a weapons manufacturer decides where its missiles are launched.

“This is what we ask: allow the Pentagon to use the Anthropic model for all legal purposes. This is a simple and sensible request that will prevent Anthropic from endangering critical military operations and potentially putting our warfighters at risk,” added Parnell, who did not hesitate to repeat the threats awaiting Anthropic if it does not agree to lift the restrictions.

Senior defense officials tried to convince Amodei to lift restrictions on the AI application by posing a question: what would happen if a nuclear-armed intercontinental ballistic missile were headed toward the United States with just 90 seconds’ notice, and Anthropic’s AI was the only way to trigger a missile response to save the country, but the company’s security measures did not allow it?

Although there are various versions of Amodei’s response, Anthropic officially states that in such a case Defense could use its AI tools for missile defense and cyber operations without restrictions, a concession that narrows the limits on military use of Claude.

Michael, the Undersecretary of Defense for Research and Engineering, has been the Pentagon’s interlocutor with Anthropic. He has argued that it should be the government, not technology companies, that has the final say on the use of the technology, according to The Washington Post. Michael has accused Amodei of wanting to interfere in Defense decisions. “He wants nothing more than to try to personally control the United States military and has no problem putting our nation’s security at risk,” he wrote on X.

Anthropic, founded by former OpenAI researchers, has positioned itself as one of the AI startups most concerned about safety and the ethical and moral limits of the technology. With a valuation of $380 billion, it was the first AI company to receive authorization from the Pentagon to handle classified material, and in recent weeks it has also received permission to operate with these classified documents, while its rivals ChatGPT (OpenAI) and Gemini (Google) push not to be left behind.

As competition in the sector intensifies, pressure grows on the technology companies driving AI to relax their criteria, advance faster, and begin to recoup the huge sums of money the sector is pouring into this technology.