What is Anthropic, the “ethical” AI company that refuses to spy or build autonomous weapons for the US military

El Periódico

The hottest artificial intelligence (AI) company of the moment is not OpenAI, the creator of ChatGPT, nor Google, the pioneering giant in generative systems, but Anthropic. Unknown to many, the company is setting the technological course for 2026. And not only because of the strong performance of its model, Claude, but because of its direct confrontation with the US Army, a clash that confirms that Donald Trump's Government perceives this technology as a weapon.

The controversy has been simmering for months. Anthropic's AI is vital to the management of the Pentagon's classified systems. Until now, the US Department of Defense had trusted only the start-up for these tasks, an arrangement that had resulted in a military contract worth $200 million.

The Pentagon threatens

However, everything began to go wrong due to the demands of the Army, which wants the power to decide whether to use Claude both for massive social espionage and to create autonomous weapons, that is, machines capable of shooting and killing without human intervention. Anthropic has firmly opposed removing the guardrails built into its programming that prohibit such warlike uses, a refusal to hand over the keys to its creation that has aroused the ire of the White House. Despite intense negotiations, the company has not budged.

Pentagon chief Pete Hegseth / Europa Press/Contact/Yuri Gripas/POOL

Fed up with the refusal, the US Secretary of Defense, Pete Hegseth, has chosen intimidation. The Pentagon has threatened to designate Anthropic a “supply chain security risk.” This sanction, usually reserved for foreign adversaries such as China's Huawei, would not only blow up that contract, but would also force all Army suppliers to cut their ties with the start-up. You are either with us or against us.

Washington's ultimatum gives Anthropic until this Friday to give in. “It will be very complicated to untangle all of this, and we will make sure that they pay a price for forcing us to take this measure,” a senior official in the agency renamed by Trump as the War Department told Axios.

Anthropic, the most powerful AI

Anthropic is a generative AI company founded in 2021 by seven former OpenAI employees, including siblings Daniela and Dario Amodei. The chief executive and public face of the company is not a typical businessman, but a physicist specialized in neural circuits who preaches about AI with the attitude of an idealist philosopher. He is linked to Effective Altruism, a movement that advocates prioritizing causes that benefit the greatest number of people and that weighs the risks of this technology.

Currently, the firm is valued at about $380 billion. It is known for Claude, its family of large language models (LLMs), which leads the rankings of the most powerful in the world. Its high performance makes it particularly popular among computer programmers and engineers in Silicon Valley, the mecca of the American technology industry.

An ethical AI?

Anthropic’s history is closely linked to that of its main rival. In 2016, Amodei left his position at Google to become vice president of research at OpenAI, a role from which he led the GPT-2 and GPT-3 models, the seed of ChatGPT. That leap fascinated him, but it also fueled his fears. For him, “there was no more important job than preventing superhuman AI from producing catastrophic results, even human extinction,” explains Karen Hao in Empire of AI. Amodei prioritized mitigating these risks over commercializing the technology, but for OpenAI's leader, Sam Altman, business came first.

OpenAI CEO Sam Altman and Anthropic CEO Dario Amodei refuse to shake hands during the AI Impact Summit in India in February 2026. / Ludovic Marin / AFP

Fed up with a change of course that he interpreted as a betrayal of his principles, Amodei chose to launch an alternative. Anthropic was born with the motto of being an ethical AI. In recent months, the company has resisted the Pentagon’s demands and published a framework, which it describes as a “Constitution,” to ensure its technology is useful, harmless and compliant with human orders.

However, that founding myth is also full of holes. Last September, Anthropic paid $1.5 billion to settle a legal case in which a judge ruled that it had downloaded more than 7 million digitized books “knowing they were pirated” to train Claude. The company also bought millions of books, cut them up and scanned them, a project it tried to hide from the public. As if that were not enough, Anthropic announced on Tuesday that, like OpenAI, it is backing away from its precautionary security strategy in order to keep competing with its rivals. That same day, Amodei had met with Hegseth. Will he give in?

