Court stops Trump’s attempt to ban Anthropic from the US government

A federal judge in San Francisco granted Anthropic an injunction in its lawsuit against the Trump administration, suspending, for now, measures that sought to restrict federal agencies’ use of the company’s technology.

The lawsuit was filed by the artificial intelligence developer to reverse its inclusion on a Department of Defense (Pentagon) “supply chain risk” list and an order from President Donald Trump that prohibited federal agencies from using Claude models.

Judge Rita Lin’s decision temporarily prevents the Trump administration from implementing, applying or enforcing the presidential directive, in addition to limiting the Pentagon’s attempt to classify Anthropic as a threat to US national security.

The company argued that the measures were causing significant financial and reputational harm and constituted retaliation for criticism of the government’s position in contract negotiations.

In her ruling, Lin wrote that “punishing Anthropic for bringing public scrutiny to the government’s contractual position is a classic case of unlawful retaliation under the First Amendment.”

The judge also criticized the legal basis invoked by the government, stating that nothing in the legislation “allows an American company to be labeled as a potential adversary and saboteur of the US for expressing disagreement with the government.”

The conflict gained momentum after the Pentagon publicly declared Anthropic a supply chain risk, a classification historically reserved for companies from adversary countries. The designation forces large defense contractors, such as Amazon, Microsoft and Palantir, to certify that they do not use the company’s models in military work.

At the same time, Trump ordered on social media that federal agencies stop using the company’s technology, accusing Anthropic of being a “radical left-wing AI company.”

The dispute stems from failed negotiations over the use of Anthropic models in Defense Department AI platforms.

The government sought broad access to the technology for all uses permitted by law, while the company wanted guarantees that its systems would not be used in fully autonomous weapons or domestic mass surveillance.

With no agreement reached, the impasse ended up in court, and a final ruling on the legality of the government’s actions is still expected to take months.