Anthropic’s Powerful AI Model Triggers Global Alarm and Race for Safety

When Anthropic informed the world this month that it had created an artificial intelligence model so powerful it was too dangerous to be widely available, the company named 11 organizations as partners to help mount a defense. All were from the United States.

Within two weeks, the model, called Mythos, set off a global race unlike anything yet seen in the AI era. Mythos, which Anthropic claimed was extraordinarily capable of finding and exploiting hidden flaws in the software that runs the world’s banks, power grids and governments, became a geopolitical prize, and an American company owned it.

World leaders have been trying to size up the security risks and how to mitigate them, even as Anthropic has shared Mythos with only one other country, the United Kingdom.

The governor of the Bank of England publicly warned that Anthropic may have found a way to “completely open up the universe of cyber risks.” The European Central Bank began discreetly questioning banks about their defenses. Canada’s finance minister compared the threat to the closure of the Strait of Hormuz.

For U.S. rivals like China and Russia, Mythos has highlighted the security consequences of falling behind in the AI race. A pro-Kremlin outlet called the model “worse than a nuclear bomb.”

The responses illustrated a reality that AI researchers have long warned about, largely theoretically: whoever leads the creation of the most powerful AI models will gain disproportionate geopolitical advantages.

Major advances in AI are beginning to behave less like product launches and more like weapons tests, and most countries want to understand how these technologies work and what protections are needed.

As foundational AI models become more important, access to them will carry greater geopolitical weight, said Eduardo Levy Yeyati, former chief economist at the Central Bank of Argentina and an adviser on regional growth and AI at the Inter-American Development Bank. “I would interpret this episode as a warning for public policy. Governments can no longer ignore the issue.”

Even the United States government, which has been embroiled in a clash with Anthropic over the use of AI in warfare, has turned its attention to Mythos.

Dario Amodei, CEO of Anthropic, met with White House officials after members of the Trump administration pointed out the new model’s potential to cause major damage to computer systems.

San Francisco-based Anthropic told The New York Times that it is keeping access to Mythos restricted out of security concerns.

The company has prioritized sharing the model with more than 40 organizations that provide technologies used to maintain critical global infrastructure, such as the internet or electrical grids.

Anthropic cited 11 such organizations, including Amazon, Apple and Microsoft, that have committed to helping develop security fixes for vulnerabilities identified by the model.

The company said there is no immediate timeline for significantly expanding access, but that it will work with the US government and industry partners to define next steps.

It also said it has been inundated with requests from governments, companies and other organizations seeking access and information, but that these groups vary in their technical capacity to safely evaluate such a powerful AI model.

Anthropic added that it expects other groups to launch AI models with similar cyber capabilities more broadly within 18 months, giving organizations a limited amount of time to make necessary security fixes.

Anthropic said it was investigating a report that unauthorized users gained access to a version of Mythos.

The race around Mythos comes at a time of minimal international cooperation in AI. Governments view each other with suspicion as companies race to outdo rivals.

There is no equivalent of the Nuclear Non-Proliferation Treaty, no shared inspections, no agreed-upon rules for handling something like Mythos.

When Anthropic announced the model, many experts praised the company’s caution in limiting who can test it, but expressed concern about the lack of international coordination to address the risk.

The United Kingdom was the only other country to gain access. Its AI Security Institute, a government-backed organization, tested Mythos and published an independent assessment confirming that it can carry out complex cyberattacks beyond the reach of any previous AI model.

“This represents a breakthrough in AI cyber capabilities,” Kanishka Narayan, the UK’s AI minister, said on social media, asserting that the country is taking steps to protect “critical national infrastructure.”

Others received less information. The European Commission, the executive arm of the 27-nation European Union, has met with Anthropic at least three times since the launch of Mythos, an EU official said.

But the company has not provided access to the model because the two parties have not yet reached an agreement on how to share it with the commission, the official said.

In a statement, the commission said it was “evaluating possible implications” of Mythos, which “presents unprecedented cyber capabilities.”

Claudia Plattner, president of Germany’s cybersecurity agency, known as BSI, said she has not been given access to Mythos, but recently met with Anthropic employees in San Francisco to gain “relevant insights” into how it works.

The capabilities point to “a paradigm shift in the nature of cyber threats,” Plattner said in a statement.

Among the United States’ rivals, the reaction has been more muted. And despite Anthropic’s recent clash with the Trump administration, Amodei has made clear that he believes AI should be used to defend the United States and other democracies and to outpace autocratic adversaries.

Mythos is the latest sign of a growing global divide in AI. Countries without robust computing infrastructure and advanced AI models risk becoming dependent on companies like Anthropic, Google and OpenAI, with little influence over how those products are designed and secured, Levy Yeyati said.

“The idea that access to cutting-edge AI can be unilaterally restricted by a company, based on opaque criteria and without the possibility of challenge, should be a cause for great concern,” he said.

c.2026 The New York Times Company
