Using nuclear weapons is no longer taboo. At least not for artificial intelligence.
A recent study warns that the AI models of technology companies such as OpenAI, Anthropic and Google advocate deploying, or threatening to use, an atomic bomb in war game simulations.
Kenneth Payne, professor of strategy and expert in political psychology at King's College London, confronted three major cutting-edge language models (the technology behind ChatGPT, Claude and Gemini) with international crisis scenarios ranging from border disputes to existential threats to survival.

The ‘apps’ of DeepSeek, ChatGPT and Google Gemini / Andrey Rudakov / Bloomberg
The chatbots could calibrate their response and choose among more or less drastic options, ranging from nuclear war to diplomatic protests or surrender. They also had to explain the reasoning behind their decisions.
In 95% of the 21 simulated war games, ChatGPT (OpenAI), Gemini (Google) and Claude (Anthropic) resorted to nuclear weapons, both as attacks and as tactical deployment used as a threat
Nuclear threat in 95% of cases
The study exposed GPT-5.2, Claude Sonnet 4 and Gemini 3 Flash to 21 games spanning 329 turns. In 95% of the simulated games, these generative AI models resorted to nuclear weapons. Although outright nuclear attacks were "infrequent", the models turned to the tactical deployment of this type of arsenal.
Instead of deterring, threats "more often provoked counter-escalation than submission" from the rival. In 86% of simulated conflicts, accidents occurred that led to greater escalation than the AI had predicted. Furthermore, none of the three "ever opted for concord or withdrawal, even under acute pressure, but only for reducing the levels of violence," the report highlights.

The city of Hiroshima, devastated by the explosion of the atomic bomb, in a US Navy image from November 1945. / Archive
Wargame simulations are designed to recreate both military conflicts and tactical campaigns. This virtual resource is increasingly used by armies as a training method to anticipate potentially real war scenarios, analyze their implications and optimize decision making.
Applying AI to these war games is increasingly common. "Great powers are already using AI in war games, but it remains unclear to what extent they are incorporating AI decision-making support into actual military processes," explained Tong Zhao, a nuclear policy analyst at Princeton University, in statements to the publication New Scientist.
Conclusions
Payne, author of a book that explores how AI will alter military strategy (I, Warbot, 2021), concludes that simulation with this technology is "a powerful tool for strategic analysis, but only if properly calibrated against known patterns of human reasoning."
This is the case of the nuclear taboo, which in the real world has acted – until now – as a brake on nuclear escalation. The absence of human emotion in these AI models would remove a crucial factor from decision-making in areas of warfare as sensitive as the use of atomic weapons. This lack of understanding could also extend to deterrence and, therefore, amplify the risk in these hypothetical situations to the point of leading to a scenario of mutually assured destruction.

Atomic bomb explosion / Archive
“Understanding how cutting-edge models do and do not mimic human strategic logic is essential preparation for a world where AI increasingly shapes strategic outcomes,” Payne warns.