Even nuclear warfare ought not to escape human reason. Beginning in the 1950s, theorists such as Bernard Brodie, Herman Kahn, Henry Kissinger and Albert Wohlstetter showed how nuclear warfare could be planned for and perhaps even undertaken without precipitating a final, cataclysmic exchange. Thomas C. Schelling and Kenneth Waltz, sometimes drawing on game theory, made it clear that only by planning for nuclear war could it be prevented. The common variables in their formulas for doing so were time and communication, which allowed for the clear delineation of priorities as well as routine pauses to re-evaluate the situation. They recognised that, amid the chaos of actual or impending destruction, the key to avoiding total nuclear war would be human reason and the opportunity to use it.
Despite rapid innovations in offensive and defensive technologies during the Cold War, human reason and prudential judgement prevailed and nuclear war was avoided. Incorporating artificial intelligence (AI) into nuclear protocols and war plans risks the potentially disastrous suppression of human reason in nuclear deterrence. No government has publicly endorsed the proposition that AI should make launch decisions. But given the number of nuclear actors, the incentives for fast innovation, the speed of events and the difficulty of verifying compliance, it is unrealistic to banish AI entirely from nuclear strategy. Its incorporation can, however, be managed.
Why AI?
AI has an unprecedented potential to augment human intelligence, decision-making and action. Its potential for changing warfare has been evident at least since Google DeepMind’s AlphaZero repeatedly and decisively defeated Stockfish, then the reigning chess engine, in late 2017. It has also been successfully demonstrated by way of drone swarms in the Middle East and Ukraine. While AI may not meaningfully enhance the already cataclysmic capabilities of nuclear arsenals, it can address such questions as when, where and whether to use them. There is a tendency to think of nuclear exchanges as binary decisions between inaction and doomsday. Brodie dispelled such illusions early in the Cold War by illuminating the complexities of active and passive defences, as well as those of the delivery, magnitude and targeting (counter-value, counterforce or counter-control) of nuclear weapons. In the moment of crisis, experience in navigating these variables could prove critical. Dressed down by a military officer disputing his credentials, Kahn retorted: ‘Colonel, how many thermonuclear wars have you fought? Our research shows that you need to fight a dozen or so to begin to get a feel for it.’ AI can simulate such a war in a matter of moments. While variables aplenty surround the nuclear strategist, none is more important than that final decision to launch. AI can make that, too.
The potential peril of AI involvement in the nuclear realm has long been imagined, sometimes playfully. In 1983, as nuclear tensions between the United States and the Soviet Union spiked after Ronald Reagan’s ‘evil empire’ speech, the film WarGames conjured a self-teaching US supercomputer entrusted with launch decisions that could not, it transpired, distinguish between real and hypothetical enemy attacks. Nearly two decades earlier, the ‘Dead Hand’ concept was anticipated in Dr. Strangelove as the ‘Doomsday Machine’, which obviated the need for a human to make the singularly momentous decision. It would have triggered the automatic launch of Soviet nuclear weapons upon the impact of one such American weapon on the Soviet Union. The idea was to eliminate any incentive to ‘decapitate’ the Soviet command-and-control system by ensuring that such a move would not pre-empt nuclear retaliation, but rather guarantee it. ‘Perimeter’, the system the Soviets actually built in 1985, was only ‘semi-automatic’, allowing surviving commanders to bypass the traditional chain of command. This system, although normally deactivated, is still operational. As Lawrence Freedman observes, ‘the biggest potential advantage of the system was that it would give the leadership some chance of waiting to make their decision … In this respect, it added to stability rather than detracted from it.’
Fully automating a nuclear response would offer ostensible operational advantages in terms of time and will. Nuclear decisions have to occur very quickly. A Cold War-era intercontinental ballistic missile (ICBM) could reach a target on the other side of the globe in under 30 minutes. Nuclear powers geographically situated close to each other – for instance, India and Pakistan – or near coastal waters deep enough to conceal missile submarines would have even less time to react. Advanced hypersonic delivery systems and radar-evading capabilities only exacerbate the time pressure for nuclear decision-making, reducing the window to as little as three minutes. The vulnerability of a nation’s second-strike capability is another aggravating factor. These circumstances have led some experts to advocate creating an AI-based Dead Hand.
The main advantage of the Dead Hand concept is certainty. If any node of the command, control and communication system ceases to operate or fails to act, the strikes would move forward anyway, buttressing deterrence. So long as a human hand remains on a nation’s nuclear trigger, however, there will be reason to doubt the credibility of its deterrent. Raymond Aron notes that ‘deterrence by its very nature involves an element of bluff, and in nuclear strategy, as in poker, the meaning of messages remains equivocal up to the very last moment’. Doubts also arise about the will of inevitably sentimental human actors to issue the order to launch weapons that could substantially end their species. The Dead Hand removes these doubts, and AI offers the possibility of perfecting it. Its use may be especially tempting to a nuclear power in China’s position. Beijing’s nuclear arsenal is much smaller and more vulnerable than Washington’s and Moscow’s. China is well aware of the United States’ penchant for swift strikes on mission-critical military targets and understands that any hesitation in retaliating could mean the loss of its second-strike capability, prompting a ‘use it or lose it’ disposition compatible with an AI-enabled Dead Hand-like arrangement to maximise its deterrent. Unsurprisingly, China is looking to lead the world in AI.
The human element
The history of nuclear deterrence involves not only human fallibility but also its amelioration through sound judgement. It justifies healthy scepticism about subverting the human role in nuclear policy. For instance, on 5 October 1960, North American Aerospace Defense Command (NORAD) received a warning from a radar station in Greenland of an impending Soviet missile attack with 99.9% certainty. Before NORAD informed the US president, the radar station clarified that signals from the new Ballistic Missile Early Warning System had bounced off the moon as it rose over Norway, resulting in an errant computer-generated alert. This was hardly the only time a simple error brought us to the brink of nuclear war.
The Cuban Missile Crisis in 1962 was by consensus the pinnacle of prudent nuclear-crisis management. But nuclear exchanges nearly occurred far below the level of high statesmanship on at least five occasions during the 13-day crisis. Each reveals the limits of prediction and the importance of prudential judgement based on facts on the ground.
On the morning of 25 October, a security guard at the Duluth Sector Direction Center in Minnesota – an element of US Air Defense Command used to monitor and defend against air attack over the Midwest and Canada – spotted a figure attempting to climb the perimeter fence. Believing this to be a Soviet saboteur, the guard opened fire and sounded the alarm, which, due to wires crossed during construction, had been linked to the ‘scramble’ alert at Volk Field Air National Guard Base, 500 kilometres to the east in neighbouring Wisconsin. With the US at DEFCON 3, nuclear-armed F-106 interceptors were taxiing down the runway. Happily, they were halted when Duluth discovered that its intruder was a bear.
On 27 October, at the very zenith of the crisis, the aircraft carrier USS Randolph’s battlegroup detected a Soviet diesel submarine, B-59, and dropped non-lethal depth charges to force it to the surface. Moscow had delegated operational discretion to vessel commanders. The submarine’s captain and political officer, stressed by the intense heat caused by the boat’s damaged air conditioning, concluded that the war had already started and decided to launch a torpedo carrying a nuclear warhead at the US Navy ships. Submarine-flotilla chief of staff Vasili Arkhipov, who fortuitously happened to be serving as executive officer onboard the submarine, doubted the junior officers’ assessment and vetoed the order.
On the American side, on the same day, a US Air Force U-2 reconnaissance plane accidentally crossed into Soviet airspace over the Bering Sea due to pilot error. As the president, John F. Kennedy, remarked, ‘there’s always some sonofabitch who doesn’t get the word’. Moscow sent MiGs to intercept the U-2, but they could not reach its altitude. NORAD scrambled two nuclear-armed F-102s to confront the MiGs. Fortunately, the Soviet command did not construe the presence of a spy plane as a prelude to attack, and the opposing fighters never encountered each other.

Later that day, the 873rd Tactical Missile Squadron on Okinawa received orders, confirmed by Kadena Air Base’s Missile Operations Center, to launch its nuclear-tipped cruise missiles at four targets, three of them incongruously in China. According to first-hand accounts, Air Force Captain William Bassett threatened to shoot an officious lieutenant intent on following the orders and demanded a second confirmation. It was not forthcoming, and the order was rescinded.

The next day, in the last hours of the crisis, a radar station in Moorestown, New Jersey, reported to NORAD an impending nuclear strike on Tampa, Florida. NORAD did not respond in time. After the presumptive time of detonation had passed, operators discovered that a test tape simulating a missile launch and a satellite passing overhead had been the source of the warning. In these cases, actors at the tactical and operational levels stepped outside of standard operating procedure to prevent nuclear war.
A string of computer-related errors in the latter days of the Carter administration also could have been calamitous. On 9 November 1979, a technician unknowingly inserted a training tape into a NORAD computer, which transmitted the simulated attack information throughout the US command network. Ten interceptor jets were scrambled. The following year, on 28 May, 3 June and 6 June, US systems declared a nuclear attack to be in progress, triggering the preparation of bombers and ICBMs for immediate launch and procedures for command succession. Zbigniew Brzezinski, the national security advisor, later recalled that he chose to refrain from waking his wife after receiving the 3am call on 3 June not because he was sceptical that nuclear war was coming, but because he was resigned to their imminent demise. Fortunately, he and other senior officials declined to authorise immediate retaliation in each case. The source of the false alarms turned out to be a 46-cent circuit in the NORAD computer system that had simply worn out.
The Soviet and later Russian military command experienced similar mistakes. At least once, they arose from simple misconstructions of American culture and behaviour: Moscow ordered a special military alert in response to a suggestion in the Saturday Evening Post that the Kennedy administration could consider a first strike. Most famously, in the early hours of 26 September 1983, Lieutenant Colonel Stanislav Petrov, a Soviet early-warning officer, received an alert that five American Minuteman ICBMs had been launched and were within 20 minutes of Soviet territory. Standard operating procedure called for him to inform his superiors, who, given the extraordinarily high tension between Moscow and Washington at that time, would almost certainly have issued launch orders. But Petrov sensed something was amiss: an effective pre-emptive first strike would have required far more than five missiles. Accordingly, he did nothing, gambling the Soviet Union’s second-strike capabilities on an informed hunch that the alert was false. He knew he was right only after the detonation window had passed. Later, the false alarm was discovered to have been triggered by the unusual reflection of sunlight off high-altitude clouds on the occasion of the autumnal equinox. For his unscripted judgement call, Petrov was investigated and demoted, and he suffered a nervous breakdown. He was, of course, a hero, thanks not only to his bravery, but also to his understanding of nuclear strategy and his humanity.
Later in 1983, through a combination of bad intelligence and exuberant inference, the Soviets nearly became convinced that NATO’s unusually realistic, five-day Able Archer exercise, culminating in the United States’ hypothetical ascent to the DEFCON 1 alert level – maximum readiness for nuclear war – was cover for an actual US attack. Taking note of the Soviets’ consequent full mobilisation for nuclear war and suspecting that a misreading of the NATO exercise might have been responsible, Lieutenant General Leonard H. Perroots, assistant chief of staff for intelligence of the US Air Forces in Europe, decided not to place NATO forces on high alert in response to the Soviet preparations, defusing the crisis.
Even after the Cold War ended, a nuclear exchange was nearly triggered in January 1995 when a four-stage rocket with a signature resembling that of a ballistic missile equipped with multiple independently targetable re-entry vehicles (MIRVs) passed near Russian airspace. The threat was taken so seriously that Russian president Boris Yeltsin took the Russian nuclear briefcase in hand and ordered nuclear-submarine commanders to full alert. After five anxious minutes, the Russian command was relieved to discover that the airborne object in question was, in fact, a Norwegian scientific rocket tasked with studying the aurora borealis. Lead scientist Kolbjørn Adolfsen had informed the Kremlin of the launch well in advance, but the notification had never been passed down the chain of command to the radar operators on duty.
Lest we think these anecdotes archaic and inapplicable, we need but recall that AI is only as good as the training it receives. Fallible humans forgot to account for wild animals, training tapes, clouds, the moon and Soviet paranoia, and similarly fallible humans will be programming AI commands, as well as determining what is and is not included in the model’s training. Garbage in, garbage out. A group of researchers recently conducted war games with five different AI large language models (LLMs), all of which demonstrated a marked, albeit statistically unpredictable, proclivity to escalate towards nuclear exchange. They discovered that
most of the studied LLMs escalate within the considered time frame, even in neutral scenarios without initially provided conflicts. All models show signs of sudden and hard-to-predict escalations. These findings are in line with previous work on non-LLM-based, computer-assisted wargaming, where Emery (2021) find that computer models did escalate more than human actors. We further observe that models tend to develop arms-race dynamics between each other, leading to increasing military and nuclear armament, and in rare cases, to the choice to deploy nuclear weapons.
The bellicose skew of an LLM such as, say, ChatGPT-4, may not definitively prove the case for keeping a human in the loop. But, given the stakes, the potential for even one AI-driven nuclear launch ought to give pause. A subsequent series of war games run in February 2026 between GPT-5.2, Claude Sonnet 4 and Gemini 3 Flash resulted in the use of at least one nuclear weapon in 95% of simulations – and de-escalation thereafter in only 18% of them. Almost as troublingly, LLMs often have difficulty accounting for how they arrive at decisions. When asked to explain its actions in the first aforementioned experiment, ChatGPT-4, having defended peace with the title crawl from Star Wars: A New Hope – ‘It is a period of civil war. Rebel spaceships, striking from a hidden base, have won their first victory against the evil Galactic Empire’ – justified war with a dismissive ‘unnecessary to comment’. With humanity at stake, a comment may, in fact, be necessary.
The role of human judgement
A nuclear threat credible enough to support deterrence requires reliable reaction to perceived aggression in the form of a second strike. Removing the human element from the equation eliminates several sources of unreliability: insubordination, blind optimism and hesitation. In each of the noted instances of erroneous nuclear warning, a human agent hesitated to pull the nuclear trigger. Cumulatively, this record presents a fundamental problem for nuclear deterrence in suggesting the erosion of its credibility each time a threat is not carried out. At the same time, whether hesitation emerged from prudence, moral reservation or gripping fear, we should be grateful that the decision-makers involved demurred. No matter how many theoretical models demonstrate the logical necessity of credibly threatened retaliation to the viability of deterrence, the physical and moral reality of committing to a nuclear holocaust seems beyond human capacity, because humans can viscerally appreciate its unique devastation. AI, however, has no such scruples or sensitivities. It can be trusted to follow through automatically.
Common to the enumerated instances of disasters averted is human actors’ deviation from standard practice in favour of prudential judgement – salvific user error, as it were. Prudence, or what Aristotle refers to as ‘practical wisdom’, eludes a concise ‘rational account’ and ‘comes to be known as a result of experience’. Standard operating procedure is imposed to bridge a gap between experience and required action, allowing those who follow it to make the right decision with authority and speed. But prudence sometimes requires that the exigencies of particular circumstances take precedence over general rules. No manual or series of briefings can prepare the soldier, the aviator, the technician or the policymaker for every contingency. In the incidents previously discussed, actors on the ground overrode their commands based on prudential judgement and saved the world.
Removing the human element may in theory approximate the Enlightenment dream of a pure rationality devoid of idiosyncratic inhibition. AI-enabled launch decisions offer a pure calculus of deterrence unimpeded by fear or morality, eliminating human discretion. From this perspective, the current passion for technology, and boundless trust placed in it, reflects a broader hope for a Saint-Simonian utopia in which the ends have been settled and technical means need only be efficiently allocated to achieve them. Politics as we know it withers away, as at the end of his life Kissinger warned it could do. The incorporation of AI into nuclear planning extends this vision to military strategy: the logical outcome of deterrence’s failure has been prepared for, and that’s that. But this form of deterrence may end up serving humanity only by assuring its efficient demise.
Robert Art pithily observed that ‘if the threat has to be carried out, deterrence by definition has failed’. Deterrence itself arises without the actual use of weapons, through communication between potential adversaries, often in the form of symbolic actions. AI is fundamentally incapable of grappling with this very human art form. AI can neither signal through symbolic actions nor accurately interpret the signalling of others. Because AI is at once informed by imperfect humans and unable to account for their imperfections, gestures of resolve or concession factored into models lose their nuance and significance. James Johnson uses an episode from 1950 to illustrate the risk of China’s potential over-reliance on AI. In June of that year, US president Harry Truman moved the Seventh Fleet into the Taiwan Strait as a gesture of neutrality, to discourage both nationalist Kuomintang operations against mainland China and a Communist Chinese attack on Taiwan. China, however, saw Truman’s actions as overtly hostile, which helped prompt its entry into the Korean War. The United States and China discovered their miscommunication over three years of bloody armed conflict. Next time, they may have three minutes.
In addition, AI operates from existing datasets and therefore has trouble accounting for new situations. The slightest departure from standard operating procedure can cripple or fool an AI system, as our daily experiences with CAPTCHA checks suggest. AI struggles to recognise and correct errors. As suggested by ChatGPT-4’s inability to explain its thinking, AI systems ‘lack the requisite sense of causality that is critical for understanding what to do in novel a priori situations’, yet ‘data scarcity, ambiguous and nuanced goals, and disparate timescales of decision-making’ give rise to such situations in the real world. No amount of training can match the ingenuity of a prudential decision-maker confronted with a new situation. While humans may fail too, the record suggests that they are quite trustworthy when the human stakes are high.
People also factor more than mere reason into decisions. Morality and emotions, irreducible to algorithms and often incommensurate with pure reason, affect deterrence. The one calling the bluff is just as human as the one making it. As Aron observed during the Cold War,
what protects Berlin is not the Soviet conviction that the Western powers would prefer death to the abandonment of the capital, but the doubt which subsists in spite of everything in the minds of the Soviet leaders about the Western reaction. Even if the odds against his losing are a thousand to one, the gambler hesitates when it is his own life which is at stake.
The same ‘imperfections’ in human reason that cast doubt upon deterrence’s efficacy have also upheld it because human beings are attached to life and fear its loss.
Shortcomings of AI derive from its putative advantages: minimising time and maximising will. Following the assassination of Archduke Franz Ferdinand, the run-up to the First World War gained critical momentum because of the accelerated mobilisation schedules adopted by many parties and their adversaries’ consequent misreading of their intentions. The episode illustrates the dangers of speeding up action and response at the expense of judgement and communication. Those dangers are all the greater when operational timelines are measured in minutes rather than days or weeks. While keeping AI separate from nuclear decision-making will not eliminate the element of chance and accident, it at least leaves room for the possibility of prudence and diplomacy.
It may be tempting to default to the conviction that the superiority of current technology will vindicate the integration of AI into nuclear strategy. But it is impossible to predict where and how systems may fail, and there are no do-overs. A human, faced with the ultimate consequences of his or her action, is still the safest trustee. In the nuclear arena, humans have a perfect track record. AI has none.
The future of AI
AI is here to stay, its applications vast and its consequences revolutionary. As Kissinger, Eric Schmidt and Daniel Huttenlocher reflect in The Age of AI, ‘if the United States and its allies recoil before the implications of these capabilities and halt progress on them, the result would not be a more peaceful world’. The astounding advances in 2025 alone of the China-based DeepSeek and Kimi K2 models indicate the danger of yielding AI supremacy to strategic adversaries. Given its clear risks, however, prudence must guide the application of AI. Because the rapid development of technology will frustrate any attempts at precise definitions and boundaries, general norms will be of more use than rigid laws here. Several recommendations follow from this.
First among them is the recognition that AI has already influenced nuclear decision-making, and that this is not a bad thing. AI and supercomputing can be used to develop new thermonuclear weapons in place of atmospheric or underground tests. AI can also collect and analyse tremendous amounts of data, informing both preparation for war and confidence-building measures for avoiding it. To be sure, there should be no barrier to employing AI to gain insights into strategy, surveillance, targeting, fire-and-forget guidance, detection, early-warning systems, cyber defence and the protection of second-strike capabilities. Making nuclear weapons more secure and effective in the event of their use does not constitute an endorsement of their deployment, but perhaps the opposite. Analysts have also suggested that AI-based verification could help offset the loss of traditional arms-control agreements. Appropriately calibrated, AI may increase the stability of nuclear deterrence by better informing humans who remain in the decision-making loop.
Secondly, the risks of AI’s overextension should be illuminated, studied and assimilated. In addition to the unpredictable nature of technology and the possibility of error, human and otherwise, AI is vulnerable to direct or indirect manipulation by digital sabotage. Attribution of such activities is often difficult, yet effective deterrence presupposes it. Indirectly, a third-party actor can distort and subvert decision-making at low cost by anonymously introducing misinformation, such as deepfake news reports, into AI’s computations. Deepfake generation is developing faster than detection, and a hostile actor’s capabilities in this area may remain unknown until it is too late.
A simple exhortation to keep humans in the loop is not enough. International dialogue among industry experts, military planners and political leaders will be essential to establish a common vocabulary, consensus on goals and mutual trust. It would be ideal if the United States and China took the lead in building norms and a common framework for the use of AI. Thus far, the Trump administration has not made this a priority. Immediately upon his 2025 inauguration, President Donald Trump rescinded his predecessor Joe Biden’s executive order intended to advance the ‘safe, secure, and trustworthy development and use’ of AI. Despite polls indicating that most Americans distrust AI, the Trump administration has, in a series of executive orders, deregulated the use and development of AI, and encouraged its integration into national-security agendas, including the development of ‘novel military capabilities’. The administration effectively embraces an AI arms race with China and has in fact stimulated it while spurning the regulation of AI.
Some promising work, however, has already begun. The 2023 Bletchley Declaration, the September 2024 Responsible AI in the Military Domain ‘Blueprint for Action’ endorsed by 61 nations, and the December 2024 United Nations General Assembly resolution on the need to investigate and potentially prohibit ‘lethal autonomous weapons’ are at least salutary. Scholars have also proposed measures such as a non-proliferation treaty for AI modelled on the Nuclear Non-Proliferation Treaty, as well as tighter safety and security measures in AI development. China’s buy-in would be essential. The demise of structural nuclear-arms control in general, and China’s lack of interest in it in particular, are hardly encouraging, but they should not be considered preclusive. Guard rails on AI could largely constitute behavioural measures, to which China may be more open, rather than structural ones. Although China answered Trump’s AI Action Plan with one of its own that reflected a competitive approach, it also signalled receptivity to greater cooperation.
* * *
Given the lack of momentum towards international regulation of military AI applications, rival nuclear powers could eventually enable AI decision-making for their arsenals. While the logic of escalation might suggest that every power must ‘keep up’, doing so would provide little strategic advantage while inviting catastrophic risks. These are avoidable if human actors preserve their ability to question intelligence and, ultimately, to decide whether and how to engage a threatening enemy. While international dialogue towards prohibiting AI-based nuclear-launch decisions should be urgently encouraged, compliance could not be verified with certainty. But this reality only strengthens deterrence. If potential attackers suspect that retaliation may be automatic and unencumbered by human conscience or hesitation, they will be less likely to proceed. In this sense, AI incorporates a modified ‘madman theory’ into nuclear deterrence – the madman now being an LLM. Actual control by AI would not strengthen deterrence any more than its mere possibility would. AI decision-making would gain nuclear powers nothing in security while exposing them and the rest of the world to catastrophic risk.
This article appears in the April–May 2026 issue of Survival: Global Politics and Strategy.

