The National Insecurity of AI
Following the Aspen Strategy Group Summer Workshop, Graham Allison offers his reflections and key insights about the impact of AI on American national security.
After a week of intense discussion of the implications of AI for American national security at the Aspen Strategy Group (ASG) Summer Workshop, I am still processing 50 pages of notes. The ASG is unique in its ability to bring together a group of distinguished former officials and private sector leaders for serious, candid, and nonpartisan conversation in a setting that encourages reflection. High-altitude perspectives offered in strictly off-the-record discussions among people who know a great deal are clarifying; conversations in which experts are candid about what they don't know are even more valuable. I am still thinking through the impact of AI on American national security. But for now, I offer six initial takeaways.
1. No One Knows the Risks
First, no one fully knows the risks posed by applications of AI that could allow significantly larger numbers of malicious actors to create bioweapons capable of killing hundreds of thousands of people, or even an AGI that could threaten the survival of mankind on earth. Those advancing the AI frontier recognize the potential for such risks, and those in the policy community are even more inclined to highlight such dangers. But as one cross-examines what the most informed people say, the inescapable truth is that no one knows. Specifically: (1) no one knows how likely these risks are, and (2) no one knows what to do about them.
2. Consult History
Second, consult history. Attempting to cope with this unbounded technology in the absence of confident answers about its risks is uncomfortable and even frightening. For perspective, however, it is instructive to “consult history.” The most recent analogous case in which scientists and strategists attempted to manage the advance of a technology that gave governments the capability to kill millions of human beings—and even potentially to extinguish the life of Homo sapiens on Earth—was the creation of nuclear weapons. For highlights from that history, the article titled “The Path to AI Arms Control,” which Henry Kissinger and I coauthored last October in Foreign Affairs, provides a good starting point.
For the first decade after the explosion of atomic bombs destroyed Hiroshima and Nagasaki, forcing Japan to surrender in World War II, the smartest scientists and strategists in the world grappled with the question of what this meant for strategy, statecraft, and the future of international order. Read what they said and wrote, and examine what they actually did. It was not only the peace advocates who called on nations to “ban the bomb.” Idealistic dreamers were not the only ones urging the U.S. government to give its nuclear weapons to the UN and empower it to act as a supranational overlord.
One of the toughest and savviest statesmen in American history was Henry Stimson. As the Secretary of War in World War II (a position he had earlier held under President Taft), he led the arsenal of democracy to victory over Nazi Germany and Japan. Stimson is rightly revered as one of the “wise men” who won the war and built the peace that followed. But as the war came to an end, Truman asked Stimson for his best idea about what to do with the bomb. And what was Stimson’s answer? He recommended that the U.S. share its nuclear monopoly with Stalin’s Soviet Union and Churchill’s Britain in a great-power condominium that would supervise a global nuclear order.
Allison, Graham. “The National Insecurity of AI.” Aspen Strategy Group, October 16, 2024