Event Summary

AI and Geopolitics: Global Governance for Militarized Bargaining and Crisis Diplomacy

The dawn of Artificial Intelligence (AI) has arrived, shaping innovation across economic and technological landscapes, the logic and relationships of international politics, and the future of geopolitical competition. States are now incorporating sophisticated AI systems into their military strategic planning and postures, diplomatic toolkits, and decision-making processes. Harvard Kennedy School’s Belfer Center Program on Emerging Technology, Scientific Advancement, and Global Policy was pleased to convene an advanced AI + X + G study group, organized by Dr. Anatoly Levshin, to examine these transformations by investigating emerging uses of AI in militarized bargaining and crisis diplomacy and the governance issues associated with those uses.

To explore the deeper implications of sophisticated AI systems for the global order, the ways in which the development of AI may encourage states to create new architectures of global governance, and the options for regulating the use of AI across defense, development, and diplomacy, the Belfer Center hosted Michael Horowitz. Dr. Horowitz is the Director of Perry World House, Richard Perry Professor at the University of Pennsylvania, Senior Fellow for Technology and Innovation at the Council on Foreign Relations (CFR), and former Deputy Assistant Secretary of Defense for Force Development and Emerging Capabilities.
 

Through our AI and Geopolitics study, we explored how the AI dynamics of militarized bargaining and crisis diplomacy might shape our future world. We considered possible changes in the capacity of belligerent states to communicate with each other, anticipate alternative escalation trajectories, and articulate and execute strategies for locking in meaningful bargains in the shadow of war. We explored the alternative types of AI systems states might use for these purposes. What capabilities will general-purpose Large Language Models bring, as compared with specialized non-linguistic systems calibrated to execute particular logics of action? Moreover, will specialized models be trained on actual historical data or on simulated data generated in designated virtual environments? Finally, we examined how states might reimagine architectures of global governance to better regulate the use of AI in militarized bargaining and crisis diplomacy.
 

The advent of AI is unique among historical advances in military technology. Unlike firearms, tanks, and nuclear weapons, inventions that were confined to particular domains of warfare, artificial intelligence is multidimensional, dual-use, and cross-functional in both purpose and application. It was not invented in the sense of a single creation; rather, it is a phenomenon of technological discovery and engineering to be developed and understood over time, with the potential for widespread disruption. The last time our world experienced an “invention” or discovery of this magnitude was nearly 275 years ago, in 1752, with electricity. Harnessing electricity transformed nearly every aspect of daily life, enabling the advancement of communication technologies, urban infrastructure, and industrial automation. It also dramatically transformed warfare, not only technologically but also tactically and strategically. This distinction carries significant implications for how AI should be governed, especially in the military domain. Unlike traditional arms control, which targets specific systems, the governance of AI will require a more adaptive, multifaceted approach.
 

On the international stage, Horowitz explained, experts are already working toward consensus on three specific tracks for AI governance: international standard setting, AI safety, and the evolving character of war. International standard setting and AI safety are not exclusive to militarized bargaining, but they are undoubtedly fundamental to geopolitical competition. Horowitz noted that countries are already positioning themselves to take the lead in setting international standards that favor their domestic companies, producing a competitive advantage in international security and international trade. Simultaneously, the United States is aiming to lead the way in AI safety, convening the inaugural plenary meeting of states endorsing the Political Declaration on Responsible Military Use of Artificial Intelligence and Autonomy.
 

In defining the character of war writ large, Horowitz described attempts by United Nations stakeholders, including the Group of Governmental Experts on Lethal Autonomous Weapons Systems, to regulate lethal autonomous weapon systems, known by some as “killer robots.” The United States, China, and Russia have notably expressed skepticism toward novel international conventions aimed solely at AI, arguing that existing humanitarian laws already provide sufficient regulatory coverage. Horowitz noted a significant divide between the United States and its core allies, which believe that existing legal frameworks (e.g., international humanitarian law) address the ethical and operational concerns posed by AI, and medium and smaller states, such as Costa Rica, the Philippines, and Austria, which advocate for new conventions rooted in broader human rights law.
 

Despite this skepticism about a broad regulatory consensus, Horowitz identified a practical opportunity for international cooperation. Drawing on successful Cold War precedents, he suggested establishing an “autonomous incidents agreement” modeled on existing maritime and aerial incident agreements. Such a framework could outline clear rules of engagement for autonomous systems in peacetime and standardize de-escalation procedures for high-risk incidents. He viewed it as realistic and beneficial because global powers share an interest in avoiding accidental conflicts involving autonomous military technologies.
 

Our study group initially explored how geopolitical competition may shape the development and deployment of AI. It is also important to consider, however, whether the global diffusion of AI might shape the balance of power among states. Horowitz posited that the winner of the competition between the United States and China will be determined by the Global South, specifically its uptake and application of AI models from the two countries. With AI companies like DeepSeek and OpenAI emerging and maturing at lightning speed, it is hard to overstate how close the competition is. Horowitz further highlighted Global South countries’ growing concerns about dependence on foreign AI technologies and the associated security risks. Many of these nations, he argued, are actively pursuing sovereign AI capabilities by developing domestic data centers and technical infrastructure. Their goal is greater autonomy and reduced vulnerability to future technological dependence, a pursuit that is shaping the current competitive dynamics between major powers like the U.S. and China.
 

Horowitz also urged us to consider, in addition to the technical problem of developing frontier capabilities, the organizational difficulties involved in adapting and deploying them. History has repeatedly shown, Horowitz explained, that military competitions are won not necessarily by the states that create a new technology but by those that figure out how best to adopt it. Historians remember that it was Britain’s naval engineering that produced the world’s first battleship and aircraft carrier, but it was the United States’ innovative and supremely effective deployment of these naval technologies, adapted and combined with American airpower across the Pacific during World War II, that amplified their impact on the course of history. It is this strategic ingenuity, vision, and risk-taking that, when coupled with AI, could prove similarly and globally disruptive. That said, the countries that create the AI models will be pivotal in determining who has access to them and, potentially, how they are used for economic, military, or other purposes.
 

In 2025, we have entered the era of “precise mass” in war, driven by advances in manufacturing, AI, and precision munitions. These advances have lowered barriers of cost, scale, and technical sophistication and, as Horowitz put it, “democratized access to destructive power.” This transformation is empowering state and non-state actors alike to access and wield technologies, such as drone swarms and suicide drones, that were once reserved for only the most advanced militaries, and it has profound implications for how we think about military advantage and deterrence. Who has access to advanced AI models, and how they use them, therefore becomes ever more consequential.
 

This reality will drive allies to collaborate and communicate in new domains, including by sharing massive amounts of data and developing interoperable AI interfaces. Many observers believe that AI adoption will increase interoperability challenges across the military domain as different states use different platforms and approaches. Horowitz, though, argued that current interoperability concerns are overstated. The primary challenge is secure and efficient data-sharing among allies rather than technical incompatibility among the AI models themselves. In fact, he saw AI adoption potentially improving collaborative military efforts rather than hindering them.
 

Nevertheless, Horowitz claimed that most military applications of AI will have nothing to do with combat or militarized bargaining. Rather, AI will be used to increase efficiency on the business side of the military, helping to streamline supply chains, hiring processes, finances, and support and logistics functions. In this sense, the application of AI in the military may not look dramatically different from its application in civilian government or the private sector.
 

Historically, major shifts in military technology have redrawn the global balance of power. The coming years will be shaped not just by who builds the most powerful models, who governs them, who gains access to them, and how they are used, but also by which countries best adopt them. In other words, to achieve a global competitive edge in AI, we must position ourselves to grow and retain leadership not only at the point of scientific invention but also at the point of deployment and use on the battlefields of the future.
 

Recommended Reading

Michael C. Horowitz, “Artificial Intelligence, International Competition, and the Balance of Power”, Texas National Security Review, v. 1, no. 3 (2018): 36-57. Available at https://doi.org/10.15781/T2639KP49.

François Chollet, “The Impossibility of Intelligence Explosion”, published on Medium.com (November 27, 2017) and available at https://medium.com/@francois.chollet/the-impossibility-of-intelligence-explosion-5be4a9eda6ec.

Toni Erskine and Steven E. Miller, “AI and the Decision to Go to War: Future Risks and Opportunities”, Australian Journal of International Affairs, v. 78, no. 2 (2024): 135-47.

Avi Goldfarb and Jon R. Lindsay, “Prediction and Judgment: Why Artificial Intelligence Increases the Importance of Humans in War”, International Security, v. 46, no. 3 (Winter 2021/22): 7–50.

Michael C. Horowitz and Erik Lin-Greenberg, “Algorithms and Influence: Artificial Intelligence and Crisis Decision-Making”, International Studies Quarterly, v. 66, no. 4 (2022): 1-11.

Henry A. Kissinger, Craig Mundie, and Eric Schmidt, Genesis: Artificial Intelligence, Hope, and the Human Spirit (New York: Little, Brown and Company, 2024), 109-136.

Erik J. Larson, The Myth of Artificial Intelligence: Why Computers Can’t Think the Way We Do (Cambridge, MA: Harvard University Press, 2022), 106-57.

Jon R. Lindsay, “War Is from Mars, AI Is from Venus: Rediscovering the Institutional Context of Military Automation”, Texas National Security Review, v. 7, no. 1 (2023): 29-47.

Arvind Narayanan and Sayash Kapoor, AI Snake Oil: What Artificial Intelligence Can Do, What It Can’t, and How to Tell the Difference (Princeton: Princeton University Press, 2024), 150-178.

Eric Schmidt, “AI, Great Power Competition and National Security”, Daedalus, v. 151, no. 2 (2022): 288-298.

Authors

Sam Weiss
Dauren Kabiyev
Sarah Lober