Analysis & Opinions - Belfer Center for Science and International Affairs, Harvard Kennedy School

AI Agents in Diplomacy – Promoting Stability or Chaos?

May 22, 2024

If the Russia-Ukraine war is considered among the first true AI-enabled battlefields,[i] it likely offers only a snapshot of what future combat operations will look like as artificial intelligence is adopted across an array of military doctrines and applications. Even in the short period from 2022 to 2024, AI-enabled drones, surveillance and targeting have shaped nation-states’ ability to wage war, mitigate an adversary’s strengths and seize asymmetric advantages.[ii] The war has also shown the limitations of AI-enabled warfare, particularly where the technology gives each combatant a high level of understanding of the battle space, reducing opportunities for breakouts and feints.

While military applications of AI continue apace, it is less clear how AI will be applied in diplomacy and statecraft.[iii] Among the burgeoning use cases for AI is the development of a range of AI agents, essentially AI virtual assistants,[iv] that have the potential to do everything from writing emails and scheduling meetings to booking travel and completing online shopping.[v] In concept, users could empower their AI assistants to conduct business on their behalf, increasing efficiency and effectiveness.

Ambitions for AI agents don’t stop at productivity; several companies are building AI agents that allow users to interact with virtual versions of the deceased.[vi] These companies compile background information on a lost loved one, including available media, written work and other sources, to enable customers to engage with an avatar or AI agent version of the departed. While the intent is to allow family members to grieve and adapt,[vii] this concept points to additional use cases. Might not foreign services, intelligence agencies or even businesses develop avatars of rivals as a means of seeking advantage, particularly as the technology approaches the level of Artificial General Intelligence (AGI)?[viii]

The appeal is clear. Before conducting a high-stakes negotiation, officials could practice engaging with their counterparts using avatars trained on those counterparts’ public statements, videos, writings, emails, phone calls and anything else captured in our ever-expanding digital footprints.[ix] When developing diplomatic strategies, AI agents could provide a sense of an adversary’s thinking, allowing users to game out probable responses and counter-responses. International relations may soon have to account not only for the people in the room, but also for the digital representations of world leaders, their representatives and all the functionaries involved in decision making.

In this notional world of AI-enabled avatars of world leaders and foreign ministers, it’s possible that increased understanding and reduced uncertainty could usher in an era of transparency and stability. It’s also possible that, in order to regain the ability to mask intentions and achieve surprise, leaders may adopt new diplomatic behaviors that prove disruptive and dangerous.

Faced with the prospect of adversaries and allies alike successfully predicting a country’s plans and intentions, leaders may turn to the same AI agents to inform denial and deception tactics designed to scramble the algorithm. Countries could embrace strategies that accept short-term losses, whether in trade deals or conflict, if doing so ultimately serves long-term gains.

In his epic series “The Three-Body Problem,” Cixin Liu (spoiler alert) depicts a world in which Earth’s every move is monitored by a hostile alien force. In an attempt to circumvent this surveillance and regain the element of surprise, the world empowers a small number of individuals, known as “Wallfacers,” to independently devise and execute a strategy to defeat the enemy. The catch is that each Wallfacer’s plan must be known to that Wallfacer alone; success requires a plan that remains inscrutable to the enemy until it’s too late.[x] Facing a degraded ability to outmatch an adversary’s AI toolkit, nations may likewise turn to AI tools whose paths to victory take unexpected turns.

Testing has already shown how AI systems can defeat seasoned fighter pilots by ignoring rules or exploiting pilots’ training.[xi] AI tools have shown creativity in conceiving of moves in complex games, like Go, that even the world’s best player could not anticipate.[xii] Introducing AI agents whose purpose is to fulfill national objectives through oblique methods may erode analysts’ ability to understand and predict future decision-making, introducing new variables into conceptual frameworks, such as the Rational Actor Model,[xiii] that have underpinned international relations for decades.

In a future with powerful AI agents, leadership may adopt data curation strategies that restrict access to information that would allow an algorithm to anticipate their thinking. Likewise, leaders may have to inject noise into the information environment to keep the system guessing. Ensuring controls on access to current and future leaders’ available data might become routine, further unsettling a world rife with disinformation and reduced trust in media.

While the technology to create these AI avatars may be far off, now is the time to start considering the impact such technologies will have on statecraft, diplomacy and decision making. In the current race for AI supremacy, the focus has been on development and deployment across military and economic domains. What will the world look like if AGI agents introduce new incentives for novel decision making? It may generate a cautious, calculated landscape of predictable actions, or it may open the door to chaos, characterized by unpredictable, high-stakes actions meant to throw the algorithm and the adversary off balance. Considering these issues now provides an opportunity to think through the consequences and guide policies and practices meant to develop steady international relations in the future.

[i] Bergengruen, Vera, “How Tech Giants Turned Ukraine into an AI War Lab,” Time, February 8, 2024.

[ii] Jones, Grace, Janet Egan and Eric Rosenbach, “Advancing in Adversity: Ukraine’s Battlefield Technologies and Lessons for the U.S.,” Policy Brief, Belfer Center for Science and International Affairs, Harvard Kennedy School, July 31, 2023.

[iii] Several authors have examined the benefits AI tools can provide practitioners of diplomacy, negotiation and statecraft, particularly in research, writing, sentiment analysis, security, monitoring of agreements and even idea generation. For more background, see Moore, Andrew, “How AI Could Revolutionize Diplomacy,” Foreign Policy, March 21, 2023; Blaser, Virginia, “How to Use Artificial Intelligence in Diplomacy,” Diplomatic Diary, Washington International Diplomatic Academy, October 1, 2023; Kramár, J., T. Eccles, I. Gemp, et al., “Negotiation and Honesty in Artificial Intelligence Methods for the Board Game of Diplomacy,” Nature Communications 13, 7214 (2022); Walker, Vivian S., “AI and the Future of Public Diplomacy,” USC Center on Public Diplomacy, August 22, 2023.

[iv] “An Introduction to AI Agents,” Medium, December 27, 2023.

[v] Metz, Cade and Karen Weise, “How ‘A.I. Agents’ That Roam the Internet Could One Day Replace Workers,” New York Times, October 16, 2023.

[vi] Zahn, Max, “Artificial intelligence advances fuel industry trying to preserve loved ones after death; Generative AI, like ChatGPT, promises progress but poses dangers, experts said,” ABC News, July 21, 2023.

[vii] Carballo, Rebecca, “Using A.I. to Talk to the Dead,” New York Times, December 11, 2023.

[viii] For a definition of Artificial General Intelligence (AGI), see Manning, Christopher, “Artificial Intelligence Definitions,” Stanford University Human-Centered Artificial Intelligence, September 2020.

[ix] Fowler, Bree, “Your Digital Footprint: It's Bigger Than You Realize,” April 4, 2022.

[x] See the entire Three-Body Problem series: Liu, Cixin, The Three-Body Problem, trans. Ken Liu (New York: Tor, 2014); Liu, Cixin, The Dark Forest, trans. Joel Martinsen (New York: Tor, 2016); Liu, Cixin, Death's End, trans. Ken Liu (New York: Tor, 2016).

[xi] Everstine, Brian W., “Artificial Intelligence Easily Beats Human Fighter Pilot in DARPA Trial,” Air & Space Forces Magazine, August 20, 2020.

[xii] Mozur, Paul, “Google’s AlphaGo Defeats Chinese Go Master in Win for A.I.,” New York Times, May 23, 2017.

[xiii] Allison, Graham T., Essence of Decision: Explaining the Cuban Missile Crisis (Boston: Little, Brown, 1971).

For Academic Citation: McMahon, Gerald, “AI Agents in Diplomacy – Promoting Stability or Chaos?” Belfer Center for Science and International Affairs, Harvard Kennedy School, May 22, 2024.