Introduction and Background
The intersection of AI and conflict resolution is rapidly becoming one of the most critical frontiers in diplomacy and peacebuilding. AI technology, from Large Language Models (LLMs) to immersive digital platforms, is reshaping the way mediators, negotiators, and policy leaders worldwide approach some of today’s most complex disputes.
Hosted by Harvard Kennedy School’s Belfer Center and moderated by Dr. Nur Laiq, Belfer Technology and Geopolitics Fellow, the session titled “AI & Conflict Resolution: How Can Artificial Intelligence Improve Peace Negotiations?” convened two leading voices in the field. Dr. Martin Wählisch, former Lead at the UN Department of Political and Peacebuilding Affairs Innovation Cell, and Dr. Jeffrey Seul, Harvard Law School lecturer, shared insights into how emerging technology is transforming both the opportunities and risks in conflict mediation.
Building on early experiments in online dispute resolution and the evolving role of AI in conflict monitoring, the speakers explored how AI can assist with blind spot analysis, scenario planning, and broader inclusivity efforts. However, they also raised important cautions: over-reliance on AI, the persistence of bias in training datasets, the rise of corporate influence, and the need for public and policymaker AI literacy.
The panelists reflected on parallels from fields where AI initially disrupted but ultimately elevated human thinking. Based on this presentation, participants discussed how AI, if thoughtfully integrated as a tool, could enhance peace processes.
Key Takeaways from Jeff Seul’s remarks
Versatility of AI’s Role in Conflict Resolution
Jeff Seul opened with a personal reflection on how AI's role in conflict resolution has evolved over the past three decades. He recalled two foundational examples: early online dispute resolution at eBay, which used simple algorithms to mediate buyer-seller disputes, and an experimental virtual negotiation platform deployed during the Sri Lankan peace process. Both cases represented early attempts to incorporate computational analysis into conflict mediation and serve as precursors to today’s AI applications.
Seul then discussed the present-day uses of AI in conflict and highlighted AI’s growing importance in conflict monitoring and early warning. In his work as a mediator, he has employed LLMs to assist with conflict analysis in contexts where parties hold divergent worldviews. LLMs can identify blind spots, encourage perspective-taking, and support the generation of options to help resolve differences. However, Seul emphasized that their outputs should be assessed critically.
Limitations of AI in Mediating Different Belief Systems
He acknowledged encouraging early evidence of AI’s potential in mediation. For example, one study found that AI-generated proposals were clearer and less polarizing than those written by human mediators. However, Seul also noted the limitations of AI, particularly when addressing conflicts rooted in sacred values or fundamentally different belief systems. He posited that this raises concerns about the oversimplification of moral reasoning and the loss of cultural nuance.
The Need for a Critical Approach to AI Use
Seul noted that AI as a technology is a double-edged sword. In the context of conflict resolution, AI can support credible analysis and inclusive dialogue, while also having the potential to create disinformation and division. Moving forward, the key is to determine how AI can enhance human interactions during conflicts and augment our judgment while mitigating the potential harms it may cause.
Seul cautioned against what some scholars term “AI solutionism”: the tendency to believe that AI tools, simply because they exist, can resolve deeply human-centered conflicts. AI must be positioned as a support to human judgment, not a replacement for the cultural nuance, historical context, and lived human empathy that a peace process demands.
Key Takeaways from Martin Wählisch’s remarks
Using 21st Century Tools to Solve 21st Century Problems
Martin Wählisch observed that with government technology (GovTech) spending projected to double by 2034, particularly in security and defense, the global landscape is undergoing a technological transformation. AI has demonstrated versatile applications in warfare, including natural language processing, deep learning for aircraft recognition, AI-assisted geospatial analysis, social media mining, and more. Diplomacy, meanwhile, lags behind.
Nonetheless, Wählisch suggested that AI holds potential to redefine the ways in which mediators facilitate diplomacy and peacebuilding. AI systems such as ChatGPT may at some point surpass human performance in emotional awareness, challenging us to rethink how emerging technology can enhance or even compete with human mediation. Behavioral analysis, social media mining, and virtual reality tools can also play a role in assisting mediators with producing creative conflict resolution approaches.
New Frontiers: XR, Digital Twins, and Deliberative Tools
Technologies such as Extended Reality (XR) are also opening new possibilities for immersive peacebuilding. For example, digital twin technology (virtual replicas of real-world environments updated with real-time data) allows us to better simulate complex negotiations and their potential outcomes. Meanwhile, digital deliberation tools, including surveys and focus groups, create avenues for more inclusive and structured public engagement, helping facilitate the resolution of differences.
Vigilance Remains Critical
Alongside AI’s great potential, Wählisch emphasized that it is important to recognize the increasing dominance of private companies in the AI ecosystem. Large technology firms behind AI models may inadvertently embed corporate interests into conflict mediation tools, raising questions about neutrality, access, and long-term governance.
As industries continue to witness the “AI of everything,” in which short-term effects are overestimated and long-term impacts are underestimated, AI literacy will be more essential than ever, especially for mediators.
Conclusion
This live discussion underscored that AI is neither a magic solution nor an existential threat on its own. It is a powerful tool whose impact depends largely on how it is designed, deployed, and governed. Participants stressed the need for a dual approach: leveraging AI’s capabilities to enhance human decision-making while remaining vigilant about bias, disinformation risks, corporate influence, and oversimplification.
Moving forward, building AI literacy among mediators, policymakers, and the broader public will be crucial. Equally important will be ensuring that AI systems used in conflict resolution are ethically grounded, contextually aware, and appropriately governed.
Just as AI’s entry into fields like chess elevated, rather than displaced, human talent, there is an opportunity now to harness AI to strengthen negotiation processes, leading to more inclusive and informed pathways to peace.