
AI’s New Frontier in War Planning: How AI Agents Can Revolutionize Military Decision-Making


Throughout history, rapid changes in the geopolitical and military environment have impacted decision-makers’ ability to accomplish strategic or operational objectives. Being too slow to adapt to changing conditions can be catastrophic in a dynamic environment. History is rife with accounts of militaries paying steep prices in lost lives, battles, and even wars due to their failure to adapt.[1] The United States’ national security depends on planners’ ability to account for this dynamism and expeditiously identify gaps, exploit opportunities, and keep pace to remain competitive in modern warfare.

The Department of Defense should aggressively begin experimenting with Agentic AI tools (a category of AI that can work through a series of tasks on its own to achieve an assigned, complex objective[2]) in its Joint Operational Planning Process (JOPP) for two important reasons. First, Agentic AI has the potential to more quickly and comprehensively synthesize a broad scope of traditional and non-traditional planning factors than humans alone to help produce more thorough, objective courses of action (COA). Second, once a COA is selected, Agentic AI also has the potential to help rapidly publish downstream directives and orders, flattening communication and saving hundreds of man-hours in each planning cycle.

Agentic AI is a capability that could swiftly account for these changing battlespace conditions and help solve large-scale, complex problems independently. This differs from today’s popular large language models, which depend on individual prompts to perform simple, specific tasks. Creating multiple dilemmas for a near-peer adversary requires continuous integration of capabilities across all instruments of power and all domains, including the electromagnetic spectrum and the information environment.[3] In the fourth industrial revolution, Agentic AI is a method of deploying multiple autonomy-based technologies that work synergistically to perceive their environment and define a course of action on their own to achieve a given goal.[4] Pairing this technology with human planners can produce an accelerated, multi-disciplinary thinking machine.
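
To make the distinction concrete, the short sketch below illustrates, in deliberately simplified form, the loop that separates an agentic system from a single-prompt model: given an objective, the agent chooses its own next task, acts, observes the result, and repeats until it judges the objective met. The objective, “tools,” and data here are invented placeholders for illustration only; they do not represent any fielded system or vendor product.

    # Illustrative sketch only: a minimal "agentic" loop with hypothetical
    # placeholder tools and data. It does not model any real DOD system.

    def plan_next_task(objective, observations):
        """Hypothetical planner: pick the next task toward the objective."""
        if "fuel status" not in observations:
            return "check fuel status"
        if "threat update" not in observations:
            return "check threat update"
        return None  # the agent judges the objective satisfied for this cycle

    def run_tool(task):
        """Hypothetical tool call (e.g., query a logistics or intel data set)."""
        return f"{task}: nominal"  # placeholder result

    def agent(objective, max_steps=10):
        observations = []
        for _ in range(max_steps):
            task = plan_next_task(objective, " ".join(observations))
            if task is None:
                break                            # agent decides it is done
            observations.append(run_tool(task))  # act, then observe
        return observations

    if __name__ == "__main__":
        print(agent("maintain a running estimate for COA development"))

In a real planning cell, the planner and tool steps would be backed by trained models and curated data feeds rather than the toy logic above; the point is that the system sequences its own tasks rather than waiting for a prompt at each step.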

Imagine a planning cell with a multifaceted “agent” that could understand geopolitical trends, global dynamics, and national policies as they pertain to a conflict. It could also account for the limitations and constraints of a military in all operational domains by surveying multiple data sets. This type of “think-spear,” which could also minimize the influence of groupthink, favor-chasing, and counterproductive biases, can generate new opportunities and avenues of approach for decision-makers. Deputy Secretary of Defense Kathleen Hicks confirmed this notion during the unveiling of the Pentagon’s 2023 Data, Analytics and Artificial Intelligence Strategy, stating that “from the standpoint of deterring and defending against aggression, AI-enabled systems can help accelerate the speed of commanders’ decisions and improve their quality and accuracy.”[5] We submit that Agentic AI is the new frontier ‘AI enabler’ whose adoption the DOD should accelerate to achieve these aims.

Alternatively, envision the United States – slow to adapt and hamstrung by its traditional planning processes – competing against an adversary equipped with this “think-spear” across the strategic, operational, and tactical levels. No amount of high technology in the hands of our warfighters can overcome an adversary who out-maneuvers us with better, more rapid information flow. The implications of contesting an adversary with this type of intelligence and decision space warrant strong consideration of Agentic AI in a parallel planning construct.

The Russia-Ukraine war has offered a glimpse of the value of AI in modern warfare and its impact on military operations and tactics. Earlier this year, Time reported that Palantir Technologies’ AI software was responsible for most of the targeting in Ukraine.[6] Additionally, Palantir has embedded a software engineer with each battalion, demonstrating the kind of experimentation that has accelerated the “most significant fundamental change in the character of war ever recorded in history,” according to General Mark Milley, former Chairman of the Joint Chiefs of Staff.[7] Indeed, Defense One reported that the Pentagon has also been integrating “AI and machine learning into its intelligence, surveillance, and reconnaissance operations, helping the Ukrainian military thwart some Russian attacks.”[8] These nascent experiments with AI on the battlefield foretell the urgent need for our nation’s military to get ahead on decision-making processes, too.

Agentic AI in the JOPP can provide information superiority at the speed of relevance. Below, we submit a few ways in which Agentic AI could serve as an effective means to these ends:

  1. Agentic AI, with superior multi-domain awareness, could make force posture recommendations to planners and create multiple dilemmas in a Multi-Domain Operations (MDO) construct due to its ability to continuously curate information on the movements of joint and coalition units as well as those of the adversary.
  2. Agentic AI can help distinguish priorities on the Joint Integrated Prioritized Target List (JIPTL) based on real-time conditions in the battlespace, including the adversary’s capabilities, avenues of approach, risks, and opportunities.
  3. Agentic AI can track and anticipate potential logistical shortfalls (e.g., fuel, supply, munitions) before they occur, ensuring seamless sustainment support to discrete forces across a vast theater (a simplified sketch of this idea follows this list).
  4. Agentic AI can keep “know thy enemy” at the center of COA development. Red teaming is an element planners can quickly lose sight of, as the stress of conflict naturally induces a return to a comfortable known: our own way of fighting, without accounting for the enemy’s vote.
  5. Agentic AI can instantly synchronize guidance and intent across the battlespace, reducing the potential for fratricide and increasing tactical-level flexibility and lethality.
  6. Finally, and most fundamentally, planners can leverage AI to produce and disseminate all downstream orders born of the cyclical planning process, saving hundreds of man-hours every cycle on tedious, repetitive administrative inputs and permitting more warfighters to be redirected to the fight.
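
As one concrete illustration of the third item above, the sketch below shows how an agent’s logistics-monitoring step might flag a projected fuel shortfall before it occurs by comparing forecast consumption against on-hand stocks and resupply timelines. Every unit name, figure, and threshold is invented for illustration; an operational implementation would draw on live sustainment data feeds rather than hard-coded values.

    # Illustrative sketch of the logistics use case only: projecting a fuel
    # shortfall before it occurs. All units, figures, and thresholds are invented.

    from dataclasses import dataclass

    @dataclass
    class SupplyPoint:
        unit: str
        on_hand_gal: float        # fuel on hand (gallons)
        burn_rate_gal_day: float  # forecast daily consumption (gallons/day)
        resupply_eta_days: float  # days until next scheduled resupply

    def days_of_supply(p: SupplyPoint) -> float:
        return p.on_hand_gal / p.burn_rate_gal_day

    def flag_shortfalls(points, safety_margin_days=1.0):
        """Return units whose fuel is projected to run out before resupply arrives."""
        return [
            p.unit for p in points
            if days_of_supply(p) < p.resupply_eta_days + safety_margin_days
        ]

    if __name__ == "__main__":
        theater = [
            SupplyPoint("TF Alpha", 12_000, 3_000, 5.0),  # 4 days of supply, resupply in 5
            SupplyPoint("TF Bravo", 20_000, 2_000, 4.0),  # 10 days of supply, resupply in 4
        ]
        print(flag_shortfalls(theater))  # -> ['TF Alpha']
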

We acknowledge there is still much to learn about the risks of Agentic AI and its resilience in a contested communications environment. Theoretical discussions on ethics, security, and best practices should continue. Nonetheless, competitors such as China remain in the AI race with a clear desire to achieve technological superiority. Future warfare will almost certainly be won first in the information domain.

Military leaders should accelerate experimentation with and adoption of Agentic AI tools in joint operational planning processes. It is critical that they do so with an iterative mindset, working to mitigate risks as they arise (machine learning will be helpful in this regard), rather than waiting for a perfect product to implement. When on the precipice of a technological revolution, we must embrace the risk that comes with taking a giant leap. For it is, no doubt, a greater risk to national security not to be the first Great Power to harness this great power.

 

 

Opinions, conclusions, and recommendations expressed or implied within are solely those of the authors and do not necessarily represent the views of the United States Army, the United States Air Force, the Department of Defense, or any other US government agency. 

Rich Farnell is a 2024 National Security Fellow at Harvard Kennedy School’s Belfer Center. He is skilled in strategic level planning, has led multiple organizations, and has experience in data analytics.  His research focuses on artificial intelligence parallel planning in dynamic environments. 

Kira Coffey is a 2024 Air Force National Defense Fellow and International Security Program Research Fellow at Harvard Kennedy School’s Belfer Center. She is a graduated squadron commander and combat pilot. Her research focuses on whole-of-nation coordination to effectively compete in Great Power Competition.


Statements and views expressed in this commentary are solely those of the authors and do not imply endorsement by Harvard University, the Harvard Kennedy School, or the Belfer Center for Science and International Affairs.

Recommended citation

Farnell, Richard and Kira Coffey. “AI’s New Frontier in War Planning: How AI Agents Can Revolutionize Military Decision-Making.” Belfer Center for Science and International Affairs, October 11, 2024.

Footnotes
  1. Mallick, Pankaj. 2024. Artificial Intelligence, Ethics, and the Future of Warfare: Artificial Intelligence, National Security, and the Future of Warfare. 1st ed. London: Routledge India. https://doi.org/10.4324/9781003421849.
  2. Griffith, Erin. “A.I. Isn’t Magic, but Can It Be Agentic?” The New York Times, September 6, 2024. https://www.nytimes.com/2024/09/06/business/artificial-intelligence-agentic.html.
  3. “Multi-Domain Operations in NATO - Explained: NATO's Strategic Warfare Development Command.” Allied Command Transformation. October 5, 2023. https://www.act.nato.int/article/mdo-in-nato-explained/.
  4. Mallick, Pankaj. 2024. Artificial Intelligence, Ethics, and the Future of Warfare: Artificial Intelligence, National Security, and the Future of Warfare. 1st ed. London: Routledge India. https://doi.org/10.4324/9781003421849.
  5. “DOD Releases AI Adoption Strategy.” U.S. Department of Defense. November 2, 2023. https://www.defense.gov/News/News-Stories/Article/Article/3578219/#:~:text=The%20Pentagon's%202023%20Data,%20Analytics%20and%20Artificial%20Intelligence%20Adoption%20Strategy.
  6. Bergengruen, Vera. “How Tech Giants Turned Ukraine Into an AI War Lab.” TIME Magazine, February 8, 2024. https://time.com/6691662/ai-ukraine-war-palantir/.
  7. Ibid.
  8. Tucker, Patrick. “AI Is Already Learning from Russia’s War in Ukraine, DOD Says.” Defense One, April 21, 2022. https://www.defenseone.com/technology/2022/04/ai-already-learning-russias-war-ukraine-DOD-says/365978/.