As government leaders battle increasingly severe and complex disasters, multimodal AI emerges as a promising tool for effective, coordinated crisis response.
This essay is part of a continuing publication series for the Global Crisis & Resilience Forum led by Juliette Kayyem, Faculty Chair of the Belfer Center’s Homeland Security Program. The forum is supported by McKinsey & Company. The ideas in these essays are the independent products of the authors.
Imagine this future scenario: as a hurricane develops, both its intensity and the timing of landfall are recalculated every hour on desktop-grade computers. The nature of the impact, including noncorrelated crises that may occur alongside it and the hurricane's second-order effects, is modeled through multiple scenarios on city-scale digital twins with property-level granularity. These simulations yield a clear set of trigger-based action plans that are tailored, verified through human-in-the-loop mechanisms, and sent to emergency responders, community leaders, government agencies, and potentially even residents in affected areas. Community leaders have access to tools that help them understand what to expect, what resources to leverage, and which actions may make the most difference. Homeowners receive targeted suggestions on how to protect their assets, avoid falling victim to fraud, and navigate post-disaster support. Disinformation campaigns are countered by fact-based automated outreach.
Ellencweig, Ben, Jessica Lamb, Jon Spaner, Mihir Mysore, and Jesse Salazar. “How Multimodal AI Could Retool Global Crisis Response.” Edited by Juliette Kayyem and Nate Bruggeman. Belfer Center for Science and International Affairs, Harvard Kennedy School, June 3, 2024.