Paper - Belfer Center for Science and International Affairs, Harvard Kennedy School

Artificial Intelligence and National Security

Authors:
  • Greg Allen
  • Taniel Chan

July 2017

Project Overview

Partially autonomous and intelligent systems have been used in military technology since at least the Second World War, but advances in machine learning and Artificial Intelligence (AI) represent a turning point in the use of automation in warfare. Though the United States military and intelligence communities are planning for expanded use of AI across their portfolios, many of the most transformative applications of AI have not yet been addressed. 

In this piece, we propose three goals for developing future policy on AI and national security: preserving U.S. technological leadership, supporting peaceful and commercial use, and mitigating catastrophic risk. By looking at four prior cases of transformative military technology—nuclear, aerospace, cyber, and biotech—we develop lessons learned and recommendations for national security policy toward AI.

Executive Summary

Researchers in the field of Artificial Intelligence (AI) have demonstrated significant technical progress over the past five years, far faster than was previously anticipated.

  • Most of this progress is due to advances in the AI sub-field of machine learning.
  • Most experts believe this rapid progress will continue and even accelerate.

Most AI research advances are occurring in the private sector and academia.

  • Private sector funding for AI dwarfs that of the United States Government.

Existing capabilities in AI have significant potential for national security.

  • For example, existing machine learning technology could enable high degrees of automation in labor-intensive activities such as satellite imagery analysis and cyber defense.

Future progress in AI has the potential to be a transformative national security technology, on a par with nuclear weapons, aircraft, computers, and biotech.

  • Each of these technologies led to significant changes in the strategy, organization, priorities, and allocated resources of the U.S. national security community.
  • We argue future progress in AI will be at least equally impactful.

Advances in AI will affect national security by driving change in three areas:
military superiority, information superiority, and economic superiority.

  • For military superiority, progress in AI will both enable new capabilities and make existing capabilities affordable to a broader range of actors.
    • For example, commercially available, AI-enabled technology (such as long-range drone package delivery) may give weak states and non-state actors access to a type of long-range precision strike capability.
  • In the cyber domain, activities that currently require large amounts of highly skilled labor, such as Advanced Persistent Threat operations, may in the future be largely automated and easily available on the black market.
  • For information superiority, AI will dramatically enhance capabilities not only for the collection and analysis of data, but also for the creation of data.
    • In intelligence operations, this will mean that there are more sources than ever from which to discern the truth. However, it will also be much easier to lie persuasively.
    • AI-enhanced forgery of audio and video media is rapidly improving in quality and decreasing in cost. In the future, AI-generated forgeries will challenge the basis of trust across many institutions.
  • For economic superiority, we find that advances in AI could result in a new industrial revolution.
    • Former U.S. Treasury Secretary Larry Summers has predicted that advances in AI and related technologies will lead to a dramatic decline in demand for labor such that the United States “may have a third of men between the ages of 25 and 54 not working by the end of this half century.”
    • Like the first industrial revolution, this will reshape the relationship between capital and labor in economies around the world. Growing levels of labor automation might lead developed countries to experience a scenario similar to the “resource curse.”
    • Also like the first industrial revolution, population size will become less important for national power. Small countries that develop a significant edge in AI technology will punch far above their weight.

We analyzed four prior cases of transformative military technologies – nuclear, aerospace, cyber, and biotech – and generated “lessons learned” for AI.

  • Lesson #1: Radical technological change begets radical government policy ideas
    • As with prior transformative military technologies, the national security implications of AI will be revolutionary, not merely different.
    • Governments around the world will consider, and some will enact, extraordinary policy measures in response, perhaps as radical as those considered in the early decades of nuclear weapons.
  • Lesson #2: Arms races are sometimes unavoidable, but they can be managed
    • In 1899, fears of aerial bombing led to an international treaty banning the use of weaponized aircraft, but this voluntary restraint was quickly abandoned and did not stop air war in WWI.
    • The applications of AI to warfare and espionage are likely to be as irresistible as aircraft. Preventing expanded military use of AI is likely impossible.
    • Though outright bans of AI applications in the national security sector are unrealistic, the more modest goal of safe and effective technology management must be pursued.
  • Lesson #3: Government must both promote and restrain commercial activity
    • Failure to recognize the inherent dual-use nature of technology can cost lives, as the example of the Rolls-Royce Nene jet engine shows.
    • Having the largest and most advanced digital technology industry is an enormous advantage for the United States. However, the relationship between the government and some leading AI research institutions is fraught with tension.
    • AI policymakers must effectively support the interests of both constituencies.
  • Lesson #4: Government must formalize goals for technology safety and provide adequate resources
    • In each of the four cases, national security policymakers faced tradeoffs between safety and performance, but the government was more likely to respond appropriately to some risks than to others.
    • Across all cases, safety outcomes improved when the government created formal organizations tasked with improving the safety of their respective technology domains and appropriated the needed resources.
    • These resources include not only funding and materials, but talented human capital and the authority and access to win bureaucratic fights.
    • The United States should consider standing up formal research and development organizations tasked with investigating and promoting AI safety across the entire government and commercial AI portfolio.
  • Lesson #5: As technology changes, so does the United States’ National Interest
    • The declining cost and complexity of bioweapons led the United States to change its bioweapons strategy from aggressive development to voluntary restraint.
    • More generally, the United States has a strategic interest in shaping the cost, complexity, and offense/defense balance profiles of national security technologies.
    • As the case of stealth aircraft shows, targeted investments can sometimes allow the United States to affect the offense/defense balance in a domain and build a long-lasting technological edge. 
    • The United States should consider how it can shape the technological profile of military and intelligence applications of AI.

Taking a “whole of government” frame, we identify three goals for U.S. national security policy toward AI technology and offer 11 recommendations.

  • Preserve U.S. technological leadership
    • Recommendation #1: The DOD should conduct AI-focused war-games to identify potential disruptive military innovations.
    • Recommendation #2: The DOD should fund diverse, long-term-focused strategic analyses on AI technology and its implications.
    • Recommendation #3: The DOD should prioritize AI R&D spending areas that can provide sustainable advantages and mitigate key risks.
    • Recommendation #4: The U.S. defense and intelligence community should invest heavily in “counter-AI” capabilities for both offense and defense.
       
  • Support peaceful use of the technology
    • Recommendation #5: DARPA, IARPA, the Office of Naval Research, and the National Science Foundation should be given increased funding for AI-related basic research.
    • Recommendation #6: The Department of Defense should release a Request for Information (RFI) on Dual-Use AI capabilities.
    • Recommendation #7: In-Q-Tel should be given additional resources to promote collaboration between the national security community and the commercial AI industry.
       
  • Manage catastrophic risks
    • Recommendation #8: The National Security Council and State Department should study what AI applications the United States should seek to restrict with treaties.
    • Recommendation #9: The Department of Defense and Intelligence Community should establish dedicated AI-safety organizations.
    • Recommendation #10: DARPA should fund research on fail-safe and safety-for-performance technology for AI systems.
    • Recommendation #11: NIST and the NSA should explore technological options for countering AI-enabled forgery.
For Academic Citation: Allen, Greg and Taniel Chan. “Artificial Intelligence and National Security.” Paper, Belfer Center for Science and International Affairs, Harvard Kennedy School, July 2017.
