Analysis & Opinions - Daedalus

The Moral Dimension of AI-Assisted Decision-Making: Some Practical Perspectives from the Front Lines

Spring 2022

This essay takes an engineering approach to ensuring that the deployment of artificial intelligence does not confound ethical principles, even in sensitive applications like national security. There are design techniques in all three parts of the AI architecture (algorithms, data sets, and applications) that can be used to incorporate important moral considerations. The newness and complexity of AI therefore cannot serve as an excuse for immoral outcomes when it is deployed by companies or governments.

One of the most frequent questions I was asked as U.S. Secretary of Defense (2015–2017) was whether there would be autonomous lethal weapons. My answer was no: the U.S. Department of Defense (DOD) would not deploy or use truly autonomous systems in the application of lethal force. Being technically inclined, I had established the Pentagon’s official policy in a memorandum back in 2012, when I was Deputy Secretary. When conceiving of this directive, I had imagined myself standing in front of the news cameras the morning after innocent bystanders had been killed in an airstrike aimed at terrorists or opposing combatants. And suppose I answered, in response to obvious and justified interrogation over responsibility: “It’s tragic, but it’s not our fault: the machine did it.” This reply would rightly be regarded as unacceptable and immoral.

What, then, can ethically “justify” the risk of a terrible error made in the application of artificial intelligence?1 In one sense, nothing, of course. Yet as a practical matter, AI is going to be used, and in an ever-widening set of applications. So what can bound moral error? Algorithm design? Data set selection and editing? Restricting or even banning use in sensitive applications? Diligent, genuine, and documented efforts to avoid tragedies? To some extent, all of these.2 The fact that there are practical technical approaches to the responsible use of AI is paramount to national defense. AI is an important ingredient in the necessary transformation of the U.S. military’s armamentarium toward greater use of new technologies, almost all of them AI-enabled in some way.

This essay takes a technical rather than a legal approach to AI ethics. It explores some practical methods to minimize and explain ethical errors. It provides some reasons to believe that the good to be obtained by deployment of AI can far outweigh the ethical risks.


For Academic Citation: Carter, Ash. “The Moral Dimension of AI-Assisted Decision-Making: Some Practical Perspectives from the Front Lines.” Daedalus, Spring 2022.