Research Report

Autonomous Weapons and Operational Risk: Ethical Autonomy Project

Paul Scharre
Copyright Date: Feb. 1, 2016
Pages: 55
OPEN ACCESS
https://www.jstor.org/stable/resrep06321

Table of Contents

  1. (pp. 1-1)
  2. (pp. 2-2)
  3. (pp. 3-4)
  4. (pp. 6-7)

    We have two intuitions when it comes to autonomous systems, intuitions that come partly from science fiction but also from our everyday experiences with phones, computers, cars, and myriad other computerized devices.

    The first intuition is that autonomous systems are reliable and introduce greater precision. Just as autopilots have improved air travel safety, automation can also improve safety and reliability in many other domains. Humans are terrible drivers, for example, killing more than 30,000 people a year in the United States alone (roughly the equivalent of a 9/11 attack every month).² Even without fully autonomous cars, more advanced vehicle autopilots...

  5. (pp. 8-17)

    In order to better understand the risks associated with autonomous weapons, we can first examine the nature of control over autonomous systems in general. An autonomous system is one that, once activated, performs a task on its own. Everyday examples range from simple systems like toasters and thermostats to more sophisticated systems like automobile intelligent cruise control or airplane autopilots. The risk in employing an autonomous system is that the system might not perform the task in a manner that the human operator intended.

    There are a number of reasons why an autonomous system might begin performing inappropriately, from simple...

  6. (pp. 18-24)

    Autonomous weapons are a special kind of autonomous system. In autonomous weapon systems, the task being performed is selecting and engaging targets on the battlefield. Once activated, an autonomous weapon will select and engage targets on its own. It selects targets according to preprogrammed criteria written by humans, but human operators have not chosen the specific targets to be engaged.

    The risk in using an autonomous weapon is that it selects and engages targets other than what the human operator intended. This could result in fratricide, civilian casualties, or unintended escalation in a crisis. The U.S. Department of Defense policy...

  7. (pp. 25-33)

    It is tempting to think that risk can be designed out of complex systems, but this is not the case. Risk can be reduced but never entirely eliminated. It is impossible to account for all of the possible interactions that can occur in sufficiently complex systems.

    As a result, complex systems are potentially vulnerable to system failures due to components interacting in unexpected or nonlinear ways. These can stem from interactions within the system itself, with human operators, or with its environment.

    When there is sufficient “slack” in the system in terms of time between interactions and the ability for...

  8. (pp. 34-37)

    While military systems can exhibit the same kind of complexity that leads to normal accidents, they differ from nuclear power plants, airliners, or spacecraft in one crucial way: Military systems operate in a competitive environment against an adversary. Everyone involved in the operation of a spacecraft or nuclear power plant is trying to get the system to operate safely. There is no enemy out to sabotage its operation. However, for militaries, adversaries are not merely incidental to the system’s operation, they are its very reason for being.

    This added competitive dimension increases the possible ways in which failures can occur....

  9. (pp. 38-40)

    Much of the discourse on autonomous weapons to date has focused on whether their use would be legal and ethical, but an equally important question is whether they could be used safely. Even if they could be used in a manner that is lawful and ethical under most operating conditions, it is conceivable that they could be quite dangerous. The consequences of a failure with some types of autonomous weapons could be catastrophic. Autonomous weapons, like other complex systems, are susceptible to failure. While better design, testing, and operator training can decrease the likelihood of these failures, they cannot be...

  10. (pp. 41-48)

    Human-machine teaming is a better approach than using humans or autonomous systems alone, bringing to bear the unique advantages of each. Understanding how human-machine teaming, or “centaur warfighting,” might work in engagement decisions requires first disaggregating the different roles a human operator performs today with respect to selecting and engaging enemy targets.

    In today’s semi-autonomous weapon systems, humans currently perform three kinds of roles with respect to target selection and engagement. In some cases, human operators perform multiple roles simultaneously.

    The human as essential operator: The weapon system cannot accurately and effectively complete engagements without the human operator.

    The human...

  11. (pp. 49-52)

    While careful risk assessments of autonomous weapons are essential, policymakers, independent experts, and military professionals should be skeptical of their confidence level in any estimation of the risk of employing an autonomous weapon. Understanding risks associated with low probability, high consequence events is notoriously difficult, and militaries’ track records in managing risks of this type are mixed at best.

    Simply accurately estimating the risk of a low probability accident can be exceedingly challenging. In an appendix to the official report on the Challenger accident, Nobel prize–winning physicist Richard Feynman noted the wide disparity of views within NASA regarding the...

  12. (pp. 53-54)

    War is a hazardous endeavor. Militaries must balance various kinds of risk—risk to their forces, the mission, their citizens, innocent civilians, and possibly to the state itself. Military personnel risk their own lives in combat and, in some wars, the state’s very survival may be at stake. States also balance risk among strategic objectives: deterrence, defense, and crisis stability. Military forces must be ready to respond at a moment’s notice to provocation, for example, but not on such a hair-trigger that they create a crisis or cause one to escalate unnecessarily. Militaries will come to different conclusions on how...