Global, Ethical, and Legal Concerns of Autonomous Warfare

by Todd Hatcher

In his TED talk “The Decision to Kill Shouldn’t Belong to a Robot”, Daniel Suarez (2013) raises some legitimate concerns. He warns that lethal autonomy erodes representative government and that we lose the humanity in warfare when we legalize it. But Suarez’s proposed solution, a total ban on autonomous weaponry, seems unlikely to be realized. And if machines are given the option to kill on their own initiative, sooner or later we will face the prospect of machine vs. machine. Warfare loses legitimacy if the weapons of all combatants are fully automated. The scenario evokes a radical shift towards cyber efforts and away from “boots on the ground”, where real lives hang daily on decisions made in the field. In a futuristic robot-vs.-robot world, the loss of human life might no longer be the most imminent threat in warfare. So what, if anything, would stop conflicts from becoming perpetual?

It often happens that the final outcry for nations to cease a conflict is prompted by the loss of life, whether military or civilian. In his 1946 foreword to his classic dystopian novel Brave New World, Aldous Huxley writes that should humanity avoid nuclear annihilation, militarily we would be in one of two positions: citizens held hostage by nations possessing atomic weaponry or, should warfare remain small and localized (as it is now), citizens living in fear of perpetual conflict. Amid the lingering global anxiety over nuclear-capable countries declaring war on one another, the idea of autonomous weapons systems giving warfare a permanent avenue to continue around the world is the last thing the populace would wish for.


Even if Daniel Suarez’s ideal of a total global ban on autonomous weapons were adopted, this wouldn’t preclude criminal organizations from obtaining the parts necessary to create their own. For certain high-tech weapons, safeguards are in place to prevent them from falling into the wrong hands. Plutonium and uranium, for example, both elements used in nuclear weapons, are hard to obtain, which largely limits their use by unlawful organizations; both materials are tracked and inventoried worldwide (Glaser & Mian, 2015). In contrast, it would be nearly impossible to trace the robotic parts that can be used to create autonomous weapons. The Islamic State of Iraq and Syria (ISIS) is already arming consumer drones with explosives for the purpose of warfare, with plans to unleash them on crowds of civilians and troops (Feller, 2016).

In 2012, the idea of creating a group of governmental experts on lethal autonomy was floated, with the goal of negotiating an agreement among nations about the use of automated weapons (Gubrud, 2016). With that objective in mind, such a group was convened in November 2017, according to the United Nations Office at Geneva, to address pressing issues concerning autonomous weapons systems (AWS).

At that November 2017 meeting, in a statement delivered to the Group of Governmental Experts on Lethal Autonomous Weapons Systems (LAWS), Mary Wareham spoke on behalf of Human Rights Watch and the Campaign to Stop Killer Robots. She said, “Our campaign . . . fundamentally objects to permitting machines to take a human life on the battlefield or in policing, border control, and other circumstances. For us this is a moral ‘red line’ that should never be crossed.” Although no decision was reached in 2017, the group plans to meet again in early 2018, and the decisions made at that meeting will likely have a long-lasting impact on the future of humanity. Wareham challenged all nations that have not endorsed a ban on AWS to join the movement or explain their rationale. So far, only 20 nations have done so.


Central to safeguarding autonomous weapons is human factors engineering. The field has been careful to emphasize the need for a human–machine interface (HMI), on the understanding that a human must always remain in the control loop; the HMI is an inherent safeguard against full autonomy. But the mere involvement of human beings in decision-making does not, of course, resolve the larger issues. Ethical challenges include the increasing dehumanization of combat and how to minimize collateral damage when a system inevitably does go awry. Unmanned systems (including drones and related technologies) and humans need to evolve together to be optimally effective, and since misconduct on the battlefield is hardly unheard of, this will necessitate specific consensus on ethical values. “Ethical” AWS could, in theory, be created by translating the international laws of warfare into software code that governs their operation. If AWS were also able to perform a series of checks and balances to monitor soldiers for unethical behavior and report infractions, that might further justify their use in battle.
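To make the idea concrete, here is a minimal, purely illustrative sketch of what “translating law into code” might look like. Everything in it, from the class and function names to the confidence threshold, is a hypothetical simplification, not a description of any real system.

```python
from dataclasses import dataclass


@dataclass
class Target:
    classification: str  # e.g., "military" or "civilian" (hypothetical labels)
    confidence: float    # classifier confidence, 0.0 to 1.0


def engagement_permitted(target: Target, human_confirmed: bool) -> bool:
    """Gate a lethal action on coded rules plus a human in the loop."""
    # Principle of distinction: engage only positively identified
    # military targets, refusing anything below a strict confidence bar.
    if target.classification != "military" or target.confidence < 0.99:
        return False
    # Human-in-the-loop safeguard: no engagement without explicit sign-off.
    if not human_confirmed:
        return False
    return True


# A low-confidence identification is refused even with human sign-off.
print(engagement_permitted(Target("military", 0.80), human_confirmed=True))  # False
```

Even this toy example exposes where the difficulty lies: the coded legal principle is only as sound as the classification feeding it, which is precisely the kind of judgment no confidence threshold can fully capture.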


The legalities of combat have been the subject of debate for centuries. Markus Wagner, Associate Professor of Law at the University of Warwick, sums up the legal concerns pertaining to AWS in terms of the principle of distinction, the principle of proportionality, and the question of individual blame (Wagner, 2014). Distinction and proportionality have always evolved and will continue to do so; the main purpose of both is to reduce civilian loss and unnecessary harm to combatants. Distinction requires separating civilian from legitimate military targets to minimize unnecessary damage. Proportionality weighs an operation’s military value against the harm it is likely to cause: Is the target you need to eradicate important enough to justify the expected civilian casualties? A legal question of great importance concerning AWS is who gets blamed when a system malfunctions or misjudges proportionality. There is no doubt that a machine contributing to loss of civilian life would draw criticism; the use of unmanned aerial vehicles, most notably the drones deployed by the United States in countries such as Pakistan, Iraq, and Afghanistan, has stirred controversy worldwide for over a decade.

In 2015, Catherine Sandoval of the California Public Utilities Commission laid out a framework of legal issues that need to be addressed for autonomous systems. Her focus was autonomous ride-sharing technology, yet many of the points she identified have universal relevance: privacy, who holds blame in the event of an accident, insurance, and even who counts as the driver of an autonomous vehicle.


Wagner (2014) makes several proposals on how to deal with concerns around AWS. He points out that even a system of high intelligence is not capable of suffering or feeling remorse for its actions. With this in mind, he suggests that individual programmers could be held accountable through a series of audits the system performs on itself. This way the system could identify exactly where in its code a decision originated and who was responsible for writing that code. Another suggestion is to hold the military officer(s) who made the last decision involving the AWS responsible for the machine’s actions.
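A self-audit of the kind Wagner envisions could plausibly rest on something as simple as a decision provenance log. The sketch below is a hypothetical illustration of that idea only; the module names, the author mapping, and the log format are all invented for the example.

```python
import json
import time

# Hypothetical mapping from decision modules to the programmers
# responsible for them, so an audit can assign accountability.
AUTHORS = {
    "targeting.classifier": "programmer_a",
    "engagement.gate": "programmer_b",
}


def record_decision(module: str, decision: str, rationale: str,
                    log_path: str = "audit.log") -> None:
    """Append an audit record linking a decision to its code and author."""
    entry = {
        "timestamp": time.time(),
        "module": module,                          # where in the code the decision originated
        "author": AUTHORS.get(module, "unknown"),  # who wrote that code
        "decision": decision,
        "rationale": rationale,
    }
    with open(log_path, "a") as log:
        log.write(json.dumps(entry) + "\n")


# Example: trace a hold-fire decision back to the gate module and its author.
record_decision("engagement.gate", "hold fire",
                "target confidence below engagement threshold")
```

The design choice matters more than the code: if every decision is stamped with its originating module and author at the moment it is made, the audit trail Wagner describes exists by construction rather than by forensic reconstruction after the fact.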

As the evolution of technology puts ever more sophisticated weapons at the service of human conflict, we can ill afford to sleepwalk into a chaotic situation where a weapons system with lethal autonomy does malfunction and there is no accountability. Yet even with a human element at some point in the process, ultimately there is no blanket solution to the dilemmas of allowing fully autonomous unmanned systems in combat. There may be merit in the idea that deploying machine proxies rather than humans at the front lines of battle could initially reduce human casualties, but at what cost? Without the terrible incentive of great human loss to end wars and inspire eras of peaceful cooperation, we face Huxley’s frightening prospect of more localized conflicts perpetually on our doorstep.


This has been an excerpt from Age of Robots, Volume 2, Issue 1.



References
Feller, S. (2016, October 12). Pentagon: ISIS arming small drones with explosives [Web log post]. Retrieved from https://www.upi.com/Top_News/World-News/2016/10/12/Pentagon-ISIS-arming-small-drones-with-explosives/1861476249628/

Glaser, A., & Mian, Z. (2015). Global fissile material report 2015: Nuclear weapon and fissile material stockpiles and production. Retrieved from http://fissilematerials.org/library/ipfm15.pdf

Gubrud, M. (2016, June 1). Why should we ban autonomous weapons? To survive [Web log post]. Retrieved from http://spectrum.ieee.org/automaton/robotics/military-robots/why-should-we-ban-autonomous-weapons-to-survive

Sandoval, C. (2015, June). The “sharing” economy: Issues facing platforms, participants, and regulators. Presentation at the Federal Trade Commission. Retrieved from https://www.ftc.gov/system/files/documents/public_events/636241/sandoval.pdf

Suarez, D. (2013). The decision to kill shouldn’t belong to a robot [Video file]. Retrieved from https://www.youtube.com/watch?v=pMYYx_im5QI

United Nations Office at Geneva. (2017, November 10). 2017 Group of governmental experts on lethal autonomous weapons systems (LAWS) [Web log post]. Retrieved from https://www.unog.ch/80256EE600585943/(httpPages)/F027DAA4966EB9C7C12580CD0039D7B5?OpenDocument

Wagner, M. (2014). The dehumanization of international humanitarian law: Legal, ethical, and political implications of autonomous weapon systems. Vanderbilt Journal of Transnational Law, 47, 1371–1424.

Wareham, M. (2017, November). Statement to the Convention on Conventional Weapons Group of Governmental Experts on Lethal Autonomous Weapons Systems in Geneva. Retrieved from https://www.hrw.org/news/2017/11/15/statement-convention-conventional-weapons-group-governmental-experts-lethal