In his TED Talk “The Decision to Kill Shouldn’t Belong to a Robot,” Daniel Suarez (2013) makes legitimate points, such as the crumbling of representative government and the loss of humanity in warfare that would come with legalizing lethal autonomy. He asserts that a total ban on autonomous weaponry would curb its use in warfare; however, this does not seem like a feasible option. Although the first predicament he discusses is whether machines should have the option to kill on their own, there is little doubt that this trend would almost certainly lead to machine fighting machine. Warfare loses legitimacy if all weapons from all nations involved are automated. This could signal a shift in warfare toward cyber efforts rather than boots, or autonomy, on the ground. If future warfare were robot versus robot, the loss of human life would likely no longer be the most imminent threat. So what, if anything, would stop conflicts from becoming perpetual?

It often happens that the final outcry for nations to stop a conflict is the loss of life, whether civilian or military. In the foreword to his classic science fiction novel Brave New World, Aldous Huxley writes that should humanity avoid nuclear annihilation, it would find itself in one of two military positions: citizens held hostage by nations possessing atomic weaponry or, should warfare remain small and localized (as it is now), citizens living in fear of perpetual conflict. Huxley wrote this in 1946. As global sentiment turns to fear of nuclear-capable countries declaring war on one another, the idea of autonomous weapons systems giving warfare a permanent avenue to continue around the world is the last thing the populace wants to face.

Another point to consider is that even a total global ban on autonomous weapons would not necessarily prevent criminal organizations from obtaining the parts needed to build their own. With other types of weapons, there are safeguards in place to prevent them from falling into the wrong hands. For example, plutonium and uranium, the elements used in nuclear weapons, are hard to obtain, which largely limits their use by unlawful organizations. Both materials are tracked and inventoried worldwide (Glaser & Mian, 2015).

It would be nearly impossible to trace the robotic parts that can be used to build autonomous weapons. The Islamic State of Iraq and Syria (ISIS) is already arming consumer drones with explosives for the purpose of warfare, with plans to unleash them on crowds of civilians and troops (Feller, 2016). The idea of creating a group of governmental experts on lethal autonomy was first proposed in 2012; the group’s goal would be to negotiate an agreement among nations on the use of automated weapons (Gubrud, 2016). With that objective in mind, the group was called into action in November 2017, according to the United Nations Office at Geneva, to address impending issues concerning autonomous weapons systems (AWS).

This past November, in a statement delivered to the Group of Governmental Experts on Lethal Autonomous Weapons Systems (LAWS), Mary Wareham spoke on behalf of Human Rights Watch and the Campaign to Stop Killer Robots. She said, “…our [campaign] fundamentally objects to permitting machines to take a human life on the battlefield or in policing, border control, and other circumstances. For us this is a moral ‘red line’ that should never be crossed.” Although no decisions were reached this year, the committee plans to meet again in early 2018, and the decisions made there will have a long-lasting impact on the future of humanity. Wareham challenged every nation that has not yet endorsed a ban on AWS to join the movement or explain its rationale; so far, only 20 nations have done so.

At the foundation of safeguarding the future of autonomous weapons is human factors engineering, because the field will always emphasize that these weapons need a human-machine interface (HMI), which implies that a human must remain in the control loop. This is an inherent safeguard against full autonomy. Keeping a person involved in decision-making also brings ethics to the forefront of the debate. Ethical challenges include the eventual dehumanization of combat and how to minimize collateral damage when a system eventually does go awry. Unmanned systems, drones included, and humans need to evolve together to be optimally effective, which requires sharing similar ethical values. Misconduct on the battlefield is not unheard of. “Ethical” AWS could theoretically be created by translating the international laws that pertain to warfare into code. That software could then make decisions without the capacity for hatred or revenge that humans have. If AWS could also perform a series of checks to monitor soldiers for unethical behavior and report infractions, that capability could help justify their use in battle.

Legal constraints on combat have existed for centuries. As Markus Wagner (2014), associate professor of law at the University of Miami School of Law, points out, the legal concerns surrounding AWS center on the principle of distinction, the principle of proportionality, and the question of individual blame. The principles of distinction and proportionality have been, and will remain, ever-evolving; the main purpose of both is to reduce civilian loss and unnecessary harm during combat. Distinction is exactly what it sounds like: distinguishing between civilian and legitimate military targets to minimize damage. The principle of proportionality essentially asks whether the operation is worth the loss that might occur: is the military target you need to eradicate important enough to justify civilian casualties? A legal question of great importance for AWS is who gets the blame when a system malfunctions or miscalculates proportionality. There is no doubt that a machine contributing to the loss of civilian life would draw criticism. The use of unmanned aerial vehicles, more famously “drones,” by the United States in countries such as Pakistan, Iraq, and Afghanistan has stirred controversy worldwide for over a decade.
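To make the idea of “translating law into code” concrete, the two principles above can be sketched as explicit checks. This is a minimal, hypothetical illustration: the field names, and especially the crude harm-versus-advantage comparison standing in for proportionality, are assumptions for demonstration only, not how real targeting systems or the law actually work.

```python
# Hypothetical sketch: the principles of distinction and proportionality
# written as coded rules. All names and the numeric comparison are
# illustrative assumptions, not a real legal or military standard.
from dataclasses import dataclass

@dataclass
class Strike:
    target_is_military: bool       # principle of distinction
    expected_civilian_harm: float  # estimated collateral damage
    military_advantage: float      # value of eliminating the target

def strike_lawful(s: Strike) -> bool:
    """Apply the coded rules in order: distinction first, then proportionality."""
    if not s.target_is_military:
        # Distinction: a civilian object is never a legitimate target.
        return False
    # Proportionality (crude stand-in): expected civilian harm must not
    # be excessive relative to the anticipated military advantage.
    return s.expected_civilian_harm <= s.military_advantage
```

Even this toy version shows why the essay’s blame question is hard: if the proportionality rule is miscalibrated, the fault lies in a threshold someone chose to write, which is exactly the accountability problem Wagner raises.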

In 2015, Catherine Sandoval, a commissioner with the California Public Utilities Commission, laid out a framework of legal issues that need to be addressed for autonomous systems. Although she was speaking about autonomous ride-sharing technology, her concerns map directly onto military technology: privacy, who holds blame in the event of an accident, insurance, and even who counts as the driver of an autonomous vehicle are all facets of the same problem of regulating autonomous weapons.

Wagner (2014) presents several theories on how to handle this issue. He points out that even a system with high intellectual capability is not capable of suffering or feeling remorse for its actions. With this in mind, he suggests that the programmer could be held responsible: through a series of self-audits, the system could trace exactly where in its code a decision originated and identify who wrote that code. Another suggestion is to hold the military officers who made the final decision involving the AWS responsible for the machine’s actions.
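Wagner’s self-audit idea can be sketched as a decision log that records, for every check the system runs, which rule fired and who authored it. This is a hypothetical illustration of the concept only; the rule names, authors, and data fields below are invented for the example.

```python
# Hypothetical sketch of a self-auditing decision trail: each rule carries
# its author's name, and every decision records which rule permitted or
# denied engagement. All names here are illustrative assumptions.
from dataclasses import dataclass, field
from typing import Callable, Dict, List

@dataclass
class Rule:
    name: str
    author: str                              # programmer accountable for this rule
    applies: Callable[[Dict], bool]          # does this rule permit engagement?

@dataclass
class AuditLog:
    entries: List[str] = field(default_factory=list)

    def record(self, rule: Rule, allowed: bool) -> None:
        verdict = "permit" if allowed else "deny"
        self.entries.append(f"{rule.name} (author: {rule.author}) -> {verdict}")

def decide(rules: List[Rule], situation: Dict, log: AuditLog) -> bool:
    """Engagement is permitted only if every rule permits it; each check is logged."""
    permitted = True
    for rule in rules:
        allowed = bool(rule.applies(situation))
        log.record(rule, allowed)
        if not allowed:
            permitted = False
    return permitted
```

After a malfunction, the log would show which coded rule produced the outcome and whose code it was, which is the traceability Wagner’s proposal depends on.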

Ultimately, the goal is to avoid a chaotic scene of blamelessness when the time does come that a machine malfunctions. There is no blanket solution to allowing fully autonomous unmanned systems in combat; what is appropriate in one instance may be unnecessary in another. The idea of ending conflict and reducing casualties is noble, but when it introduces the possibility of perpetual warfare, it is tough to endorse.


Feller, S. (2016). Pentagon: ISIS Arming Small Drones With Explosives. Retrieved from

Glaser, A. & Mian, Z. (2015). Global Fissile Material Report 2015: Nuclear Weapon and Fissile Material Stockpiles and Production. Retrieved from

Gubrud, M. (2016). Why Should We Ban Autonomous Weapons? To Survive. Retrieved from

Sandoval, C. (2015). The “Sharing” Economy: Issues Facing Platforms, Participants, and Regulators. Retrieved from

Suarez, D. (2013, June 13). The Decision to Kill Shouldn’t Belong to a Robot [Video file]. Retrieved from

United Nations Office at Geneva. (2017). 2017 Group of Governmental Experts on Lethal Autonomous Weapons Systems (LAWS). Retrieved from

Wagner, M. (2014). The Dehumanization Of International Humanitarian Law: Legal, Ethical, And Political Implications Of Autonomous Weapon Systems. Vanderbilt Journal of Transnational Law, 47(5), 1399.

Wareham, M. (2017). Statement to the Convention on Conventional Weapons Group of Governmental Experts on Lethal Autonomous Weapons Systems in Geneva. Retrieved from