As we have seen from recent reports on sex robots and newspaper articles discussing the impending job losses that robotics and AI may bring, the introduction of robots into our society is accelerating, and it will change the way we live like nothing before it. Robots already have an enormous presence across different sectors of society, from drones used by militaries for unmanned attacks, to robotic security guards patrolling malls, to personal robots that are changing how we settle into our homes. Millions of people around the world already use these robots on a day-to-day basis, and as the technology develops, that number will only grow.

As we prepare for the widespread integration of robots into our lives, it is of paramount importance that ethics be considered in the design, development, and implementation of these technologies. Addressing ethical issues means taking stock of the impact these innovations will have on societal values like safety, security, privacy, and wellbeing. Robots are human creations, and ensuring their ethical development therefore requires human accountability. This means there is a responsibility to anticipate and weigh the costs and benefits of each new development, and to consider proactively the potential issues a new technology may cause rather than trying to contain the effects of robots after their introduction into society.

Moral values must be embedded in the design process of robots, and it is therefore important to consider which values should and will be promoted in their application – a consideration made significantly more difficult when accounting for differences in values across cultures. Designers are responsible both to the end users of their product and for the actions the robot takes. Because we are still at the point where robots are simply tools with no moral intelligence, responsibility falls on the humans behind them – from those who design the machines to the policymakers who govern their implementation in society.

We examine three main questions in our discussion of robot ethics here. First, how do we act ethically in robotics applications? Privacy is a primary concern, particularly with personal robots that have a place in the home. The degree of surveillance possible through these home robots is increasing quickly: they can listen to you, watch you, and learn your patterns. Furthermore, these devices have greater access to data about you than ever before, from the temperature and lights in your house to the social media profiles they sync with. And treating these home robots as social figures deepens privacy concerns further. Humans have a tendency to anthropomorphize robots, particularly those we are in close contact with, and in doing so we react to the robot as if it were another person in the room. This sense of another presence erodes whatever solitude or privacy one might have in one's own room.

Additionally, the use of robots in settings such as schools and nursing homes raises concerns about social isolation. While the technology has considerable potential to help humans, serious ethical issues may arise if users come to rely on robots at the expense of human contact, whether by choice or by institutional design. Exclusive use of machines, or even their prioritization over other people, could erode the ability to form social bonds – an issue that should be weighed before robots are deployed in these settings.

The second question, which some researchers have set out to address, is: can we design robots that act ethically themselves? As robots gain autonomy and interact, and even cooperate, with humans in the unpredictable and unstructured environments in which we live, we must ensure human physical and psychological safety. Some scholars believe that the best way to protect humans from harm is to program robots to behave in ways that appear ethically correct. However, while it is essential to equip a robot with rules that make it safe for humans, it would be intellectually misleading to say that the machine is actually being ethical. Giving a machine a list of operating rules does not make it a moral agent.

Moreover, it has even been suggested that a robot could be better at ethics than humans, since it does not carry the burden of emotions and relationships. Others argue, however, that ethical reasoning requires emotional understanding: the ability to empathize and to understand what it means to suffer.

The final question we consider is: do we need to treat robots ethically? Some people hold that robots deserve ethical treatment because they attribute to robots rights like those of animals and humans. Much of this thinking is engendered by anthropomorphism, the tendency to project human attributes onto inanimate objects. Some scholars believe that robots will one day have to be granted rights, once technological developments make them equal to humans. This is philosophically interesting, but in reality we do not know when, or whether, that will happen.

Others believe that robots should be treated ethically because of the impact on humans if they are not. The main concern is that the abuse or mistreatment of robots may carry over into human interactions. A contemporary example, raised in a recent report by the Foundation for Responsible Robotics, is the case of sex robots. Some scholars have discussed the notion of “robot rape”. Although a sex robot cannot give consent, being inanimate, it can be programmed to resist sexual advances, allowing a human to overpower it and simulate rape. Though such an act does no direct harm to a living being, there is an argument that it may normalize sexual violence and broadly affect future sexual practices when it comes to consent from a human sexual partner.

None of these issues has a clear resolution, and none should be considered by only one link in the chain of designing and implementing these robots. The Foundation for Responsible Robotics aims to promote the responsible design, development, implementation, and regulation of the robots that are becoming increasingly embedded in our society. By bringing together roboticists, ethicists, legal scholars, and leading thinkers in the fields of technology and ethics, the FRR hopes to engage policymakers with the potential risks of new robotic technologies, and to ensure that social accountability and responsibility are primary concerns in shaping the policy that will regulate the widespread adoption of robots into society.

