How we decide who & what we care about & whether robots stand a chance

Dr. Dan Crimston

The movie Blade Runner 2049 explored a dystopian vision of a world where the boundaries between artificial intelligence, biorobotics and humans are blurred, and where humans empathise with robots. Image: Harrison Ford in “Blade Runner 2049.” Stephen Vaughan/Alcon Entertainment, Warner Bros.

When psychologists talk about a “moral circle” they are referring to how far we extend our moral consideration towards others. That is, whether we care about the well-being of others, and act accordingly.

For most of us, the continuum of our moral circle is pretty straightforward: we include our loved ones, and we aren’t all that concerned about rocks or the villains of society. But the middle ground between the obvious ins and the obvious outs is not quite as clear-cut.

In a paper titled “Toward a Psychology of Moral Expansiveness”, published in this February’s issue of Current Directions in Psychological Science, a team of researchers from The University of Queensland, The University of Melbourne and The University of Bath and I synthesised this emerging field of psychological research. We found that our moral circles are a surprisingly multifaceted and impressionable element of our moral cognition.

And historical trends suggest they are expanding, meaning the future of our moral circles may be vastly different from today. Could they one day include robots?

Why moral circles are important

The moral circle is an intuitive concept. We are concerned about the welfare of those inside our moral circle and feel a sense of moral obligation for their treatment. Those on the outside can be subject to indifference at best, and horrific treatment at worst – think the Holocaust, or the cruellest elements of factory farming.

Therefore, our assessment of who is in and who is out is incredibly consequential, and we are confronted with the reality of these decisions every day. Do you feel an obligation to help a homeless person you pass? Are you concerned about the plight of refugees? Or the survival of the great apes?

These issues are frequently presented to us as direct tradeoffs. For example, if you support political policies that champion economic advancement you might be less concerned about the protection of ecosystems that would interfere with such policies.

Our research suggests that how we respond to these ethical challenges is in large part determined by the makeup of our moral circle.

Summer 1944 – the arrival of Hungarian Jews, who were clearly outside the Nazi moral circle, at the German Nazi death camp Auschwitz in Poland. The image is part of the Auschwitz Album.

What determines our moral circle

Whether you include someone or something within your moral circle is more complicated than you may think. When pressed, you may be able to identify whether an entity is worthy of moral consideration, but can you explain why?

Individual differences

As a bedrock, our moral circle judgements are associated with some relatively stable differences at the individual level. For example, including more entities within our moral circle correlates strongly with increased empathy, the ability to take another’s perspective, and the endorsement of egalitarian values.

Similarly, we tend to possess a larger moral circle if our moral instincts centre on the reduction of harm rather than the prioritisation of our in-group. People who identify with all humanity are likely to show greater concern for out-group members, while those who possess a sense of oneness with nature feel a strong moral obligation toward non-human animals and the environment.

Motivation

Beyond individual differences, your moment-to-moment motivations have the power to manipulate your moral circle. For example, if you love animals but also love eating meat, in the moment you are about to tuck into a steak you are likely to deny the moral standing of animals.

Similarly, we are more likely to cast an entity out of our moral circle if its needs conflict with our own, such as when weighing our desire for economically valuable land against habitat protection. Likewise, if resources are scarce – say, during a recession – we are more likely to hold biased attitudes towards out-group members and view them as exploitable.

Perceptions of others

Our perceptions of others are also crucial to their inclusion within the moral circle. First and foremost is the possession of a mind. Can they feel pain, pleasure or fear? If we perceive the answer to be yes, we are far more likely to grant them moral inclusion.

Equally, if groups are dehumanised and perceived to lack fundamental human traits, or objectified and denied personhood, we are far less likely to include them within our moral circle. Consider how stigmatised groups are often portrayed by political leaders, or on social media, and the power this might have in determining their moral inclusion.

Cognitive forces

Finally, our moral circles can be shaped by subtle cognitive forces beyond our conscious awareness. The simple cognitive switch of adopting an inclusion versus an exclusion mindset can have a substantial impact: looking for evidence that something is worthy of moral inclusion produces a smaller moral circle than looking for evidence that it is unworthy.

Similarly, how an entity is framed can be of tremendous consequence. Framing animals as subtly human-like has been shown to reduce speciesism and expand our moral circles.

An impending ethical challenge

History shows that humanity trends toward moral expansion. Time and again, generations consider the moral standing of entities beyond the scope of their ancestors.

In the coming years we will face yet another novel ethical challenge due to the inevitable rise of artificial intelligence. Should robots be granted moral inclusion?

Indeed, some are already beginning to ask these questions. Robots have been awarded citizenship status, and their perceived mistreatment can elicit an emotional response.

Whether we deem robots worthy of moral consideration could depend on whether they meet the criteria outlined above. Do we perceive them to feel pain, pleasure or fear? Are they framed as human-like or entirely artificial? Are we looking for evidence that they should be included in our moral circle, or evidence that they shouldn’t be? And do their needs conflict with our own?

While this issue is guaranteed to be divisive, one cannot deny that it presents a fascinating ethical challenge for our species.

Study: http://journals.sagepub.com/doi/abs/10.1177/0963721417730888


This article was originally published on The Conversation. Read the original article here: https://theconversation.com/how-we-decide-who-and-what-we-care-about-and-whether-robots-stand-a-chance-91987