Considering Ethical and Societal Implications of Robotics and AI

In reality and science fiction, there is no easy answer

In this question-and-answer session, a computer scientist, a philosopher and a novelist share diverse views on the promise and peril of robotics and AI. Robert Venditti, a novelist and comic book writer, is best known for his graphic novel The Surrogates, in which robotic surrogates handle life’s obligations while the characters’ real selves experience life virtually from the safety of home. Ryan Jenkins is an assistant professor of philosophy at California Polytechnic State University in San Luis Obispo who studies the ethics of emerging technologies such as driverless cars and killer robots. Lynne Parker is associate dean of engineering at the University of Tennessee, Knoxville, and co-led a White House-commissioned task force that created the U.S. National Artificial Intelligence Research and Development Strategic Plan.


Question: Lynne, at a high level, what do you advise policy makers on robotics- and AI-related issues?

Lynne Parker: Industry alone isn’t going to solve all the issues raised by robotics and AI, because the related societal challenges don’t always have a market driver. So there are important public issues that need research investment, and policy makers need to appreciate this. At the same time, because robotics and AI have the power to alter how we live, learn and work, policy makers need to understand the implications of developing these technologies. They need to understand what the technology can do in order to imagine how it could be used in unexpected ways, so we can address those issues.

This argues for a new way of thinking about education. We need to ensure that people learn critical thinking and creativity – skills less likely to be replaced by AI. People also need basic computational skills to understand the rudiments of how AI systems work, both to trust and accept those systems and to detect when they are being used to deceive us, say, via altered news footage. This is a policy issue, and the people making local decisions about curricula need to recognize that.

Question: Ryan, in one of your academic papers, you suggest that we face a challenge in programming AI-enabled agents to reliably make moral decisions. How can we hope to implement AI that reliably makes good moral decisions when humans themselves often don’t?

Ryan Jenkins: I think the more pertinent question is whether we can trust AI to make moral decisions as well as or better than humans do. Computers and AI are not human, so they are going to navigate the moral universe differently than we do. That raises questions about how we can trust them to do the right thing if they understand morality in a totally different way than we do – or if they don’t understand it at all. Perhaps a better way of framing the issue is not whether AI can be programmed with an intrinsic ability to make moral decisions, but whether we are aware of, and can shape, the morally laden impacts of AI decision-making and algorithmic outcomes. We already know AI can exhibit systematic biases that reflect traditional dimensions of injustice in society. So we must review these approaches and their outcomes, and let what we learn lead to better algorithmic designs and results.

Question: Ryan, is it fair to say that security and privacy are widely shared goals among creators and users of AI?

Ryan Jenkins: I’m tempted to say there’s a greater concern for privacy and security among the people who rely on AI than among the people who create it. That’s revealed when we see the lax security practices of companies and government entities that should know better. AI users – the public – seem resigned to a world where their personally identifiable information is on the internet, though that’s not the best of all possible worlds. That said, technologists who create AI appreciate that the value of related products increases in proportion to the amount of data they gather. So there’s a tension between the user’s desire for privacy and AI’s need for data to be effective.

Question: Lynne, how do you view the apparent trade-offs between AI and the issues of security and privacy?

Lynne Parker: Let’s make an important distinction here. AI software relies on lots of data, but the protection of that data relies on good old-fashioned software that may be buggy, or on sub-standard cybersecurity practices. This leads to a false sense that we must relinquish our privacy in order to use AI. In fact, people are working on AI systems that can create smarter cybersecurity. One can imagine an “AI protector” wrapped around every database, constantly vigilant against unauthorized intrusions, or AI systems in a development environment that try to hack into the database software – or the cybersecurity software the database sits behind – so developers can find vulnerabilities. If AI depends on massive amounts of data, and that data is properly protected, then AI and privacy are not an either-or proposition.

Question: Robert, another take on data privacy is that today we willingly surrender some privacy in exchange for value received. How do you balance convenience and caution relative to AI in life or fiction?

Robert Venditti: People love their 99-cent downloads. We’re all willing to trade some level of privacy for the sake of convenience. When a search engine suggests another download we might like based on AI analysis, it can be rather startling … but we download it anyway. I wrote a book titled The Homeland Directive about a surveillance state and how much privacy we sacrifice on a daily basis for convenience or perceived safety. As in much of science fiction, the drama derives from how humans react to moral dilemmas created by advanced technology. Through my stories’ characters, I’m always wrestling with themes I want to explore. Ultimately, I raise questions that might not be answerable. For me, that’s where the drive and inspiration to write come from – spending time exploring issues for which I don’t have an answer.

Question: Lynne, Ryan, did science fiction play a role in making you scientists and ethicists?

Ryan: I don’t know that it pushed me to be a philosopher, but I have enjoyed reading my fair share of science fiction. It’s fun to take one particular technological possibility, tweak it and say, “What if human beings had access to a device that did something like this?” – just to see how human beings would behave against that kind of backdrop. That’s what I find so interesting about science fiction.

Lynne: When I look at science fiction, I don’t think about a particular book, TV show or movie; I think about the creativity that’s exhibited. As a genre it’s so good at pushing the envelope of what’s possible that it helps me think outside the box – about how I can be really radical and creative in my own work.

Question: Robert, what about the impact of science fiction on technology?

Robert: I’ve been contacted over the years by people writing dissertations or scholarly journal articles about the post-human experience, but it’s hard for me to say whether somebody is sitting down and trying to create a surrogate right now. As a writer, you have to be thinking about what’s next more than what’s already here. You have to look at the thing that exists now, take it two, three, four steps down the road and imagine what it could become. I can see how that would spark ideas for technological development, which in turn would spark ideas for more science fiction stories. It’s a symbiotic relationship.

Robert Venditti, Ryan Jenkins and Lynne Parker will provide insight into the ethical and societal implications of technology in scientific fact and science fiction at the annual SXSW Conference, March 9-18, 2018, in Austin. Their session, A Roboticist, Ethicist and Novelist Walk into a Bar, is part of the IEEE Tech for Humanity Series at SXSW. For more information, please see http://techforhumanity.ieee.org.