How Old Biases Are Infiltrating New Autonomous Technologies

 

Nita Patel

 


Artificial intelligence (AI) is beginning to show racist and sexist associations, and it learned them by watching us.

 

In a world where technology leadership skews 80% male and 83% white, it is clearly time for the global technical community to confront how unconscious biases may be penetrating autonomous systems. Evidence is growing that AI algorithms rely on data and benchmarks that encode biases much of the world has spent decades trying to move away from.

We face a choice today: either we recognize the biases that haunt our daily culture and work to counteract their further proliferation through AI, or we relive our past mistakes.

 

Representative of the Real World

The reality is that technology leadership remains predominantly white and male, and AI is reflective of that reality. And everybody has biases, both conscious and unconscious. They exist, and we are not going to rid the world of them. The key is to be aware of the pervasiveness of bias and control for it.

But while there have been large-scale training and consciousness-raising efforts to counteract bias in processes such as enterprise human resources, AI is developing largely without such intentional efforts. We are becoming increasingly reliant on cutting-edge autonomous systems without understanding which cultural impressions we are building into them.

How much of a problem is it? At this point, it appears that no one knows. “We are still trying to understand the full impact that comes from the many AI systems using these biased embeddings,” says James Zou, an assistant professor at Stanford University who researched the issue while at Microsoft, as quoted in MIT Technology Review.
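
The word-embedding bias Zou describes can be probed directly. The sketch below is illustrative rather than drawn from his research: it assumes the gensim library and the publicly available word2vec vectors trained on Google News, and it projects a handful of occupation words onto a crude gender direction. The single word pair used for the direction and the choice of occupation words are simplifying assumptions; published studies average over many such pairs.

import numpy as np
import gensim.downloader as api

# Load pretrained word2vec vectors (a large download on first use).
model = api.load("word2vec-google-news-300")

def cosine(a, b):
    # Cosine similarity between two vectors.
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# A crude "gender direction": the difference between one gendered pair.
gender_direction = model["she"] - model["he"]

# Positive scores lean toward "she", negative toward "he".
for word in ["homemaker", "nurse", "programmer", "engineer"]:
    print(f"{word:>12}: {cosine(model[word], gender_direction):+.3f}")

In published analyses of these vectors, words such as "homemaker" and "nurse" sit measurably closer to the feminine end of such a direction. That is exactly the kind of learned association that downstream systems built on these embeddings can silently inherit.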

 

The Not-So-Good Old Days

It appears especially likely that unconscious bias could permeate developing AI through core, widely reused components, such as shared datasets and pretrained models, that many systems build upon. In this way, we are creating AI entities with human biases and then empowering them to make choices proactively, perpetuating stereotypes that we have spent years trying to uproot or alter.

One study of gender bias in the open-source software community, for example, looked at nearly 1.4 million users and showed that women’s contributions were more frequently accepted than men’s—but only if the contributors were not identifiable as women. Another study found that a public dataset of photos reflected traditional gender stereotypes.

Particularly for poorer people, minorities, and other traditionally marginalized groups, the stakes are high, given that AI and its algorithms are already being used in financial and legal decision-making.

Now What?

There is some debate about whether AI should be engineered to create an arguably more desirable world than the imperfect real one. Could a well-intentioned effort to root bias out of algorithms ultimately make AI a less useful tool for humanity? There is still much to be learned about unconscious bias and how it moves through the world; even in behavioral science it remains an evolving area of study.

So, what exactly is there to be done within the global community of AI developers today? The challenge now is to consider whether algorithms are undoing social progress made around the world, and whether rules need to be created around AI to ensure it behaves in the ways we hope it will. We are only in the early stages of understanding unconscious bias and how it may be seeping into the development of autonomous systems. Now is the time to be looking at the problem, monitoring it, and seeking opportunities to expand representation in our tech tools.

The Institute of Electrical and Electronics Engineers (IEEE) is helping technologists learn from one another in areas such as this one. I am scheduled to join a panel discussion on “Algorithms, Unconscious Bias & AI,” which is part of the IEEE Tech for Humanity Series, being held at South by Southwest in Austin, Texas. And there are a number of ongoing IEEE activities in this vein around the world. The mission of The IEEE Global Initiative on Ethics of Autonomous and Intelligent Systems, for example, is to ensure that every technologist is educated, trained, and empowered to prioritize ethical considerations in the design and development of autonomous and intelligent systems.

The garbage in/garbage out (GIGO) concept, the idea that bad inputs yield bad outputs, has long been recognized in computer science, but AI puts an altogether new and vexing spin on the problem when some of the garbage enters our inputs unconsciously.
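
A toy sketch, using entirely hypothetical data, of how GIGO plays out in machine learning: a model trained on historically skewed decisions reproduces the skew even when the protected attribute itself is withheld, because a correlated proxy feature leaks it. The feature names and the scikit-learn model choice below are assumptions for illustration only.

import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000
group = rng.integers(0, 2, n)             # protected attribute (0 or 1)
skill = rng.normal(0, 1, n)               # legitimate signal
zip_code = group + rng.normal(0, 0.3, n)  # proxy correlated with group

# Historical labels: past decisions favored group 0, independent of skill.
y = ((skill + 1.0 * (group == 0) + rng.normal(0, 1, n)) > 0.5).astype(int)

# Train only on "neutral" features; the proxy still leaks the bias.
X = np.column_stack([skill, zip_code])
model = LogisticRegression().fit(X, y)
pred = model.predict(X)
print("approval rate, group 0:", pred[group == 0].mean())
print("approval rate, group 1:", pred[group == 1].mean())

Note that dropping the group column does not remove the bias; the model recovers it from the proxy feature, which is one reason "fairness through blindness" is widely considered insufficient.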


Nita Patel of L-3 Warrior Systems and IEEE Women in Engineering (IEEE WIE), along with Lynn Conway from the University of Michigan and Amy Nordrum of IEEE Spectrum, will provide insight on this topic at the annual SXSW Conference, 9–18 March 2018, in Austin. The session on “Algorithms, Unconscious Bias & AI” is scheduled for 13 March 2018. For more information, please see http://techforhumanity.ieee.org

 

Nita is also founder of the IEEE WIE International Leadership Conference in San Jose, California, 21–22 May 2018. For more information, please see http://ieee-wie-ilc.org/
