DS in the Real World
Our world is changing.
We are more connected now than at any other point in history. Armed with a wealth of information at our fingertips, we have watched the digital age erase the geographical distance that once kept us apart.
This transformation has altered the way we interact, how we do business, and how we stay informed. It has given a voice to those who did not have one and created greater transparency and accountability across our sociopolitical landscape. And yet, through this increased connectedness, we’ve seen how different our views of the world remain.
While the last few years have seen more open conversation about these differences — and the underlying biases that drive them — we have also seen the impact those biases can have when they are ignored, particularly in the world of technology, where automation is both a blessing and a curse.
I’ve followed the early political career of Alexandria Ocasio-Cortez and have often found myself drawn to her approach and message. As her meteoric ascent continues, AOC’s trademark clear and direct approach to the issues of our time has been a welcome change of pace in an otherwise static space.
There have, however, been moments when her stances on less-discussed issues have given me pause. During a talk at this year’s MLK Now event, AOC discussed the role technology plays in our everyday lives and made headlines with the following comments:
“Algorithms are still made by human beings, and those algorithms are still pegged to basic human assumptions. They’re just automated assumptions and if you don’t fix the bias, then you are just automating the bias.”
It’s not surprising that this drew criticism from a handful of conservative pundits. However, what struck me was how conflicted I was with my own thoughts on the subject. While the intention of the comments was clear, this blanket statement that (all) algorithms were biased because they’d been created by humans didn’t sit right with me.
Can algorithms be biased? Of course.
Do all algorithms contain bias? Of course not.
Fascinated by this mental game of ping pong, I decided to dig in further to re-evaluate my own perceptions and learn more from some of the experts leading in this space. What follows is a brief summary of my own research: what algorithms are, how they are constructed, what pre-defined biases may indeed be automated, and steps we as a society can take to address these concerns.
Before we can fully understand algorithmic bias, we must first decompose the moving pieces of these algorithms themselves. Generally, the word “algorithm” is used to describe some form of machine learning/artificial intelligence. While there are many techniques for developing such a model, at its highest level there are four basic steps:
- Collect a set of training data and feed it into a machine
- Let the machine… well, learn
- Fine tune any parameters until you have an acceptable level of accuracy
- Use the resulting formula to predict something, given a set of inputs
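The four steps above can be sketched in a few lines of code. This is a deliberately minimal, hypothetical example — a toy linear model trained with gradient descent, with made-up data and hand-picked parameters — not a reference implementation of any particular system:

```python
# Step 1: collect a set of training data (inputs x, known answers y).
# Here the "world" we sample from happens to follow y = 2x + 1.
training_data = [(x, 2 * x + 1) for x in range(10)]

# Step 2: let the machine learn -- start from a blank guess and nudge
# the parameters to shrink the prediction error on each example.
w, b = 0.0, 0.0
learning_rate = 0.01  # Step 3: a knob we fine-tune until accuracy is acceptable

for _ in range(5000):
    for x, y in training_data:
        error = (w * x + b) - y
        w -= learning_rate * error * x  # gradient of squared error w.r.t. w
        b -= learning_rate * error      # gradient of squared error w.r.t. b

# Step 4: use the resulting formula to predict something new.
def predict(x):
    return w * x + b

print(round(predict(20), 2))  # close to 2*20 + 1 = 41
```

Note that the model never sees the rule `y = 2x + 1` directly — it only sees the examples we chose to feed it, which is exactly where the trouble described below begins.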
Algorithms are used in nearly every digital experience we interact with; they recommend products, tell us which of our friends are in the photo we took, and suggest things we might want to do next.
While these may seem harmless, other uses can be more questionable: calculating the credit risk of a loan application, using facial recognition as a means of identification for law enforcement, or automating resume review and selection while searching for the “most qualified” candidates.
It’s clear that the latter set of use cases is more likely to end up as a headline in a New York Times cover story. Yet in all of these examples, we’ve already seen the negative impact algorithmic bias can have when left unchecked.
Surely the creators of these products didn’t set out to intentionally introduce these forms of automated bias, right? So what gives?
It turns out that, while we’ve gotten very good at teaching our machines to learn, we’re still fundamentally flawed as human beings when it comes to selecting the underlying data to learn from. Whether through underrepresentation of a particular demographic or over-indexing on another, we’re still guided by our own preconceived notions of the world around us, and that is often reflected in the data sets we use to develop these algorithms.
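To make the underrepresentation point concrete, here is a toy sketch with entirely hypothetical numbers. Both groups contain equally “qualified” people, but the score signal means something different for each group, and group B is barely present in the training data — so the single rule the model learns fits group A and quietly penalizes group B:

```python
def make_group(name, cutoff, n_per_score):
    # Each person is (group, score, truly_qualified).
    return [(name, s, s >= cutoff) for s in range(10) for _ in range(n_per_score)]

# Hypothetical populations: group A is qualified at score >= 5,
# group B at score >= 3 (the same score carries different meaning).
group_a = make_group("A", cutoff=5, n_per_score=10)
group_b = make_group("B", cutoff=3, n_per_score=10)

# Biased data collection: plenty of A, almost no B.
training = group_a + group_b[:5]

def errors(threshold, data):
    # Count people misclassified by a single score cutoff.
    return sum((s >= threshold) != q for _, s, q in data)

# "Learning": pick the one threshold with the fewest training errors.
best_t = min(range(11), key=lambda t: errors(t, training))

def accuracy(threshold, data):
    return 1 - errors(threshold, data) / len(data)

print(f"learned threshold: {best_t}")
print(f"accuracy on group A: {accuracy(best_t, group_a):.0%}")
print(f"accuracy on group B: {accuracy(best_t, group_b):.0%}")
```

The model lands on group A’s cutoff and scores perfectly on group A while misclassifying a fifth of group B — no malice required, just a skewed sample.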
“Basic Human Assumptions”
So, we’ve established that even without malicious intent, algorithmic bias is possible given the nature of how machine learning models are constructed and fed data. But why is it so hard to collect data sets representative of our population in the first place? To answer that question, we turn to the study of unconscious bias.
We all have two types of biases: conscious and unconscious.
According to UCSF’s Office of Diversity & Outreach:
“Unconscious biases are social stereotypes about certain groups of people that individuals form outside their own conscious awareness. Everyone holds unconscious beliefs about various social and identity groups, and these biases stem from one’s tendency to organize social worlds by categorizing.”
It’s important to note that unconscious biases don’t make us bad; they just make us human. However, these stereotypes shape our view of the world, and it’s imperative that we understand these biases before hoping to construct representative data sets for fair algorithms.
The good news? Unconscious biases can be changed over time. To minimize their effect, however, we must be proactive in identifying our own biases and take steps to address them.
Here’s a short checklist to get you started:
- Take a test, like the IAT, to discover more about your own biases and become more self-aware of how and why you may make certain decisions.
- Explore many of the free online resources available, such as Grovo’s Unconscious Bias Microlearning lessons, managing bias in the workplace by Google and Facebook, or Microsoft’s interactive eLesson.
- Make a conscious effort to take time and reflect on decisions being made, particularly when those decisions have impacts on others.