May 16, 2018

AI & Discrimination: How Is It Affecting You?

Acknowledging that algorithmic bias exists is the first step toward finding a solution


by Bethan Turner

Head of Data Insights at Honeycomb


Editor’s Note: This post is part of our Big Ideas series, a column highlighting the innovative thinking and thought leadership at IIeX events around the world. Bethan Turner will be speaking at IIeX North America (June 11-13 in Atlanta). If you liked this article, you’ll LOVE IIeX North America. Click here to learn more.


Algorithmic bias is everywhere. It can be found in machine learning tools that predict whether or not criminals will re-offend (and rate black offenders as much more likely to do so). It can be found in English proficiency tests used for visa applications (where voice recognition software is more likely to fail applicants with unfamiliar accents). It can be found in insurance markets (where the algorithms that calculate premiums produce higher prices for black customers).

We may not be used to hearing the words “algorithm” and “bias” together, but we should be aware of this relationship, and I’m certain we soon will be. As a society, we will soon have to acknowledge algorithmic bias and face the consequences it can bring to our communities.

For me, Safiya Noble captured it in the first pages of her book, Algorithms of Oppression: “When we think of terms such as ‘big data’ and ‘algorithms’ as being benign, neutral, or objective, they are anything but”.

Another example was unearthed in a study where researchers built over 17,000 fake profiles on a job advertising site and found that male profiles were significantly more likely to be shown adverts for higher-paid jobs. Noble speaks at length in her book about the bias in digital search engines such as Google and how, at the beginning of her research in this field, over 80% of the top search results for “black girls” related to pornography, whilst pornographic results for “white girls” were rare. (Some of the results she mentions in her book have since been manually changed by Google.)

Stuart Geiger, a computational ethnographer at the Berkeley Institute for Data Science, recently came to Manchester, and his first slide still rings true for me: “Data are made by people. Data are people”.

This, in my opinion, is the reason algorithmic bias exists. In order to have machine learning, you need to teach the machine, and it is people who do the teaching. The machine needs data to learn from (a training data set), and people collect those data. Outliers need to be flagged and patterns need to be labeled; this, again, is done by people. In order to “learn” how it is doing and to improve, an algorithm needs to be told explicitly what success looks like, and it is people who decide what counts as success and what counts as failure. We know that people are biased. Therefore, we know that algorithms, which are founded on people’s interpretations of fairness, representativeness, success, and failure, are biased. The question now is what we can do about it.
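As a rough illustration of where those human decisions sit, here is a minimal sketch of a supervised-learning pipeline. The data, the meaning of each column, and the choice of metric are all hypothetical and not taken from any of the systems mentioned above; the point is simply that each numbered step is a human choice.

```python
# Minimal sketch of a supervised-learning pipeline, annotated with the
# human decisions that shape it. Data and column meanings are hypothetical.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# 1. People collect the data. Whoever assembled this table decided which
#    cases were recorded and which attributes were worth keeping.
X = rng.normal(size=(1000, 3))  # e.g. income, tenure, postcode index

# 2. People label the outcomes. "1 = good outcome" reflects past human
#    judgements, so any bias in those judgements is copied straight into
#    the training signal.
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(size=1000) > 0).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = LogisticRegression().fit(X_train, y_train)

# 3. People define success. Choosing plain accuracy is itself a value
#    judgement: a model can score well overall while performing far worse
#    for a minority group.
print("accuracy:", accuracy_score(y_test, model.predict(X_test)))
```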

We’ve already found that simply removing human interaction from the equation does not work – the situation with the car insurance premiums shows that. The best piece of advice I’ve read so far comes (again) from Safiya Noble, who, speaking at an event in my glorious hometown of Manchester recently, simply said, “Don’t give up”. The best thing we can do is be aware of the bias that exists: recognize it, assess it, and question it. Challenge your notions of success, failure, and fairness with regard to these specific data and this specific model.
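As one concrete way to start “assessing it”, here is a minimal sketch of a simple disparity check. The group labels, predictions, and outcomes are invented for illustration, and a real audit would look at many more metrics than these two.

```python
# Minimal sketch of a simple fairness check: compare how a model treats
# different groups. All data here are hypothetical.
import pandas as pd

# One row per person: a group label, the model's prediction
# (1 = favourable outcome), and the true outcome.
df = pd.DataFrame({
    "group":     ["A", "A", "A", "A", "B", "B", "B", "B"],
    "predicted": [1,   1,   0,   1,   0,   0,   1,   0],
    "actual":    [1,   0,   0,   1,   1,   0,   1,   0],
})

# Selection rate: how often each group receives the favourable prediction.
selection_rate = df.groupby("group")["predicted"].mean()

# False-negative rate per group: the share of genuinely positive cases the
# model misses. Large gaps between groups are exactly the kind of thing
# worth recognizing, assessing, and questioning.
positives = df[df["actual"] == 1]
false_negative_rate = 1 - positives.groupby("group")["predicted"].mean()

print("selection rate by group:\n", selection_rate)
print("false-negative rate by group:\n", false_negative_rate)
```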

Noble argues that we don’t yet have the legal or social frameworks or policies in place to deal with ethical ambiguities like this. Her answer to a question at the recent event has played in my mind since: “To prove discrimination you need to prove intent. How do you prove intent with a machine learning algorithm?”

It’s only by asking these questions, raising these issues, and researching solutions that we will be able to look towards a future where artificial intelligence and the broader cultural, social, and economic landscape can harmoniously co-exist.
