It's 2018, let's talk about machine bias

Megan Wright
01/02/2018

The year ahead represents a critical moment for machine learning and AI, as algorithms become smarter—and more subjective

Image via Pexels

When it comes to artificial intelligence, skepticism is rife. There are those who believe the proliferation of machines will result in job losses, economic crises and, in short, widespread chaos.

Other cynics question the overall usefulness of machines as a stand-in for, or supplement to, human labor. After all, ‘machines couldn’t possibly do the jobs we’ve been training our whole lives for,’ they argue.

But the area of contention gaining traction fastest right now is underpinned by one simple question:

Is it possible to build machines that are free from human biases?

The answer, in short, is probably not. But it’s the question that we have to start asking ourselves now, before it’s too late, said Zoe Stanley-Lockman, associate fellow at the European Union Institute for Security Studies.

“Many people assume that machines are objective, but this assumption is not airtight: Intelligent enterprise is still a product of human creators and data based on humans,” she said.

“Algorithmic bias can already be seen, for example in software that discriminates against black people when forecasting which criminals are most likely to become re-offenders, or in ranking male teachers higher than their female counterparts.”

Stanley-Lockman is referring here to the case involving the controversial risk assessment software known as COMPAS, which was developed to forecast which criminals were most likely to reoffend. An analysis of the software by Pulitzer Prize-winning nonprofit news organization ProPublica revealed racial disparities in the algorithm’s predictions. Specifically, black defendants were almost twice as likely as white defendants to be labeled higher risk yet not go on to reoffend, while white defendants were more likely to be labeled lower risk yet go on to reoffend.
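
To make that kind of disparity concrete, here is a minimal sketch (not ProPublica’s actual methodology or data) of how one might compare false positive rates across groups in a set of risk-scored cases; the group labels and sample records are purely illustrative assumptions.

```python
# Minimal sketch: comparing false positive rates across groups in a set of
# risk-scored cases. The group labels and records below are illustrative
# assumptions, not real data.

from collections import defaultdict

# Each record: (group, labeled_high_risk, actually_reoffended)
records = [
    ("A", True, False),
    ("A", True, True),
    ("A", False, False),
    ("B", True, False),
    ("B", False, False),
    ("B", False, True),
]

def false_positive_rate(rows):
    """Share of people who did NOT reoffend but were labeled high risk."""
    non_reoffenders = [r for r in rows if not r[2]]
    if not non_reoffenders:
        return float("nan")
    flagged = sum(1 for r in non_reoffenders if r[1])
    return flagged / len(non_reoffenders)

# Group the records and report the false positive rate for each group.
by_group = defaultdict(list)
for rec in records:
    by_group[rec[0]].append(rec)

for group, rows in sorted(by_group.items()):
    print(f"group {group}: false positive rate = {false_positive_rate(rows):.2f}")
```

A large gap between the groups’ rates is the sort of signal ProPublica reported: one group bears far more of the cost of the algorithm’s mistaken high-risk labels than the other.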

“It’s not necessarily a bad idea to use risk assessment systems like COMPAS. In many cases, ADM [automated decision making] systems can increase fairness,” wrote AlgorithmWatch’s Matthias Spielkamp in MIT Technology Review.

“But often we don’t know enough about how ADM systems work to know whether they are fairer than humans would be on their own. In part because the systems make choices on the basis of underlying assumptions that are not clear even to the systems’ designers, it’s not necessarily possible to determine which algorithms are biased and which ones are not,” he concluded.

“So the question is not so much whether it is possible to build bias-free machines, but whether we are willing to pay attention to these issues now and be willing to ask hard questions that inform the design of systems capable of identifying and/or correcting bias,” added Stanley-Lockman.

An enterprise-scale challenge

Identifying biases presents the most pressing challenge for enterprises, as datasets reflect inherent biases that are not immediately clear, said Stanley-Lockman. “Intelligent enterprises will face situations where they may possibly have to make the choice between the most competitive option or the most ethical option,” she said.

“What do enterprises do when they have to make a choice between biased data versus no data? How do they backtrack once a bias becomes clear? What standards does an enterprise hold itself to if competitors forge ahead irrespective of the ethical pitfalls?”

These ethical dilemmas are reflective of growing concern around the gap between the data that algorithms are collecting and analyzing, and the organizations responsible for developing and implementing them.

“Biased algorithms are everywhere, and no one seems to care,” wrote MIT Technology Review’s senior editor for AI, Will Knight. “Opaque and potentially biased mathematical models are remaking our lives—and neither the companies responsible for developing them nor the government is interested in addressing the problem,” he argued.

What’s left is for enterprises to take a good hard look in the mirror and decide how they want to play it. “How can an organization incentivize computer scientists and engineers with a highly technical skillset to consider the social ramifications which aren’t always associated with those individuals’ training?” asked Stanley-Lockman.

“And vice-versa, how can organizations fuse together individuals with stronger backgrounds in the social sciences to inform and work with their hard-science counterparts on questions of identity and systemic discrimination?”

AI to the rescue

As much as AI is the question, it may also be the answer. One promising strand of thought is the growing potential for AI to identify biases that humans are incapable of seeing, which could offer significant lessons moving forward, said Stanley-Lockman.

“Right now the willingness to design bias-free systems is potentially more important than the possibility of doing so,” she said. “It is up to system designers and engineers to consider the sources of their datasets, and it is up to users to be informed about—and even take action against—biases.”

“The unfortunate reality is that datasets will never be discrimination-free.”

But this doesn’t make the quest any less critical. “It is everyone’s responsibility,” concluded Stanley-Lockman. “But that does not mean that everyone shares the responsibility in equal measure.”
