Banishing Bias from Artificial Intelligence (AI)

Preventing AI From Learning Prejudice
November 25, 2019

Artificial intelligence (AI) has produced algorithms that can recognize faces, diagnose diseases and win computer games, but even the most sophisticated algorithms can exhibit undesired behavior, such as reflecting the gender bias present in the texts or images used to develop them.

New techniques for constructing AI programs suggest that such aberrant behavior in machine learning can be prevented by specifying guardrails in the code from the beginning of its construction. The stakes are already visible: customers recently claimed that the algorithm behind the Apple Card offered women much lower credit limits than men with the same financial resources, and Apple was unable to demonstrate that the algorithm had not been inadvertently biased by its training data. Similar problems could stand in the way of the use of AI in healthcare, education and government.
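To make the guardrail idea concrete, here is a minimal sketch in Python, assuming a toy credit-scoring setting: a behavioral constraint (here, a cap on the gap in approval rates between two groups of applicants) is declared up front, and a candidate model is released only if a held-out safety check certifies the constraint; otherwise the system refuses to return a solution. The function names, threshold and data below are illustrative, not taken from the researchers' code.

```python
# Hypothetical sketch of the "guardrail" idea: a behavioral constraint is
# specified up front, and a candidate model is released only if a held-out
# safety check certifies that the constraint holds. All names, thresholds
# and data are illustrative assumptions, not the researchers' actual code.
import numpy as np

def approval_rate(scores, threshold=0.5):
    """Fraction of applicants whose predicted score clears the cutoff."""
    return float(np.mean(scores >= threshold))

def passes_guardrail(scores_group_a, scores_group_b, max_gap=0.05):
    """Safety test: approval rates for the two groups may differ by at most max_gap."""
    gap = abs(approval_rate(scores_group_a) - approval_rate(scores_group_b))
    return gap <= max_gap

def release_or_refuse(candidate_model, safety_data_a, safety_data_b):
    """Return the model only if the guardrail is certified; otherwise refuse."""
    scores_a = candidate_model(safety_data_a)
    scores_b = candidate_model(safety_data_b)
    if passes_guardrail(scores_a, scores_b):
        return candidate_model
    return None  # "no solution found": the constraint cannot be certified

# Example: a toy scoring model applied to synthetic applicant features.
rng = np.random.default_rng(0)
toy_model = lambda X: 1 / (1 + np.exp(-X @ np.array([0.8, -0.2])))
group_a = rng.normal(0.0, 1.0, size=(200, 2))
group_b = rng.normal(0.1, 1.0, size=(200, 2))
print(release_or_refuse(toy_model, group_a, group_b))
```

The key design choice in this style of approach is that the burden of proof is inverted: rather than auditing a deployed model for bias after the fact, the training procedure itself must certify the constraint on held-out data or decline to return a model at all.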

READ: Researchers Want Guardrails to Help Prevent Bias in AI

Developing strategies to banish bias from AI could prove complicated, not least because it requires settling on what fairness means. “One of the major challenges in making algorithms fair lies in deciding what fairness actually means,” says Chris Russell, a fellow at the Alan Turing Institute in the UK. “At the moment, there are more than 30 different definitions of fairness in the literature,” Russell notes. “This makes it almost impossible for a non-expert to know if they are doing the right thing.”
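To see why the choice of definition matters, here is a short illustration with made-up numbers: the same set of decisions can satisfy one common definition of fairness (demographic parity, i.e. equal approval rates across groups) while violating another (equal opportunity, i.e. equal approval rates among the truly qualified). Demographic parity and equal opportunity are only two of the 30-plus definitions Russell refers to, and the data below are invented purely for demonstration.

```python
# Illustrative comparison of two fairness definitions on the same toy
# predictions; a model can look fair under one definition and unfair under
# another. The labels and decisions below are made up for demonstration.
import numpy as np

y_true = np.array([1, 1, 0, 0, 1, 1, 0, 0])   # actual creditworthiness
y_pred = np.array([1, 1, 0, 0, 0, 1, 1, 0])   # model's approval decision
group  = np.array(["A", "A", "A", "A", "B", "B", "B", "B"])

def demographic_parity_gap(y_pred, group):
    """Difference in overall approval rate between groups A and B."""
    rate_a = y_pred[group == "A"].mean()
    rate_b = y_pred[group == "B"].mean()
    return abs(rate_a - rate_b)

def equal_opportunity_gap(y_true, y_pred, group):
    """Difference in approval rate among truly qualified applicants."""
    tpr = {}
    for g in ("A", "B"):
        qualified = (group == g) & (y_true == 1)
        tpr[g] = y_pred[qualified].mean()
    return abs(tpr["A"] - tpr["B"])

print("demographic parity gap:", demographic_parity_gap(y_pred, group))   # 0.0
print("equal opportunity gap:", equal_opportunity_gap(y_true, y_pred, group))  # 0.5
```

In this toy example both groups are approved at the same overall rate, yet qualified applicants in group B are approved only half as often as qualified applicants in group A, so a developer who checks only one definition could reasonably conclude the model is fair while a second definition says it is not.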

