Bias. Everyone has it. Every citizen wants less of it. Every algorithm is susceptible to it. Here’s how to reduce it in your data science projects.

“Bias doesn’t come from AI algorithms, it comes from humans,” explains Cassie Kozyrkov in her Towards Data Science article, What is Bias? So if we’re the source of AI bias and risk, how do we reduce it?

It’s not easy. Cognitive science research shows that humans are largely unable to identify their own biases. And since humans create algorithms, bias blind spots will multiply unless we build systems to surface them, gauge the risks, and systematically eliminate them.

The European Union tasked a team of AI professionals with defining a framework to help characterize AI risk and bias. The EU Artificial Intelligence Act (EU AIA) is intended to form a blueprint for human agency and oversight of AI, including guidelines for robustness, privacy, transparency, diversity, well-being, and accountability.

What are their recommendations, and how do they benefit your business? And how can technology help put wind in the sails of AI adoption? The seven steps they recommend, along with the concrete actions for fulfilling them described below, are a great place to start. But first, let’s review their “algorithmic risk triangle,” which characterizes risk on a scale from minimal to unacceptable.

What Kind of Bias is Acceptable?

As Lori Witzel explains in 5 Things You Must Know Now About the Coming EU AI Regulation, the EU Artificial Intelligence Act (EU AIA) defines four levels of risk according to their potential harm to society, and therefore how urgently they must be addressed. For example, the risk of AI used in video games and email spam filters pales in comparison to its use in social scoring, facial recognition, and dark-pattern AI.
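
To make the four risk tiers a bit more concrete, here is a minimal sketch of how a team might tag its model inventory by tier and gate the review process on it. The tier names follow the EU AIA’s four levels, but the example use cases, the model names, and the review requirements are illustrative assumptions, not legal guidance.

```python
from enum import Enum

class RiskTier(Enum):
    """The EU AIA's four risk levels, from lowest to highest."""
    MINIMAL = 1       # e.g., AI in video games, email spam filters
    LIMITED = 2       # e.g., chatbots (transparency obligations)
    HIGH = 3          # e.g., credit scoring, hiring, critical infrastructure
    UNACCEPTABLE = 4  # e.g., social scoring, manipulative "dark pattern" AI

# Illustrative model inventory: each entry's tier is an assumption for this sketch.
model_inventory = {
    "email_spam_filter": RiskTier.MINIMAL,
    "support_chatbot": RiskTier.LIMITED,
    "loan_approval_model": RiskTier.HIGH,
}

def required_review(tier: RiskTier) -> str:
    """Rough, assumed mapping from risk tier to an internal review process."""
    if tier is RiskTier.UNACCEPTABLE:
        return "do not deploy"
    if tier is RiskTier.HIGH:
        return "bias audit + human oversight + documentation"
    if tier is RiskTier.LIMITED:
        return "transparency notice to users"
    return "standard engineering review"

for name, tier in model_inventory.items():
    print(f"{name}: {tier.name} -> {required_review(tier)}")
```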

The framework aids understanding, but it isn’t prescriptive about what to do in response. The team does, however, present seven key principles of AI trustworthiness, which this article uses as guide rails for an action plan to mitigate algorithmic bias:

Principle #1: Human agency and oversight. The EU AIA team said, “AI systems should empower human beings, allowing them to make informed decisions and fostering their fundamental rights. At the same time, proper oversight mechanisms are needed through human-in-the-loop, human-on-the-loop, and human-in-command approaches.”
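
As a rough illustration of what “human-in-the-loop” oversight can look like in practice, the sketch below defers any prediction whose confidence falls under a threshold to a human reviewer. The threshold value, the toy model, and the console reviewer are assumptions made for this example; they are not part of the EU AIA text.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Decision:
    label: str
    confidence: float
    decided_by: str  # "model" or "human"

def predict_with_oversight(
    features: dict,
    model_predict: Callable[[dict], tuple[str, float]],
    ask_human: Callable[[dict, str, float], str],
    threshold: float = 0.85,  # assumed cutoff; tune per use case
) -> Decision:
    """Human-in-the-loop: the model proposes a label, and a human confirms
    or overrides it whenever the model's confidence is below the threshold."""
    label, confidence = model_predict(features)
    if confidence < threshold:
        label = ask_human(features, label, confidence)
        return Decision(label, confidence, decided_by="human")
    return Decision(label, confidence, decided_by="model")

# Illustrative stubs: a toy model and a stand-in "human reviewer".
def toy_model(features: dict) -> tuple[str, float]:
    return ("approve", 0.72)  # assumed prediction and confidence

def stub_reviewer(features: dict, proposed: str, confidence: float) -> str:
    # In practice this would route to a review queue, not return a constant.
    return "deny"

print(predict_with_oversight({"income": 52000}, toy_model, stub_reviewer))
```

Human-on-the-loop and human-in-command approaches shift the same idea up a level: instead of reviewing individual predictions, people monitor the system’s behavior in aggregate or retain the authority to switch it off.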

What they…

Continue reading: https://towardsdatascience.com/seven-steps-to-help-you-reduce-bias-in-algorithms-in-light-of-the-eus-trustworthy-ai-blueprint-b348dc3cf2ae?source=rss—-7f60cf5620c9—4

Source: towardsdatascience.com