By Manjesh Gupta, Associate Manager – AI/Machine Learning at Virtusa.

Our present human society is a product of millions of years of biological evolution and thousands of years of social evolution. Everything has a history. We form beliefs about people and things based on our accumulated knowledge. In such a scenario, it is quite natural that some of our beliefs are prejudiced because, at times, we do not have enough information. Gordon Allport defines “prejudice” as a “feeling, favorable or unfavorable, toward a person or thing, prior to, or not based on, actual experience.” It is often said that prejudices exist and will continue to exist. The real question is whether we, as individuals or as a society, are willing to change our prejudiced beliefs when presented with counter-evidence. In 1953, Albert Einstein wrote in an essay, “Few people are capable of expressing with equanimity opinions which differ from the prejudices of their social environment. Most people are even incapable of forming such opinions.”

In a social setting, these prejudiced beliefs manifest as attitudes or behaviors, favorable or unfavorable, toward an individual or a group, based on their sex, gender, social class, race, ethnicity, language, political affiliation, sexuality, religion, or other personal characteristics. In such cases, the group identity of an individual or sub-group generally takes precedence over individual identity. We know that we sometimes behave in a prejudiced manner (which may not even be wrong at times).

Do AI algorithms reproduce this human behavior?

Let us examine a few cases.

If you ask some natural language processing algorithms, “Man is to Computer Programmer as Woman is to ___________?”, they may answer “Homemaker.” The word embeddings used in such algorithms have been known to reflect gender (and other) biases for quite some time now. The paper “Man is to Computer Programmer as Woman is to Homemaker? Debiasing Word Embeddings” (Bolukbasi et al., 2016) examined word2vec embeddings to show the presence of gender stereotypes, and it also suggests a method to neutralize the bias, as the sketch below illustrates.
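The following is a minimal sketch, assuming the gensim library and its downloadable “word2vec-google-news-300” vectors (and assuming the phrase token “computer_programmer” is present in that vocabulary), of how such an analogy query is computed and how a gender direction can be projected out of a word vector, in the spirit of the neutralization step the paper proposes; it is an illustration, not the paper’s exact implementation.

```python
# Sketch: analogy queries over word embeddings and a simple gender-direction
# neutralization. Assumes gensim is installed and the pre-trained
# "word2vec-google-news-300" vectors can be downloaded (~1.6 GB).
import numpy as np
import gensim.downloader as api

# Load the pre-trained word2vec embeddings trained on Google News.
vectors = api.load("word2vec-google-news-300")

# Analogy query: "man is to computer_programmer as woman is to ___?"
# (the "computer_programmer" token is assumed to exist in this vocabulary).
print(vectors.most_similar(
    positive=["woman", "computer_programmer"],
    negative=["man"],
    topn=3,
))

# Approximate a "gender direction" from a he/she pair and normalize it.
gender_direction = vectors["he"] - vectors["she"]
gender_direction /= np.linalg.norm(gender_direction)

def neutralize(word):
    """Remove the component of a word's vector that lies along the gender direction."""
    v = vectors[word]
    return v - np.dot(v, gender_direction) * gender_direction

# A profession word should ideally carry no gender component after this step.
debiased = neutralize("computer_programmer")
```

In the full method, the gender direction is estimated from several definitional word pairs rather than a single he/she pair, and gender-neutral words are additionally “equalized” around it; the projection above only conveys the core idea.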

The Gender Shades project showed that facial recognition systems from IBM, Microsoft, and Face++ are biased against women and “darker” subjects in terms of recognition accuracy. These algorithms were, on average, around 15% less accurate for female and “darker” subjects. Recently, an algorithm designed to reconstruct “high-definition” faces from pixelated images produced the face of a “white” person…
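Findings like these come from disaggregated evaluation: measuring accuracy separately per demographic subgroup instead of reporting a single aggregate number. A minimal sketch follows, using made-up, hypothetical per-image records purely for illustration (the subgroup labels mirror those used by Gender Shades, but the data is invented).

```python
# Sketch: per-subgroup (disaggregated) accuracy, the kind of analysis that
# makes accuracy gaps between demographic groups visible.
# The records below are hypothetical, not real benchmark results.
import pandas as pd

records = pd.DataFrame({
    "subgroup": ["lighter_male", "darker_female", "darker_female", "lighter_male",
                 "darker_male", "lighter_female", "darker_male", "lighter_female"],
    "correct":  [True, False, True, True, True, True, False, True],
})

# One accuracy figure per subgroup rather than a single aggregate score.
per_group_accuracy = records.groupby("subgroup")["correct"].mean()
print(per_group_accuracy)
```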

Continue reading: https://www.kdnuggets.com/2021/08/demystifying-ai-prejudices.html

Source: www.kdnuggets.com