Two prominent concerns have emerged in the debate around the use of AI to fight crime: authoritarian governments exploiting AI surveillance, and biases in facial recognition technology.

Artificial intelligence (AI)¹ has been growing rapidly worldwide, with new applications being discovered every day. While AI is used across many sectors, one area where it is commonly deployed is surveillance and facial recognition technology to combat crime. As of 2019, at least seventy-five countries were actively using AI technologies for surveillance purposes, including smart city/safe city platforms, facial recognition systems, and smart policing initiatives (Feldstein 2019: 1). However, the widespread use of AI in the name of combating crime does not come without a cost; multiple ethical concerns have arisen in the past few years, calling into question the wisdom of deploying AI technology against crime. This article examines two prominent ethical concerns regarding AI in fighting crime: biases in facial recognition technology, and authoritarian governments exploiting AI surveillance in the name of public safety.

Fueled by new research in AI, facial recognition technology has become more popular than ever; however, it is not always accurate. According to a recent study by the National Institute of Standards and Technology (NIST), published in NIST's Journal of Research, facial recognition software exhibits biases regarding race, age, and sex. Patrick Grother, a NIST computer scientist, headed this first-of-its-kind study. Grother and his team evaluated 189 software algorithms from 99 developers to measure whether they exhibit "demographic differentials," a term for whether an algorithm's ability to match images differs across demographic groups (NIST 2019). Using four collections of photographs containing 18.27 million images of 8.49 million people, provided by various government agencies, the team evaluated the algorithms' matching ability with respect to demographic factors. The results were astounding: though the level of inaccuracy varied between algorithms, most of them exhibited demographic differentials. In particular, Grother points out that Asian, African American, and Native American faces were 10 to 100 times more likely to be wrongly identified than Caucasian faces. Moreover, algorithms also struggle with…
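The core of a demographic-differential analysis like NIST's is straightforward to sketch: tally, for each demographic group, how often the algorithm declares a match on "impostor" pairs (photos of two different people), and compare those false match rates across groups. The sketch below is illustrative only, using invented toy data; it is not NIST's code or methodology in detail.

```python
from collections import defaultdict

def false_match_rates_by_group(records):
    """Compute the false match rate (FMR) per demographic group.

    `records` holds (group, is_impostor_pair, matched) tuples. An
    impostor pair contains photos of two *different* people, so a
    `matched=True` result on such a pair is a false match.
    """
    impostor_trials = defaultdict(int)
    false_matches = defaultdict(int)
    for group, is_impostor_pair, matched in records:
        if is_impostor_pair:
            impostor_trials[group] += 1
            if matched:
                false_matches[group] += 1
    return {g: false_matches[g] / impostor_trials[g] for g in impostor_trials}

# Toy data: group "A" sees 1 false match in 100 impostor trials,
# group "B" sees 10 in 100 -- a tenfold demographic differential.
records = (
    [("A", True, False)] * 99 + [("A", True, True)] * 1 +
    [("B", True, False)] * 90 + [("B", True, True)] * 10
)
rates = false_match_rates_by_group(records)
# rates -> {"A": 0.01, "B": 0.1}
```

A differential like the 10x gap in this toy example is exactly the kind of disparity the NIST study reports, except measured over millions of real images rather than invented counts.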
