Unmasking The Hidden Biases of Facial Recognition Technology

Written by ---

Faces are slowly becoming the keys to unlocking a digital future. Over 50 years ago, pioneers such as Woody Bledsoe conducted experiments to determine whether computers could be programmed to recognize human faces. While those early experiments were largely unsuccessful, his team's efforts kickstarted the development of facial recognition technology into what it is today. From Apple's groundbreaking "FaceID" to Facebook's facial recognition software for photo tagging, the technology has evolved into a backbone of essential applications. However, as we entrust machines with the power to perceive and identify individuals, this rapid development raises the question of how well these systems truly "see" diverse faces. An examination of this expanding issue reveals a concerning narrative that weaves together the themes of diversity, bias, and the ongoing quest for a more equitable digital future.

As facial recognition technology becomes commonplace, so does its host of challenges and ethical considerations. Algorithms designed to classify, detect, and predict race and gender can inadvertently perpetuate biases, reflecting the limitations and perspectives of their designers. These errors were brought to light by a study from Joy Buolamwini, a researcher at the MIT Media Lab. As part of her research, Buolamwini presented more than 1,000 faces to commercial facial recognition systems and asked each system to classify the faces as female or male. The findings were stark: error rates reached 34 percent for dark-skinned women, while for light-skinned men they never exceeded 0.8 percent.
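The core of an audit like this is simple: instead of reporting one aggregate accuracy figure, error rates are computed separately for each demographic subgroup. The Python sketch below illustrates the idea; the subgroup labels and records are hypothetical stand-ins for illustration, not Buolamwini's actual benchmark data or the systems she tested.

```python
# A minimal sketch of a disaggregated audit: measure a gender classifier's
# error rate per demographic subgroup rather than overall.
# The records below are hypothetical placeholders, not real study data.
from collections import defaultdict

# Each record: (predicted gender, true gender, subgroup label)
predictions = [
    ("male", "male", "lighter_male"),
    ("male", "female", "darker_female"),
    ("female", "female", "darker_female"),
    ("male", "male", "darker_male"),
    # ... one entry per benchmark face
]

errors = defaultdict(int)   # misclassifications per subgroup
totals = defaultdict(int)   # faces evaluated per subgroup

for predicted, actual, subgroup in predictions:
    totals[subgroup] += 1
    if predicted != actual:
        errors[subgroup] += 1

for subgroup in sorted(totals):
    rate = errors[subgroup] / totals[subgroup]
    print(f"{subgroup}: {rate:.1%} error rate over {totals[subgroup]} faces")
```

A single aggregate accuracy number would hide exactly the disparity this breakdown exposes: a system can score well overall while failing badly on the groups it sees least.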

Moreover, the adoption of facial recognition in various sectors, notably law enforcement, raises concerns about the misidentification of individuals due to bias. Those in favor of incorporating artificial intelligence into law enforcement praise its ability to bolster public safety by aiding investigations, while those opposed contend that the potential for infringement on civil liberties outweighs the unrealized benefits. High-profile cases of misidentification, such as that of Robert Williams, have bolstered this belief. In 2020, Williams was wrongfully arrested in Detroit for felony larceny based on a faulty facial recognition match. Only after his innocence was confirmed were the charges dropped and his fingerprint data expunged. While one of the very first of its kind, this case highlights the risks of relying on facial recognition technology without proper safeguards.

Addressing diversity issues in facial recognition technology requires a multifaceted approach. First, there must be an emphasis on diversity within the teams behind these technologies to foster the exchange and consideration of varying experiences and viewpoints. Moreover, training data sets must be representative of their users if systems are to perform proficiently across different demographic groups; according to Buolamwini's paper, researchers at a major U.S. technology company claimed an accuracy rate of more than 97 percent, yet the data set they tested against was more than 77 percent male and more than 83 percent white. Furthermore, transparency between the developers of these technologies and the communities they affect is essential. Initiatives like AI Impact Alliance and Data for Black Lives advocate for the responsible and ethical use of technology, specifically artificial intelligence.
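Checking a data set's composition before trusting an accuracy claim is straightforward in practice. The sketch below shows one minimal way to tally demographic representation in a labeled face data set; the records and attribute names are hypothetical illustrations, not the actual data set examined in Buolamwini's paper.

```python
# A minimal sketch of a training-set composition audit: count how each
# demographic group is represented before trusting an aggregate accuracy
# figure. The records and attribute names here are hypothetical.
from collections import Counter

dataset = [
    {"gender": "male", "skin_type": "lighter"},
    {"gender": "male", "skin_type": "lighter"},
    {"gender": "female", "skin_type": "darker"},
    # ... one metadata record per face image
]

for attribute in ("gender", "skin_type"):
    counts = Counter(record[attribute] for record in dataset)
    total = sum(counts.values())
    for value, count in counts.most_common():
        print(f"{attribute}={value}: {count / total:.0%} of {total} images")
```

A skew like the 77 percent male, 83 percent white split cited above would surface immediately in such a report, signaling that a headline accuracy figure says little about performance on underrepresented groups.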

At its core, this problem highlights the importance of cultivating collaborative environments for innovation. It influences how artificial intelligence companies operate, the products they produce, and most importantly, which groups are disproportionately impacted by those products. Only through a united and conscientious effort can the potential of these advancements be actualized while simultaneously safeguarding the principles of fairness, inclusivity, and respect for individual liberties.