Unmasking AI: Unveiling Biases in Artificial Intelligence

Joy Buolamwini’s experience with facial recognition bias sparked her research into AI algorithms. She discovered that training data lacking diversity leads to biased AI systems that favor certain races and genders. This can cause real-world discrimination, highlighting the need for fair and inclusive AI design.

Joy Buolamwini’s Pioneering Journey to Expose Bias in AI Algorithms

Joy Buolamwini’s story in the world of AI began when, during a facial recognition project, she discovered that the system failed to detect her dark-skinned face. The discovery shocked her, but more importantly, it motivated her to investigate bias in artificial intelligence algorithms: despite being a qualified engineer, she could not explain why the system did not see her.

Her research as a graduate student at the “Future Factory” (the MIT Media Lab) revealed that the datasets used to train artificial intelligence systems often lack diversity. Because these datasets are frequently compiled largely from photographs of white individuals, the resulting systems recognize white faces more reliably than faces of other racial or ethnic groups. Her research also showed that training datasets can encode stereotypes: crime databases, for example, may consist predominantly of photographs of men, so systems trained on them recognize male faces better than female faces.

As a result of these biases and stereotypes, AI systems can discriminate against marginalized groups. A system that is less likely to recognize a woman’s face than a man’s could, for instance, contribute to women being unfairly stopped by police or denied a mortgage. Buolamwini’s research is significant because it exposes these risks and underscores that AI systems must be designed and used so that they are fair and inclusive for everyone.
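The reasoning above, that a system trained on unrepresentative data performs unevenly across groups, is easiest to see with a disaggregated evaluation: accuracy computed per demographic subgroup rather than as a single aggregate number. The Python sketch below is a minimal, hypothetical illustration of that idea, not Buolamwini’s actual methodology; the function name, subgroup labels, and toy data are all invented for the example.

    # Minimal sketch of a disaggregated evaluation (hypothetical example,
    # not Buolamwini's actual pipeline). Instead of one aggregate accuracy
    # figure, accuracy is reported per demographic subgroup, which is how
    # skews of the kind described above become visible.

    from collections import defaultdict

    def accuracy_by_group(predictions, labels, groups):
        """Return overall accuracy and per-subgroup accuracy.

        predictions, labels: sequences of predicted / true class labels.
        groups: sequence of subgroup tags for each example.
        """
        correct = defaultdict(int)
        total = defaultdict(int)
        for pred, true, group in zip(predictions, labels, groups):
            total[group] += 1
            if pred == true:
                correct[group] += 1
        overall = sum(correct.values()) / sum(total.values())
        per_group = {g: correct[g] / total[g] for g in total}
        return overall, per_group

    # Toy data: a detector that looks unremarkable in aggregate
    # but fails disproportionately on one subgroup.
    preds = ["face", "face", "face", "none", "none", "face"]
    truth = ["face"] * 6
    group = ["lighter-skinned male"] * 3 + ["darker-skinned female"] * 3

    overall, per_group = accuracy_by_group(preds, truth, group)
    print(f"overall accuracy: {overall:.2f}")   # 0.67 -- hides the gap
    for g, acc in per_group.items():
        print(f"{g}: {acc:.2f}")                # 1.00 vs 0.33

On the toy data, the aggregate figure (0.67) conceals a stark gap that the per-group breakdown exposes (1.00 versus 0.33). It is exactly this kind of subgroup reporting that makes the disparities described above visible.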

The central theme of Buolamwini’s research and activism is captured in the concept of “the coded gaze,” a term she coined to describe the ubiquity of bias in facial recognition technology. Her book “Unmasking AI: My Mission to Protect What Is Human in a World of Machines” [1] introduces the reader to the complexities of AI systems as they bear on personal and societal freedom, constrained by the interest of authorities in protecting society. It shows how AI systems are becoming ever more common in our lives, used for everything from navigating city streets to deciding who gets a loan, and how, as these systems grow more sophisticated, we are beginning to realize that they can be biased.

Buolamwini’s efforts are important because they highlight the potential negative consequences of AI. Her research emphasizes the need to reflect on how AI is used and what implications it may have for society, and the publication of “Unmasking AI” could drive changes in the field of AI ethics by stimulating discussion about how to ensure that AI systems are used ethically and do not reinforce discrimination and inequality. Her strategies show elements of investigative aesthetics, the use of artistic and aesthetic techniques to investigate and reveal hidden truths. Viewed more broadly, this strategy can be understood as artivism: the use of art to advance social and political change.

Joy Buolamwini, “Poet of Code,” TEDx

Source: https://medium.com/africana-feminisms/the-coded-gaze-algorithmic-bias-what-is-it-and-why-should-i-care-51a416dbc3f3