Understanding AI Bias

Examples of AI Bias in Practice

The COMPAS System

One of the best-known examples of AI bias in practice is COMPAS (Correctional Offender Management Profiling for Alternative Sanctions), a risk-assessment tool used by many US courts to inform bail and sentencing decisions. In 2016, an investigative report by ProPublica found that COMPAS was biased against black defendants: they were nearly twice as likely as white defendants with similar criminal histories to be incorrectly flagged as high risk of reoffending. Because judges consult these risk scores, such errors can have significant real-world consequences, making black defendants more likely to receive harsher outcomes than white defendants.

Facial Recognition Technology

Another prominent example of AI bias is facial recognition technology. Studies have repeatedly shown that facial recognition algorithms have higher error rates for people with darker skin tones, raising concerns about racial profiling and false accusations. In 2018, the American Civil Liberties Union (ACLU) tested Amazon's facial recognition software, Rekognition, and found that it falsely matched 28 members of Congress with mugshots of people who had been arrested. The false matches were disproportionately people of color.
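The kind of disparity described in both examples can be quantified by comparing error rates across demographic groups. The sketch below is purely illustrative (the data is made up, not drawn from COMPAS or Rekognition): it computes the false positive rate — how often people who should not be flagged are flagged anyway — separately for two hypothetical groups, which is one common way auditors measure this form of bias.

```python
# Illustrative audit sketch: compare false positive rates across groups.
# All data below is synthetic, invented for demonstration only.

def false_positive_rate(predictions, labels):
    """Fraction of true negatives (label 0) wrongly flagged positive (prediction 1)."""
    flagged_negatives = [p for p, y in zip(predictions, labels) if y == 0]
    if not flagged_negatives:
        return 0.0
    return sum(flagged_negatives) / len(flagged_negatives)

# Hypothetical audit data: 1 = flagged (high risk / matched), 0 = not flagged.
group_a_preds  = [1, 1, 0, 1, 0, 0, 1, 0]
group_a_labels = [1, 0, 0, 1, 0, 0, 0, 0]  # ground truth for group A
group_b_preds  = [1, 0, 0, 0, 0, 1, 0, 0]
group_b_labels = [1, 0, 0, 0, 0, 1, 0, 0]  # ground truth for group B

fpr_a = false_positive_rate(group_a_preds, group_a_labels)
fpr_b = false_positive_rate(group_b_preds, group_b_labels)

# A large gap between per-group false positive rates is one signal of the
# kind of disparity ProPublica and the ACLU reported.
print(f"Group A false positive rate: {fpr_a:.2f}")  # 2 of 6 negatives flagged
print(f"Group B false positive rate: {fpr_b:.2f}")  # 0 of 6 negatives flagged
```

In this toy data, group A's false positive rate is about 0.33 while group B's is 0.00: the system makes its mistakes disproportionately against one group even if overall accuracy looks acceptable.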

These examples demonstrate the very real consequences of AI bias in practice. As AI becomes more prevalent in our society, it is crucial that we work to address these biases and ensure that AI systems do not perpetuate existing inequalities.

All courses were automatically generated using OpenAI's GPT-3.