How can criminal defense attorneys recognize and challenge issues of algorithmic bias, scientific validity, and “dirty data” in criminal cases?

Racist by Design: How Systemic Racism and Inherent Biases Manifest in Artificial Intelligence, Machine Learning, and Beyond

From policing and sentencing to incarceration and parole, every step of the criminal legal process can now be outsourced to algorithmic decision-making systems. Social media monitoring tools, risk assessment instruments, facial recognition software, and data-driven policing technologies are being designed and deployed at a rapid pace, with little to no interrogation of the ways in which such technologies can reproduce social hierarchies, amplify discriminatory outcomes, and legitimize violence against marginalized groups that are already disproportionately overpoliced.

This webinar, held on April 1, 2021, featured Rashida Richardson, Visiting Scholar at Rutgers Law School and the Rutgers Institute for Information Policy and Law; Cathy O'Neil, author, mathematician, and founder of ORCAA, an algorithmic auditing company; and Cierra Robson, a doctoral student in the Sociology and Social Policy program at Harvard University and the Inaugural Associate Director of the Ida B. Wells JUST Data Lab at Princeton University.

Presentation Slides

Presentation Slides from Rashida Richardson

Presentation Slides from Cierra Robson
