Artificial Intelligence has become increasingly essential in our world today, shouldering roles in many industries, including the criminal justice system. With its promise of efficiency and accuracy, it has found its way into applications such as risk assessment tools, facial recognition, and predictive policing. This integration, however, has not come without controversy over discriminatory bias and its effects on the system. In a 2016 ProPublica study, Jeff Larson, Surya Mattu, Lauren Kirchner and Julia Angwin found that COMPAS (Correctional Offender Management Profiling for Alternative Sanctions), a tool used to score “Risk of Recidivism” and “Risk of Violent Recidivism”, was more likely to incorrectly flag Black defendants as being at high risk of recidivism than white defendants.
As a 2021 study by Douglas Yeung, Inez Khan, Nidhi Kalra, and Osonde A. Osoba shows, bias can creep into an AI system at any point, from initial planning and acquisition to eventual deployment. These biases can stem from several factors, such as bias in the data on which the AI system is built (due to sampling discrepancies or historical biases in the data) and human bias in how the AI is deployed. Allowing these AI systems to be used without proper precaution can further perpetuate the systemic discrimination already faced by minorities, as seen in the case of Delia, a predictive AI system used in Italy to profile people and “predict” their future criminality.
Training Data Bias
The data used to build AI systems intended to make the criminal justice system more efficient plays a huge role in determining the discriminatory bias embedded within them. For example, if a dataset contains comparatively little information about people from a certain community, that community is under-represented and the information is unevenly distributed. Collecting the data needed to build these systems also introduces inequities in the characteristics of the data itself: how a person’s face appears in a photo can vary with factors such as lighting, which can lead to inaccurate data being collected. This can have serious consequences; the 2016 Perpetual Line-Up study found that African Americans were more likely than others to be subjected to facial recognition searches.
AI models trained on biased data will reflect systemic inequalities in society and can propagate them further. For example, if historical data shows disproportionately high arrest rates for certain communities, individuals from those communities could be placed at higher risk of over-policing and discrimination, perpetuating a cycle of systemic discrimination in which marginalised communities are unfairly targeted. Similarly, if the data shows higher recidivism rates for a particular demographic, people from that demographic could be labelled high-risk, leading to longer sentences or denial of parole. These scenarios show how essential it is to mitigate training data bias in AI.
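As a rough illustration of the first half of this problem, a simple audit of the training data can surface both kinds of imbalance (under-representation and skewed historical labels) before any model is trained. The sketch below is hypothetical: the file name and the column names (`group`, `rearrested`) are placeholders rather than references to any real system.

```python
import pandas as pd

# Hypothetical historical dataset; file and column names are placeholders.
df = pd.read_csv("historical_arrests.csv")

# 1. How is each demographic group represented in the training data?
representation = df["group"].value_counts(normalize=True)

# 2. What re-arrest rate does the data record for each group? Large gaps
#    here often reflect historical policing patterns rather than true
#    differences in behaviour, and a model trained on this data will learn them.
base_rates = df.groupby("group")["rearrested"].mean()

print("Share of training data per group:\n", representation, "\n")
print("Recorded recidivism rate per group:\n", base_rates)
```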
Algorithmic Bias
Algorithmic bias refers to bias created by the design of the AI system itself rather than the data it is built from. It can arise, for example, when certain statistical methods and feature selections favour one group over another. A 2017 study conducted at the Massachusetts Institute of Technology found that facial recognition error rates can be up to seven times higher for women than for men, and a similar 2012 study of facial recognition systems found that they are more likely to misidentify women, Black people, and individuals aged 18-30 than others.
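One concrete way to check for this kind of disparity is to compare error rates across demographic groups on a held-out evaluation set, which is essentially what the studies above do. The sketch below uses made-up toy data and placeholder names; it simply computes false positive and false negative rates per group.

```python
import pandas as pd

# Toy evaluation set: true outcomes and model predictions, with a
# demographic attribute attached. All values and names are made up.
results = pd.DataFrame({
    "group":     ["A", "A", "A", "A", "B", "B", "B", "B"],
    "actual":    [0, 0, 1, 1, 0, 1, 0, 0],   # 1 = re-offended
    "predicted": [1, 0, 1, 0, 0, 1, 0, 1],   # 1 = flagged high-risk
})

def error_rates(g: pd.DataFrame) -> pd.Series:
    """False positive and false negative rates for one group."""
    negatives = g[g["actual"] == 0]
    positives = g[g["actual"] == 1]
    return pd.Series({
        "false_positive_rate": (negatives["predicted"] == 1).mean(),
        "false_negative_rate": (positives["predicted"] == 0).mean(),
    })

# Error rates that differ sharply between groups are the signature of the
# disparities described above (e.g. ProPublica's findings on COMPAS).
print(results.groupby("group")[["actual", "predicted"]].apply(error_rates))
```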
The use of biased AI systems in criminal justice decision-making can have serious legal and ethical consequences and erode trust in the justice system. Discriminatory practices could violate civil rights and invite legal challenges that call the legitimacy of the AI into question. Such contradictions of fairness, justice, and equality could lead the general public to lose trust in the system, reducing cooperation with law enforcement and lowering crime reporting rates. In light of this, we must take every precaution to minimise algorithmic bias as far as possible and avoid these legal and ethical troubles.
Human Bias in Deployment
Human bias arises when the individual using the AI is influenced by the biased output they are given, or when their own preconceptions about the world shape how they use the AI to reach a judgement. Studies have shown that individuals affected by biased data can internalise it even after they stop using the AI system: wrong information or inferences produced by the AI can stay in the mind of the person passing judgement and skew their decision-making. A real-life example is the city of Santa Cruz, which banned predictive policing software in 2020 over concerns that its biased outputs were shaping policing decisions.
How Do We Move Forward?
With bias mitigation proving to be an essential component of integrating AI into the criminal justice system, efforts are being made to explore techniques and strategies for achieving it. These include deliberate sampling, re-sampling, and re-weighing of data to ensure balanced representation and so minimise training data bias (a rough sketch of re-weighing follows below). Similarly, toolkits designed specifically to recognise and minimise bias, such as IBM's AI Fairness 360, support the transparency and explainability of the algorithms used in the AI. Finally, training sessions for the individuals responsible for using these AI systems to pass judgement could establish ethical guidelines that keep human bias in check.
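As a rough, hypothetical sketch of what re-weighing can look like in practice, the snippet below assigns each training sample a weight so that demographic group and outcome label become statistically independent in the weighted data. This follows the general re-weighing idea that preprocessing tools such as IBM's AI Fairness 360 implement; the file and column names (`group`, `label`) are placeholders, not real identifiers.

```python
import pandas as pd

# Hypothetical training data; file and column names are placeholders.
df = pd.read_csv("training_data.csv")

# Marginal and joint distributions of group membership and outcome label.
p_group = df["group"].value_counts(normalize=True)
p_label = df["label"].value_counts(normalize=True)
p_joint = df.groupby(["group", "label"]).size() / len(df)

# Weight each (group, label) pair by the probability it would have if
# group and label were independent, divided by its observed probability.
weight = {
    (g, y): p_group[g] * p_label[y] / p_gy
    for (g, y), p_gy in p_joint.items()
}
df["sample_weight"] = [weight[(g, y)] for g, y in zip(df["group"], df["label"])]

# The weights can then be passed to most training APIs, e.g.
# model.fit(X, y, sample_weight=df["sample_weight"]).
```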