Legal algorithms seem up to no good. Some algorithms have incorrectly accused thousands of innocent people of unemployment fraud and fined them. Others have told child welfare agencies to investigate families for child abuse based, in part, upon the family’s poverty. Within criminal law, algorithmic risk assessments are among the worst offenders, encouraging judges to preventively incarcerate people based upon weak predictions of violence drawn from racially biased data.
But what if I told you that we’ve been looking at predictive algorithms the wrong way all along — and that they can be recruited as an unexpected ally in the fight to end mass incarceration? Properly understood, algorithms don’t cause inequalities or injustices in our criminal laws. They expose them. And in so doing, they can help advocates shape a future in which people, most of them people of color, are no longer incarcerated for things that they might do one day.
So far, algorithmic tools’ only purpose has been to optimize the law. It happens like this: Many criminal laws require prediction. Humans are bad at prediction, and algorithms are better. Human intuition just can’t compete with big data and statistical methods. Wherever the criminal law requires prediction, replacing humans with algorithms is a no-brainer for reducing bias and error. And lately, algorithms have been replacing humans a lot. In some places, criminal procedure is now algorithmic from start to finish. Based on predictions of wrongdoing, algorithms tell police to investigate, judges to incarcerate, probation to surveil, and parole boards to deny release.
Predictive algorithms were heralded as an objective way to tackle mass incarceration and its racial disparities, but wherever algorithms have taken over, a familiar pattern emerges. Researchers study the algorithm and find that it is following unfair rules, producing unfair consequences, or both. In recent years, legal algorithms have come under fire for producing racially inequitable outcomes and for sending people to jail and prison based on weak predictions of future violence. Algorithms intended to challenge mass incarceration seem instead to be perpetuating it. When these studies emerge, conscientious data scientists explore ways to recalibrate the algorithm to produce fair results. But they fail. And when they do, advocates seek to remove the unfair algorithm from the field. Removing algorithms means bringing back humans to make predictions. But decades of research have shown that humans are worse than machines at making predictions.
Something doesn’t add up. Algorithms are somehow both unjustifiable and the best option available. It turns out we’ve been looking at the problem the wrong way. Replacing humans with algorithms is the way to fix the criminal law only if the problem with criminal law is that humans are making poor predictions. But what if the problem with criminal law is the law itself? What if the most accurate, least biased applications of our laws would still result in racial inequities and mass incarceration? In these circumstances, replacing humans with algorithms won’t do the trick.
Optimizing injustice will simply produce optimized injustice — and worse, it will hide that injustice behind a mask of scientific objectivity.
Systemic, decarceral change requires transforming our laws — particularly the preventive incarceration laws that allow the state to imprison people not for what they have done but for what they might do. And as I argue in a forthcoming law review article, the same algorithms used to optimize the law can expose the harms that the law produces. Algorithms give us a window into how accurate and how fair (or how inaccurate and unfair) a law can be.
Simply put, algorithms can reveal how the law works by revealing how prediction works. Whether done by humans or machines, prediction is fundamentally the same: patterns from the past are used to anticipate the future. Both humans and algorithms attempt to find an underlying pattern that connects information about the present with outcomes in the future.
Machine-learning algorithms are really just high-powered pattern-recognition tools. They don’t exist in the wild. They have to be built — by people. Building a machine-learning model means testing the many ways that a particular prediction can be made. Through brute-force automation, the machine-learning process can uncover all the ways that a prediction can be made under the rules of a particular law. This process can capture a big picture of all the ways that the law can be applied and all the outcomes that the law can produce. In the normal course of building legal algorithms to optimize the law, this exploration process is used to find the one optimal way to make a legal prediction. That single prediction rule is kept, and the big picture is discarded.
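To make that concrete, here is a minimal sketch, in Python with scikit-learn, of what the exploration step might look like. Everything in it is hypothetical: the features `X`, the outcomes `y`, the function names, and the small grid of candidate models stand in for whatever a real pretrial dataset and modeling pipeline would contain. The only point is that the process naturally produces a whole collection of candidate predictors (the big picture) before most of it gets thrown away.

```python
# A minimal, hypothetical sketch of the "exploration" step described above:
# fit many candidate models under the same legal criteria and keep all of
# them, not just the single best one.
from itertools import product

from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeClassifier


def explore_candidate_models(X, y):
    """Return every candidate model paired with its estimated accuracy.

    X and y are assumed stand-ins: the features a law permits (say, prior
    arrests and age) and the outcome the law asks us to predict (say,
    failure to appear).
    """
    candidates = []
    # Brute-force a small grid of model families and settings.
    for C in (0.01, 0.1, 1.0, 10.0):
        candidates.append(LogisticRegression(C=C, max_iter=1000))
    for depth, leaf in product((2, 4, 8), (1, 10, 50)):
        candidates.append(DecisionTreeClassifier(max_depth=depth, min_samples_leaf=leaf))

    big_picture = []
    for model in candidates:
        score = cross_val_score(model, X, y, cv=5).mean()  # cross-validated accuracy
        big_picture.append((model, score))
    return big_picture  # the whole picture, not just the winner
```

Optimization, in this framing, just means picking the highest-scoring entry in that list; the exposure argument is about what the rest of the list can tell us.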
But the big picture is valuable in its own right, for at least two reasons. First, it allows us to see the limits of what the law can accomplish. Algorithms’ much-ballyhooed strength is that they exploit huge datasets and an array of statistical methods to outperform humans at making predictions. But because algorithms are the best available option for prediction, their shortcomings are telling. Limits on what algorithms can achieve reveal the limits of what a predictive law can achieve. Take the accuracy of pretrial incarceration laws as an example. The purpose of pretrial incarceration laws is to detain people who might flee or commit a violent crime while their criminal case is pending. That’s the prediction the law is trying to make. If our best machine-learning model can predict who will flee or commit a violent crime with a high degree of accuracy, that’s the best the law can do. Likewise, if our best machine-learning model can predict who will flee or commit a violent crime with a low degree of accuracy, then that’s also the best the law can do. The algorithm reveals the law’s ability to do what the law says it’s trying to do.
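Under the same hypothetical assumptions, the accuracy ceiling described above is simply the best score found anywhere in that collection; the helper below is illustrative rather than a real audit tool.

```python
def best_achievable_accuracy(big_picture):
    """Return the best candidate model and its cross-validated accuracy.

    Hypothetical illustration: if even this ceiling is low, then no
    decision-maker, human or machine, applying the same legal criteria
    can reliably make the prediction the law demands.
    """
    best_model, best_score = max(big_picture, key=lambda pair: pair[1])
    return best_model, best_score
```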
Second, the big picture can reveal when a law is stacked against particular groups, especially communities of color. The machine learning process reveals how a law’s outcomes will be distributed across the population. By including all possible predictions, the big picture can reveal when a law will necessarily harm certain groups — no matter how the law is applied. This mistreatment can happen even when a law is written in neutral language and has not been designed to be discriminatory. So long as a characteristic like race is associated with the factors the law uses to make its predictions, structural inequities will be reproduced in the law’s outcomes. As the computer science adage goes: garbage in, garbage out. But with algorithms exposing law, the garbage coming out is plain to see.
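The distributional point can be sketched the same way, still under the same assumptions: for every candidate model, compare error rates across groups. The `race` labels here are hypothetical audit data used only to measure outcomes, never as a model input. If no candidate in the big picture avoids a disparity, the disparity belongs to the law’s criteria, not to any one algorithm.

```python
import numpy as np


def audit_disparities(big_picture, X, y, race):
    """Yield each candidate model's false positive rate by group.

    Assumes y and the model's predictions are 0/1 arrays where 1 means
    "flagged as risky," and race is a per-person label available only
    for auditing. In-sample predictions keep the sketch short; a real
    audit would use held-out data.
    """
    for model, _score in big_picture:
        preds = model.fit(X, y).predict(X)
        for group in np.unique(race):
            did_not_reoffend = (race == group) & (y == 0)
            if did_not_reoffend.any():
                # Share of this group wrongly flagged as risky.
                fpr = preds[did_not_reoffend].mean()
            else:
                fpr = float("nan")
            yield model, group, fpr
```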
The payoff is not just academic. Take the risk assessment algorithms used for pretrial incarceration as a case in point. Some progressive prosecutors have joined activists in criticizing risk assessment algorithms for making racially biased predictions. But crucially, these prosecutors have not opposed a system of pretrial incarceration based on predictions of future dangerousness. They may talk of a clean break with their predecessors’ track record on pretrial incarceration, but their basic approach hasn’t changed: Dangerous people, they say, should stay in jail, and low-risk people should go free. This approach to pretrial incarceration came out of the tough-on-crime Nixon era, was codified into law in the 1980s, and has long been opposed by scholars and activists as violating the presumption of innocence and destroying communities in a racially discriminatory manner — all without improving public safety.
Activists could repurpose risk assessment algorithms to hold progressive prosecutors accountable for their pretrial practices. These algorithms can reveal how jailing people based upon predictions of dangerousness must result in the disproportionate incarceration of young Black men based on weak predictions of violence. Until now, progressive prosecutors have been able to disparage algorithmic risk assessments while discounting their complicity in a process that produces the same outcomes. But algorithms can show the world how even the least-biased, most accurate versions of these prosecutors’ pretrial practices are at odds with their commitment to decarceration and racial justice. An advantage of this approach is that it can be localized; it’s not an abstract claim. Data from the prosecutor’s own district can be used to show how the local community fares under current policies. And if a prosecutor’s office gets the message and changes its practices, this same data could justify that change in court and from the bully pulpit.
To end mass incarceration, our society must stop preventively incarcerating people for what they might do in the future. Algorithms can reveal how prediction fails as a limiting or neutral principle for preventive incarceration. Machine learning and artificial intelligence are already reshaping law and society. When used to optimize existing practices, legal algorithms consolidate and preserve systemic inequality. But this is not the only way that algorithms can be used. In a political environment in which legal reforms must be evidence-based — and in which most evidence-based reforms are only minor tweaks to current practices — algorithms have an overlooked potential to expose inequality and to provide empirical support for more radical, decarceral change.