
Our Evidence-Based Obsession

Better research won’t get us out of our crisis of mass incarceration.


As the movement against mass criminalization continues apace, the “evidence-based paradigm” (EBP) for criminal justice reform has become increasingly influential. By “evidence-based paradigm,” I mean a “data-driven” approach to criminal legal reform in which researchers like myself aim to identify “what works” in the quest for decarceration.

We can trace the origins of EBP back to the dramatic increases in U.S. criminalization and related fiscal costs in the late twentieth century. When the 2008 recession imposed new fiscal strains on state and local governments, policymakers became increasingly receptive to cost-cutting reforms. To that end, criminologists partnered with criminal legal agencies to downsize prisons and reduce recidivism (though economists have become comparatively more influential of late).

While EBP proponents can claim partial credit for prison population declines in the early 2010s, it seems fair to say that the movement’s primary interest has been the cost effectiveness of criminalization. One 2017 report on EBP from the American Enterprise Institute calls for “a more efficient and cost-effective system that does more with less or, more precisely, more with the same amount.” I would not be the first to point out that this aspiration could increase the scope of the system instead of shrinking it.

To date, EBP has produced reforms designed to meet the needs of punishment bureaucrats. For example, actuarial risk assessment tools rationalize pretrial release decisions by removing human intuition from the equation (which alleviates political pressure on judges over their potentially racist intuitions). This mutually convenient relationship between researchers and punishment bureaucrats is already one good reason to raise an eyebrow, but I fear that EBP could be even more dangerous if it spills over into the broader movement against criminalization. This is a concern about a potential “public EBP,” contrasted with the “inside baseball” of existing EBP. Under such a paradigm, any given decarceration proposal must be vetted by a stack of randomized controlled trials.

In recent years, EBP proponents have argued that we need to produce high-quality research about criminalization in order to rationalize advocacy for reforms. Researchers, “practitioners,” and funders have linked arms in a pragmatic, high-minded quest to determine “what works.” Fortunately, this line of thinking has largely remained quarantined: the spillover into a public EBP remains a terrifying prospect rather than a grim reality. Yet policing research roundups in national media outlets, as well as widely shared academic critiques of defunding, suggest that EBP’s public influence is already on the rise. If that influence takes hold beyond the “inside baseball” context, I fear that organizers and advocates will increasingly contend with baseless skepticism toward proposals that have not been certified as “evidence-based.” It would be a movement ultimately governed by academics, nonprofits, and philanthropic organizations.

I am not writing about this issue from the outside: I am a sociology PhD candidate who researches criminalization, and I previously worked for criminal justice reform nonprofits. Through these experiences, I have come to an ambivalent view of the relationship between research and political advocacy or activism. Although improvements in causal inference research have largely been a good thing for the scientific community, I am increasingly concerned that this “way of knowing” is directly at odds with decarceral organizing.

First, I think that a public EBP would fail in the political arena. Americans became deeply punitive in the late twentieth century on cultural and moral grounds. In this context, the rationalist appeal to “what works” is like bringing a pool noodle to a shootout. Second, even if EBP could make a difference in the political fight against U.S. punitiveness, its narrow and incrementalist methodology would unnecessarily restrict anti-criminalization advocacy in both its scope and its pace. Third, despite researchers’ efforts to work within EBP and document the harms of criminalization, the fact that quantitative researchers depend upon state institutions for data tilts the scales against continued decarceration. I devote a section below to each of these three critiques.

Advocates and activists can and should make use of social science research when it fits their goals, but they should also feel free to ignore it. Although the increasing public influence of EBP benefits certain researchers (through funding, press, and academic prestige), professional self-advancement might not always align with the public good. If this group of funders and researchers keeps expanding its authority into the public sphere, I fear that the rigid and selective EBP standpoint will put blinders on the movement’s radical potential.


If something resembling present-day EBP had existed in the mid-twentieth century, its advocates would have found themselves working in an environment ideal for their pursuits: criminal justice professionals then focused on structural causes of crime and individualized treatment toward rehabilitation. David Garland calls this paradigm “penal-welfarism.” Although the rehabilitative ideal was hardly realized in practice, explicitly punitive sentiment was taboo, and deference to experts coincided with the public’s perception of the carceral state as more punitive than it actually was (e.g., through indeterminate sentencing).

This status quo ultimately wasn’t durable, in part because the broader ideal of welfarism was vulnerable to racist fearmongering from the start. After conservatives increased their attacks on welfare in the 1960s, progressives critiqued penal-welfarism more specifically. Because liberals failed to develop an effective response to growing public punitiveness, the penal-welfarist ideal was replaced by two contradictory frameworks. Among punishment bureaucrats, the neoliberal “criminology of the self” became paradigmatic (the rational individual, responsive to sanctions). In the realm of politics and public opinion, the neoconservative “criminology of the other” took hold (the threatening outcast requiring expressive retribution). Historically, EBP has operated within the first of these two frameworks.


Over the past fifty years, the neoconservative framework has come to dominate public opinion and politics. Perhaps surprisingly, this frustrated neoliberal bureaucrats, and some devised quiet tactics to mitigate increases in incarceration while avoiding political backlash. EBP advocates have rightly noted that the most extreme contributions to mass incarceration were never supported by evidence as effective tools to reduce serious interpersonal harm. Yet harsh criminalization “worked” to harm Black Americans in particular. It was no accident that in the 1988 presidential election, Willie Horton (an incarcerated Black man who committed a series of violent crimes while on furlough) was turned into a bogeyman to stir up racist bloodlust.

While corrections administrators might care about reducing recidivism and cutting costs, I am skeptical that EBP can persuade a broader audience that demands retribution. We can’t put the genie back in the bottle; if there was once an era during which inside-baseball persuasion tactics could dismantle mass criminalization, that era is long gone.


Let’s bracket the question of political effectiveness for now. If EBP were an effective approach to advocacy, what would an EBP-inspired future look like?

The recent past can help us learn about this potential future. Law scholar Cecelia Klingele notes that some results of EBP thus far include risk assessment tools, drug and mental health courts, and the end of certain mandatory minimums (among other line items). This is a mixed bag: Risk assessment tools seem bad, whereas ending mandatory minimums is obviously good. Klingele suggests that the historical EBP emphasis on reducing recidivism and cutting costs is neither benign nor necessarily in alignment with the movement to end criminalization. Along similar lines, Erin Collins previously argued in Inquest that EBP strengthens criminalization mechanisms by making them more efficient. This is why EBP wants punishment bureaucrats “at the table.”

EBP-style research emphasizes quantitative data, and its “gold standard” is the randomized controlled trial (RCT). Although the term “gold standard” is used to suggest that RCTs are superior to other methods, this is really a marketing strategy adopted by development economists and nonprofits in the early 2000s. This style of reasoning appealed to “philanthro-capitalist” funders who evaluated their donations in the same way that they evaluated investments.

When it comes to policy research, economics is king (or colonizer): U.S. policymakers today speak the language of cost–benefit analysis. This type of thinking has seeped into discourse about non-police violence interventions, for example, despite the fact that police have received hundreds of times more funding for decades. Earlier this year, ProPublica’s Alec MacGillis questioned whether credible messengers were worth the money. Through quotes from academics, he suggests that we can’t know for sure because Cure Violence (the violence interruption nonprofit discussed in his article) is “fundamentally difficult [for social scientists] to assess,” whereas other programs can be evaluated with RCTs. So, if we can’t design an RCT or a natural experiment for an intervention, an increasingly prominent line of thought seems to suggest that we should just drop it.

The economic style of reasoning will likely continue to shape the criminal justice policy world. Major funder Arnold Ventures recently appointed economist Jennifer Doleac to lead its criminal justice research portfolio. Doleac is infamous for her refusal to engage with research that doesn’t employ methods similar to RCTs, and perhaps most notably for her moral hazard paper invoking fictitious “naloxone parties.” She even created a list of criminal justice researchers who meet her standards, more than half of whom are economists.

Although I disagree with her perspective, I can understand it. In recent decades, economics experienced a “credibility revolution”: Most older economics research making causal claims through statistics was discredited. Other social science disciplines are, at least so far as economists are concerned, still catching up. Hence Doleac’s disregard for non-economists.

RCTs are a useful thinking tool here. In a clinical trial to test the efficacy of a new medicine, for example, a researcher randomly assigns which participants receive treatment A and which receive treatment B. In theory, this means any difference in measured outcomes is attributable only to the treatment. Many social scientists studying incarceration want this kind of specificity, too, in pursuit of answers to their research questions. Do pretrial detention and longer prison sentences actually reduce serious harms, or do they cause more harm than good? Did New York’s summer youth employment program cause fewer youth arrests, or was it something else?
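To make that logic concrete, here is a minimal sketch in Python of how random assignment works, using an entirely made-up program and invented risk numbers rather than any real study: because assignment is random, the simple difference in group means estimates the program’s average effect.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10_000  # hypothetical study participants

# Randomly assign each person to the program (1) or the control group (0).
treated = rng.integers(0, 2, size=n)

# Simulate an outcome (say, rearrest within a year) where the hypothetical
# program lowers the underlying risk from 30 percent to 25 percent.
baseline_risk = 0.30
program_effect = -0.05
rearrested = rng.random(n) < (baseline_risk + program_effect * treated)

# Because assignment was random, the difference in means is an unbiased
# estimate of the program's average effect (it should land near -0.05).
estimate = rearrested[treated == 1].mean() - rearrested[treated == 0].mean()
print(f"Estimated effect on rearrest rate: {estimate:+.3f}")
```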

However, it’s unethical to run experiments in which subjects would be assigned to conditions the researcher knows are harmful, or more harmful than the alternative. (This doesn’t always stop researchers from doing so.) But we still want to study questions like these. Enter “natural experiments,” where sources of randomness or sudden change in the social world are exploited by statistical models to isolate the process of interest. These approaches can be more complicated than RCTs. For example, we might believe that the “treatment” is not randomly assigned in the real world; this was the case in my recent study of traffic stops and voter turnout, where Black drivers were stopped at higher rates. We deal with this kind of concern by trying to drill down into narrower groups of people in the data where we can credibly claim an approximation of random treatment assignment.
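Here is a toy illustration of that “drilling down,” again with invented numbers rather than anything from my traffic-stop study: when some background factor drives both who gets the “treatment” and the outcome, the raw comparison is misleading, but comparing people within narrower groups gets closer to the true effect.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 20_000

# Toy setup: a confounder (say, neighborhood) raises the chance of being
# "treated" and lowers the outcome, so the raw comparison is biased.
neighborhood = rng.integers(0, 5, size=n)
treated = rng.random(n) < 0.1 + 0.1 * neighborhood
outcome = 0.6 - 0.05 * neighborhood - 0.03 * treated + rng.normal(0, 0.1, size=n)

# Naive comparison mixes the true effect (-0.03) with the confounding.
naive = outcome[treated].mean() - outcome[~treated].mean()

# "Drilling down": compare treated and untreated people within each
# neighborhood, where exposure is closer to as-if random, then average.
within = np.mean([
    outcome[treated & (neighborhood == k)].mean()
    - outcome[~treated & (neighborhood == k)].mean()
    for k in range(5)
])

print(f"Naive difference:      {naive:+.3f}")
print(f"Within-group estimate: {within:+.3f}  (true effect is -0.030)")
```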

This means that credible causal claims in social scientific literature today are extremely narrow. We can estimate the average effect of a program in a particular place, at a particular time, with particular people—but it’s not certain that the same finding will apply under different conditions. If a study finds that cops in Dallas don’t try to earn extra overtime pay by making unnecessary late-night arrests, that doesn’t have much bearing on lawsuits about New York cops doing exactly that. Social scientists have been pretty straightforward about this limitation in academic venues, but this distinction is rarely made in policy advocacy or media quotes.

Another form of narrowness we can see in recent policy research is the emphasis on incremental interventions that can be plausibly isolated. Researchers try to find out what an intervention changes with “all else held equal.” Empirically, this can be a good thing, but the political implications are underappreciated. Under a dogmatic adherence to “what works,” advocates would only be justified in backing proposed interventions which can be studied using causal inference methods—one small step at a time.

Imagine the Green New Deal (or a Green New Deal for decarceration). These kinds of proposals would dramatically change many features of society, all at once. From the perspective of the causal inference researcher, the effects of specific components of such an intervention may become unknowable. And if researchers can’t know what works among all those component parts, how could they support broad-reaching proposals?

Requiring such a specific standard of evidence in order to advocate for an intervention necessarily narrows the range of acceptable interventions to smaller and more measurable ideas. This is why EBP could be even more dangerous if it escapes its philanthro-capitalist confines and constrains the decarceration movement at large. While the emphasis on certain types of data and methods has been advantageous for certain groups of academics, it’s less clear whether it benefits criminalized people.


The EBP emphasis on quantitative data raises still more issues. It’s widely acknowledged that criminal justice data are terrible. As a researcher, I’m not opposed to efforts to improve the data that I work with, but I don’t think that improvements in data quality are “equal opportunity,” so to speak. It is fairly common for researchers to enter into restrictive data use agreements with criminal legal agencies in order to collect better data. This can conflict with aspirations toward data transparency and replication: scientific norms call for sharing the data behind published findings so that others can reproduce them. That is a concern about the integrity of science, but there are also political problems. Law enforcement can freely deny data access to any researcher who is perceived to be too critical, which makes political moderation and “relationship management” key for the aspiring researcher.

Structurally, then, high-quality quantitative criminalization research often depends upon the political calculations of police chiefs and corrections commissioners. If this kind of research is the minimum threshold for “what works,” won’t the substance of EBP proposals tilt toward the preferences of cops and corrections officers (or their relationships with certain academics)? This isn’t to say that every study supporting the efficacy of policing or incarceration can be discredited, but rather that, on balance, the scales will tilt toward the status quo. In this sense, “what works” could boil down to “what the bureaucrats will tolerate.”


One might object that EBP need not be limited to its present shortcomings—that we could change the outcomes of interest and the methods for the better, perhaps meeting the critical scholar’s demand for community engagement in, and governance of, research. This would move EBP away from its roots as a handmaiden to criminal legal bureaucratic efficiency, and perhaps academics should do this to improve our research. But it wouldn’t justify EBP as a political priority.

In brief, why should we create new hurdles for decarceration? If research was not required to create mass criminalization, it’s not clear why reversing that trend should require it. If left to the methodological preferences of EBP’s proponents, efforts to shrink prison and jail populations will take too long, since establishing credible evidence for a proposed policy change can take years or even decades—and, again, high-quality causal inference research is definitionally narrow.

While recent reforms have made a dent in U.S. criminalization, we can’t afford to slow down. Following the historical high point of U.S. incarceration in 2009, prison populations declined by an average of 2.7 percent annually through 2022. If this pace held constant, incarceration rates would not return to 1972 levels until 2069. In fairness, these reductions are partially attributable to EBP, but I fear that the efficacy of “inside baseball” is hitting a wall. For example, while reform prosecutors have successfully downsized jail populations across the United States, racist backlash to the movement is rapidly gaining momentum despite Doleac’s excellent causal inference research showing that reform prosecutors don’t harm public safety.
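For readers who want to check that arithmetic, here is the back-of-the-envelope projection in Python. The 2.7 percent figure comes from the text above; the roughly one-fifth ratio of the 1972 incarceration rate to the 2009 peak is my own illustrative assumption, chosen because it is what the 2069 figure implies rather than taken from an official source.

```python
import math

annual_decline = 0.027   # average yearly drop cited above
peak_year = 2009         # historical high point of U.S. incarceration

# Assumed ratio of the 1972 incarceration rate to the 2009 peak.
# Roughly one-fifth is what the 2069 projection implies; it is an
# illustrative placeholder, not an official statistic.
target_fraction = 0.194

years_needed = math.log(target_fraction) / math.log(1 - annual_decline)
print(f"Years of 2.7% declines needed: {years_needed:.0f}")   # about 60
print(f"Projected year: {peak_year + years_needed:.0f}")      # about 2069
```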

We need the largest changes we can push through, as quickly as we can possibly get them. New York’s bail reform rollback saga helps us to understand why. Despite constant sober reminders that a temporary increase in interpersonal violence could not be attributed to bail reform, we still saw three rounds of rollbacks to a law that already included serious concessions to the tough-on-crime camp. During the first round of rollbacks in 2020, I toed the EBP line and argued that rollbacks were totally irrational and had no grounding in research. This kind of argument was a total flop. The bloodlust that I wanted to undermine is indifferent to even the most rigorously analyzed data.

Instead, why don’t we advocate for what we believe in? U.S. social scientists seem to be especially cagey about stating their political values, perhaps due to the mid-twentieth-century fantasy of apolitical scientific practice. Although it might be flattering to imagine that we stand on the shore watching the political fish swim from afar, in reality we’re right in the water with everyone else. Rather than insisting on deference to experts, we might instead contribute to political education and build power in the movement against criminalization.

After decades of fearmongering crime news coverage and racist political campaigns, we have to fight back on the same terrain of emotions and values. As an abolitionist, I prefer a certain articulation of values—abundance and solidarity instead of austerity and punishment—but I could imagine more moderate iterations, too. This would involve messaging in favor of things that make life better for marginalized people as well as messaging against policing and incarceration.

I am not so bold as to claim that I hold all the answers to the question of “what works,” politically, or that research can never facilitate advocacy. I only want to caution against dogmatic adherence to EBP. Advocates and activists should make use of our research when it benefits them and ignore us when it doesn’t.

For fellow researchers, I believe that we can and should contribute our skills to campaigns or movement formations, but we should join forces as equals. This means stepping back from the rationalist conceit of EBP, being willing to set our professional interests aside, and getting our hands dirty through ideological struggle.

Image: Ross Elder/Unsplash