
The Algorithmic School-to-Prison Pipeline

With little transparency or oversight, technology is being used to flag youth as risks to public safety and to decide who is surveilled, arrested, and confined.


In the fall of 2020, the Tampa Bay Times broke a story that rocked communities in a western county of Florida. For at least nine years, the Pasco County Sheriff’s Office had been keeping a list of “at-risk” children they thought could “fall into a life of crime.” They did so without the knowledge of the affected children or their parents.

The office’s Intelligence Led Policing manual, revised in 2018, described the factors that might help identify a child—who might be as young as five or six years old—as “at-risk.” The factors it listed included “low intelligence,” “coming from broken homes,” being “socio-economically deprived” and maintaining a “poor school record.” Children who had never committed an offense could still be flagged, their traumas filed away not as harms done to them, but as statistical predictors of what they might do next. Simply growing up in the wrong neighborhood, the wrong family, or the wrong circumstances was, in and of itself, damning.

Between 2019 and 2021, Pasco County’s At-Risk Youth List ranged from 10,000 to over 17,000 students, with hundreds of students flagged for additional monitoring each year. Deputies were instructed to show up at the homes of those on the list repeatedly. They harassed families in the middle of the night—often without warrants or probable cause—looking for reasons to make arrests. In December 2024, the department settled a civil rights lawsuit brought by affected families, paying $105,000 to four residents and formally admitting constitutional violations.

The Pasco County sheriff’s program lays bare the current state of the algorithmic school-to-prison pipeline: interconnected systems in which surveillance infrastructure concentrated in Black and brown communities marks young people as threats and feeds them into the criminal legal system. The pipeline doesn’t begin with an arrest, or even an offense. It can begin with a grade, a counselor’s note, a zip code.


The public conversation about carceral algorithms tends to focus on adults: wrongful arrests, biased sentencing tools, predictive policing in urban neighborhoods. When it turns to children, it emphasizes how chatbots can cause or amplify self-harm and contribute to the creation and spread of child pornography. These harms are real, alarming, and well-documented. But they risk obscuring something quieter—the algorithmic criminalization of youth, which intercepts young people before they’ve had any meaningful encounter with the criminal legal system and routes them toward one anyway. The same logic that reshapes adult criminal justice quietly extends into schools, child welfare systems, and juvenile facilities. Young people are being marked as threats before they’ve had the chance to become anything at all.

Consider the Strategic Subject List the Chicago Police Department maintained for about a decade, a secretive “heat list” ranking individuals by their predicted likelihood of being involved in a shooting, as either victim or perpetrator. Before the list was quietly decommissioned in 2020, it surveilled hundreds of thousands of Chicagoans, with analyses noting a disturbing bias toward young people—specifically young African American men. Today, algorithmic risk assessment tools are deciding the futures of youth in Ohio’s juvenile justice system and New York City’s child welfare system, shaping interventions, case management, and supervision based on predicted risk.


And then there is Adam Toledo. On March 29, 2021, a Chicago police officer, responding to a gunshot detection system alert, chased a thirteen-year-old Mexican American child into an alley and shot him in the chest after he dropped a gun and put his hands in the air. The gunshot-detection system involved, ShotSpotter, does not detect crime; it detects sounds that may or may not be gunshots. But the AI-powered technology saturates Black and brown neighborhoods with police deployments, multiplying the high-stakes encounters between officers and the young people who live there. The alert brought the officer into that alley. The infrastructure created the encounter. Adam Toledo died from it.

As school districts race to adopt AI literacy standards and hand students access to generative tools, many of the roughly 32,000 youth locked up in carceral facilities across the country can’t get internet access at all. Meanwhile, AI is actively deciding the futures of these same youth—who stays confined, what interventions they receive, and how closely they are monitored—often with almost no transparency or input from those affected. The same society that wants to teach kids to use ChatGPT is using algorithms to decide whether they get to come home.


What unites these algorithmic interventions is a logic popularized by nineteenth-century criminologist Cesare Lombroso: the belief that criminality can be predicted from characteristics rather than actions. 

While Lombroso used skull measurements and facial features—he once cited “the projection of the lower face and jaws (prognathism) found in negroes”—today’s systems use grades, neighborhoods, and even abuse histories. This is digital Lombrosianism, and its purest expression is perhaps the Pasco manual, which listed the circumstances of a child’s life as potential evidence of their future crimes.

The tools of digital Lombrosianism vary. In some cases, it’s direct data-sharing between schools and police, as in Pasco. In others, adult risk assessment tools like COMPAS, originally designed for criminal-court decision making, have been repurposed for juvenile proceedings—applying the same actuarial logic to children whose lives are still unfolding. In still others, it is surveillance infrastructure—such as SoundThinking or school-based facial recognition—concentrated in Black and brown neighborhoods, ensuring that young people in those communities live in near-constant contact with law enforcement in ways their peers elsewhere do not.

Amber Baylor of Columbia Law School, who works with young people caught in precisely these systems, put it to me plainly: “Communities that are already heavily policed experience surveillance technology very differently than others. Something that feels innocuous to some communities can be oppressive and harmful to others.” She added: “Youth are often the ones most impacted by policing and criminalization, especially in schools and public spaces.”

The numbers make the disproportionate targeting painfully clear. In Pasco County’s school district, Black students and students with disabilities are already twice as likely to be suspended as white students. Those were the disparities being fed into the algorithm. The output could only amplify them. Similarly, the data fed into Chicago’s Strategic Subject List reflected and amplified existing biases.


In early March, Montgomery County Public Schools in Maryland began piloting AI-powered weapons detection across three high schools, fitting existing security cameras with software from a company called Volt AI that claims to identify weapons, fights, and medical emergencies in real time. Volt AI’s CEO has said the system doesn’t use facial recognition, doesn’t make decisions autonomously, and cannot identify race or gender. But such assurances cannot, on their own, prevent real-world harms.

A few months ago, an AI-driven security system outside Kenwood High School in Baltimore County misidentified a crumpled Doritos bag as a firearm. After a miscommunication within the school, the alert was forwarded to law enforcement, and numerous police cars responded to the scene. Officers ordered Taki Allen, a sixteen-year-old Black student, to the ground at gunpoint, searching and cuffing him before realizing he was unarmed. The technology itself didn’t act alone, but the incident shows how even safeguards can fail in practice. How Montgomery County’s pilot will unfold—and what consequences it may bring—remains to be seen.

Every new surveillance technology arrives with some form of assurance: it’s not facial recognition; it’s not autonomous; it’s not biased; it’s only a tool. Every tool becomes the baseline for the next one, often expanding the scope of surveillance without anyone noticing. 


What tends to get left out of the carceral AI conversation is that young people haven’t been passive in the face of these systems. They’ve been among the most effective organizers against them. And they’ve been winning.

In 2020 fifteen-year-old Sneha Revanur campaigned against California’s Proposition 25, which would have replaced cash bail with an algorithmic risk assessment system. Her youth-led coalition helped defeat the measure by a thirteen-point margin despite the political establishment’s support, one of the rare instances of voters rejecting a tech-driven “reform” outright. Soon after, she founded Encode Justice, a national youth organization advocating for the regulation of artificial intelligence. The organization—which has over 1,000 members and a partnership with the Electronic Frontier Foundation—spans 40 states and 30 countries.

In 2019 high school students in New York City organized against a $30 million school surveillance initiative under the slogan “Counselors Not Cameras.” Two years later, in 2021, youth organizers in Massachusetts partnered with Encode Justice and the state’s ACLU for a statewide Week of Action demanding a ban on facial recognition in schools. In 2024 student-led coalitions—including Encode Justice—delivered over 12,000 petition signatures to the U.S. Department of Education, again pressing for the same ban. Meanwhile, in Chicago, youth and community organizers were part of a broader push that helped build pressure to end the city’s ShotSpotter program. 

Baylor told me something that keeps coming back to me: “Movements like Ferguson were heavily informed by youth leadership, and that energy created real shifts in national conversations. If you see something that should be different, don’t be discouraged if others say it’s impractical.” She didn’t need to add that the people telling you it’s impractical are usually the ones who built the thing you’re trying to dismantle.


The Justice Education Project (JEP), the first national Gen Z–led criminal justice reform organization, has documented many of these dynamics in our research and publications. Last year, we published Next Steps Into Criminal Justice Activism: Technology, Ethics, and the Future of Justice, an examination of how algorithmic systems reinforce punishment instead of disrupting it. The book’s findings have been briefed to the ACLU Criminal Law Reform Project, the Sentencing Project, and other leading reform organizations, demonstrating that youth-authored research can shape policy discussions rather than just serving as fodder for classroom debate.

This spring, in partnership with UC Berkeley’s Incarceration-to-College Program and law professor Jonathan Simon, we launched a civic writing initiative inside Alameda County Juvenile Hall, bringing that same ethos directly to the young people most affected by these systems—including the carceral technologies shaping their lives. They shouldn’t be the mere subjects of these systems. They should be their critics, their auditors, and their opponents.

The most important lesson from the organizing wins above is that communities can’t fight what they can’t name. The Stop ShotSpotter campaign chose, deliberately, not to center debates about the technology’s efficacy: centering efficacy accepts that the technology should exist. They centered power instead: who benefits, who is harmed, who decides. That shift, from technical critique to political analysis, is what political education makes possible.

Teaching kids to use AI is one thing—teaching them how AI is used against them is something else entirely. An AI literacy that teaches students how to use generative tools fosters job skills. A political education that teaches them how those tools are used against them fosters survival skills. These are not the same thing, and we’ve been funding the first while ignoring the second almost entirely.

The young people who organized against Proposition 25, demanded a ban on facial recognition in schools, and chanted “Counselors Not Cameras” outside the New York City Department of Education did not need to be computer scientists. They needed to understand how these systems function, whose interests they serve, and what alternatives exist. That’s what we at JEP are trying to provide: the political vocabulary to see the pipeline clearly, and the organizing tools to stand in its way.

The modern school-to-prison pipeline starts with school data feeding into sheriff’s lists, moves through risk prediction scores that can land youth in juvenile detention, and extends to surveillance that shapes how young people experience their neighborhoods. But it isn’t inevitable. It has been interrupted before, by young people who understood what they were up against. Understanding it, naming it, tracing it, refusing it. That’s where the work begins.

Image: Joshua Hoehne / Unsplash