Precrime Is No Longer Science Fiction

Biometric technologies increasingly use facial expressions, eye movements, voice patterns, and more to predict whether someone has committed or will commit a crime.

Every few months, a harrowing story of biometric misidentification makes headlines. Last April, a Black man named Trevis Williams was wrongfully arrested by the NYPD after facial recognition technology misidentified him. The man they were looking for was eight inches shorter, seventy pounds lighter, and, unlike Williams, actually in the vicinity of the crime that day. Such stories highlight the dangers, identified by Simone Browne, of encoding the body as evidence—and policing it accordingly.

Stories of misidentified faces may receive the most public attention, but biometric technologies far beyond facial recognition are currently being deployed for carceral ends. They encode and turn into data not just people’s face shapes but also their irises, their gaits, and even their veins. Increasingly, they are combined with predictive analytics to go beyond identification. Today, they might be used to predict, categorize, and infer who might be a suspect, who might be lying, who might be risky, and who might reoffend. By introducing an element of speculative imagination, these systems create racialized “suspects” before any wrongdoing has occurred.

Last month, one such example came to light: an AI model, trained on years of prison phone calls, is being piloted to help detect “when crimes are being thought about or contemplated.” Beyond undermining the presumption of innocence, claims that biometric technology can foretell crime fall under the logic of “precrime,” a term coined by author Philip K. Dick in his 1956 short story The Minority Report. That this idea was born in science fiction is telling: it underscores both the speculative foundations of such biometric systems and the distinctly dystopian path their widespread adoption would chart.

In this article, I draw on my work as a surveillance and carceral technologies researcher to highlight just a few of the ways this predictive reach is expanding. Law enforcement is using DNA to speculatively generate suspects’ faces; tracking eye movements to detect deception; analyzing voiceprints to flag “risky” individuals; and using electronic monitoring data to forecast reoffending. Altogether, these examples illustrate how technology can stretch the carceral system into the future, shaping who is watched, judged, and targeted before anything has even happened.


While biometric data has historically been used for identification—matching fingerprints against a database, for example—recent developments in machine learning have given rise to forensic DNA phenotyping, in which a person’s facial appearance is predicted from their genetic material. The idea is that an approximation of someone’s face—typically sketched by a composite artist based on surveillance video or eyewitness recollection—might now be extrapolated from a DNA sample.

This approach raises serious concerns. As Race After Technology author Ruha Benjamin notes, “the relationship between genes and facial variation is not at all clear.” The link between genetic material and facial appearance remains scientifically tenuous at best. It leaves room for law enforcement to draw speculative and racially biased conclusions—often producing suspect images that amount to what ACLU analyst Jay Stanley described as “a very generic-looking Black man.” Rather than narrowing a suspect pool, DNA phenotyping often produces a new racialized figure of suspicion.

One of the most well-known examples of DNA phenotyping occurred in Florida in 2015, when the Orlando Police Department paid private contractor Parabon NanoLabs to create a “Snapshot Phenotype Report” from DNA left at a crime scene fourteen years earlier. Parabon’s report predicted that the suspect had dark skin, dark eyes, no freckles, and African American ancestry. It even estimated the size and relative placement of facial features such as his nose, eyes, chin, mouth, and jaw. The Orlando Police Department released the computer-generated sketches to the public to solicit tips.

In 2020, a California detective asked to run a Parabon-rendered facial composite through a facial recognition database. It’s unknown whether the request was ever granted, but the case appears to be the first known instance of a police department attempting to use DNA phenotyping alongside facial recognition.


Another emerging use of carceral biometrics involves tracking eye movements to spot potential deception. EyeDetect, developed by the Utah company Converus, claims that it can identify whether someone may be lying by analyzing involuntary ocular movements during a thirty-minute AI-driven screening or a fifteen-minute “single issue” diagnostic. On its website, EyeDetect claims that it “accurately tests job applicants, employees, patients, parolees, drug users, athletes, criminal suspects and others about specific issues or crimes.”

The science underlying EyeDetect has been widely challenged. Contrary to EyeDetect’s slogan, it seems that the eyes do lie. Research shows no consistent way to link eye movements to lying. For decades, scientists have warned that lie detection based on physiological signals is deeply unreliable—essentially “junk science.” Because polygraphs depend on nearly arbitrary interpretations of data, they can become vehicles for racial bias. Like the more traditional lie-detection tools that came before it, eye-tracking technologies risk granting law enforcement scientific-looking cover for biased judgments.

Despite its shaky foundations, Converus’s clients include the federal government and twenty-one state and local law enforcement agencies. While most courts still reject polygraph results as evidence, EyeDetect has been admitted in at least one New Mexico case.


In prisons and jails across the United States, authorities now use automated systems to transcribe phone calls and visitation videos and to flag words or phrases deemed risky. For decades, correctional facilities recorded and reviewed calls manually, but AI-driven systems now allow authorities to scan millions of minutes of conversations in real time.

In the 2010s, prisons began using a biometric technology called voiceprinting, which identifies individuals based on the unique characteristics of their voices. It allows correctional facilities to identify who is speaking on any given call and to search for other calls featuring the same voice. Texas-based Securus Technologies, one of the largest providers of prison phone services in the United States, supplies sophisticated voiceprinting services to hundreds of correctional agencies.

There is no scientific consensus on the validity of automatic speaker recognition, and experts recommend exercising extreme caution when using voice recognition as evidence in court. Even Securus’s 2016 patent acknowledges that “each given person’s vocal tract characteristics actually vary in a number of ways depending on time of day, how much the person has been talking that day and how loud, whether or not the person has a cold,” and other factors. But prisons continue to collect voiceprints and build growing databases; at least 200,000 voiceprints have been stored thus far. Sometimes prisons pressure incarcerated people into giving up voice samples by threatening those who decline with a complete loss of communications privileges. In other instances, they enroll incarcerated people in voice recognition programs without their knowledge or consent. New York alone, for example, had already enrolled 92 percent of its incarcerated population by 2019.

In some jurisdictions, voiceprinting systems can be used to identify both incarcerated people and the individuals who speak to them. As representatives from the Electronic Frontier Foundation point out, such technologies can potentially be used to “profile anyone who has a voice that crosses into a prison, including all the parents, children, lovers, and friends of incarcerated people.” Advocates fear that authorities might flag individuals who are in touch with multiple incarcerated people, searching for patterns and ways to crack down on prison organizing.


Today, a growing array of wearable technologies—ankle monitors, bracelets that measure blood alcohol levels, smartphones themselves—are used to track people at nearly every stage of the criminal legal process.

A new generation of compulsory biometric devices, however, pushes far into dystopian territory, raising questions about how much biological information the carceral state feels entitled to collect. Some of these tools, already being tested in U.S. jails and prisons, take the form of rigid wristbands that monitor heart rate, skin temperature, cortisol levels, and so-called “activity” or stress indicators. According to the ACLU, they represent “not just a privacy invasion but an assault on inherent human dignity and autonomy.”

In some research initiatives, the data gathered by biometric devices is already being analyzed and operationalized. In Indiana, a team of computer scientists and developers at Purdue University used such data in 2020 to train an AI algorithm to predict recidivism. According to the team’s press release, the project—funded by the Department of Justice and conducted in collaboration with county-level corrections and law enforcement agencies—harvested data such as stress and heart rates via wearable bracelets and smartphones. The stated goal was to determine which physiological indicators are linked to an individual’s “risk of returning to their criminal behavior.”

But as scholar Brian Jefferson notes in Digitize and Punish, algorithms used for carceral means are not “simply mathematical objects” but rather “artifacts of governance designed to achieve specific objectives.” By focusing on internal, physiological states rather than structural conditions—such as access to housing, employment, health care, or social support—these models dismiss decades of work investigating recidivism and its social and economic causes. Those causes, as AI researchers Os Keyes and Chelsea Barabas have noted, are already well understood. What remains unsettled is why emerging technologies continue to search for answers inside the body, rather than in the systems that shape people’s lives.


Across these examples, a shared pattern emerges: the encoding of the body as evidence, often without the knowledge, consent, or recourse of those involved. This process strips people of their autonomy, dignity, and right against self-incrimination. Whether through DNA, eye movements, or physiological indicators of stress, these systems recast human bodies as sites of suspicion, deception, threat, or risk. Rather than eliminating human bias, they redistribute and reinforce it.

“Crime prediction algorithms,” Ruha Benjamin aptly explains, “should more accurately be called crime production algorithms.” Biometric tools are likely to expand further across the criminal legal system as police departments, courts, and prisons increasingly turn to AI-driven surveillance and predictive technologies. These tools are being deployed most aggressively in communities that are already heavily policed and disproportionately criminalized. Preparing for—and resisting—this expansion requires a broader understanding of biometrics beyond facial recognition alone, including the many ways bodily data can be collected and put to use. Fighting to ban facial recognition is not enough; it must be part of the larger fight to stop carceral biometrics and advance digital abolition.

Image: Tao Yuan / Unsplash