Eyewitness identification has a well-deserved reputation for being unreliable and is known to have led to wrongful conviction, incarceration, and even execution of those incorrectly identified. Black Americans are much more likely to lose liberty or life due to mistaken identification than whites. With the ever-increasing availability of technological alternatives to eyewitness identification, one might think that the criminal legal system is on the verge of abandoning this deeply flawed form of evidence. But as recent experience with facial recognition technology teaches, the injustice wrought by eyewitness identification will not be undone by technology alone.
In the past decade alone, surveillance cameras have proliferated to an astronomical degree in public and private spaces alike. As of 2021, an estimated 85 million such cameras had been installed in the United States, to say nothing of the ubiquity of cell phone cameras. This unprecedented increase in video surveillance has coincided with increased demand among law enforcement agencies for facial recognition technology—computer programs that use algorithms to analyze images, ascertain their degree of similarity to one another, and suggest potential “matches” based on predetermined thresholds of similarity. The most recent publicly available estimates suggest that facial recognition technology is available to officers in a quarter of state and local law enforcement agencies and half of federal law enforcement agencies—but these figures (from 2016 and 2021, respectively) almost certainly understate the scale and scope of law enforcement access to facial recognition technology. For example, in May 2023, Clearview AI told the BBC that it had run nearly a million facial recognition searches on behalf of U.S. law enforcement agencies. In April 2025, the Milwaukee Journal Sentinel reported that police there were seeking free access to a facial recognition database in exchange for providing its owner with 2.5 million mugshot photos. A month later, the Washington Post reported that for a span of two years, New Orleans police had been secretly using a network of 200 facial recognition cameras to monitor wide swaths of the city in real time.
But these tools are deeply flawed and far from being a solution to the injustice and inequality that eyewitness identification has wrought over the years. They are limited by the racist and gendered biases of their training sets, by the poor quality of crime scene and surveillance camera footage, and by the limits of the technology itself. In other words, the use of facial recognition to identify suspects is rife with the exact same problems as eyewitness identification—with the aggravating difference that its results are presented as scientific, objective, and irrefutable. Given these limitations, it would be naïve to expect facial recognition to eliminate racially disparate wrongful arrests and convictions.
It would be just as naïve to think that facial recognition technology will ever cure the criminal legal system’s dependency on shoddy eyewitness identification practices. These practices have endured and likely will continue to do so in some form, not because they are particularly reliable, but because they are expedient to the mass incarceration of those whom society is prepared to accept as criminal—poor people and people of color, those least equipped to defend themselves against charges. When a new technology like facial recognition emerges, the old methods don’t fall by the wayside. Rather, they work in tandem toward the same goal.
Unfortunately, these concerns are not abstract. Faulty facial recognition matches are already contributing to arrests and convictions. Below I analyze five case studies of people being ensnared in the criminal legal system because of faulty facial recognition technology, usually in concert with incorrect eyewitness identification.
Porcha Woodruff was eight months pregnant when police arrived at the home she shared with her daughters (ages six and twelve) and fiancé to arrest her for a carjacking she did not commit. The carjacking had allegedly occurred two weeks earlier, as the driver was dropping off a woman he had picked up earlier the same day. The driver had taken the woman to a liquor store and a gas station before dropping her off at a third location, where someone took his vehicle and possessions at gunpoint. After reviewing gas station surveillance footage, the lead detective asked an analyst to run a facial recognition search on an image of the woman who had returned the victim’s phone after the carjacking. This search returned a possible match to a photo of Woodruff. A few days later, the detective asked the driver to review a six-person photo array that included Woodruff. The driver, in turn, identified her as the woman he had picked up. Police came to arrest Woodruff as she was getting her daughters ready for school. After being led out of her home in handcuffs, she was interrogated and jailed for eleven hours, during which time she suffered from sharp pains, spasms, and contractions. She was released after posting a $100,000 personal bond and eventually cleared of wrongdoing, perhaps owing to the fact that, unlike the suspect, she was in her third trimester of pregnancy.
Nijeer Parks was jailed and prosecuted for shoplifting, resisting arrest, and aggravated assault thanks to shoddy police work exacerbated by facial recognition technology. The incident occurred at a Hampton Inn in Woodbridge, New Jersey, when the police tried to arrest an alleged shoplifter who had presented them with a fake ID. Before the police could put their suspect in handcuffs, he escaped and fled the scene in a rental car, hitting a parked police car and narrowly missing an officer, who jumped out of the way. By running the fake ID through multiple facial recognition databases, police learned of a potential match to Parks. After deciding that the fake ID matched Parks’s New Jersey driver’s license photo, the lead detective sought confirmation by presenting Parks’s photo to an eyewitness. After the eyewitness identified Parks, he was arrested and spent ten days in jail. His case was pending for months before the state dismissed it. In the process, he incurred roughly $5,000 in legal fees and seriously considered accepting a plea offer, despite having very strong alibi evidence in the form of a Western Union receipt that placed him thirty miles away at the time of the incident.
In Christopher Gatlin’s case, a detective sought to identify the men who assaulted a security guard outside St. Louis by uploading blurry surveillance images of the suspects to a facial recognition database. For one assailant, whose face had been obscured by a surgical mask and a hood, the program returned a list of potential matches that included Gatlin—a twenty-nine-year-old father of four with no apparent ties to the incident or the place where it occurred. Although the security guard remembered little about the incident, detectives elected to present Gatlin’s photo to him anyway in a six-person grid. After narrowing the photos down to two—one belonging to Gatlin and another belonging to someone with a much lighter complexion—the security guard set Gatlin’s photo aside, identifying the other man as his attacker. In response, the lead detective told the security guard to “think about the characteristics” of his attackers, including hair and complexion. When the security guard mentioned one attacker had been wearing a hat or stocking cap, the lead detective told him to place a hand over the top of the photos to help him better visualize a head covering. After complying, the security guard picked Gatlin’s photo. This time, the detective responded “OK!” and told the security guard to circle and sign Gatlin’s photo.
Faulty facial recognition and eyewitness identification similarly combined to allow the arrest and prosecution of Michael Oliver for breaking a Detroit school teacher’s cell phone. The teacher had been using his phone to film a brawl outside of a school when one of the participants snatched it and threw it to the ground, cracking its screen. Although the teacher identified a former student named Terry as the perpetrator, police abandoned this lead without interviewing a single student after a facial recognition search yielded a potential match to a photo of Oliver. The police then included Oliver in a photographic lineup that they presented to the teacher. After the teacher chose Oliver from the lineup, the police arrested him as he was driving to work. As a result, he lost his job and his car, which was impounded. Oliver’s appearance differed from the perpetrator’s in a few key respects—his face was narrower, his hair was styled differently, and, unlike the perpetrator, he had tattoos on his arms and above his left eyebrow. Nevertheless, his case remained pending for months, while prosecutors insisted that he could have gotten his tattoos in the two months between the incident and his arrest.
Each of the above cases ended in a dismissal of charges, but many others end much less favorably for the accused. For example, Francisco Arteaga pled guilty to robbing a cellphone repair shop in West New York, New Jersey, at gunpoint, despite his alibi that he was visiting relatives at the time of the offense. After the robbery, police submitted surveillance images of the suspect to the New Jersey Regional Operations Intelligence Center (NJ-ROIC), a division of the state police, for facial recognition analysis. NJ-ROIC found no matches but offered to repeat the search with a better-quality image if one could be produced. Rather than pursuing this course of action, detectives sent the raw surveillance footage to the NYPD’s Real Time Crime Center (RTCC), which extracted several images and ran them through its own databases. This process yielded a potential match to Arteaga’s 2019 mugshot. Thereafter, two store employees identified Arteaga in separate photographic lineups. Arteaga moved to suppress these identifications and further sought detailed information about the reliability of the technology used to identify him. On appeal from the denial of the latter request, an appeals court held that Arteaga was entitled to the information he sought, given its potential relevance to the jury’s assessment of reasonable doubt. Arteaga was unable to capitalize on this victory, largely because the state succeeded in arguing that the records sought were beyond the New Jersey court’s jurisdiction to compel. After remaining in jail for nearly four years awaiting trial, he decided to plead guilty rather than wait any longer, no doubt wary of the eyewitness testimony he would face if he went to trial.
Most of the aforementioned cases involve flagrant violations of the best practices that experts recommend for minimizing the risk of mistaken identification. There is no question that suggestive identification procedures can be used to launder bad facial recognition results. But preventing cases like these from becoming more commonplace will require more than mere adherence to best practices. After all, the problem is not just that suggestive identification procedures can lead an eyewitness unwittingly to confirm a bad facial recognition match; it is that eyewitness identification is itself a deeply flawed practice, one that bad science only makes worse.
The public’s perception that police have access to sophisticated facial recognition technology compounds a structural flaw in eyewitness identifications: eyewitnesses already struggle to identify people accurately, and the belief that facial recognition technology produced the lineup reassures them that it necessarily includes the actual perpetrator. This assumption is dangerous because it makes eyewitnesses even less careful; they come to believe they are there merely to confirm something the technology has already established. As many have written, people have a natural tendency to place blind faith in technologies, like facial recognition, that operate by means of automated processes that defy lay comprehension. Such “automation bias” may be heightened in cases like Arteaga’s and Oliver’s, where the eyewitness is aware that the police have recovered video of the perpetrator to aid in their investigation.
Action is needed to prevent eyewitness identification from becoming a rubber stamp on questionable facial recognition evidence. Some advocates and lawmakers have proposed banning or imposing moratoria on the use of facial recognition software altogether, with varying degrees of success. But these moratoria are fast expiring, and the legislative appetite for such measures, never large to begin with, is diminishing.
Gary Wells, a prominent scholar in the field of eyewitness reliability, has said that “pairing facial recognition technology with an eyewitness identification should not be the basis for charging someone with a crime.” To this end, some police departments have adopted internal regulations requiring officers to corroborate any algorithmically generated leads about a suspect before presenting that suspect to an eyewitness in a lineup or photo array. If adhered to, such regulations would cut down on the number of “rubber stamp” identifications. In practice, however, it is not clear just how often police obey these regulations.
Given the considerable pressure on police to identify suspects and build cases against them, it would be naïve to expect internal regulation to be effective at curbing misuse of facial recognition technology. There can be no hope for meaningful regulation without accountability mechanisms external to departments themselves. Historically, society has depended on judges and civilian review boards to provide this kind of external oversight and accountability. However, for various reasons, it is hard to imagine either of these mechanisms being effective in this context without significant changes in the law and the way that we conceive of its application to identifications rendered by machines and humans alike. Even assuming widespread adoption of internal regulations requiring police to corroborate leads generated by facial recognition software, civilian review boards are limited in their ability to recommend discipline of officers who violate them.
Judges’ ability to penalize and prohibit reliance on faulty facial recognition algorithms is limited by the need to find that such practices violate the Fourth Amendment’s prohibition on unreasonable searches and seizures. Reaching this conclusion may require courts to accept that facial recognition technology warrants closer scrutiny under the Fourth Amendment than courts have historically been willing to apply to other questionable sources of information, such as confidential human informants.
Meaningful regulation may further require judges and lawmakers to embrace the idea that the public has a right to control whether and how police use their pictures—an idea that has been under attack since at least 1973, when the Supreme Court held that, unlike in the context of an in-person lineup, the accused has no right to counsel’s presence when the police show his picture to a witness for identification purposes. In this regard, there is some hope, as a number of states have enacted biometric privacy laws, requiring consent for the collection of facial data and, in some cases, banning its resale.
Whatever obstacles may exist to reining in these practices, the urgency of doing so has never been clearer than it is now. Failure to regulate law enforcement use of facial recognition technology will yield more of what centuries of poor regulation and nonregulation in the context of eyewitness identifications have already produced: questionable convictions that are all but impossible to overturn given the confidence of the people (and companies) responsible for generating them.