In the spring of 2021, four political activists and two community-based organizations sued U.S. facial recognition company Clearview AI in Alameda County, California. They alleged that Clearview AI’s technology violates their privacy rights and enables the illegal government surveillance of protesters, immigrants, and communities of color in a way that “chills [their] free speech and association rights.”
This suit is not the company’s first legal controversy. Effectively pushed out of Canada and fined repeatedly in Europe, Clearview AI has also battled lawsuits across the United States since 2020 over its use of facial recognition technology.
Clearview AI, founded in 2017 by Australian entrepreneur Hoan Ton-That and U.S. politician Richard Schwartz, has become one of the world’s leading vendors of surveillance technology. By scraping the Internet for photos, drawing from social media sites such as Facebook, Instagram, and Venmo, Clearview AI has amassed a database of more than 70 billion facial images, which it sells to law enforcement for identification purposes. Most, if not all, of these images have been collected without the knowledge or consent of the individuals involved, including the plaintiffs of the 2021 California lawsuit, Renderos v. Clearview AI.
Beyond claims of economic harm and emotional distress, the plaintiffs say that Clearview’s business model has had a chilling effect on their speech. They argue that by selling their biometric identities to law enforcement agencies, the company has caused heightened anxiety around the threat of surveillance by law enforcement entities such as ICE. To fully grasp the scope of the harms this lawsuit hopes to remedy, it is helpful to examine how biometric information technologies and surveillance practices have historically functioned in this country.
In Dark Matters: On Surveillance of Blackness, scholar Simone Browne argues that the branding of Black bodies during the transatlantic slave trade functioned as an early form of biometric technology. Browne describes the branding iron as a tool of surveillance and corporeal punishment and also as a way of “making, marking, and marketing of the Black subject as commodity.” Branding—which was applied before boarding the slave ship, across the plantation, and in urban domestic settings—was a means of denying the Black body its humanity, severing it from self, and making it hypervisible as property to be owned, traded, and disciplined.
Today, the biometric technologies sold by companies like Clearview AI function as a form of modern-day branding, rendering bodies legible for identification and criminalization.
In April 2009, Alonzo King, a Black man living in Maryland, was arrested on first- and second-degree assault charges. Because the Maryland DNA Collection Act allowed law enforcement to collect DNA samples from individuals arrested for violent crimes, the police swabbed the inside of King’s cheek. His DNA was then uploaded to the FBI’s Combined DNA Index System (CODIS), a database connecting crime scene evidence with offender profiles, where it matched a sample from an unsolved 2003 rape. King was subsequently convicted of first-degree rape.
King appealed the conviction, arguing that the act and the resulting cheek swab violated his Fourth Amendment protections against unreasonable government searches. His challenge progressed through the lower courts and reached the Supreme Court in 2013.
In a narrow 5–4 decision, the Court upheld the constitutionality of the Maryland DNA Collection Act. Weighing “the degree to which [the search] intrudes upon an individual’s privacy” against “the degree to which it is needed for the promotion of legitimate government interests,” the majority concluded that DNA collection in the name of public safety was constitutionally acceptable. Writing for the Court, Justice Anthony Kennedy argued that the brief physical intrusion of a buccal swab was reasonable under the Fourth Amendment, likening it to routine booking procedures such as fingerprinting or photographing. The majority further emphasized that an arrestee’s involvement with the criminal justice system inherently diminished their expectation of privacy.
Justice Antonin Scalia, joined by Justices Ruth Bader Ginsburg, Sonia Sotomayor, and Elena Kagan, dissented, rejecting the majority’s comparison of DNA sampling to routine fingerprinting. They argued that the Fourth Amendment prohibits searching a person for evidence of a crime without individualized suspicion, and that King’s DNA was collected not to confirm his identity but to investigate unrelated crimes. Scalia warned that the majority’s reasoning effectively sanctioned the “general warrants” that the Fourth Amendment was designed to prevent. He further cautioned that the majority’s rationale placed no meaningful limits on the state’s ability to collect DNA from individuals accused of any offense, writing, “I cannot imagine what principle could possibly justify this limitation, and the Court does not attempt to suggest any.”
While DNA is but one form of biometric identification and the jailhouse is one example of a formal site of state violence, racialized bodies are constantly treated as suspect, their movements and identities meticulously scrutinized no matter where they appear. Biometric technologies, presented as neutral tools for verification or identification, do the work of producing “truths” about these bodies, often overriding people’s own assertions of self, and enforcing the state’s gaze upon those arbitrarily coded as suspect, unauthorized, or criminal.
A TikTok video filmed in Aurora, Illinois, last year captures this dynamic. In the video, masked Border Patrol agents jump out of an SUV to confront a sixteen-year-old riding a bike with his friend through a neighborhood. The agents immediately demand, “Why are you running?” The teen quips, “Why are you chasing me?”
As multiple masked agents circle him, they ask, “Where are you from?” When the teen responds, “I’m from here,” they press further: “From here? Where’s here?” The interaction grows increasingly coercive as the agents question whether he is a U.S. citizen and demand identification. The teen says he is a U.S. citizen but only has a school ID.
One agent instructs him to “relax,” assuring him that if he simply says he was born in the United States and provides an ID, he will be “good.” When the teen repeats that he has no ID, the agent turns to a colleague and asks, “Can you do facial?” Moments later, a masked agent has the teenager face a phone camera and photographs him. The officer repeatedly asks for the teen’s name, and shortly after he responds, questioning the encounter, the recording cuts off.
While the precise technology used in the Aurora encounter remains unclear, investigative reporting by 404 Media has revealed that ICE deploys a mobile facial-recognition application known as Mobile Fortify, which can scan faces and fingerprints to verify identities in the field. Reporting shows that in Minnesota, ICE agents have been using Mobile Fortify, alongside a facial recognition program created by Clearview AI, social media monitoring, and other tech tools to identify and track undocumented immigrants, along with citizens protesting law enforcement’s presence. As of September 2025, ICE has become one of Clearview AI’s largest institutional clients, securing multiple contracts with the company over the years, including a recent $9.2 million agreement.
Before snapping a photo of the teen, the agent says, “We’ll take care of it real quick. We’ll be in and out.” And it is quick: in the video, the camera capture takes about ten seconds. That speed eliminates not only time as a factor but also the minimal “light touch” of physical contact the Supreme Court deemed constitutionally insignificant in Maryland v. King. By treating a buccal swab as a negligible intrusion, the Court lowered the threshold for other biometric technologies that require no physical contact at all—tools capable of instantly capturing and analyzing a face from a distance, or from the Internet, without a person’s knowledge or consent.
Echoing King and related cases, courts have repeatedly affirmed the state’s asserted interest in identification as a legitimate justification for intrusive policing practices. That logic resurfaced in the Supreme Court’s recent shadow-docket decision in Noem v. Vasquez Perdomo (2025), which effectively allowed ICE to continue relying on racial profiling as a tool of immigration policing. As the dissent pointed out, the ruling permits agents to stop and momentarily detain individuals “who happen to look a certain way, speak a certain way, and appear to work a certain type of legitimate job that pays very little.” Blocking the practice, the Court reasoned, would cause the government irreparable harm by “chill[ing its] enforcement efforts” and deterring officers “from stopping suspects even when they have reasonable suspicion on other grounds.”
As Scalia cautioned in King, doctrinal concessions made in the name of administrative convenience risk dismantling what tenuous privacy protections remain. In Noem, the Court effectively authorized the momentary detention of individuals based on racialized markers. By legitimizing this initial seizure based on race, Noem intensifies and racializes the consequences of the state’s subsequent intrusions, including intrusions on the body in the form of facial recognition technology deployed during these stops.
In this perilous terrain, the question remains of whether privacy retains any meaningful doctrinal force. The teen in Aurora was not under arrest, nor was he in any formal custodial contact with the criminal legal system in a way that would diminish his reasonable expectation of privacy. Nevertheless, he was deemed suspect, his privacy reduced by the state’s gaze and the racialized assumptions attached to his body. In this encounter, privacy was eroded by appearance alone—by visual and epidermal markers that coded him as “illegal” in the eyes of the state. Justice Sotomayor warned presciently of this danger in her dissent in Noem, cautioning against judicial deference to mass detention and the unchecked exercise of enforcement discretion that could be used to “seize anyone who looks Latino, speaks Spanish, and appears to work a low-wage job.”
From the intersection of race, visibility, and state suspicion emerges Frantz Fanon’s account of the epidermal racial schema. In Black Skin, White Masks (1952), Fanon describes epidermalization as the process through which the Black body becomes “an object in the midst of other objects”—a corporeal form made legible only through the gaze that fixes and defines it. Jamaican British scholar Stuart Hall elaborates on this idea, describing epidermalization as “literally the inscription of race on the skin,” a marking that denies subjectivity and confines the Black body within a white visual order.
Building on this lineage, Browne interprets “digital epidermalization” in the age of biometric information technologies as a rupture between body and humanness—a moment in which the racialized subject is refracted into an object of surveillance. Under this framework, the teenager in Aurora was never just a boy riding his bike through a suburban street, just as King was not “innocent until proven guilty.” Each encounter with the state’s gaze, each demand for “papers, please,” for proof of belonging or innocence, disembodies and reconstitutes these individuals as racial Others to whom Fourth Amendment protections attach only arbitrarily.
At a 2019 congressional hearing on facial recognition technology, computer scientist and digital activist Joy Buolamwini warned that “our faces may well be the final frontier of privacy.” Yet for people of color in the United States, that frontier has long been compromised. As the biometric surveillance industry expands and tools like Clearview AI proliferate across thousands of federal and local policing agencies, the question of who is under watch, and why, extends beyond technical capability. It reveals the deep ties these carceral tools have with racialized power.
The plaintiffs in the Renderos case recognized this dynamic and explicitly named its harm. They were not simply pointing to the algorithmic inaccuracies and biases, nor asking for “better” state tools. Instead, they highlighted the racialized logic built into the surveillance apparatus itself: the tool by which the body is made readable. This harm, which has stretched across the long arc of state violence, reduces an individual to a “faceprint,” an “object in the midst of other objects,” forced to testify on its own behalf.
In Dark Matters, Browne calls on readers to engage in a critical biometric consciousness that contends with the historical entanglements biometric technologies have with regimes of racialized surveillance. Facial recognition technologies like those used by Clearview AI, the racialized stop in Aurora, and Supreme Court decisions like Maryland v. King and Noem v. Vasquez Perdomo all share the same racialized logic. Each, in distinct but mutually reinforcing ways, authorizes and upholds the infrastructure of identifying, categorizing, and controlling bodies of color.
Contending with the historical predecessors of these practices and tools allows for a grounded critique that situates facial recognition technology as the latest iteration of longstanding techniques and tools at the hands of the state that seek to sever the body of color from itself. This recognition provides the foundation upon which abolitionist frameworks and practices can build to strengthen the fight against these violent technologies in all their forms and iterations, across time and space.
Image: Jüri Palm, Esemete mütologeem (1966) / Unsplash