Facial Recognition & Policing

Facial recognition technology is racially biased, has led to wrongful arrests of innocent people, and represents an unprecedented expansion of surveillance power that should be banned or heavily regulated in law enforcement contexts.

Last updated: March 12, 2026

Domain

Technology & Civil Liberties → Surveillance Technology → Biometric Surveillance in Law Enforcement

Position

Facial recognition technology is demonstrably racially biased, has led to wrongful arrests of innocent people — disproportionately Black Americans — and represents a qualitative leap in government surveillance power that threatens fundamental civil liberties. Law enforcement use should be banned or subject to strict regulation including mandatory accuracy standards, bias audits, and warrant requirements.

Facial recognition algorithms misidentify Black women up to 34% of the time — nearly 49 times the error rate for white men. Multiple innocent people have been wrongfully arrested based on false matches, including cases resulting in days of incarceration. Despite this, police departments across the country use the technology with minimal oversight, and some have been caught circumventing existing bans by asking other agencies to run searches on their behalf.

Key Terms

  • Facial Recognition Technology (FRT): AI-powered systems that identify or verify individuals by analyzing facial features captured in photos or video footage. In policing, FRT typically compares a probe image (often a still pulled from surveillance footage) against databases of mugshots, driver's license photos, or other image repositories to generate potential matches for investigation; a minimal sketch of this matching step appears after this list.

  • Algorithmic Bias in FRT: The systematically higher error rates that facial recognition systems exhibit for certain demographic groups, particularly dark-skinned women, young people, and elderly individuals. This bias stems from training data that overrepresents white male faces, testing protocols that don't adequately measure cross-demographic performance, and the optics of imaging darker skin tones under the poor lighting conditions typical of surveillance footage.

  • Biometric Surveillance: The use of biological characteristics — face, fingerprint, iris, gait, voice — to identify and track individuals, often without their knowledge or consent. Unlike other forms of identification, biometric data is permanent (you can’t change your face), ubiquitous (you can’t hide it in public), and can be collected at a distance and at scale.
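
To make the matching step in the FRT definition above concrete, here is a minimal sketch of an embedding-based search, the dominant design in modern systems. Everything in it is an illustrative stand-in (the embedding size, the random "gallery," the threshold), not any vendor's actual pipeline:

```python
import numpy as np

# Hypothetical gallery: 100,000 mugshot embeddings (128-dim unit vectors).
# In a real system these would come from a trained face-embedding model.
rng = np.random.default_rng(seed=0)
gallery = rng.normal(size=(100_000, 128))
gallery /= np.linalg.norm(gallery, axis=1, keepdims=True)

# A "probe" image from surveillance footage, embedded the same way.
probe = rng.normal(size=128)
probe /= np.linalg.norm(probe)

# Rank the gallery by cosine similarity and flag candidates above a
# threshold. The threshold is an operator-chosen dial: lower it and you
# get more "matches," including more false ones.
scores = gallery @ probe
THRESHOLD = 0.30  # illustrative, not a vendor default
top_candidates = np.argsort(scores)[::-1][:5]
for idx in top_candidates:
    flag = "CANDIDATE" if scores[idx] >= THRESHOLD else ""
    print(f"gallery id {idx}: similarity {scores[idx]:.3f} {flag}")
```

Two things the sketch makes visible: the system returns a ranked candidate list, not a yes-or-no identification, and with 100,000 entries even purely random embeddings push a few gallery faces above a loose threshold by chance alone.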

Scope

  • Focus: Law enforcement use of facial recognition technology — accuracy and bias concerns, wrongful arrests, the surveillance implications, and the case for bans or strict regulation
  • Timeframe: NIST studies (2019) through current deployments and bans (2024–2026)
  • What this is NOT about: Commercial facial recognition (phone unlock, airport boarding — different risk profile), deepfakes, or facial recognition in private security — though the technology and bias concerns overlap

The Case

1. The Technology Is Racially Biased — and It’s Getting People Arrested

The Point: Facial recognition systems consistently perform worst on the people most likely to encounter police — Black Americans, young people, and women — creating a technology that automates and amplifies racial profiling.

The Evidence:

  • An MIT study of three commercial facial recognition systems found error rates of up to 34% for dark-skinned women, nearly 49 times the error rate for light-skinned men (see the arithmetic sketch after this list). The federal government's own NIST evaluation confirmed that facial recognition systems "work best on middle-aged white men's faces," with the worst accuracy for Black women (MIT Media Lab / NIST, 2019).
  • Multiple wrongful arrests have been documented: Randal Quran Reid was jailed for nearly a week based on a false facial recognition match. Robert Williams was detained for 30 hours in Detroit after a misidentification. In August 2025, the NYPD arrested Trevis Williams based on a false facial recognition match. Every known case of a wrongful arrest based on facial recognition has involved a Black person (ABA / ACLU / Scientific American).
  • The bias stems from structural causes: training datasets disproportionately represent white male faces, mugshot databases reflect the racial disparities of the criminal justice system (Black people are nearly four times more likely to be arrested for marijuana possession), and surveillance cameras are disproportionately installed in Black and Brown neighborhoods.
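
The "nearly 49 times" figure in the first bullet is simple division. In the sketch below, the 34% error rate comes from the study as cited above; the roughly 0.7% light-skinned-male rate is an approximation chosen to be consistent with the ratio the text quotes, not an exact figure pulled from the study:

```python
# Back-of-envelope arithmetic behind the "nearly 49 times" claim.
error_darker_skinned_women = 0.34   # from the cited MIT study
error_lighter_skinned_men = 0.007   # approximation implied by the quoted ratio

ratio = error_darker_skinned_women / error_lighter_skinned_men
print(f"Disparity ratio: {ratio:.1f}x")  # ~48.6x, i.e. "nearly 49 times"
```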

The Logic: The bias isn't a bug that can be easily fixed; it's deeply embedded in how the technology is built, trained, and deployed. Training data reflects existing racial disparities. Mugshot databases amplify them. Surveillance camera placement concentrates on communities of color. The result is a system that is most likely to misidentify Black faces, most likely to compare them against databases inflated by racially disparate policing, and most likely to surveil them in the first place. At every stage, the technology compounds racial inequality rather than operating neutrally.

Why It Matters: When a facial recognition system falsely identifies someone, police treat the match as investigative evidence. The person may be arrested, detained, interrogated, and forced to prove their innocence — a process that is traumatic, costly, and can have lasting consequences even when charges are dropped. The technology creates a system where being Black in a surveilled area means a higher probability of being falsely matched and wrongfully arrested.
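
Scale compounds the bias. A base-rate sketch with hypothetical numbers (a 0.1% per-comparison false positive rate, a 100,000-photo database) shows why even an accurate-sounding system hands investigators a steady stream of false candidates:

```python
# Base-rate arithmetic with hypothetical numbers: a very low
# per-comparison false positive rate still produces many false
# candidates when each probe is compared against a large database.
false_positive_rate = 0.001   # 0.1% per comparison (hypothetical)
database_size = 100_000       # mugshot database size (hypothetical)

expected_false_candidates = false_positive_rate * database_size
print(f"Expected false candidates per search: {expected_false_candidates:.0f}")
# ~100 innocent people flagged per search. If the error rate is several
# times higher for one demographic group, that group supplies several
# times more of those false candidates.
```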


2. Facial Recognition Represents an Unprecedented Expansion of Surveillance Power

The Point: Unlike any previous surveillance technology, facial recognition enables the real-time identification and tracking of every person in public spaces, fundamentally changing the relationship between citizens and the state.

The Evidence:

  • More than a dozen large cities have banned law enforcement use of facial recognition, including San Francisco, Boston, Minneapolis, Portland (OR), and others. Nearly two dozen states have enacted or expanded restrictions on facial recognition by 2025, recognizing the technology’s unique threat to civil liberties.
  • Despite bans, enforcement has been inconsistent — police in cities with bans have been caught asking other agencies (including federal agencies not subject to local bans) to run facial recognition searches on their behalf, effectively circumventing the restrictions (Washington Post, 2024).
  • One city council’s resolution stated that “the propensity for surveillance technology, specifically facial recognition technology, to endanger civil rights and liberties substantially outweighs the purported benefits, and that such technology will exacerbate racial injustice.”

The Logic: Every previous surveillance tool had practical limits. Wiretaps require individual warrants. Physical surveillance requires officers. Even CCTV cameras only record — they don’t identify. Facial recognition eliminates these limits: it can identify every person who passes a camera, in real time, without their knowledge. This isn’t an incremental improvement in policing tools — it’s a qualitative change in the nature of surveillance. The ability to retroactively track every person’s movements through a city by matching faces across camera networks is the infrastructure of a surveillance state, regardless of who deploys it today.
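
To see how little engineering "retroactive tracking" requires once match logs exist, consider the sketch below. The log format, camera names, and person IDs are all hypothetical; the point is that reconstructing a person's movements becomes a filter-and-sort, not an investigation:

```python
from datetime import datetime

# Hypothetical per-camera match records: (timestamp, camera_id, person_id).
match_log = [
    (datetime(2026, 3, 1, 8, 2),  "cam_17_market_st",       "person_4411"),
    (datetime(2026, 3, 1, 8, 31), "cam_03_transit_hub",     "person_9120"),
    (datetime(2026, 3, 1, 8, 45), "cam_22_city_hall",       "person_4411"),
    (datetime(2026, 3, 1, 9, 10), "cam_08_clinic_entrance", "person_4411"),
]

def movement_trail(log, person_id):
    """Return a time-ordered trail of camera sightings for one person."""
    return sorted((t, cam) for t, cam, pid in log if pid == person_id)

for when, where in movement_trail(match_log, "person_4411"):
    print(when.strftime("%H:%M"), where)
```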

Why It Matters: Technology built for law enforcement doesn’t stay limited to law enforcement. Government surveillance powers expand over time — the Patriot Act was sold as counterterrorism and now covers ordinary criminal investigations. Facial recognition infrastructure, once built, can be turned against protesters, journalists, political opponents, and anyone the government wants to track. The time to prevent this is before the infrastructure is normalized, not after.


3. Bans Work — and the Alternatives Are Inadequate

The Point: Cities that have banned facial recognition demonstrate that policing functions perfectly well without it, while “regulation” short of bans has consistently failed to prevent misuse.

The Evidence:

  • San Francisco banned law enforcement facial recognition use in 2019. Boston followed in 2020. Minneapolis, Portland, and other cities enacted similar bans. None of these cities experienced increases in crime attributable to the loss of facial recognition — the technology was never essential to public safety.
  • Where regulation rather than bans has been attempted, the track record is poor. Police departments have circumvented restrictions by requesting other agencies to run searches. Audit requirements have been inconsistently enforced. The combination of high false positive rates and low accountability creates a system where harms accumulate without consequences.
  • The EU AI Act classifies real-time biometric identification in public spaces as a “prohibited” AI practice, with narrow exceptions for serious crime. This represents the international consensus that the technology’s risks outweigh its benefits in most law enforcement contexts.

The Logic: The "just regulate it" approach assumes that oversight mechanisms can effectively constrain a technology that operates in real time, at scale, and whose errors disproportionately affect communities with the least political power to demand accountability. Experience shows otherwise: police departments circumvent restrictions, audits are delayed or waived, and the technology's errors create harms that are difficult to detect and remedy. A ban is cleaner, more enforceable, and more honest: it acknowledges that a technology with an error rate of up to 34% for Black women shouldn't be used to make arrest decisions, period.

Why It Matters: The ban vs. regulation debate has real consequences. Every wrongful arrest — every person jailed because a computer said their face matched — is a harm that could have been prevented by a ban and that regulation has failed to prevent. The question isn’t whether facial recognition could theoretically work fairly; it’s whether, given its demonstrated performance, its deployment in policing can be justified right now.

Counterpoints & Rebuttals

Counterpoint 1: “Facial recognition helps solve serious crimes — banning it means letting criminals go free.”

Objection: Facial recognition has been used to identify suspects in murders, sexual assaults, child exploitation, and other serious crimes. Banning it would take a powerful tool away from police and leave victims without justice.

Response: Facial recognition is not required to solve serious crimes; police solved murders, sexual assaults, and kidnappings for decades without it. The cities that have banned FRT haven't seen crime clearance rates collapse. And the technology's unreliability, especially for the communities most affected by crime, undermines its value: a tool that misidentifies Black women up to 34% of the time doesn't help solve crimes in Black communities; it creates new injustices. The wrongful arrest cases demonstrate that facial recognition doesn't just find criminals. It also produces innocent suspects, wasting investigative resources and traumatizing the falsely accused.

Follow-up: “But accuracy is improving — shouldn’t we use it for the most serious cases while continuing to improve the technology?”

Second Response: “It’s getting better” isn’t an accuracy standard for a tool used to deprive people of liberty. We don’t accept 34% error rates for medical devices, aircraft systems, or drug tests. Why should we accept them for a technology that puts people in handcuffs? If accuracy improves to the point where racial disparities are eliminated and error rates are genuinely negligible — which has not happened — the question can be revisited. Until then, deploying a demonstrably biased tool in a system that already disproportionately impacts communities of color is unconscionable.


Counterpoint 2: “The bias problem is being fixed — newer algorithms are much more accurate.”

Objection: The MIT study and NIST evaluations tested older algorithms. Technology improves rapidly, and newer facial recognition systems have significantly reduced accuracy disparities across demographic groups. Banning the technology based on old data ignores the progress that’s been made.

Response: Even NIST’s most recent evaluations show persistent demographic disparities, particularly for dark-skinned women and younger individuals. “Significantly reduced” isn’t “eliminated” — and in a criminal justice context, any demographic disparity means that some groups are systematically more likely to be misidentified than others. That’s the definition of discriminatory technology. And accuracy in laboratory conditions differs from accuracy in real-world deployment — where images come from grainy surveillance cameras, at bad angles, in poor lighting, often of people in motion.
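
The lab-versus-field gap can be shown with a toy model. Assuming simple additive noise as a stand-in for grain, bad angles, and poor lighting (a deliberate simplification, not a measured benchmark), degrading a probe steadily drags its similarity to the true gallery entry downward:

```python
import numpy as np

rng = np.random.default_rng(seed=1)
true_face = rng.normal(size=128)
true_face /= np.linalg.norm(true_face)

for noise_level in (0.0, 0.5, 1.0, 2.0):
    # Scale so noise_level is roughly the noise-to-signal norm ratio.
    noise = noise_level * rng.normal(size=128) / np.sqrt(128)
    probe = true_face + noise
    probe /= np.linalg.norm(probe)
    similarity = float(true_face @ probe)
    print(f"noise {noise_level:.1f} -> similarity {similarity:.2f}")
```

As scores fall, either true matches are missed or operators lower the threshold and accept more false positives; that tradeoff is exactly what laboratory benchmarks don't capture.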

Follow-up: “But you’re holding this technology to a higher standard than human eyewitness identification, which is far less accurate.”

Second Response: Yes — and eyewitness misidentification is the leading cause of wrongful convictions. That’s an argument for improving both, not for accepting a new source of error. At least with eyewitness identification, the defense can cross-examine the witness, challenge their perception, and present counter-evidence. With facial recognition, the algorithm is a black box — the defendant often doesn’t even know it was used, and there’s no way to effectively challenge how the match was generated.


Counterpoint 3: “Banning technology is always wrong — we should regulate, not prohibit.”

Objection: Banning any technology is a reactionary response that stifles innovation. The answer is always better regulation, accountability, and transparency — not prohibition. We don’t ban cars because they cause accidents; we regulate them.

Response: We ban technologies all the time when the risks outweigh the benefits. We banned lead paint, asbestos insulation, and certain pesticides. We ban civilian ownership of nuclear materials and biological weapons. The question isn’t “ban vs. regulate” in the abstract — it’s whether this specific technology, with this specific error profile, deployed in this specific context (policing communities of color), can be used safely. The evidence says no. When cities have tried regulation, police have circumvented the rules. When audits have been required, they’ve been delayed or waived. The regulatory approach has been tested and found wanting.

Follow-up: “But a ban prevents any beneficial use — what about finding missing children or identifying victims?”

Second Response: Bans can include narrow exceptions. San Francisco's ban doesn't reach federal agencies, which operate under separate rules, and proposals often carve out specific uses like post-mortem identification or missing persons. The question is whether the default should be deployment with exceptions, or prohibition with exceptions. Given the demonstrated racial bias, the wrongful arrests, and the surveillance implications, the safer default is prohibition, with carefully defined exceptions that require warrants, bias audits, and accountability mechanisms.

Common Misconceptions

Misconception 1: “Facial recognition is just like any other forensic tool — fingerprints, DNA, etc.”

Reality: Fingerprints and DNA require physical traces left at a crime scene; they can't be collected at a distance, in real time, on every person walking down a street. Facial recognition is fundamentally different because it can identify people passively, without their knowledge, at scale. It transforms every surveillance camera into an identification checkpoint. No other forensic tool has this surveillance capability.

Misconception 2: “Only criminals need to worry about facial recognition.”

Reality: Facial recognition has been used to identify protesters, track journalists, and monitor political activists in countries around the world. In the U.S., agencies have run facial recognition searches on protest footage. The technology surveils everyone in its field of view — the innocent and the guilty alike — and false matches mean innocent people are particularly at risk.

Misconception 3: “If cities ban facial recognition, criminals will just move to those cities.”

Reality: No city that has banned facial recognition has experienced crime increases attributable to the ban. The technology is one investigative tool among many — and not a particularly reliable one, given its error rates. Crime is driven by poverty, opportunity, and social conditions, not by the availability of a single surveillance technology.

Rhetorical Tips

Do Say

"Every known wrongful arrest from facial recognition has involved a Black person. A technology that misidentifies Black women up to 34% of the time shouldn't be deciding who gets arrested." Lead with the racial bias data and the specific wrongful arrest cases; they're concrete and compelling. Use "biometric surveillance" to emphasize the scale of the threat.

Don’t Say

Don’t say “ban all technology” — it sounds Luddite. Don’t dismiss all policing technology; acknowledge that legitimate tools exist and that this specific one fails the accuracy and fairness tests. Don’t frame it as anti-police; frame it as protecting both communities and police from a flawed tool.

When the Conversation Goes Off the Rails

Come back to this: "The technology misidentifies Black women up to 34% of the time. Every known wrongful arrest from facial recognition has involved a Black person. This isn't about being anti-technology; it's about not deploying a racially biased tool in a system that already has racial bias problems."

Know Your Audience

For conservatives, emphasize government surveillance overreach, the Fourth Amendment implications (identifying everyone in public without a warrant), and the technology’s reliability failures (bad tools lead to bad policing). For moderates, lead with the wrongful arrest cases and the accuracy data — fairness and accuracy are universally compelling. For progressives, emphasize racial justice, the expansion of the carceral state, and the historical pattern of surveillance targeting communities of color.

Key Quotes & Soundbites

“Facial recognition misidentifies Black women up to 34% of the time — nearly 49 times the error rate for white men. Every known wrongful arrest from facial recognition has involved a Black person. This isn’t a glitch — it’s the system working as designed.”

“We wouldn’t accept a drug test that gave false positives 34% of the time. We wouldn’t accept a medical device with that failure rate. Why should we accept it for a tool that puts people in handcuffs?”

“Cities that banned facial recognition didn’t see crime spike. What they did see is zero wrongful arrests from a racially biased algorithm.”

Related Topics

  • Police Reform & Accountability — Facial recognition expands police power without corresponding accountability (see criminal-justice/police_reform_accountability)
  • Data Privacy & Surveillance — FRT is part of a broader surveillance infrastructure built on unregulated data collection (see technology-civil-liberties/data_privacy_surveillance)
  • AI Regulation & Worker Protections — Facial recognition is an AI application that demonstrates the need for algorithmic accountability (see technology-civil-liberties/ai_regulation_worker_protections)
  • Mass Incarceration & Sentencing — Biased surveillance technology feeds a system already characterized by racial disparities (see criminal-justice/mass_incarceration_sentencing)

Sources & Further Reading