A report released by the Center on Privacy & Technology evaluates the reliability of face recognition as it is used by police in the United States. The report examines the myriad human and machine factors, and their interactions, that might lead to bias and error when law enforcement agencies use face recognition. As a biometric, forensic investigative tool, face recognition may be particularly prone to errors arising from subjective human judgment, cognitive bias, low-quality or manipulated evidence, and under-performing technology. These errors have real-world consequences — the investigation and arrest of an unknown number of innocent people and the deprivation of due process of many, many more. As the grassroots movement to ban police use of face recognition grows, invoking the many overarching ethical problems with this kind of surveillance technology, it is important to point out that face recognition doesn’t work well enough to reliably serve the purposes for which law enforcement agencies themselves want to use it.
Drawing on the vast body of research and knowledge already present in computer science, psychology, forensic science, and legal disciplines, the report's key findings are:
- As currently used in criminal investigations, face recognition is likely an unreliable source of identity evidence.
- The algorithm and human steps in a face recognition search each may compound the other’s mistakes.
- Since faces contain inherently biasing information such as demographics, expressions, and assumed behavioral traits, it may be impossible to remove the risk of bias and mistake.
- Face recognition has been used as probable cause to make arrests despite assurances to the contrary.
- Evidence derived from face recognition searches is already being used in criminal cases, and the accused have been deprived of the opportunity to challenge it.
- The harms of wrongful arrests and investigations are real, even if they are hard to quantify.
This report is meant to be a resource for researchers examining the potential risks of this new technology, defense attorneys whose clients were identified using face recognition, judges seeking to understand the scientific merit of face recognition-derived evidence, police departments seeking to minimize the harms of its use, and advocates and organizers seeking to protect rights in an age of ever-expanding police deployment of this technology.
It calls on these communities to question any and all assumptions that the current use of face recognition is adequately controlled and reliable. It warns that we have a narrow and closing window of time in which to avoid repeating the mistakes of previous forensic disciplines and to prevent judicial certification of fundamentally flawed or unreliable methods.
We have made available all publicly available sources cited in the report. These can be found here.
Other related resources that may be of use to defense attorneys include:
- Affidavit of the author in a 2019 face recognition case (redacted to protect the defendant’s privacy).
- Brief of American Civil Liberties Union, ACLU of Florida, Electronic Frontier Foundation, Georgetown Law’s Center on Privacy & Technology, and Innocence Project as Amici Curiae Supporting Petitioner at 12–20, Lynch v. Florida, No. SC19-298, 2019 WL 3249799 (Fla. 2019).
- Notice of Motion to Suppress Identification Testimony filed before the Supreme Court of the State of New York (redacted to protect the defendant’s privacy).
- Face Recognition Discovery Wish List, as referenced in footnote 314.