Latest Work

Two New Privacy Center Reports Reveal Serious Problems With Police Use of Face Recognition

On May 16, 2019, the Privacy Center released "Garbage In, Garbage Out: Face Recognition on Flawed Data" and "America Under Watch: Face Surveillance in the United States," two research reports revealing new information about police use of face recognition. Read the full press release.

Washington Post: Facial Recognition Could Make Us Safer—and Less Free

The Washington Post's Editorial Board reiterated its call for Congress to step in and draw the line between acceptable and unacceptable uses of face recognition technology, after the Department of Homeland Security announced a real-time face surveillance pilot at the White House.

Safe Face Pledge

The Center worked with computer scientist Joy Buolamwini to create a pledge that vendors of automated facial analysis technologies can sign to signal their commitment to responsibility and accountability.

New Yorker: Should We Be Worried About Computerized Facial Recognition?

The New Yorker spoke with Senior Associate Clare Garvie for an in-depth examination of how face recognition is revolutionizing everything from farming in Ireland to policing in the United States.

NIST International Face Performance Conference

Senior Associate Clare Garvie spoke at the National Institute of Standards and Technology's first International Face Performance Conference, on a panel about technical factors affecting the deployment and use of face recognition technology. Clare's remarks focused on the real-world consequences of differential error rates, including in the law enforcement context.

Defender Summer School’s Facial Recognition Software and Eyewitness Identification Series

Senior Associate Clare Garvie conducted a training on police use of face recognition technology for the Ninth Judicial Circuit's 2018 Defender Summer School at Barry University in Orlando, FL.

The FAA Reauthorization Act of 2018

On October 5, 2018, the FAA Reauthorization Act of 2018 was signed into law, requiring privacy and racial bias assessments of the federal government's use of biometric technologies at airports and making it the first federal law to require artificial intelligence bias testing. The law was enacted following the Center's December 2017 report, Not Ready for Takeoff, which found privacy and bias problems in these deployments.

DHS OIG Audit of Biometric Exit

The Department of Homeland Security's Office of Inspector General audited the agency's use of face scans at airport departure gates. The audit closely tracked and validated many of the concerns raised in the Center's Not Ready for Takeoff report, finding, among other things, that the program exhibits age bias, causes traveler delays, and may prove far more costly than initially estimated.

Op-Ed: Facial Recognition Threatens Our Fundamental Rights

Senior Associate Clare Garvie wrote an op-ed for The Washington Post about how face surveillance technology threatens our expectation of privacy, our right not to be investigated unless suspected of wrongdoing, and our freedom from deeply flawed policing practices.

Coalition Letter to Axon’s AI Ethics Board

The Center co-wrote a 42-organization coalition letter to Axon's new "AI Ethics Board." The letter urges the board to center the experiences of policed communities in its process, and argues that integrating face surveillance with body-worn cameras would be "categorically unethical."