Clare Garvie (L’15): From Student to Strategist

October 18, 2016

Examining “The Perpetual Lineup: Unregulated Police Face Recognition in America” — the groundbreaking new report by Georgetown Law’s Center on Privacy & Technology — readers will find the name of Clare Garvie (L’15) right at the top. A 2009 graduate of Barnard College who majored in political science and human rights, Clare spent three years working with the International Center on Transitional Justice, a human rights legal organization, before deciding to go to Georgetown Law.

“Prior to entering law school, my research focus had been on international law and human rights, particularly on the intersection — or lack of intersection — between military technology and humanitarian law,” she says. “I was particularly interested in examining how technology has taken military capabilities beyond the scope of the laws of war, most of which were framed in the 1800s.”

Clare came to the Center on Privacy & Technology as a law fellow after graduating from the Law Center in 2015. Since then, her work on the face recognition report has become a full-time job (more than a year’s worth of work by the Center staff went into it) and she was hired as a Center associate in September. We sat down with her to find out what it was like to work on such a significant project. Excerpts of the interview follow.

Your name is at the top of a groundbreaking report, less than 18 months after graduating from Georgetown Law. How did this happen?

It really speaks to the people I studied and worked with at Georgetown Law, as well as to the work that the Center does. I was connected to the Center on Privacy & Technology and its executive director, Alvaro Bedoya, through Professor Laura Donohue, who I worked with as a research assistant during my 3L year. She’s one of the Center’s faculty advisers and recommended that I apply. When I was enrolled in the Federal Legislation Clinic with Professor Judy Appelbaum, I was also on a team representing the Leadership Conference on Civil and Human Rights, one of the Center’s coalition partners.

I was the first fellow the Center had. It’s been exciting to work with an organization that is relatively new and still small — it means I’ve been an integral part of the work the Center does from day one. If we were going to do a project as big as this face recognition report, I was going to be engaged with it from start to finish.

Why this particular project?

Alvaro and the Center became a point of contact for media and advocacy organizations around face recognition technology, thanks to Alvaro’s previous work on the commercial use of face recognition and his work on technology legislation on the Hill. We realized we knew a lot about how the FBI uses face recognition technology, but there was very little information about how state and local law enforcement agencies use it. Face recognition use by police has the potential to impact millions of Americans — and yet there was almost no information on which agencies use it, how the technology is deployed, and what policies are in place to constrain this use. So we decided to investigate.

What is facial recognition? Why should Americans be concerned about it?

Face recognition is a technology that turns your face into a kind of file that an algorithm can use to identify you. A face recognition algorithm takes facial features — the distance between the eyes, the texture of the skin, other face geometry — and converts them into a mathematical representation that can be used to compare an unknown face to a database of known faces. In our report, we are looking at the way police use it. During an investigation or investigatory stop, an officer can take a picture of someone whose identity they don’t know and compare it against a database — usually of mug shots, but as we found, these databases are increasingly composed of driver’s license photos too.
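In code, the comparison step Garvie describes might look something like the minimal sketch below. It is an illustration only, not any vendor’s actual algorithm; embed() is a hypothetical stand-in for a proprietary feature extractor.

```python
import numpy as np

def embed(face_image) -> np.ndarray:
    """Hypothetical feature extractor: maps facial features (the distance
    between the eyes, skin texture, other face geometry) to a fixed-length
    numeric vector. Real systems use proprietary models; this is a stub."""
    raise NotImplementedError

def similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity between two face vectors. Scores near 1.0 suggest
    the same face; scores near 0 suggest little resemblance. The output is
    a score, not a yes/no identification."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# Sketch of the police use case: compare an unknown ("probe") face against
# every known face in a database of enrolled photos.
# probe = embed(photo_from_investigative_stop)
# scores = {name: similarity(probe, vec) for name, vec in database.items()}
```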

So if I have a Maryland driver’s license, what does that mean?

For a lot of people with driver’s licenses, including Maryland’s, it means that you are in a database police can search to identify someone from a photograph or a surveillance video still. It means that you are in a database used for criminal justice purposes, even if you’ve never been arrested. Most of us probably don’t think of ourselves as being in a law enforcement database, or realize that our photo may be presented in a virtual lineup, maybe multiple times a day.

This raises serious questions: Is this a search under the Fourth Amendment? Who is being searched — the unidentified person in the photo run against the database, or the people enrolled in the database? Should the public have been given notice about this enrollment when they applied for a driver’s license at 16 or 17 — that they are not just applying for the right to drive, but are actually giving the police permission to compare their photo against photos of criminal suspects?

Why should we be concerned, compared to fingerprints, for example?

Face recognition technology is very new. Fingerprinting has been around for 100 years now; face recognition as used by police today has been around for about 15 years. While the technology is improving rapidly, it’s still not considered as accurate a form of identification as fingerprints. To be fair, many police departments we spoke to do recognize that it should be used as an investigative lead only, and not as a form of positive ID. That being said, a face recognition system usually returns a list of possible candidates, not just the most likely suspect. So even if the right person is in that list, there could be 40 or 50 other completely innocent people.

So the question really becomes, what if you’re included as a likely suspect on this list — you have a high “face match” probability — and police begin to investigate? They may be using it as an investigative lead, but that means they may come talk to you, talk to your associates, figure out where you were at a certain time. They may even bring you in for questioning or arrest you, based on your inclusion on that list. In this way, the inaccuracies still inherent to face recognition have the potential to disrupt the lives of people who are completely innocent and may not even know that their face is being used in such a fashion.
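To make the candidate-list point concrete, here is a minimal sketch of a gallery search that returns the top-scoring entries rather than a single match. The 50-candidate default echoes the list sizes Garvie mentions; everything else is an illustrative assumption, not any agency’s actual system.

```python
import numpy as np

def top_candidates(probe: np.ndarray,
                   gallery: dict[str, np.ndarray],
                   k: int = 50) -> list[tuple[str, float]]:
    """Rank every known face in the gallery by similarity to the unknown
    probe face and return the k best-scoring candidates. Even if the true
    match appears, the other k - 1 entries are innocent people surfaced
    as potential investigative leads."""
    def cosine(a: np.ndarray, b: np.ndarray) -> float:
        return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

    scored = [(name, cosine(probe, vec)) for name, vec in gallery.items()]
    scored.sort(key=lambda pair: pair[1], reverse=True)
    return scored[:k]  # a ranked list of leads, not a positive identification
```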

What does this mean for racial justice? Are there racial disparities?

There have been very few studies examining possible racial bias in face recognition algorithms. Those that have been done show that some algorithms may perform up to 10 percent less accurately on people of color than on white people. This means two things. One is that when an algorithm searches for an African American, it is more likely to miss the correct person. Maybe that doesn’t sound so problematic. But these systems are usually designed to return a list of multiple possible candidates rather than no result at all. This means they are also more likely to produce a list of entirely innocent people as possible leads — something that will happen more often when the suspect is African American.

Pair that with the fact that African American communities experience disproportionately high contact with law enforcement and are likely to be arrested at rates much higher than their share of the population. And most face recognition systems search mug shot databases. This means that face recognition is (1) probably disproportionately used on African Americans; (2) run against a disproportionately African American database; and (3) powered by an algorithm that is likely to perform worse on precisely this demographic. The upshot is that someone who is black is more likely to be misidentified as a possible suspect.
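A back-of-the-envelope sketch of how those three factors multiply rather than merely add. Every rate below is an invented placeholder chosen purely for illustration; the report does not publish these numbers.

```python
# All rates are assumptions for illustration, not figures from the report.
groups = {
    #        (1) searched,  (2) enrolled,  (3) algorithm miss rate
    "white": (0.01,         0.10,          0.10),
    "black": (0.03,         0.30,          0.20),
}

for group, (searched, enrolled, miss) in groups.items():
    # To be surfaced as an innocent lead, a person must be enrolled in the
    # gallery, the gallery must be searched, and the algorithm must err;
    # the three disparities compound multiplicatively.
    p = searched * enrolled * miss
    print(f"{group}: {p:.4%} chance per search period (illustrative only)")
```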

Walk me through the work that you did.

We filed 106 records requests with state and local law enforcement agencies across the country, asking for any face recognition policies they had, requests for proposals and other contracting documents, invoices for the technology — pretty much anything you could think of that would give us information about face recognition.

We also conducted over a dozen interviews with law enforcement officials, made two site visits, and interviewed face recognition experts and company technologists. And we conducted a 50-state survey of the applicable laws and a technical literature review.

In response to our records requests, we received around 17,000 pages of records. The process then became one of sifting through this information to figure out what we had, what the records said, and to begin to identify patterns, conclusions, and recommendations.

What began to emerge?

It sounds like a cliché, but the state of police face recognition in the U.S. is a “wild west.” It’s far more prevalent than most people think. There is wide variation in the level of control over the technology: whether there is a use policy at all, whether that policy has been made public, whether it requires reasonable suspicion or just a criminal justice purpose before running a search. From what we found, or didn’t find, we started drafting recommendations, such as the appropriate legal standard for a search and adequate levels of transparency and accountability.

How did you prepare yourself for the technological aspects of this?

A regret I have coming out of law school is that I didn’t take more of the classes offered by the Law Center in the privacy and technology space. I graduated before Professor Paul Ohm’s Coding for Lawyers class was in place. And I didn’t take Alvaro’s class [Privacy Legislation: Law and Technology Practicum, a joint class with students from MIT]. But we had an incredible technologist on staff, Jonathan Frankle, who is one of the report’s authors.

It’s exciting to see Georgetown Law becoming a leader in legal education at the intersection of technology and privacy. There’s a real need for that.

What impact do you hope the report will have?

I hope it’s the beginning of a much-needed conversation. I hope it encourages civil rights and privacy advocates, state legislatures, and police departments to take a hard look at how law enforcement uses face recognition technology — or, if a jurisdiction hasn’t implemented it yet, at how to do so in a way that is transparent and responsible and that engages the public through their elected officials.

The report has a model state or federal bill, a model use policy, a list of questions for advocates to ask of their local legislatures, and state summary pages — all in the hope that advocates on the ground have the tools they need to engage in these conversations and push for change. We are also releasing all the documents we received. There is so much in there that we didn’t even touch on — my hope is that people take this information, build on it, run with it, and mold it to their particular message, use, and purpose.