Privacy advocates in the Bay Area have cause for celebration after San Francisco became the first municipality in the United States to pass an ordinance barring the city’s use of facial recognition technology because of the “propensity [of] facial recognition technology [to] endanger civil rights and civil liberties” and “exacerbate racial injustice.”
The ordinance prohibits the police and other San Francisco government agencies from using facial recognition technology for any purpose, with an exception for “inadvertent or unintentional” access to or use of the technology. The ordinance also requires city departments seeking to acquire other kinds of surveillance technology, or entering into agreements to receive information from non-city-owned surveillance technologies, to obtain Board of Supervisors approval and submit a “Surveillance Impact Report.” Importantly, the ordinance does not prohibit private citizens — including businesses — from using such technologies.
Oakland and Berkeley are considering similar bans, as is Somerville, Massachusetts. Other cities like Chicago and Detroit have moved in the opposite direction, implementing widespread real-time video surveillance as a means of crime prevention. New York and Orlando are considering similar roll-outs.
And this may be cause for concern, as the potential for abuse of facial recognition technology is great. Indeed, a report from Georgetown Law’s Center on Privacy & Technology noted that “[f]ace recognition is less accurate than fingerprinting,” and that “without specialized training, human users make the wrong decision about a match half the time.” Moreover, errors and biases in facial recognition technology may disproportionately affect persons of color. This is in part because the image databases used by law enforcement are disproportionately comprised of African Americans and other minority groups, and also because facial recognition software misidentifies people of color at higher rates than whites.
Beyond the inherent flaws in current facial recognition technology, there are opportunities for misuse by law enforcement. For example, another report from Georgetown Law’s Center on Privacy & Technology describes how the NYPD’s Facial Identification Section (FIS) “got creative” when surveillance images of a man caught on video stealing from a store returned no matches after detectives ran them through the face recognition algorithm. An FIS officer observed that the man in the video bore a passing resemblance to the actor Woody Harrelson, so he located an online photo of the celebrity and submitted it for possible matches in place of the surveillance images — and the system returned a match. The match, however, was to the actor’s photo, not to the actual suspect caught on tape. Law enforcement agencies around the country have likewise used police sketches, whether hand-drawn or computer-generated, to search for matches in lieu of actual photos, despite compelling evidence that such sketches return accurate results less than 10% of the time.
The potential for such abuses underscores the need for more local, state, and federal regulation in this space. Currently, there are no federal laws governing the use of facial recognition technology. Policy advocates at the Electronic Frontier Foundation have argued that regulations could be modeled on current laws governing other technologies, like the Wiretap Act or the Video Privacy Protection Act, and have emphasized that any regulation should follow certain best practices: clear rules for data collection, and limits on the types of data that can be stored and retained, and for how long. With few national rules in place, how police departments use facial recognition technologies remains, for the time being, a local concern.