“Beware the Kodak,” warned The Hartford Daily Courant in July 1888. “The sedate citizen can’t indulge in any hilariousness without incurring the risk of being caught in the act and having his photograph passed around among his Sunday school children.”1 Over a century later, the warning rings true. Today, facial recognition technology is everywhere: Facebook uses facial recognition to assist users in tagging photos and videos with the names of others; Sephora uses facial recognition to allow customers to virtually apply and test cosmetics; JetBlue uses facial recognition in lieu of a boarding pass and passport; law enforcement agencies use facial recognition to identify suspects and victims; and now, amidst the ongoing and rapidly evolving COVID-19 pandemic, facial recognition technology is being used for contactless access control credentials as well as contact tracing of people infected with the coronavirus.
As facial recognition technology invades every aspect of our lives, shutdown or not, technological advances, news coverage, and legal approaches concerning such technology continue to expand and evolve. And, while this technology clearly provides benefits to society and public safety, it can create legal issues ranging from privacy to insurance coverage to racial bias.
As the use of facial recognition technology rises, so do privacy concerns. The technology is ubiquitous. When an individual walks out her own door, the technology can track her movements. While “the sedate citizen” may have become used to “having his photograph passed around,” the anonymity once associated with being photographed has been lost as a result of facial recognition technology. Privacy advocates thus believe that the widespread use of facial recognition should be regulated. There is no biometric privacy law at the federal level, but states, with Illinois leading the way, are filling the void.
Illinois’ Biometric Information Privacy Act (“BIPA”) did not gain notoriety until 2015, when a series of class action lawsuits was brought against Facebook and Shutterfly, accusing the companies of using facial recognition technology to collect and store biometric information in violation of BIPA.3 The risks associated with capturing biometric information for commercial use, however, are not limited to social media platforms. The initial cases against Facebook and Shutterfly have spawned hundreds of other consumer-based class action lawsuits (e.g., Six Flags), as well as employee-based class action lawsuits (e.g., Southwest Airlines, Wendy’s), under BIPA. And a significant decision by the Illinois Supreme Court last year in Rosenbach v. Six Flags Entertainment Corp., holding that an individual need not allege actual damage to be “aggrieved” under BIPA,4 has only further opened the floodgates: would-be plaintiffs need only a “technical” violation of BIPA to have standing.5
Whether in the context of marketing or employee management, or anywhere in between, companies should understand whether they are capturing biometric information belonging to Illinois residents and, if so, ensure their policies and procedures comply with BIPA. As the Illinois Supreme Court explained in Rosenbach:
Compliance should not be difficult; whatever expenses a business might incur to meet the law’s requirements are likely to be insignificant compared to the substantial and irreversible harm that could result if biometric identifiers and information are not properly safeguarded; and the public welfare, security, and safety will be advanced.6
The onslaught of BIPA class action lawsuits has been substantial, and companies need to be prepared. Indeed, in July, Facebook offered $650 million – “a record-breaking settlement” – to settle the class actions brought against the company in 2015.7
With the increase in commercial use of facial recognition, Chubb – one of the world’s largest insurers – has warned about the risks of biometric privacy legislation. In Chubb’s Cyber InFocus Report, “Know the Latest Trends in Cyber Risks,” the carrier identified the rising number of lawsuits under Illinois’ BIPA as a significant digital issue in the insurance industry. Various Fortune 500 companies have encountered BIPA lawsuits. “Illinois courts have now seen an increase of BIPA-related litigation,” the insurer warned in its report. “Companies doing business in that state need to be aware of the law’s requirements, especially if the company regularly collects biometric information.”8 Chubb also noted a growing trend beyond Illinois: “Biometric data regulation varies at the state level and has been a focus of U.S. federal and international legislators and regulators, so it is imperative that companies understand the legal requirements of each state and of the countries in which they conduct business.”9
Chubb’s warning was perhaps warranted. On June 5, 2020, American Guarantee and Liability Insurance Company (“AGLIC”) filed an action in Illinois state court seeking a declaration that there is no coverage for a lawsuit accusing a Burger King franchisee, Toms King LLC, of violating BIPA. Toms King, which owns and operates Burger King restaurants in Illinois and other states, faces a putative class action over its requirement that employees clock in and out of work using their fingerprints, without the written consent required under BIPA. The franchisee has commercial general and umbrella liability insurance with AGLIC, but the insurer argues that the allegations in the lawsuit do not assert any claims within the coverage of its policies and cites several exclusions, including the “employers liability,” “knowing violation of rights of another,” and “access or disclosure of confidential or personal information” exclusions.10 In a prior coverage dispute, however, the Illinois Appellate Court found that an insurance company was obligated to defend an L.A. Tan franchisee against claims brought under BIPA because the customer’s claims alleged a “personal injury” and the “violation of statutes” exclusion did not apply.11
Insurance policies currently available on the market, including commercial general liability, cyber, directors’ and officers’ (“D&O”), and errors and omissions (“E&O”) liability policies, may not adequately cover the risks that accompany facial recognition technology and potential violations of biometric privacy legislation. Thus, in light of Chubb’s warning and the emerging insurance coverage disputes over BIPA claims, as well as the growing concern of racial bias addressed below, companies should carefully review the terms and conditions of their insurance policies to determine whether changes are needed to address potential coverage gaps.
While facial recognition technology has improved over the past decade, it has been shown to suffer from racial bias (among other biases, such as gender bias), which can make the technology unreliable for law enforcement and ripe for potential civil rights abuses. Last year, a National Institute of Standards and Technology (“NIST”) study found “empirical evidence for the existence of a wide range of accuracy across demographic differences in the majority of the current face recognition algorithms that were evaluated.” Indeed, race-based biases were evident in the majority of facial recognition algorithms studied. For one-to-one matching (i.e., authentication), there were higher rates of false positives (up to 100 times more) for the faces of Asians and African Americans relative to Caucasian faces. Among U.S.-developed algorithms, there were similar rates of false positives for Asians, African Americans, and native groups12; the American Indian demographic had the highest rates of false positives. Notably, however, there was no dramatic difference in false positives between Asian and Caucasian faces for algorithms developed in Asian countries. For one-to-many matching (i.e., identification or search), there were higher rates of false positives for African American females in an FBI database of domestic mugshots.13
The government’s study did not include facial recognition technology from Amazon, which the company sells to both government agencies and private entities. Nonetheless, “Rekognition,” Amazon’s program, has also been criticized for its racial bias. The American Civil Liberties Union, for example, previously found that Rekognition incorrectly matched 28 members of Congress to people who had been arrested. Using the same system that Amazon offers to the public, the ACLU built a face database and search tool from 25,000 publicly available arrest photos and then searched that database against public photos of every member of Congress at the time. The false matches were disproportionately of people of color: Rekognition misidentified lawmakers of color at a rate of 39% even though they made up only 20% of Congress. The false matches included six then-members of the Congressional Black Caucus, among them the late civil rights leader Rep. John Lewis (D-Ga.). “An identification – whether accurate or not – could cost people their freedom or even their lives,” the ACLU said.14
After the death of George Floyd, and as racial bias in policing has become a greater part of our national discourse, the focus has turned to facial recognition technology and, according to some critics, its enablement of such racial bias.15 Congressional Democrats are seeking information from the FBI and other agencies to understand whether authorities are using facial recognition technology against protesters; states including New York are considering legislation to ban police use of the technology; and tech giants IBM, Amazon, and Microsoft are all edging away from their own technology. IBM, in a letter to Congress, stated that it will no longer offer general purpose facial recognition or analysis software;16 Amazon announced a one-year moratorium on police use of Rekognition;17 and Microsoft said that it will not sell its facial recognition technology for police use until “a national law is in place, grounded in human rights, that will govern this technology.”18 Other companies, whether developing their own technology or utilizing third-party technology, should take note of this weakness and potential liability of facial recognition.
But can facial recognition overcome its racial bias? According to Anil K. Jain,19 head of the Biometrics Research Group at Michigan State University, it can.20 “State-of-the-art face recognition methods are based on deep learning which requires millions of face images for training the recognition algorithm to achieve desired accuracy,” Dr. Jain said. “Inevitably, the distribution of training data has a significant impact on the performance of the resultant deep learning models. Where the number of faces in each cohort is unequal, it is well understood that face datasets exhibit imbalanced demographic distributions. And, models trained with imbalanced datasets lead to biased discrimination.”
According to Dr. Jain, “we need diversity of ‘labeled’ datasets of images for training.” By “labeled,” he means tagged with demographic factors (e.g., race, gender); these labels are also fittingly referred to as “ground truth.” Because of privacy concerns, however, it is becoming difficult to obtain the diverse datasets necessary to train facial recognition technology. There is, then, a surprising tension between privacy and racial bias: while privacy advocates often cite racial discrimination in the criminal justice system and elsewhere as an argument against the technology, that bias could be overcome with larger and more diverse databases and, perhaps, “less” privacy.
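For readers curious about the mechanics, the disparity NIST measured and Dr. Jain describes can be illustrated with a toy simulation. The following Python sketch uses entirely synthetic, illustrative numbers (not NIST’s data or methodology): it assumes a one-to-one matcher separates faces less cleanly for an under-represented cohort, so impostor pairs from that cohort score somewhat higher, and it shows how a single “accept” threshold tuned on the majority cohort then produces a far higher false-positive rate for the minority cohort.

```python
import random

random.seed(42)

# Synthetic impostor-pair similarity scores for two hypothetical cohorts.
# Cohort A (well represented in training): impostor pairs score low.
# Cohort B (under-represented): the hypothetical model separates faces
# less well, so impostor pairs score higher on average.
impostors_a = [random.gauss(0.30, 0.08) for _ in range(10_000)]
impostors_b = [random.gauss(0.45, 0.08) for _ in range(10_000)]

# A single "accept as a match" threshold tuned only on cohort A.
THRESHOLD = 0.55

def false_positive_rate(scores, threshold):
    """Fraction of impostor pairs wrongly accepted as a match."""
    return sum(s >= threshold for s in scores) / len(scores)

fpr_a = false_positive_rate(impostors_a, THRESHOLD)
fpr_b = false_positive_rate(impostors_b, THRESHOLD)
print(f"cohort A false-positive rate: {fpr_a:.4f}")
print(f"cohort B false-positive rate: {fpr_b:.4f}")
```

Under these made-up distributions, cohort B’s false-positive rate comes out orders of magnitude higher than cohort A’s even though both cohorts face the same threshold, which is the structural point: a uniform decision rule applied to a model trained on imbalanced data yields non-uniform error rates.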
The use of facial recognition is rapidly on the rise. In 2020, a year of unprecedented developments and change, it is up to companies to be aware of and address the many novel legal issues that facial recognition technology poses.