Welcome to the 2022 Q2 edition of the SPB Artificial Intelligence & Biometric Privacy Quarterly Review Newsletter, your go-to source for keeping you in the know on all recent major artificial intelligence (“AI”) and biometric privacy developments that have taken place over the course of the last three months. We invite you to share this resource with your colleagues and visit Squire Patton Boggs’ Data Privacy, Cybersecurity & Digital Assets and Privacy & Data Breach Litigation homepages for more information about our capabilities and team.
Q2 did not disappoint in the AI and biometric privacy space, with a number of noteworthy litigation, legislative, and regulatory developments having taken place in these two rapidly developing areas of law. Read on to see what has transpired over the last quarter and what you should keep your eyes on as we head into the second half of 2022.
Biometric Privacy Cases to Keep on Your Radar
Cothron v. White Castle System, Inc., No. 128004 (Ill. Sup. Ct.): As many familiar with BIPA know, currently pending before the Illinois Supreme Court is Cothron v. White Castle System, Inc. (covered extensively by SPB team member Kristin Bryan in CPW articles here, here, here, and here), which is set to provide much-needed certainty regarding the issue of claim accrual in BIPA class action litigation. “Claim accrual” refers to when a claim “accrues,” or arises—either only at the time of the first violation or, alternatively, each and every time a defendant violates Illinois’s biometric privacy statute. If the Cothron Court rules that BIPA violations constitute separate, independent claims, then the associated statutory damages of $1,000 to $5,000 per violation would compound with each successive failure to comply with Illinois’s biometric privacy law. Under this scenario, liability exposure would likely expand exponentially for BIPA claims. As such, companies should pay close attention to how the Illinois Supreme Court decides the Cothron appeal, as the ruling could result in yet another drastic shift in the biometric privacy landscape. In the interim, companies should consult with counsel and re-assess their compliance with BIPA to ensure they are satisfying the full range of requirements to mitigate potential class action litigation risks.
Mahmood v. Berbix Inc., No. 1:22-cv-2456 (N.D. Ill.): “Selfie” identity verification has become extremely popular due to the benefits offered by this verification method in significantly reducing fraud and facilitating a fast, accurate verification process. At the same time, companies that develop and supply this technology have also become an increasingly common target for BIPA class action suits. In Mahmood v. Berbix Inc., the plaintiff filed a putative class action against Berbix Inc. for alleged BIPA violations after being required to upload a photo of her driver’s license and a “selfie” to rent a car from a company that used Berbix’s identity verification service. This case is worth keeping an eye on, as the litigation will likely provide valuable insights on the contours of the extraterritoriality defense applicable in certain BIPA disputes where the alleged violations of Illinois’s biometric privacy statute do not occur “primarily and substantially” within the borders of the Prairie State.
Coss v. Snap Inc., No. 1:22-cv-02480 (N.D. Ill.): In early May, Snap Inc., the owner of popular social media platform Snapchat, was sued for alleged BIPA violations in connection with its “Face Lenses” feature, an augmented reality (“AR”) experience that uses innovative technology to modify and enhance users’ facial features to transform their appearance in photos and videos posted online. According to the complaint, Snap’s Lenses feature scans users’ faces and creates a detailed map or digital depiction of their facial features, during which time Snap collects their biometric data. This is another case worth watching, as the overlapping space between increasingly-popular image/video enhancement tools and efforts to ensure the privacy and security of biometric data is likely to lead to additional litigation moving forward.
Hess v. 7-Eleven, Inc., No. 1:22-cv-02131 (N.D. Ill.): On April 25, four 7-Eleven customers filed a class action lawsuit against 7-Eleven, alleging that—unbeknownst to consumers—the company collects facial geometry data through cameras and video surveillance systems in violation of BIPA. According to the complaint, numerous 7-Eleven locations use systems provided by Clickit, an intelligent video solution provider, to collect biometric data. Hess is an example of the high volume of BIPA class actions targeting retailers of all types and the wide variety of allegations that are being asserted against them in connection with purported violations of Illinois’s biometric privacy statute. As such, all retail brands—even those that have put practices in place to comply with BIPA—should consult with experienced biometric counsel to re-assess the effectiveness of their biometric privacy compliance programs and mitigate growing BIPA risks to the greatest extent possible, as the retail industry will continue to remain one of the primary targets for BIPA suits for the foreseeable future.
Theriot v. Louis Vuitton N.A., Inc., No. 1:22-cv-02944 (S.D.N.Y.): In April, shoppers filed a class action against Louis Vuitton in a New York federal court for alleged BIPA violations in connection with the company’s virtual try-on (“VTO”) tool made available to visitors of its website. The complaint alleges that the company’s technology scans users’ face geometry, producing complete facial scans and images of customers’ faces—all without giving notice or obtaining consent when visitors try on its designer eyewear using the tool. As VTO facial recognition class actions continue to be a hot trend in BIPA litigation (as discussed in more detail below), retailers and other companies that utilize this “try before you buy” technology should ensure they are strictly complying with the mandates of BIPA to mitigate the significant class action risks associated with these tools.
New and Emerging Biometric Privacy Trends
BIPA VTO Litigation Wave Not Over Yet: BIPA litigation in 2021 was marked by a wave of class action suits filed against retailers—including fashion, eyewear, and makeup brands—in connection with virtual try-on (“VTO”) tools offered to online shoppers. As the name suggests, VTO tools, also known as “virtual mirrors,” allow shoppers to “try on” products using their camera-equipped devices, such as home computers, tablets, or mobile phones. Importantly, VTO technology is powered by a combination of AI and AR, as opposed to traditional facial recognition technology used to identify or verify an individual’s identity. Despite this, many brands found themselves the targets of BIPA class litigation, with plaintiffs arguing that their VTO technology performed scans of face geometry, thus bringing the tools under the scope of BIPA. While the pace of filings has slowed somewhat in 2022, VTO technology continues to be a main target for class actions, including a number of suits filed against retailers that utilize these tools during Q2. For more information on this hot-button topic, read the highlights from SPB team member David Oberly’s recent interview with Bloomberg Law here: As Virtual Try-On Fashion Technology Grows, So Do Legal Risks.
Increase in BIPA Suits Targeting Third-Party Vendors: Another notable trend seen during Q2 was a marked increase in the number of BIPA class actions targeting third-party vendors that offer biometric technology software and solutions, such as identity verification tools and employee timeclocks. Of note, these vendors do not maintain any direct relationship with the individuals who claim their biometric data was collected or used in violation of BIPA; rather, their technology is merely utilized by their clients to facilitate the use of biometric data in commercial operations. Just two examples of this trend are the Berbix class action discussed above, as well as the Ronquillo case discussed below.
Contactless Fingerprinting Makes Strides Towards Adoption: While research around contactless fingerprinting technology is not new, recent advancements are drawing the attention and interest of law enforcement. The development of new, more advanced technologies used for identity verification purposes is on the rise, especially in the wake of COVID-19 and its associated health and safety concerns. Soon, phone cameras will be capable of scanning and capturing a person’s fingerprint—easily identifying all the lines and swirls on their fingertips—all without the person ever having to touch a screen. While this technology may raise concerns amongst civil liberty and privacy groups, law enforcement is already looking into ways to harness its potential—and you can be sure the private sector will soon follow.
Significant Biometric Privacy Class Action Decisions & Related Developments
Zellmer v. Facebook, Inc., No. 3:18-cv-1880, 2022 U.S. Dist. LEXIS 60239 (N.D. Cal. Apr. 1, 2022): A California federal court issued a notable BIPA opinion in Zellmer v. Facebook, Inc. (covered by SPB team members Kristin Bryan and David Oberly in this CPW article), which could have significant implications moving forward for companies seeking to limit their scope of liability exposure in BIPA class action litigation. In Zellmer, the court granted summary judgment to Facebook on the Section 15(b) notice and consent claim asserted in the case, finding that non-users were precluded as a matter of law from maintaining an actionable claim under Section 15(b). The court reasoned that a Section 15(b) claim could not exist for non-users because it would be patently unreasonable to construe BIPA to require companies to provide notice to, and obtain consent from, non-users who were for all practical purposes total strangers to the company, and with whom the company maintained no relationship whatsoever. Rather, a Section 15(b) claim can be in play only where there is at least a minimum level of known contact between a person and the entity that might be collecting biometric information. While the opinion itself was short—comprising only eight pages—the Zellmer court’s reasoning may have a noteworthy impact on the scope of Section 15(b) claims moving forward.
Sosa v. Onfido, Inc., No. 1:20-cv-04247, 2022 U.S. Dist. LEXIS 74672 (N.D. Ill. Apr. 25, 2022): In Sosa v. Onfido, Inc. (covered by SPB team member David Oberly for Law360 here), an Illinois federal court rejected the argument that BIPA exempts biometric data extracted from photographs, finding instead that faceprints derived through photographic means can constitute “biometric identifiers” regulated by Illinois’s biometric privacy statute. The Onfido opinion is significant, as it likely shuts the door on a defense that has, until now, been broadly utilized by a wide range of targets of BIPA class action suits.
Ronquillo v. Doctor’s Assocs., LLC, No. 1:21-cv-04903, 2022 U.S. Dist. LEXIS 62730 (N.D. Ill. Apr. 4, 2022): As courts continue to expand the scope of BIPA class action liability exposure, they have been especially unforgiving to third-party technology vendors—despite the challenges that these non-consumer-facing entities have with satisfying the requirements of Illinois’s biometrics law. Such was the case for HP Inc., which in early April saw its motion to dismiss a BIPA class action denied by an Illinois federal court—even though the company lacked any kind of direct relationship with the individual who filed suit. In Ronquillo, an employee at Subway restaurants brought suit against HP and Doctor’s Associates, LLC (“DAL”), alleging that the defendants captured and stored her fingerprints without her informed consent through a Subway point-of-sale system used to clock in and out of work and to unlock cash registers. DAL and HP took the position that they did not actively collect employees’ biometric data; rather, at most, they merely possessed such data. As such, according to DAL and HP, they fell outside the scope of the biometrics law. The court disagreed, finding that in making this argument, the defendants were “attempt[ing] to rewrite the complaint to avoid its actual allegations, which allow for the reasonable inference that DAL and HP played more than a passive role in the process.” Id. at *8. While noting that it was leaving the question of whether the plaintiff would actually be able to prove DAL’s and HP’s role in collecting her biometric data for another day and a more developed record, the court concluded that, at least at the motion to dismiss stage, the complaint sufficiently alleged that Section 15(b) applied to DAL and HP.
In addition, the court also expressly rejected the argument that Section 15(b) did not apply to third-party vendors of technology used by employers to obtain workers’ biometric data, finding that there was nothing in BIPA’s text that the law was intended to apply only to employers, but not to parties without any direct relationship to the plaintiff. Importantly, the Ronquillo decision deals a significant blow to one of the third-party vendors’ primary arguments against BIPA liability while at the same time demonstrating how courts continue to interpret the statutory text of BIPA in an extremely broad, plaintiff-friendly manner.
Johnson v. Mitek Sys., Inc., No. 0:22-cv-01830, 2022 U.S. Dist. LEXIS 80851 (N.D. Ill. May 4, 2022): While arbitration continues to remain a powerful defense in BIPA class actions, not all attempts at dismissing BIPA claims through the pursuit of motions to compel arbitration are successful. Such was the case in Johnson v. Mitek Sys., Inc., where ID verification firm Mitek Systems, Inc. recently lost its bid to force BIPA plaintiffs to resolve their claims out of court and through individual, binding arbitration. Mitek arose in connection with the company’s age and identity verification service, which was used by rental car service HyreCar and required the plaintiff to upload his driver’s license and photograph. According to the plaintiff, this verification process was completed with the assistance of facial recognition technology, which unlawfully collected his biometric data without providing notice or obtaining his consent. The court denied Mitek’s motion to compel arbitration, finding that the company was not a party to the arbitration agreement between the rental company and its customer, and further that the third-party beneficiary exception to the general rule that non-signatories to an arbitration agreement cannot be bound by such contracts was inapplicable to force arbitration against the plaintiff. The Mitek decision should serve as a reminder for all companies that use biometric data in their operations to ensure they have a robust arbitration agreement of their own in place and to avoid relying solely on the agreements of their clients or vendors.
Rogers v. BNSF Ry. Co., No. 1:19-cv-03083, 2022 U.S. Dist. LEXIS 10934 (N.D. Ill. June 21, 2022): At the same time (and as the flip side of Ronquillo), courts continue to cast a wide liability net for allegedly improper biometric data collection and possession practices, ensnaring even those companies whose involvement with biometrics systems is tenuous at best. Such was the case for BNSF Railway Company, which hired external security contractors to operate its biometric-powered access control system at its Illinois rail facilities and later found itself the defendant in a BIPA class action. In June, an Illinois federal judge refused to certify an interlocutory appeal filed by BNSF following the court’s denial of its motion for summary judgment, which had rejected the railroad’s preemption argument and found that a jury must decide whether the railroad’s connection with the fingerprint access control technology operated by its third-party vendor was sufficient to trigger liability for improper biometric data collection and possession practices under BIPA Sections 15(a) and 15(b). The company had sought Seventh Circuit review of the district court’s decision on the issues of federal preemption and vicarious liability, but the district court refused to allow the appeal to proceed, basing its decision primarily on what it characterized as a “misreading of [the district court’s] ruling” and BNSF’s failure to raise the arguments it looked to assert on appeal in its prior summary judgment briefing.
Barton v. Walmart Inc., No. 1:21-cv-04329 (N.D. Ill. May 31, 2022): In May, an Illinois federal court refused to dismiss a class action involving allegations that Walmart violated BIPA by requiring Illinois warehouse workers to use voice recognition software. In Barton, Walmart workers alleged that they were required to submit their voiceprints by reading into biometric-powered inventory computer systems known as “Pick Task-Voice Template Words.” Walmart, however, contended that its voice system did not identify specific employees by their voices but instead only recognized words spoken into the headsets. According to Walmart, the identification of specific worker identities came from workers’ employee numbers that were manually entered into the system—not based on their voice patterns. The Barton decision further underscores the lack of clarity regarding the precise definition of “biometric identifiers” under BIPA, which will remain one of the most hotly-contested issues in BIPA class litigation for the foreseeable future—and until courts provide more guidance on this key matter.
Rivera v. Google Inc., No. 2019-CH-00990 (Ill. Cir. Ct. Cook Cnty.): In late April, Google settled its longstanding Rivera BIPA dispute, agreeing to pay $100 million to resolve allegations that it improperly collected individuals’ facial biometric data through its cloud-based Google Photos feature in violation of Illinois’s biometric privacy statute. While notably less than 2020’s record-breaking $650 million BIPA settlement involving one of the world’s largest social media companies, the $100 million figure agreed to by Google to put an end to the Rivera litigation will only give plaintiffs’ attorneys even more motivation to pursue BIPA class action litigation for the foreseeable future. And, although the size of the Rivera settlement is not by any means indicative of normal settlement figures in BIPA cases, plaintiffs’ lawyers will almost certainly use this settlement as a measuring stick to value other BIPA disputes—likely causing inflated settlement figures moving forward, at least in the immediate term. Importantly, this settlement should serve as a cautionary tale and a reminder of the critical need for companies to maintain comprehensive, flexible biometric privacy programs to minimize potential liability exposure.
Artificial Intelligence & Biometric Privacy Legislative/Regulatory Developments
Majority of Biometric Privacy Bills Fail (With One Notable Exception): While the number of biometric privacy bills introduced by state and municipal legislatures in 2022 increased significantly as compared to the year prior, the vast majority of those proposals failed during the legislative process and did not make their way into law. With that said, one piece of proposed legislation remains pending that could bring wholesale changes to the biometric privacy legal landscape if enacted this year. That legislation, California’s SB 1189, provides for a private right of action almost identical to that of BIPA, which would likely bring with it a tsunami of class litigation to California on par with what has taken place in Illinois for several years now. Not only that, SB 1189 is one of several “hybrid” biometric privacy bills introduced in 2022 that blend traditional biometric privacy legal principles with other compliance requirements and limitations that, until now, were ordinarily confined exclusively to broader, comprehensive state consumer privacy statutes. Importantly, these hybrid requirements would significantly increase compliance burdens for all companies that collect and use biometric data while also ushering in a correspondingly high increase in liability exposure risks.
EEOC Issues Guidance on Use of Artificial Intelligence by Employers: On May 12, 2022, the U.S. Equal Employment Opportunity Commission (“EEOC”) issued important guidance regarding the use of algorithms and AI in the context of hiring and employment decisions. The guidance follows on the heels of the EEOC’s Initiative on Artificial Intelligence and Algorithmic Fairness, which was launched by the Commission in late 2021. The guidance itself provides a detailed discussion regarding how employers’ reliance on AI and algorithmic decision-making in the employment context may run afoul of the Americans with Disabilities Act (“ADA”). In addition, the guidance also provides several recommended “promising practices” for employers to consider to mitigate the risk of discriminating against individuals with disabilities when using algorithmic decision-making tools and similar AI technologies. All employers that are currently using AI for any purpose—or intend to do so in the future—should familiarize themselves with the guidance if they have not already done so.
CFPB Issues Guidance on Use of Artificial Intelligence by Creditors: Also in May, the federal Consumer Financial Protection Bureau (“CFPB”) issued guidance of its own, Circular 2022-03, “Adverse action notification requirements in connection with credit decisions based on complex algorithms,” focusing on the need for creditors to comply with the Equal Credit Opportunity Act’s (“ECOA”) requirement to provide a statement of specific reasons to applicants against whom adverse action is taken when making credit decisions based on complex algorithms. Importantly, the CFPB clarifies that compliance is required even when using algorithms, sometimes referred to as “black-box” models, that prevent creditors from accurately identifying the specific reasons for denying credit or taking other adverse actions. The guidance makes clear that the CFPB will enforce the ECOA and its adverse action requirements irrespective of the technology utilized by creditors, and that creditors cannot excuse their noncompliance on the ground that the technology used to evaluate applications is too complicated or opaque in its decision-making to understand. The recently issued guidance, along with a statement issued by CFPB Director Rohit Chopra in conjunction with the Circular, provides a key window into the aggressive tack that the CFPB will likely take in enforcing the ECOA against improper AI practices. All creditors (and other entities subject to the CFPB’s jurisdiction) that currently use AI—or intend to do so in the future—should familiarize themselves with the guidance if they have not already done so.
Federal Trade Commission Back at Full Strength: On May 11, 2022, privacy law expert and then-head of Georgetown University Law Center’s Center on Privacy and Technology, Alvaro Bedoya, was confirmed as the newest FTC Commissioner. Bedoya replaces former FTC Commissioner Rohit Chopra, who now heads the Consumer Financial Protection Bureau (“CFPB”). Bedoya is an expert in facial recognition and is widely recognized for his role in co-authoring a 2016 study that is credited as the impetus for a number of recent state and local laws limiting the use of facial recognition by the public sector. During his late 2021 confirmation hearing testimony, Bedoya advocated for increased FTC scrutiny over facial biometrics and its privacy-related impacts, especially as it relates to minorities, noting the technology’s reputation for misuse and abuse. At the same time, he also noted his support for potential FTC privacy rulemaking. With the FTC now back at full strength and with a Democratic majority, companies should anticipate an aggressive privacy enforcement agenda by the Commission, including increased scrutiny of both facial recognition practices and potential bias and discrimination concerns relating to AI and algorithmic decision-making.
FTC Issues Report to Congress on Use of AI to Combat Online Harms: On June 16, 2022, the FTC issued a report to Congress, Combatting Online Harms Through Innovation, warning about the use of AI to combat online problems and urging lawmakers to exercise “great caution” about relying on AI as a policy solution. While the Report does not break any new ground in terms of how the FTC may pursue investigations or enforcement actions against private sector organizations that utilize AI in their day-to-day operations, the Report nonetheless provides several key takeaways for all entities that currently rely on this advanced form of technology or intend to do so in the future. To learn more about the Report and its major takeaways, read our recent CPW blog post here.
Automated Decision-Making and Profiling Conspicuously Absent From Initial Draft CPRA Regulations: The California Privacy Rights Act (“CPRA”) places significant power in the hands of the California Privacy Protection Agency (“CPPA”) to shape the future of privacy regulation in the United States, including with respect to how automated decision-making and profiling are regulated throughout the country. For this reason, the CPPA focused a significant amount of its preliminary rulemaking activities on these two interrelated issues. These efforts began last fall, when automated decision-making and profiling were included among the nine topics on which the CPPA sought public comment. In May, the CPPA held stakeholder sessions over the course of three days, during which three hours were devoted exclusively to allowing stakeholders to comment on issues relating to automated decision-making and profiling. Notably, however, the CPPA’s draft CPRA Regulations—issued at the end of May—do not address automated decision-making or profiling in any fashion whatsoever. With that said, companies should anticipate that these issues will be addressed in subsequent iterations of the Regulations.
Connecticut Enacts New Privacy Statute Encompassing Biometric Data: On May 10, 2022, Connecticut Governor Ned Lamont officially signed into law Public Act No. 22-15, “An Act Concerning Personal Data Privacy and Online Monitoring.” More commonly referred to as the “Connecticut Privacy Act,” the statute becomes the fifth law of its kind to be enacted in the U.S. and will go into effect on July 1, 2023. In addition to affording Connecticut consumers a range of new privacy rights, the law also governs the collection and use of “biometric data,” which is defined as any data generated by automatic measurements of an individual’s biological characteristics, such as a fingerprint, voiceprint, eye retinas, irises, or other unique biological patterns or characteristics that are used to identify a specific individual.
FTC Investigates ID.me: In May, Oregon Senator Ron Wyden urged the FTC to investigate the identity verification company ID.me for potentially deceptive practices that may have misled consumers and government agencies. A company that experienced rapid growth in the midst of the pandemic, ID.me uses a mixture of selfies, document scans, and other methods to verify identities online. ID.me is also currently the subject of other government investigations. Senators are particularly concerned about potential confusion between two different types of technology: one that involves a one-time comparison of two images to confirm an applicant’s identity, and another that involves one-to-many recognition, in which millions of innocent people have their photos included as a comparison in a digital “line up.” Wyden and others fear that the company made “multiple misleading statements” about its use of “superior” facial recognition, which may have damaged consumer understanding.
EU’s Artificial Intelligence Act Receives Support From Privacy Advocates: In April 2021, the European Commission released the initial draft version of its proposed Artificial Intelligence Act (“AIA”), which seeks to implement a first-of-its-kind comprehensive regulatory scheme for AI technologies. Like the EU’s General Data Protection Regulation (“GDPR”), the territorial scope of the AIA would be expansive, governing not just EU organizations that utilize AI but also companies located outside the EU that operate AI within the EU, as well as organizations whose operation of AI impacts EU residents. Recently, European Digital Rights (“EDRi”) and dozens of other privacy advocacy organizations penned an open letter not only supporting efforts to enact the AIA but also urging that the legislation be expanded to include a ban on remote biometric identification (“RBI”) systems, such as facial recognition, in all public spaces. Companies that currently deploy AI in their operations—or may do so in the future—should keep tabs on developments regarding the AIA, which will have wide-reaching implications extending far beyond the EU if the legal framework becomes law.
The Final Word
While Q2 provided us with a number of significant developments in the areas of AI and biometric privacy, companies are sure to see many additional litigation, legislative, and regulatory developments during the second half of 2022 as well. As we move closer to 2023, be sure to stay tuned; CPW will continue to serve as your go-to source for staying at the forefront of all new developments in real time.