How are public bodies and regulators responding to the rise in use of facial recognition technology?
The key takeaway
The UK Information Commissioner’s Office (ICO) has fined US tech company Clearview AI Inc (Clearview) over £7.5m for failing to comply with UK data protection laws in its collection of images and data from the internet and its creation of a facial recognition database.
Clearview is a US facial recognition company which has gathered over 20bn images of people’s faces and publicly available data from the internet. Clearview uses this data to provide its customers, ranging from companies to law enforcement bodies around the globe, with an extensive online facial recognition database.
In 2020, the ICO and the Office of the Australian Information Commissioner (OAIC) launched a joint investigation into Clearview and its data processing practices. The investigation centred on Clearview’s facial recognition app, through which users could upload a photo of a person, match the face to similar images in Clearview’s database and locate where each image originally appeared on the internet. The bodies sought to determine how Clearview used the data “scraped” from the internet and social media platforms and how it used biometrics for facial recognition, particularly in relation to data from residents in the UK and Australia.
The ICO and OAIC concluded their investigation in November 2021, with the OAIC determining that Clearview had failed to comply with the requirements of various Australian Privacy Principles and that it had interfered with the privacy of Australian individuals by failing to gain their consent or to notify them of the collection of personal information.
The ICO subsequently issued a notice of intent to fine Clearview for its actions. It preliminarily determined that Clearview’s practices involving the processing of data from UK individuals may have seriously breached UK data protection laws. The ICO also issued a preliminary enforcement notice preventing Clearview from further processing the personal data of UK individuals and requiring the company to delete any such data it held.
In May 2022, the ICO fined Clearview over £7.5m for its use of images gathered from the internet to create its facial recognition database. It also issued an enforcement notice requiring Clearview to stop obtaining and using the personal data of UK residents collected from the internet and to delete any such data from its systems. In its investigation, the ICO found that Clearview had contravened UK data protection laws by:
- “failing to use the information of people in the UK in a way that is fair and transparent, given that individuals are not made aware or would not reasonably expect their personal data to be used in this way
- failing to have a lawful reason for collecting people’s information
- failing to have a process in place to stop the data being retained indefinitely
- failing to meet the higher data protection standards required for biometric data (classed as ‘special category data’ under the GDPR and UK GDPR)
- asking for additional personal information, including photos, when asked by members of the public if they are on their database. This may have acted as a disincentive to individuals who wish to object to their data being collected and used.”
The ICO concluded that Clearview’s database would likely include a “substantial amount of data” from residents in the UK due to the high number of people who interact with the internet and social media in the country.
New EDPB guidelines
The focus on facial recognition technology can also be seen in the recent adoption of new guidelines by the European Data Protection Board (EDPB). These emphasise that facial recognition technology should only be used in strict compliance with the Law Enforcement Directive and only when necessary and proportionate.
The EDPB explicitly stated its opposition to certain uses of the technology, including live facial recognition (LFR) in publicly accessible spaces, the use of facial recognition to categorise individuals based on protected characteristics, and the processing of personal data in a law enforcement context in reliance on a database of data scraped from the internet (as was the case for Clearview).
Why is this important?
Clearview’s fine concludes the joint investigation between the ICO and the OAIC, which was conducted under the Global Privacy Assembly’s Global Cross Border Enforcement Cooperation Arrangement and a Memorandum of Understanding. It demonstrates the willingness of public bodies to work across borders to enforce data protection laws where they identify a significant global threat to personal data privacy.
As demonstrated by the EDPB’s adoption of new guidelines, the rapid advancement of facial recognition technology is already on the radar of public bodies and regulators worldwide, and while their stances are strict, they vary from regulator to regulator. In June 2021, the Information Commissioner’s Opinion “The use of live facial recognition technology in public places” outlined the key requirements for using LFR in a way that protects the public’s data. The UK’s Advertising Standards Authority (ASA) built on this, explaining how LFR could fall within the remit of the CAP Code “where the technology involves the processing of personal data to serve ads to consumers”. It will be important for businesses using advanced facial recognition technology in public places to stay alert to any changes in this very active space.
Any practical tips?
The ICO’s decision clearly sets out its position against the collection and retention of personal data scraped from the internet and against the use of facial recognition tools to match individuals against mass databases. It follows that businesses should be mindful of how they use facial recognition tools and keep a close eye on decisions from regulators in the UK and beyond. As Clearview found out, the regulators are clearly not averse to issuing substantial fines for breaches in this sensitive area.