Following our article last year on the joint UK and Australian investigation into Clearview AI’s alleged breaches of data protection and privacy laws, the ICO has now fined Clearview more than £7.5 million.

The ICO has fined US facial recognition firm Clearview £7,552,800 for breaches of UK data protection law, including collecting images of individuals in the UK without their knowledge and using them in its online global database. The ICO also issued an enforcement notice requiring Clearview to stop collecting the personal data of UK residents and to delete any such data it already holds.

Clearview’s actions and reasons for the decision

Clearview AI provides a service through which users can upload an image of an individual’s face to find matches in its global database, which it calls ‘the World’s Largest Facial Network’. The service is used mostly by law enforcement agencies: the database contains links showing where individuals’ images appear online, allowing users to monitor those individuals’ behaviour.

The ICO reported that Clearview has collected more than 20 billion images of individuals’ faces without their knowledge, uploading these images into its online database. The images were taken from a range of sources, including social media platforms.

The ICO based its decision on the following breaches of UK data protection law:

  • failure to use personal data in a way that is fair and transparent;
  • failure to show a lawful reason for collecting the personal data;
  • failure to have a process in place to stop the data being retained indefinitely;
  • failure to meet the higher data protection standards required for biometric data (classed as ‘special category data’ under the GDPR and UK GDPR); and
  • requesting additional personal information, including photos, from members of the public who asked whether they were on the database.

Clearview may challenge the ICO’s decision, so we await further developments. The UK is the fourth country to take enforcement action against Clearview AI, following Italy, France and Australia.

Implications of the ICO’s decision

The ICO decision comes at a time when regulators have been issuing significant fines to punish non-compliance and to highlight to businesses the importance of accountability in the use of personal data. It also demonstrates regulators’ continued focus on holding large technology companies to account for the protection of personal information. Earlier this year, the Irish Data Protection Commission (DPC) fined Meta €17 million for insufficient security measures concerning EU users’ data, and Google Ireland faced an even larger fine (€90 million) from the French data protection authority (CNIL) in relation to its cookie consent features on YouTube. Looking ahead, it seems that regulators will continue to wield their powers of enforcement, especially given that one of the aims of the National Data Strategy is ensuring public trust in the use of data.

However, achieving these aims is not all about robust enforcement. Earlier this month, the ICO launched an AI and data protection risk toolkit, designed to help organisations better understand the risks of using AI and its effect on individuals’ rights, and to enable businesses to self-assess whether their AI services comply with data protection law. This forms part of the ICO’s strategic priority and commitment to ensuring good practice in AI.

The key takeaway from the ICO’s enforcement action is that businesses should review the measures and safeguards they have in place to protect individuals’ personal data, and take appropriate steps to mitigate potential issues arising from any areas of vulnerability.