As Automatic Facial Recognition Technology (FR Tech) becomes smarter and cheaper, a controller that stores images of customers or employees on its system may be exposing those individuals, and itself as a controller, to significant privacy risks.
A separate, more acute privacy concern arises where controllers use live camera images in conjunction with FR Tech to identify individuals moment by moment.
What are the security risks around using FR Tech?
Growing threats to cyber security and privacy have increased the urgency for more robust measures to protect information. It is now standard practice for individuals to use a myriad of different, lengthy passwords, each involving uppercase, lowercase, numeric and symbolic characters, to access all of their online profiles, from shopping sites to podcast apps. Individuals can conceal passwords in carefully hidden (and easily forgotten) places, but we cannot password-protect our faces, at least where FR Tech is concerned.
The exciting opportunities presented by recent improvements in FR Tech are dampened by the chilling possibilities for malicious users to invade individuals' privacy. For example, police in New Delhi recently trialled facial recognition technology and identified almost 3,000 missing children in four days. On the darker side, app developers in Russia have produced a free, publicly available way of identifying a person on the street simply by taking their picture, as easily as Shazam-ing a catchy tune you can't remember the name of.
What is the legal position around facial recognition?
A photograph clearly displaying an individual's face will constitute personal data under data protection legislation, since it allows that individual to be identified. Processing personal data using specific technical means to analyse the physical, physiological or behavioural characteristics of a natural person, where this allows the unique identification or authentication of that person, constitutes the processing of biometric data. Biometric data is one of the special categories of personal data identified under the General Data Protection Regulation (GDPR). On that basis, the use of FR Tech to identify an individual will constitute processing special category personal data, attracting more stringent obligations on the part of controllers using the information, notably around identifying a lawful basis for the processing.
The ICO has expressed concerns about the lack of transparency in respect of FR Tech used by the police in the UK and the resulting breach of public trust for law-abiding individuals who are monitored by FR Tech without their knowledge. In the private sector, particularly retail, live FR Tech presents significant marketing opportunities: individuals could be identified on entering a store and directed to items similar to those purchased previously. Any such use would require a controller to carry out a data protection impact assessment to demonstrate that it had properly balanced the intrusiveness of such processing against the benefits for all parties.
The penalties for breaching the GDPR can be significant, so any controller that uses FR Tech to identify individuals needs to understand its legal obligations, particularly when it comes to mitigating a data incident. Controllers should document the practical considerations arising from these obligations in a policy document covering the following:
- How individuals subject to FR Tech identification are notified that this is happening
- The lawful basis for processing, in particular when identifying individuals using FR Tech
- The specific purpose(s) of processing
- Procedures for mitigating and responding to a data incident
It is important to remember that creating a policy is only half the battle. Once such a document is in circulation, the steps it identifies must be put into practice to ensure compliance.