The ICO has warned organisations to assess the risks carefully before using emotion analysis technologies: those that fail to act responsibly, pose risks to vulnerable people or fall short of the ICO's expectations will be investigated.

Whether it is monitoring where you are looking on a screen (gaze tracking), your facial expressions or your skin moisture, technology is looking more closely than ever at our biometric data. Examples of emotion analysis technologies include wearable devices that track worker health, and systems that screen body position, speech and eye/head movements to register students for exams.

The ICO is concerned that these technologies use a range of personal data (e.g. emotional responses) in a way that it describes as ‘more risky’ than traditional biometric technologies used to verify or identify a person. Algorithms that are not sufficiently developed may also be unable to detect emotional cues reliably, creating a risk of systemic bias, inaccuracy and, potentially, discrimination.

The ICO’s Deputy Commissioner, Stephen Bonner, acknowledged that developments in the biometrics and emotion AI market are immature, but said that “while there are opportunities present, the risks are currently greater… As it stands we are yet to see any emotion AI technology develop in a way that satisfies data protection requirements, and have more general questions about proportionality, fairness and transparency in this area.”

As well as warning of the risks in this area, the ICO has published two new reports to help businesses navigate the use of emerging biometric technologies and has confirmed that it intends to publish new biometric guidance in Spring 2023. This will cover facial, fingerprint and voice recognition, which are already used across a range of industries and sectors. For example, my child’s school uses fingerprint technology in its canteen, financial companies are using facial recognition to verify individuals’ identities, and airports are aiming to streamline passenger journeys by using facial recognition at check-in and at boarding gates. The ICO is keen to ensure that these technologies are developed and implemented with privacy in mind. It will be interesting to see what the guidance suggests, particularly around facial recognition, given the heated discussions in the EU about whether certain biometric technologies should be banned or classified as high risk under its AI Act.