How can those engaged in the development of artificial intelligence (AI) systems utilise the Information Commissioner’s Office’s AI and data protection risk toolkit (the Toolkit) to better ensure compliance with data protection legislation?
The key takeaway
The Toolkit provides a useful way for organisations to better gauge their compliance with data protection legislation in their development of AI systems; however, it does not replace the need for a Data Protection Impact Assessment or guarantee compliance.
AI is playing an ever larger part in our lives, powering virtual assistants, analytics and many other online services that we use almost daily. Many companies are looking to develop their own AI systems. Given the complexity of such systems, there is a risk that they may breach the UK General Data Protection Regulation (UK GDPR) and its protections for individuals' rights and freedoms.
These developments led the ICO to create the Toolkit, which is intended to give those developing AI systems greater confidence in their compliance with the law.
The ICO launched the Toolkit on 4 May 2022. It is designed to provide practical support to organisations in reducing the risks to individuals' rights and freedoms posed by the AI systems they develop, and takes account of feedback the ICO received on the beta version it launched over a year earlier.
The Toolkit aims to:
- identify risks to individual rights and freedoms
- connect those risks to the UK GDPR and its specific provisions, and
- provide practical steps to help mitigate the risks identified and make compliance with the UK GDPR more likely.
The Toolkit maps each stage of the AI lifecycle to the UK GDPR provisions relevant to that stage, allowing organisations to rate their risk against those provisions. It then sets out controls for those risks, along with specific practical steps organisations can take to reduce them throughout development.
Why is this important?
As the Toolkit makes clear, each lifecycle stage of AI development poses a raft of potential risks to organisations, so they need to keep a keen eye on compliance with the UK GDPR throughout.
Use of the Toolkit is not compulsory, but the ICO will undoubtedly look favourably on its use in the event of any potential enforcement action for breaches of data protection legislation.
Any practical tips?
Organisations developing AI systems should use the Toolkit throughout the development lifecycle, and especially in its early stages, so that risks can be mitigated as early as possible.
The ICO is also keen to receive feedback on the Toolkit as organisations use it, which will undoubtedly feed into future versions.