The ICO has published new AI guidance setting out its top tips on how to use AI and personal data appropriately and lawfully. The guidance is designed to improve how your organisation handles AI and personal data. It is also intended to help regulators understand how AI can be used lawfully and appropriately, both when regulating its use and when using AI themselves.

Tip 1: Take a risk-based approach when developing and deploying AI: You first need to assess whether you need to use AI, as it is generally considered a high-risk technology. If you do, assess the risks and put in place measures to mitigate them. This includes carrying out a data protection impact assessment (DPIA) and consulting affected groups. Note: if you identify a risk that you cannot mitigate, you are legally required to consult the ICO before any processing takes place.

Tip 2: Think about how you will explain the decisions made by your AI system: While this can be difficult, particularly where machine learning or black box AI is used, you must still provide a meaningful explanation to the individuals affected. You also need to think about what people may expect your explanation to look like and how you will handle individual rights requests. The ICO has dedicated guidance on explaining decisions made with AI, produced jointly with the Alan Turing Institute, which can help here.
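By way of illustration only, here is a minimal sketch of how an explanation might be generated from a simple, interpretable scoring model. The model, feature names and weights are all hypothetical, and real systems will be far more complex; the point is that each factor's contribution can be surfaced in plain language.

```python
# Minimal sketch: plain-language explanation from a hand-written linear scorer.
# All feature names and weights are hypothetical, for illustration only.
WEIGHTS = {"income_band": 0.4, "missed_payments": -0.9, "account_age_years": 0.2}

def score(applicant: dict) -> float:
    """Compute the model's score as a weighted sum of the applicant's features."""
    return sum(WEIGHTS[f] * applicant[f] for f in WEIGHTS)

def explain(applicant: dict) -> list[str]:
    """Rank each feature's contribution so a meaningful reason can be given."""
    contributions = {f: WEIGHTS[f] * applicant[f] for f in WEIGHTS}
    ranked = sorted(contributions.items(), key=lambda kv: abs(kv[1]), reverse=True)
    return [f"{f} contributed {c:+.2f} to the score" for f, c in ranked]

applicant = {"income_band": 3, "missed_payments": 2, "account_age_years": 5}
print(f"score = {score(applicant):.2f}")
for line in explain(applicant):
    print(line)
```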

Tip 3: Only collect the data you need to develop your AI system and no more: AI systems often need lots of data, which can seem at odds with the GDPR principle of data minimisation. However, you can still use AI - you just need to ensure that the data is accurate, adequate, relevant and limited to what is necessary. While the latter two in particular may seem hard to achieve, an FAQ section at the back of the guidance discusses this further, suggesting, for example, mapping out the areas of the AI pipeline where you may use personal data and scheduling a review at each significant milestone to check whether you still need the data for that purpose. It also points to the data minimisation section of the ICO's main AI guidance and advises you to consider whether any privacy enhancing technologies can help.
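To make the FAQ's suggestion concrete, a minimal sketch of a milestone review that keeps only the personal data fields still approved for the current stage of the pipeline (the stages and field names are hypothetical):

```python
# Minimal sketch of a data-minimisation review at a pipeline milestone.
# Stage names and field names are hypothetical, for illustration only.

# Personal data fields approved for each stage of the AI pipeline.
APPROVED_FIELDS = {
    "training": {"age_band", "postcode_area", "transaction_history"},
    "evaluation": {"age_band", "postcode_area"},
    "deployment": {"transaction_history"},
}

def minimise(record: dict, stage: str) -> dict:
    """Keep only the personal data fields still needed at this stage."""
    approved = APPROVED_FIELDS[stage]
    return {k: v for k, v in record.items() if k in approved}

record = {
    "age_band": "25-34",
    "postcode_area": "SW1",
    "transaction_history": ["..."],
    "full_name": "...",  # never approved: dropped at every stage
}
print(minimise(record, "evaluation"))  # full_name and transaction_history removed
```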

Tip 4: Address risks of bias early on: You need to assess whether the data you are gathering is accurate, relevant and representative of the population that you will apply the AI system to. You should also map out the likely effects and consequences of the decisions made by the AI system for different groups and assess whether these are acceptable.
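As an illustration of the representativeness check, a minimal sketch comparing the group make-up of a training set against a reference population (group names, shares and tolerance are hypothetical):

```python
from collections import Counter

# Hypothetical reference population shares for a protected characteristic.
POPULATION_SHARES = {"group_a": 0.51, "group_b": 0.49}
TOLERANCE = 0.05  # flag groups more than 5 percentage points off

def representativeness_gaps(labels: list[str]) -> dict[str, float]:
    """Gap between each group's share of the training data and the population."""
    counts = Counter(labels)
    total = len(labels)
    return {g: counts.get(g, 0) / total - share
            for g, share in POPULATION_SHARES.items()}

training_labels = ["group_a"] * 700 + ["group_b"] * 300
for group, gap in representativeness_gaps(training_labels).items():
    if abs(gap) > TOLERANCE:
        print(f"{group} is over/under-represented by {gap:+.0%}")
```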

Tip 5: Take time and dedicate resources to preparing the data appropriately: This will result in better outcomes, as the quality of the output depends on the quality of the data going in. Clear criteria and lines of accountability for labelling data that involves protected characteristics, special category data, or both can help, as appropriately labelled data can lead to fairer outcomes.
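One way to make those criteria and lines of accountability concrete is to record, alongside each label, who applied it, which labelling guidelines were followed and whether it touches special category data. A minimal sketch (the field names are hypothetical):

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class LabelledRecord:
    """One labelled training example, with provenance for accountability."""
    record_id: str
    label: str
    labeller_id: str          # who applied the label
    labelled_on: date
    guideline_version: str    # which labelling criteria were followed
    special_category: bool    # e.g. health or ethnicity data (UK GDPR Art. 9)
    reviewed: bool = False    # has a second labeller checked it?

example = LabelledRecord(
    record_id="rec-0001",
    label="eligible",
    labeller_id="annotator-17",
    labelled_on=date(2022, 11, 1),
    guideline_version="v2.3",
    special_category=True,
)
```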

Tip 6: Ensure that your AI system is secure: AI systems can exacerbate known security risks or create new ones, so you must implement appropriate technical and organisational measures to ensure a level of security appropriate to the risk. To help manage this, you could carry out a security risk assessment (including maintaining an inventory of all the AI systems you use) and/or carry out model debugging (i.e. finding and fixing the problems in your model).
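The inventory the guidance suggests could start as simply as a structured register of each AI system, its owner, its supplier and the data it touches. A minimal sketch with hypothetical entries:

```python
from dataclasses import dataclass

@dataclass
class AISystemEntry:
    """One row in an inventory of AI systems, for security risk assessment."""
    name: str
    owner: str                  # accountable team or individual
    supplier: str               # "in-house" or a third party
    personal_data: list[str]    # categories of personal data processed
    risk_level: str             # e.g. "low" / "medium" / "high"
    last_assessed: str          # date of last security risk assessment

inventory = [
    AISystemEntry("cv-screening", "HR", "Acme AI Ltd",
                  ["employment history", "education"], "high", "2022-10-01"),
    AISystemEntry("chat-routing", "Support", "in-house",
                  ["contact details"], "low", "2022-09-15"),
]

# Surface anything high-risk for prioritised review.
for entry in inventory:
    if entry.risk_level == "high":
        print(f"Review {entry.name} (owner: {entry.owner})")
```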

Tip 7: Ensure that any human review of decisions is meaningful: Some AI systems make decisions about individuals, for example in recruitment or credit applications. However, people have the right not to be subject to solely automated decisions that have legal or similarly significant effects. They can request a human review of a decision made about them, but that review must be meaningful: human reviewers must be trained to interpret and challenge the AI's outputs, and senior enough to override them and take additional factors into account.
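To illustrate how such a review might be wired into a decision flow, a minimal sketch in which a reviewer can confirm or override the automated outcome (the names and threshold are hypothetical):

```python
from dataclasses import dataclass

@dataclass
class Decision:
    outcome: str          # e.g. "approve" / "decline"
    model_score: float
    reviewed_by: str | None = None
    overridden: bool = False

def decide(score: float) -> Decision:
    """Automated first pass; not final for legally significant decisions."""
    return Decision("approve" if score >= 0.7 else "decline", score)

def human_review(decision: Decision, reviewer: str,
                 override_to: str | None = None) -> Decision:
    """A trained, suitably senior reviewer can confirm or override the output."""
    decision.reviewed_by = reviewer
    if override_to and override_to != decision.outcome:
        decision.outcome = override_to
        decision.overridden = True
    return decision

d = human_review(decide(0.65), reviewer="senior-underwriter-3",
                 override_to="approve")  # additional factors justified approval
print(d)
```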

Tip 8: If you are buying in your AI, work with your AI supplier to ensure your use is appropriate: You are still responsible for your use of AI, even if you procure the AI system from a third party. You should therefore carry out due diligence to ensure you choose an appropriate supplier (e.g. check that they took a privacy by design approach), collaborate with the supplier to carry out an assessment (e.g. your DPIA), agree and document your respective roles and responsibilities (e.g. who will answer individual rights requests) and consider whether there will be any international transfers.

Comment:

The ICO recognises the unique challenges AI can present to data protection compliance. Over the past few years it has produced a number of long, detailed pieces of guidance addressing these challenges - its main AI guidance is over 80 pages long, and the guidance it produced jointly with the Alan Turing Institute on explaining decisions made with AI is in three parts, with task lists, checklists and annexes. By contrast, this latest guidance is relatively short and high level, making it a more accessible starting point for organisations. However, it still manages to cover quite a bit of ground, and is supplemented by a set of FAQs, as well as by links to relevant sections of the more detailed AI guidance. Together with the AI risk mitigation toolkit, it provides further practical support for organisations that are using, or thinking about using, AI.

Increasingly, we are seeing an uptake in applications of artificial intelligence (AI). The use of AI has the potential to make a significant difference to society. However, to realise the benefits, there needs to be confidence that AI is being deployed appropriately and lawfully. (ICO - How to use AI and personal data appropriately and lawfully, Nov 2022)