G20 nation moves to modernised privacy code for online platforms, including binding rules. The proposed scope – and the stakes for industry players – are substantial.

On 25 October 2021, the Australian Attorney-General’s Department released, for public consultation, an exposure draft bill introducing amendments to the Privacy Act 1988 (Cth) (the Privacy Legislation Amendment (Enhancing Online Privacy and Other Measures) Bill 2021 (Cth), the “Online Privacy Bill”) and a discussion paper seeking submissions on broader reforms to Australian privacy legislation (the “Discussion Paper”). Our overview of the Online Privacy Bill and Discussion Paper is available here.

One of the main amendments proposed by the Online Privacy Bill is the introduction of a framework allowing the Office of the Australian Information Commissioner (OAIC) to register an OAIC- or industry-developed, enforceable online privacy code (“OP Code”) that would be binding on all large online platforms, social media services and data brokerage service providers (“OP Organisations”). This would supplement the current provisions under Part IIIB of the Privacy Act dealing with the development and registration of, and compliance with, APP codes that set out how one or more of the Australian Privacy Principles (APPs) will apply to a particular entity or class of entities (and may impose additional requirements). Currently there are two registered APP codes: one developed by the OAIC for Australian government agencies, and one developed by the Association of Market and Social Research Organisations (now the Australian Data and Insights Association) for its members.

Large online platforms and social media services are broadly defined in the Online Privacy Bill. This means a wide range of organisations with online operations could be affected by the proposed OP Code, going beyond the ACCC’s recommendation in its 2019 digital platform inquiry final report to create a privacy code enforceable against social media platforms, search engines and other digital content aggregation platforms.

Combined with the Bill’s removal of the requirement that a foreign organisation collect or hold personal information in Australia in order to be subject to the Privacy Act, this would extend coverage to an organisation that collects the personal information of Australians via a digital platform that has no servers in Australia.

KEY TAKEAWAYS

Submissions on the new Online Privacy Bill close on 6 December 2021. In engaging with the consultation and preparing for the implementation of the OP Code, impacted organisations should have regard to the following issues:

  • The proposed OP Code will prescribe how OP Organisations must comply with certain APPs (including the description of uses and disclosures of personal information in privacy policies, as well as notice and consent requirements). It will also impose further requirements on OP Organisations to cease using or disclosing personal information upon reasonable request, and with respect to their interactions with children and other vulnerable individuals.
  • Many of the changes that the Online Privacy Bill proposes to introduce through the OP Code in respect of OP Organisations echo similar reforms contemplated in the Discussion Paper for the broader economy (e.g. introducing a right to object, and amending the Privacy Act to expressly provide that consent should be voluntary, informed, current, specific and unambiguous, and that privacy notices be clear, current and understandable).
  • A breach of the OP Code would be treated as an interference with the privacy of an individual, exposing OP Organisations to strengthened penalties (of up to the greater of $10 million, three times the value of the benefit obtained from the breach (if determinable) or 10% of the relevant yearly turnover) and the reinforced enforcement mechanisms otherwise contemplated in the Online Privacy Bill and the Discussion Paper.
  • Particular restrictions regarding the use of the personal information of children align with similar rules under overseas data protection regimes including the EU General Data Protection Regulation (GDPR) and reflect a global regulatory focus on the safety of children using social media and the internet generally.
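The tiered penalty cap described above is simply the greatest of three candidate amounts. As a purely illustrative sketch (the function name and the example figures are hypothetical, not drawn from the Bill’s drafting):

```python
def max_penalty_cap(benefit_value, yearly_turnover):
    """Greater of A$10m, three times the benefit obtained from the
    breach (if determinable), or 10% of relevant yearly turnover."""
    candidates = [10_000_000, 0.10 * yearly_turnover]
    if benefit_value is not None:  # the benefit may not be determinable
        candidates.append(3 * benefit_value)
    return max(candidates)

# e.g. a platform with A$500m yearly turnover, benefit not determinable:
print(max_penalty_cap(None, 500_000_000))  # 50000000.0
```

Where the benefit is determinable and large, the three-times-benefit limb dominates; otherwise the cap defaults to the larger of the fixed amount and the turnover-based limb.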

Our full briefing, which focuses on the implications under the Online Privacy Bill for a potential new OP Code and identifies the various types of organisations that will likely qualify as OP Organisations, can be found here.

Executive Summary

  • On 12 October 2021, the Information Commissioner’s Office (“ICO”) opened its consultation in relation to the use of the beta version of its AI and data protection risk mitigation and management toolkit (the “Consultation”).
  • The Consultation runs until 1 December 2021 and the ICO is seeking responses from all industry sectors and from organisations of all types that engage in the “development, deployment and maintenance of AI systems that process personal data”.
  • The AI and data protection risk mitigation and management toolkit (the “AI Toolkit”) provides organisations with a framework against which to assess internal AI risk by identifying potential risks for consideration and offering practical, high-level steps on how organisations can mitigate such risks.

In this article we highlight some noteworthy aspects of the AI Toolkit, including the key practical steps that organisations should consider when processing personal data in connection with developing and operating AI systems, and flag key elements that respondents to the ICO’s consultation may want to consider.

Background

The initial, alpha version of the AI Toolkit was launched in March 2021. The Toolkit forms part of the ICO’s commitment to enabling good data protection practice in AI and incorporates elements of the ICO’s Guidance on AI and Data Protection (the “Guidance”). The Guidance is designed to help organisations mitigate the data protection risks posed by the use of AI. It is intended to be considered before, and used throughout, AI projects to ensure that organisations devote enough time to the impact that their data protection obligations will have on the development of the AI system or application concerned.

Following feedback received on the alpha version, the current, beta version of the AI Toolkit was launched on 20 July 2021. To test the Toolkit’s effectiveness and practical application, the ICO is currently applying the Toolkit to a range of live AI systems that process personal data. Alongside the responses gained from the Consultation, the results of this testing will inform the final version of the Toolkit which is due to be published in December 2021.

Scope of the AI Toolkit

The AI Toolkit provides organisations with a framework against which to assess internal AI risk by identifying potential risks for consideration and offering practical, high-level steps on how organisations can mitigate such risks. It should be noted that the Toolkit itself is drafted to reflect the auditing framework employed by the ICO’s internal assurance and investigation teams. Consequently, by applying the guidance in the AI Toolkit to their use of AI applications that process personal data, organisations can satisfy themselves that their use of AI is aligned with the ICO’s expectations in relation to data protection-related compliance.

The Toolkit has been designed with both technology specialists and those responsible for an organisation’s compliance with data protection laws (such as data protection officers, general counsel, and senior management) in mind, thereby encouraging organisations to build data protection considerations into the development stage of any AI project, rather than to consider these issues as an afterthought.

The ICO has made it clear that “there is no set way to use the toolkit” and that it is flexible enough to be applied to each and every stage of the development of an AI system. Nonetheless, the ICO explains that the Toolkit addresses four stages of the AI Lifecycle:

  1. Business requirements and design
  2. Data acquisition and preparation
  3. Training and testing
  4. Deployment and monitoring

The Toolkit itself is then divided into two distinct sections that invite the user to review a series of risk statements for each stage of the project, and then use the corresponding “practical steps” guidance to put in place effective mitigation strategies to address those risks. A selection of the key risk domain areas and accompanying practical steps is outlined below.

Practical steps for organisations

  • Accountability and governance

Demonstrating an AI system’s compliance with the UK GDPR accountability principle has traditionally been particularly difficult for organisations, largely due to the technical complexity of AI systems. To this end, the Toolkit recommends that organisations carry out suitable risk assessments (such as Data Protection Impact Assessments), conduct sufficient due diligence checks of any AI systems providers and agree appropriate responsibilities with any third party suppliers.

  • Lawfulness and purpose limitation

In considering the issue of lawfulness and purpose limitation, the AI Toolkit reinforces the distinction between the development and deployment stages of an AI project and highlights the risks of conducting unlawful processing and contravening the purpose limitation principle when the different purposes involved in each stage of the project are not adequately considered. To mitigate such risks, the Toolkit advises organisations to conduct data flow mapping exercises at the start of any AI project and to continuously monitor and review their documented lawful bases for data processing to ensure that such bases are still relevant to each stage of the project.

  • Fairness, preventing and monitoring bias

AI systems must be sufficiently statistically accurate and must avoid discrimination in order to be considered ‘fair’. Where insufficiently diverse or discriminatory data is used in the training and development of AI systems, organisations risk producing AI systems that generate inaccurate or discriminatory outputs or decisions. To mitigate such risks, the AI Toolkit recommends that organisations document the minimum success criteria needed to proceed to the next step in the development lifecycle, ensure that datasets do not reflect past discrimination, and take additional measures to increase data quality and improve model performance where disproportionately high error rates are recorded for a protected group.

  • Transparency

It is crucial that the processes, services and decisions delivered by AI systems can be clearly and easily communicated to the individuals they affect; a failure to do so may expose an organisation to the risk of regulatory action. To guard against this, the AI Toolkit recommends that organisations first ensure that their policies, protocols and procedures are easily accessible and understandable to the staff working on an AI project. They should then consider what information should be provided to data subjects about how the AI system will use their personal data, and periodically test the effectiveness of those explanations to check that they are sufficiently clear.

  • Security

As with the accountability principle discussed above, demonstrating an AI system’s compliance with the security requirements of the GDPR or the UK GDPR can be more challenging than with other, more established technologies. Key risks identified in the AI Toolkit include unauthorised or unlawful processing and accidental loss, destruction or damage caused by AI systems that lack appropriate levels of security. To address these concerns, the Toolkit recommends that organisations deliver appropriate security training to their AI project staff, develop an AI incident response plan, document all movement and storage of personal data from one location to another, and proactively test the system and investigate any anomalies immediately.

  • Data minimisation

AI systems generally require large amounts of data to operate effectively. Nonetheless, the AI Toolkit highlights the risks posed by excessive collection and processing of personal data and the potential for such activities to breach the data minimisation principle in Article 5(1)(c) of the UK GDPR, which requires that personal data be adequate, relevant and limited to what is necessary in relation to the purposes for which it is processed. To comply with this requirement, the AI Toolkit recommends that organisations consistently assess whether the data they are collecting to train their AI systems is relevant for the intended purpose, carry out reviews during the project’s testing phase to assess whether all the data is needed or whether the same result can be achieved with a subset of that data, and periodically assess whether training data remains adequate and relevant to the prescribed purpose.

  • Individual data subject rights

The AI Toolkit emphasises that the rights of data subjects enshrined in the UK GDPR will apply wherever personal data is used, at any stage of an AI system’s development and deployment lifecycle. Failure to recognise when such rights are applicable is a key risk faced by organisations and the AI Toolkit recommends that organisations design and apply a policy or process that defines how information requests (and other data subject right requests) by individuals will be dealt with. Additionally, organisations should index the personal data used in the AI system concerned so that such data is easy to locate in the event that a request is received.

  • Meaningful human review

To the extent that organisations rely on human review to take certain processing activities outside the scope of automated decision-making under Article 22(1) of the GDPR and UK GDPR, the AI Toolkit identifies the risks posed by conducting “tokenistic” human reviews. To ensure that adequate human reviews are undertaken in this context, the Toolkit suggests that all human reviewers be adequately trained to interpret and challenge outputs made by the AI system, and that reviewers always have meaningful influence on any decision made. Specifically, human reviewers should take into account additional factors, such as local context, beyond those considered by or fed into the AI system, and should retain the authority and competence to overrule any automated recommendation made by the system.

The AI Toolkit and the National AI Strategy

The AI Toolkit and its recommendations should also be considered in light of the UK’s National AI Strategy. On 22 September 2021, the Department for Digital, Culture, Media and Sport (“DCMS”), in partnership with the Department for Business, Energy and Industrial Strategy, published the UK’s National AI Strategy (the “Strategy”). Whilst the Strategy does not contain concrete legislative proposals, it affirms the UK government’s intention to harness the potential of AI and thereby secure the UK’s position as an international market leader in the development of AI technologies. The Strategy will be discussed further in our upcoming blog post.

The Strategy is segmented into three “pillars”, the third of which is devoted to ensuring that the “UK develops an appropriate national and international governance framework for AI technologies to encourage innovation, investment and protect the public and fundamental values”. Consequently, although it should be emphasised that the AI Toolkit is not legislation but functions as best practice guidance, it can still be considered an integral part of the UK’s developing AI governance framework and will likely play an increasingly important role once its final iteration is published in December 2021. Indeed, one of the key actions listed under the third pillar of the Strategy is to “explore with stakeholders the development of an AI technical standards engagement toolkit to support the AI ecosystem to engage in the global AI standardisation landscape”. The AI Toolkit therefore has the potential to make a major contribution to the AI standardisation landscape.

Conclusion

Whilst the AI Toolkit will likely play an increasingly important role in the development of the UK’s approach to AI regulation in 2022, at this stage it is important for a wide variety of organisations to contribute to the Consultation. As responses will directly inform the drafting of the final version of the AI Toolkit, having a diverse pool of responses to draw from will ensure that the Toolkit’s guidance can be as widely applicable as possible.

Once the final AI Toolkit is published later in December 2021, organisations should carefully review the guidance and use the Toolkit’s framework as a guide to structure all AI projects that process personal data.