In 2021, countries in EMEA continued to focus on the legal constructs around artificial intelligence (“AI”), and the momentum continues in 2022. The EU has been particularly active in AI—from its proposed horizontal AI regulation to recent enforcement and guidance—and is expected to remain so throughout 2022. The UK follows closely behind with its AI strategy and recent reports and standards. While our team monitors developments across EMEA, this roundup focuses on summarizing the leading developments within Europe in 2021 and what they mean for 2022.

The Proposed EU AI Act

In April 2021, the European Commission published its proposed Regulation Laying Down Harmonized Rules on Artificial Intelligence (the “Commission Proposal”). The Commission Proposal sets out a horizontal approach to AI regulation that establishes rules on the development, placing on the market, and use of artificial intelligence systems (“AI systems”) across the EU (see our previous blog post here). The proposal is currently under negotiation between the co-legislators, the European Parliament and the Council of the European Union (“Council”).

Slovenia held the Council Presidency for the last six months of 2021, and France assumed the Presidency in January 2022. During its Presidency, Slovenia published a partial compromise text of the EU AI Act, focusing on edits to the classification of high-risk AI systems. The French Presidency circulated additional proposed amendments on 13 January 2022, focusing on the requirements for high-risk AI systems. Notable amendments in each version include:

Slovenian Council Presidency:

  • Scope. New Article 52a (and corresponding Recital 70a) would clarify that “general purpose AI systems” do not fall within the scope of the Act. Although the compromise text does not define this term, Recital 70a states they are “understood as AI system[s] that are able to perform generally applicable functions such as image / speech recognition, audio / video generation, pattern detection, question answering, translation etc.”
  • Social scoring. Article 5(1)(c) (and corresponding Recital 17) would extend the prohibition on AI systems used for social scoring, which the Commission Proposal limits to public authorities, to private actors as well. In addition, while the Commission Proposal limits the prohibition to social scoring used to evaluate the “trustworthiness” of natural persons, the Slovenian Presidency text removes this limitation, thereby broadening the scope of the prohibition.
  • Biometric identification. Amendments to Article 3(33) would broaden the definition of “biometric data” to include systems that do not “uniquely” identify people, while other amendments would make the Act apply not only to “remote” biometric identification systems, but to biometric identification systems broadly. For instance, Article 5 would prohibit law enforcement use of any biometric identification systems in publicly available spaces, subject to certain exceptions.
  • High-risk AI systems. Annex III would add to the list of AI systems qualifying as “high risk” those that are intended to be used to control “digital infrastructure” or “emissions and pollution.”

French Council Presidency:

  • Risks. Amendments to Article 9 (Risk management system) would clarify that high-risk AI systems must have a risk-management system allowing for the identification of known / foreseeable risks “most likely to occur to health, safety and fundamental rights in view of the [system’s] intended purpose.”
  • Trade-offs. Amendments to Article 9(3) specify that risk-management measures must aim to “minimis[e] risks more effectively while achieving an appropriate balance in implementing the measures to fulfil those requirements.”
  • Error tolerance. Amendments to Article 10(3)—concerning training, validation, and testing data—slightly relax the requirement that the data be “free of errors and complete”, now requiring data sets to be so “to the best extent possible.”
  • Human oversight. Amendments to Article 14(4) make clear that the provider of a high-risk AI system must enable the system to allow for human oversight by natural persons.

The EU AI Act is also being considered by the European Parliament. Although it is listed as a high-priority piece of legislation in the Commission’s 2022 work program (see here), it may be some time before it is finalized.

EU Recommendations, Consultations and Reports on AI

In addition to activity on the EU AI Act, the EU has published additional recommendations, consultations and reports on AI:

  • The Council of Europe published a Recommendation (see here) that responds to the changes in profiling techniques over the last decade. It recognizes that profiling can impact individuals by placing them in predetermined categories without their knowledge, and that this lack of transparency can pose significant risks to human rights. The Recommendation encourages member states to promote and make legally binding the use of a ‘privacy by design’ approach in the context of profiling, and sets out additional safeguards that should be imposed on profiling.
  • The European Commission published a public consultation (see here) to adapt product liability rules to ensure that they sufficiently protect consumers against the harms of new technologies, including AI. The consultation is split into two parts and gathers views on: (i) how to ensure that consumers and users continue to be protected against the harm caused by AI systems, particularly with respect to compensation, and (ii) how to address the problems purportedly linked to certain types of AI (e.g., where there is difficulty with identifying the potentially liable person, or proving that person’s fault or proving a product’s defect and the causal link with damage). The consultation period has ended, and the Commission intends to propose an update to the Product Liability Directive by the end of the third quarter of 2022.
  • On 6 October 2021, the European Parliament voted in favor of a resolution banning the use of facial recognition technology (“FRT”) by law enforcement in public spaces (see our previous blog post here). The resolution forms part of a non-legislative report on the use of AI by the police and judicial authorities in criminal matters (“AI Report”) published by the European Parliament’s Committee on Civil Liberties, Justice and Home Affairs (“LIBE”) in July 2021. The AI Report will be sent to the European Commission, which has three months to either (i) submit, or indicate it will submit, a legislative proposal on the use of AI by the police and judicial authorities as set out in the AI Report; or (ii) if it chooses not to submit a proposal, explain why.

Enforcement Against Clearview AI

From an enforcement perspective, in 2021 a number of European data protection authorities (“DPAs”) took enforcement action against specific AI use cases, particularly those involving FRT. The most significant action has been the investigation into Clearview AI Inc. (“Clearview AI”) in relation to its personal information handling practices, especially the company’s use of data scraped from the internet and its use of biometrics for facial recognition. The UK Information Commissioner’s Office (“ICO”) and the Office of the Australian Information Commissioner (“OAIC”) conducted a joint investigation. In November 2021, the ICO announced a provisional intention to fine Clearview AI over £17 million for breaches of data protection laws, with a final decision expected in 2022 (see here). Additionally, the French privacy regulator, the CNIL, ordered Clearview AI to cease collecting images from the internet and to delete existing data within two months (see here in French). Because AI typically involves significant processing of personal data, DPAs have taken a keen interest in applying the GDPR to AI.

AI Activity in the United Kingdom

Following the end of the Brexit transition period on 31 December 2020, the UK government announced plans to reform UK data protection law and published its own National AI Strategy in September 2021 (see here and our previous blog post here). According to the National AI Strategy, the Office for AI is expected to publish a White Paper on regulating AI in early 2022. Further to this, the UK government has published a number of reports and standards relating to AI, for example:

  • The UK government’s Central Digital and Data Office (“CDDO”) published the Algorithmic Transparency Standard (see here) as part of the UK AI Strategy’s commitment to delivering greater transparency on algorithm-assisted decision making in the public sector. The Algorithmic Transparency Standard seeks to help public sector organizations provide clear information about the algorithmic tools they use, and why they use them.
  • The UK government’s Centre for Data Ethics and Innovation (“CDEI”) published an independent report setting out the roadmap to an effective AI assurance ecosystem (see here).
  • In January 2022, the Office for AI, supported by the British Standards Institution, launched a new AI Standards Hub (see here) to develop AI standards.