On 21 April 2021, the European Commission published the long-awaited proposal for a Regulation on Artificial Intelligence[1] (“AI Regulation”).

The proposed AI Regulation introduces a first-of-its-kind, comprehensive, harmonised regulatory framework for Artificial Intelligence. The ambition is to provide the legal certainty needed to facilitate investment and innovation in AI, whilst at the same time establishing a framework to safeguard fundamental rights and ensure AI applications are used safely. The risk-based construct of the draft Regulation will resonate with those familiar with the CE product safety regime and with the GDPR, given its hallmarks: harmonisation of rules under a single regulation, extra-territorial effect, turnover-based fines and a focus on proactive controls on transparency, risk management and demonstrable accountability. Whilst the proposed text of the AI Regulation is only the first step in a long legislative process, it gives us an important early insight into the model the EU is looking to adopt.

Material and territorial scope

The AI Regulation adopts a broad regulatory scope, covering all aspects of the lifecycle of the development, sale and use of AI systems, including:

(i) placing AI systems on the market;

(ii) putting AI systems into service; and

(iii) making use of AI systems.

All those involved in undertaking these activities – whether as a provider, user, distributor, importer or reseller – will be subject to a level of regulatory scrutiny. This also extends to providers or users of AI systems located outside the EU if they place AI systems on the market or into service in the EU, or where the output produced by the AI system is used in the EU. There are parallels here with the extra-territorial effect of the GDPR.

There is a tiering of regulatory requirements depending on the inherent risk associated with the AI system or practice being used. These are explained in more detail below:

  • Prohibited AI practices. At the highest risk level are prohibited AI practices. These are particularly intrusive methods of deploying AI which the EU has determined must not be allowed to take place, and include AI used for social scoring, large-scale surveillance and adverse behavioural influencing.
  • High-risk AI systems. The AI Regulation creates a separate tier of high-risk AI systems. These are technologies anticipated to present a significant risk of harm and so permitted only on a restricted basis, with specific controls in place to support safe use. The list of high-risk AI systems (which may be expanded by the European Commission in due course) covers a wide range of applications, including AI systems deployed in relation to credit scoring; essential public infrastructure; social welfare and justice; medical and other regulated devices; and transportation systems.
  • Lower risk AI systems. AI systems which fall outside the scope of those identified as ‘high-risk’ and are not deployed for a prohibited practice are subject to a transparency regime.

Definition of AI system

The definition of an AI system is intended to be technology-neutral and future-proof, while providing legal certainty. It is based on the OECD’s 2019 Recommendation on Artificial Intelligence and covers:

  • Software;
  • Developed with one or more of the techniques and approaches specified in Annex I to the AI Regulation (which the Commission can amend over time through delegated acts). Currently these techniques include:
    • Machine-learning approaches;
    • Logic- and knowledge-based approaches; and
    • Statistical approaches;
  • Which can, for a given set of human-defined objectives, generate outputs such as content, predictions, recommendations, or decisions influencing the environments they interact with.

Prohibited AI practices

The AI Regulation prohibits specific AI practices (rather than AI systems) which are considered to create an unacceptable risk (e.g. by violating fundamental rights). These cover:

  • AI-based dark patterns: AI systems that deploy subliminal techniques beyond a person’s consciousness in order to materially distort a person’s behavior in a manner that causes or is likely to cause that person or another person physical or psychological harm (e.g. playing an inaudible sound to generate certain behavior);
  • AI-based micro-targeting: AI systems that exploit the vulnerabilities of a specific group of persons in order to materially distort the behavior of a person pertaining to that group in a manner that causes or is likely to cause that person or another person physical or psychological harm (e.g. toys with interactive features pushing children towards irresponsible or unwanted behavior);
  • AI-based social-scoring: AI systems used by public authorities for the evaluation or classification of the trustworthiness of natural persons over a certain period of time based on their social behavior or personality characteristics, with the social score leading to detrimental/unfavorable treatment in social contexts unrelated to the context in which the data was gathered;
  • the use of ‘real-time’ remote biometric identification systems in publicly accessible spaces for the purpose of law enforcement, except (i) concerning targeted searches for specific potential victims of crime, (ii) to prevent a threat to life or a terrorist attack, or (iii) to detect, localize, identify or prosecute a perpetrator or suspect of certain serious crimes. In this regard, the nature of the situation and the consequences of the use (seriousness, probability and scale of the harm) should be taken into account, necessary and proportionate safeguards should be complied with, and prior authorization by a judicial or administrative authority should be obtained.

High-risk AI systems

High-risk AI systems are permitted provided the strict controls set out in the regulation to mitigate risk are in place. Much of this part of the regulation follows the approach taken in existing EU legislation to manage product safety risk.

a) The definition of a high-risk AI system

High-risk AI systems are defined by a classification model that focuses on the risk associated with the product itself:

  • A first category covers AI systems intended to be used as a safety component of products (or which are themselves a product). These systems are listed in Annex II to the AI Regulation.
  • A second category covers stand-alone AI systems whose use may have an impact on fundamental rights. These systems are listed in Annex III and cover, by way of example, real-time and ‘post’ biometric identification systems. This list identifies AI systems whose risks have already materialised or are likely to materialise in the near future. It may be expanded in the future to cover other AI systems which the EC considers to present similarly high risks of harm.

b) Requirements applicable to high-risk AI systems

The key regulatory controls on high-risk AI systems fall on providers[2] of the system, as summarised below.

  • Transparency: High-risk AI systems must be designed and developed in such a way as to ensure that their operation is sufficiently transparent to enable users to interpret the system’s output and use it appropriately. Clear documentation and instructions for use must be provided to the user, containing information on the identity of the provider, the characteristics, capabilities and limitations of the AI system, and the human oversight measures.
  • Security: A high level of accuracy, robustness and security must consistently be ensured throughout the high-risk AI system’s lifecycle. Serious incidents and malfunctioning of the high-risk AI system must be reported to the market surveillance authorities of the Member State where the incident occurred.
  • Accountability:
    • Complete and up-to-date technical documentation must be maintained (and drawn up by providers before the placement on the market/putting into service) to demonstrate compliance with the AI Regulation. The outputs of the high-risk AI system must be verifiable and traceable throughout the lifecycle, including the automatic generation of logs (which must be kept by providers, when under their control).
    • The system must be registered in an EU database on high-risk AI systems before being placed on the market or put into service.
    • Where no importer can be identified, providers established outside of the EU shall appoint an authorized representative.
  • Risk management: A risk management system must be established, implemented, documented and maintained as part of an overall quality management system. Risk management must comprise a continuous iterative process run throughout the entire lifecycle of the system.
  • Testing: Any data sets used to support training, validation and testing must be subject to appropriate data governance and management practices, and must be relevant, representative, free of errors and complete, with statistical properties appropriate to the system’s intended use (a purely illustrative sketch of such checks follows this list).
  • Human review: AI systems must be designed and developed in such a way that there is effective human oversight. This element of human oversight echoes Article 22 GDPR on automated decision-making, which provides for a right to obtain human intervention.
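
By way of illustration of the testing and data governance requirement above, the sketch below shows the kind of automated checks a provider might run over a training dataset. It is a minimal, hypothetical example only: the column names, threshold and helper function are assumptions for illustration and are not drawn from the AI Regulation.

```python
# Illustrative only: minimal automated checks supporting the "relevant,
# representative, free of errors and complete" data governance requirement.
# Column names and the representativeness threshold are hypothetical.
import pandas as pd


def check_training_data(df: pd.DataFrame, group_column: str, min_group_share: float = 0.05) -> dict:
    """Run basic completeness, error and representativeness checks on a dataset."""
    report = {}

    # Completeness: columns containing missing values.
    missing = df.isna().mean()
    report["incomplete_columns"] = missing[missing > 0].to_dict()

    # Errors: exact duplicate records.
    report["duplicate_rows"] = int(df.duplicated().sum())

    # Representativeness: groups falling below a minimum share of the data.
    group_shares = df[group_column].value_counts(normalize=True)
    report["under_represented_groups"] = group_shares[group_shares < min_group_share].to_dict()

    return report


# Example usage with a hypothetical credit-scoring dataset.
data = pd.DataFrame({
    "age_band": ["18-25", "26-40", "26-40", "41-65", "41-65", "41-65"],
    "income": [21000, 35000, None, 54000, 48000, 60000],
    "defaulted": [0, 0, 1, 0, 0, 1],
})
print(check_training_data(data, group_column="age_band"))
```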

In addition to the above, providers must also:

  • Set-up, implement and maintain a post-market monitoring system (in a manner that is proportionate to the nature of the artificial intelligence technologies and the risks of the high-risk AI system);
  • Ensure the system undergoes the relevant conformity assessment procedure (prior to the placing on the market/putting into service) and draw up an EU declaration of conformity;
  • Immediately take corrective actions in relation to non-conforming high-risk AI systems (and inform the national competent authority of such non-compliance and the actions taken);
  • Affix the CE marking to their high-risk AI systems to indicate conformity;
  • Upon request of a national competent authority, demonstrate the conformity of the high-risk AI system.

Importers, distributors and users of high-risk AI systems are subject to more limited regulatory requirements. Most notable for users of high-risk AI systems are the requirements to (i) use the systems in accordance with the instructions given by the provider; (ii) ensure all input data is relevant to the intended purpose; (iii) monitor operation of the system and inform the provider/distributor of suspected risks, serious incidents or malfunctioning; and (iv) keep logs automatically generated by the high-risk AI system, where those logs are within their control.
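
As an illustration of the log-keeping and traceability obligations referred to above, the sketch below shows one possible way a provider or user might automatically record a high-risk system’s outputs. The record fields, file format and function name are assumptions for illustration; the Regulation does not prescribe a particular format.

```python
# Illustrative only: one possible shape for automatically generated,
# append-only logs supporting the traceability obligations described above.
import json
import time
import uuid


def log_ai_decision(log_path: str, model_version: str, inputs: dict, output: dict, operator: str) -> str:
    """Append a single timestamped record of an AI system output to a log file."""
    record = {
        "record_id": str(uuid.uuid4()),
        "timestamp": time.strftime("%Y-%m-%dT%H:%M:%SZ", time.gmtime()),
        "model_version": model_version,
        "inputs": inputs,
        "output": output,
        "operator": operator,
    }
    with open(log_path, "a", encoding="utf-8") as f:
        f.write(json.dumps(record) + "\n")
    return record["record_id"]


# Example usage for a hypothetical credit-scoring system.
log_ai_decision(
    "high_risk_ai_decisions.jsonl",
    model_version="credit-score-v1.3",
    inputs={"applicant_id": "A-1042", "income_band": "medium"},
    output={"score": 0.72, "decision": "refer_to_human_review"},
    operator="analyst-17",
)
```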

c) Conformity assessments and notified bodies / notifying authorities

The AI Regulation includes a conformity assessment procedure which must be followed for high-risk AI systems, with two levels of assessment:

  • If the high-risk AI system is already regulated under product safety rules, a simplified conformity assessment regime applies, effectively as an extension of the existing regime.
  • For other high-risk AI systems (i.e. those listed in Annex III), the new compliance and enforcement regime applies, with the provider expected to self-assess conformity, other than in the case of remote biometric identification systems, which will be subject to third-party conformity assessment.

The conformity assessment regime is supported by a network of notified bodies and notifying authorities to be designated or established by Member States as independent third parties in the conformity process.

Rules for other (Low Risk) AI systems

AI systems which are not deployed for a prohibited practice and fall outside the scope of a high-risk system will be subject to a number of basic transparency controls, in particular:

  • If the AI system is intended to interact with an individual, the provider must design the system to ensure the individual is aware they are interacting with an AI system (except where this is obvious, or it takes place in the context of the investigation of crimes);
  • If the AI system involves emotion recognition or biometric categorization of individuals, the user must inform the individual that this is happening;
  • If the AI system generates so-called ‘deep fakes’, the user must disclose this (i.e. that the content has been artificially created or manipulated); and
  • Codes of Conduct are encouraged in order to incentivise those providing and using lower risk AI systems to comply with the letter and spirit of the rules applicable to high-risk AI systems.

Governance, enforcements and sanctions

a) European Artificial Intelligence Board

The AI Regulation provides for the establishment of a European Artificial Intelligence Board (“EAIB”) to advise and assist the Commission on matters covered by the AI Regulation, in particular to (i) contribute to the effective cooperation of the national supervisory authorities and the Commission, (ii) coordinate and contribute to guidance and analysis by the Commission, the national supervisory authorities and other competent authorities on emerging issues, and (iii) assist the national supervisory authorities and the Commission in ensuring the consistent application of the rules. The EAIB construct is clearly modelled on the tasks and responsibilities of the European Data Protection Board (EDPB) under the GDPR.

b) National competent authorities

Member States must designate national competent authorities and a national supervisory authority responsible for providing guidance and advice on implementation of the AI Regulation, including to small-scale providers.

c) Enforcement

The AI Regulation requires Member State authorities to conduct market surveillance and control of AI systems in accordance with the product safety regime in Regulation (EU) 2019/1020. Providers are expected to co-operate by providing full access to training, validation and testing datasets etc.

If market surveillance gives an authority reason to believe that an AI system presents a risk to the health or safety of persons or to the protection of their fundamental rights, the authority shall carry out an evaluation of the AI system and, where necessary, require corrective actions.

d) Sanctions

Infringement of the AI Regulation is subject to monetary sanctions of up to €10m to €30m (depending on the nature of the infringement) or, if higher, a turnover-based fine of 2% to 6% of global annual turnover.
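
By way of illustration, the ‘whichever is higher’ mechanic can be expressed as a simple calculation. The sketch below uses the upper bounds quoted above together with a hypothetical turnover figure.

```python
# Illustrative only: the "fixed cap or percentage of turnover, whichever is
# higher" fine mechanic, using the upper bounds quoted in the text.
def maximum_fine(fixed_cap_eur: float, turnover_rate: float, annual_turnover_eur: float) -> float:
    """Return the higher of the fixed cap and the turnover-based fine."""
    return max(fixed_cap_eur, turnover_rate * annual_turnover_eur)


# Hypothetical example: the most serious infringements (€30m cap or 6% of
# global annual turnover) for a company with €2bn global annual turnover.
print(maximum_fine(30_000_000, 0.06, 2_000_000_000))  # 120000000.0, i.e. €120m
```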

The AI Regulation is enforced by supervisory authorities and, unlike the GDPR, does not provide for a complaint system or direct enforcement rights for individuals.

Conclusions

The AI Regulation is a groundbreaking piece of legislation which sets a clear regulatory marker that will have implications not only within the EU, but also in many other countries which are likely to follow the EU approach closely. The Regulation as drafted will impact a wide range of organisations – from users of AI across healthcare, government agencies, transportation and financial services, through to the underlying technology developers, many of whom may be located outside the EU. Given the novelty and breadth of the proposal, we can expect healthy scrutiny, lobbying and debate to follow as the draft now proceeds through the trilogue legislative process. We will watch the path closely and provide our ongoing commentary with interest.