On April 21, 2021, the European Commission (EC) issued its eagerly awaited draft proposal for an EU Artificial Intelligence Regulation (Draft AI Regulation) – the first formal legislative proposal regulating Artificial Intelligence (AI) on a standalone basis. The Draft AI Regulation is accompanied by a revision of the EU’s rules on machinery products, which lay down safety requirements that machinery products must satisfy before they are placed on the EU market. The new draft Machinery Products Regulation – proposed by the EC on the same day – is intended to tackle safety issues that arise in emerging technologies. The Draft AI Regulation (which appears to have borrowed a number of principles from existing EU legislation, including the EU General Data Protection Regulation 2016/679 (GDPR)) has an intentionally broad scope, and regulates the use of AI in accordance with the level of risk the AI system presents to fundamental human rights and other key values to which the EU adheres. AI systems that are considered to present an “unacceptable” level of risk are banned from the EU, and “high-risk” systems are subject to strict requirements. AI systems considered to present a lower level of risk are subject to transparency requirements or are not regulated at all. Companies engaged in the development, manufacturing, importation, distribution, servicing, and use of AI – irrespective of industry – should assess to what extent their products are implicated and how they will address any regulatory requirements to which they are subject. The Draft AI Regulation foresees maximum administrative fines of up to €30m or 6% of total worldwide annual turnover in the event of non-compliance.
How is AI Defined? The Draft AI Regulation proposes a definition for “[AI] systems” as follows: “software that is developed with one or more of the techniques and approaches listed in Annex I and can, for a given set of human-defined objectives, generate outputs such as content, predictions, recommendations, or decisions influencing the environments they interact with.” The techniques listed in Annex I include machine learning approaches, logic- and knowledge-based approaches, and statistical approaches (including search and optimization methods). However, developing and agreeing on a definition for AI that is able to capture all relevant systems and technology, and can withstand the test of time, presents many challenges, and we expect this proposed definition will be the subject of discussion moving forward.
Why is AI Being Regulated? Whilst the use of AI technologies has the potential to bring significant benefits to a variety of stakeholders, policy makers are concerned that, without appropriate regulation, the use of AI technology can pose risks to the rights and freedoms of individuals.
Who Does the Draft AI Regulation Apply to? Leveraging concepts from the GDPR, the Draft AI Regulation has a broad scope of application and applies to:
- providers (i.e., the entity / person that develops or has an AI system developed) who place AI systems on the EU market or put them into service in the EU irrespective of whether they are established within the EU. Notably, where the provider is not established in the EU and where an importer cannot be identified, they must appoint an authorised representative in the EU;
- users of AI systems (i.e., the entity / person using an AI system) who are established in the EU – except where the AI system is used in the course of a personal non-professional activity; and
- providers and users of AI systems established outside the EU where the output produced by the AI system is used in the EU.
Certain limited requirements in the Draft AI Regulation also apply to distributors and importers of AI systems.
What Does the Draft AI Regulation Address? The Draft AI Regulation addresses the following five key areas: (i) rules for the placing on the market, the putting into service and the use of AI systems in the EU; (ii) prohibitions of certain AI practices; (iii) specific requirements for “high-risk” AI systems; (iv) transparency rules for AI systems; and (v) rules on market monitoring and surveillance.
- Prohibited AI: The Draft AI Regulation lists out a number of AI practices which are prohibited. These include, for example, the use of ‘real-time’ remote biometric identification systems in publicly accessible spaces for the purpose of law enforcement, except where certain exemptions apply.
- High-risk AI: The Draft AI Regulation imposes specific requirements with respect to “high-risk” AI systems. An AI system would be considered high-risk under the Draft AI Regulation where it: (i) is intended to be used as a product, or as a safety component of a product, covered by certain listed legislation (e.g., medical devices and children’s toys) which requires a third-party conformity assessment to be carried out; or (ii) has been included in the list in Annex III, which includes, for example, AI systems intended to be used for: (a) remote biometric identification of individuals; (b) recruitment, promotion and termination of employees; and (c) evaluating the creditworthiness of individuals.
- Providers of high-risk AI systems are, for example, required to:
- perform a conformity assessment to demonstrate that the AI system is compliant with the Regulation;
- report any serious incident or malfunctioning of the high-risk AI system to the competent authority immediately, and no later than 15 days after becoming aware of it;
- establish and document a risk management system, a quality management system and a post-market monitoring system;
- develop detailed technical documentation and maintain automatically generated logs; and
- register the AI system in the EU Database (maintained by the Commission).
- Transparency obligations: Providers must ensure that all AI systems intended to interact with individuals are designed and developed in such a way that individuals are informed that they are interacting with an AI system, unless this is obvious from the circumstances and the context of use.
What is the Liability Regime under the Draft AI Regulation? Although the proposed Regulation does not set out additional civil redress/liability mechanisms, the Commission’s Coordinated Plan on AI (also published on April 21, 2021) states that, in 2022, the Commission will propose “EU measures adapting the liability framework to the challenges of new technologies, including AI.” The Commission further states that such EU measures “may include a revision of the Product Liability Directive, and a legislative proposal with regard to the liability for certain AI systems.”
Who Is Responsible for Enforcing the Draft AI Regulation? The application of the Draft AI Regulation would be overseen by the (new) European Artificial Intelligence Board, whose formation, tasks and competencies mimic those of the European Data Protection Board, and which will be established under the auspices of the EC. However, actual enforcement will be the responsibility of the national authorities competent for AI matters. Interestingly, unlike in the GDPR, no one-stop shop mechanism has been foreseen on the basis of which one national authority could claim competence in the case of cross-border enforcement.
What are the Consequences of Non-Compliance? Exceeding the maximum fines for non-compliance under the GDPR, the Draft AI Regulation proposes administrative fines of up to €30m or 6% of total worldwide annual turnover for the most serious infringements.
What Happens Next? The Draft AI Regulation will be discussed in the EU Parliament and Council and, once there is an agreement on the final text, the Regulation will be formally adopted. Once adopted, it enters into force 20 days after publication in the EU Official Journal. The current Draft AI Regulation has left a placeholder with respect to the transition (grace) period – although this is usually two years after entry into force.