A wide scope of application.

The European Commission’s “Proposal for a Regulation laying down harmonized rules on Artificial Intelligence”, published on April 21, 2021 (the proposal), represents an opportunity to reaffirm the role of the European Union in defining global standards and in promoting the development of artificial intelligence that is trustworthy and consistent with the values and interests underlying the European Union itself. Consequently, the proposal creates the perfect opportunity to recreate the so-called “Brussels effect” already experienced with the GDPR: European legislation seen and used as a model at (one might say) the global level.

The European Commission has chosen the instrument of a regulation, rather than a directive, to ensure that the new rules are applied as uniformly as possible throughout the Union. The regulation would also reach non-European actors, including large and well-known US and Chinese players, who would fall within its scope.

The proposal would apply to anybody who places artificial intelligence systems on the market or puts them into service in the European Union, regardless of whether they are established within the Union or in a third country. It would also apply to providers and users of artificial intelligence systems located in a third country, if the output produced by the intelligent system is used within the European Union:

“This Regulation applies to: (a) providers placing on the market or putting into service AI systems in the Union, irrespective of whether those providers are established within the Union or in a third country; (b) users of AI systems located within the Union; (c) providers and users of AI systems that are located in a third country, where the output produced by the system is used in the Union”.

The classification of artificial intelligence systems.

This broad subjective scope of application is counterbalanced by a much narrower objective scope.

The proposal maintains the risk-based approach already recommended in the White Paper on Artificial Intelligence, published on February 20, 2020.

Artificial intelligence systems are classified into four categories:

  • Unacceptable risk – systems which present an unacceptable risk to the health and safety of individuals, such as systems that condition human behavior through subliminal techniques or by exploiting vulnerabilities;
  • High risk – such as systems intended to be used for the recruitment or selection of personnel;
  • Low risk – such as chatbots;
  • Minimal or negligible risk – such as anti-spam filters.

The proposal prohibits the use of artificial intelligence systems presenting an unacceptable risk.

The remaining rules of the proposal concern and apply only to high-risk systems, with the exception of the transparency requirements prescribed for the use of low-risk artificial intelligence systems.

No rules are introduced for artificial intelligence systems that present a negligible level of risk, even though the European Commission itself has specified that the vast majority of artificial intelligence systems fall into this last category. It seems almost as if the Commission wants to demonstrate its trust in artificial intelligence and confirm that it has no intention of over-regulating (which would create an indirect disincentive to the development and use of artificial intelligence systems). We shall see if the European Union succeeds in keeping faith with this approach…

The prevention and risk management approach.

High-risk artificial intelligence systems – the only ones to be regulated by the proposal – are identified according to two different criteria.

As a general rule, an artificial intelligence system is considered high-risk if it meets both of the following conditions:

  • It is intended to be used as a safety component of a product, or is itself a product, covered by the European Union harmonization legislation listed in Annex II to the proposal;
  • The product whose safety component is the artificial intelligence system, or the artificial intelligence system itself as a product, is required to undergo a third-party conformity assessment before being placed on the market or put into service, pursuant to the European Union harmonization legislation listed in Annex II to the proposal.

There is also a special criterion, according to which all systems identified in a specific list attached as Annex III to the proposal are high-risk. The classification in this case is based on the intended use of the intelligent system, and the European Commission is empowered to update the list periodically.

All high-risk artificial intelligence systems, regardless of the criterion by which they are identified, are subject to the obligations set out in the proposal. These compliance obligations aim to prevent and manage risks to the health and safety of individuals, rather than to compensate for harm after the fact. In fact, the proposal does not contain any rules on liability for the actions and omissions of intelligent systems. The rules proposed by the European Commission include only obligations of compliance, vigilance and control, which apply throughout the entire life cycle of the artificial intelligence system and to all the actors involved in the chain – from the provider to the final user – each of whom is therefore responsible for compliance.

In short, the proposal does not take an ex-post approach based on risk remediation, but rather an ex-ante approach based on risk prevention, identification and management.

However, the European Commission has already confirmed that it will soon address liability for artificial intelligence, which should make the resulting regulatory framework more complete. It remains to be seen whether it will also be sufficient to convince those who still harbor diffidence about the usefulness and benefits of artificial intelligence tools.