As discussed in our 2021 mid-year review, the EU Commission published a proposal for a Regulation on Artificial Intelligence, now referred to as the EU draft Artificial Intelligence Act (the AIA). In that report we noted that the EU is betting that consumers will affirm its strategy by ultimately demanding, and only using, AI products that are trustworthy and held to the standards set out in the AIA.

In recent news, on 3 May 2022 the European Parliament adopted the final recommendations of its Special Committee on Artificial Intelligence in a Digital Age (AIDA). The key message is that MEPs would like the EU to be a global standard-setter in the world of AI. A study commissioned by AIDA looks at the potential benefits of AI to health, the labour market, and the environment, while cautioning on the risks of mass surveillance. In this note, we look at the challenges consumers face with AI products and how the AIA addresses these issues through enforcement.

Challenges

“Artificial Intelligence (AI) entails a number of potential risks, such as opaque decision-making, gender-based or other kinds of discrimination, intrusion in our private lives, or being used for criminal purposes.”

According to the EU Commission White Paper published in February 2020, certain specific features of Artificial Intelligence, such as opacity, will make enforcement of the AIA more difficult:

The lack of transparency resulting from the opaqueness of AI makes it difficult to identify and prove possible breaches of law (including legal provisions that protect fundamental rights), to attribute liability, and to meet the conditions for claiming compensation.

Therefore, in order to ensure effective application and enforcement, it may be necessary to adjust or clarify existing legislation in certain areas. The other challenge facing enforcing bodies is keeping pace with the market as Artificial Intelligence evolves.

Enforcing bodies 

Under the current version of the text, Member States hold a key role in the enforcement of the Regulation. It is proposed that each Member State should designate one or more national competent authorities for the purpose of supervising the application and implementation of the Regulation. A national supervisory authority should be appointed to act as an official point of contact for the public and other relevant institutions within the state and Union.

For certain specific infringements, Member States should consider the margins and criteria set out in the Regulation. It is also proposed that the European Data Protection Supervisor should have the power to impose fines on Union institutions, agencies and bodies falling within the scope of this Regulation.

Territorial scope

“In light of their digital nature, certain AI systems should fall within the scope of this Regulation even when they are neither placed on the market, nor put into service, nor used in the Union.”

In order to avoid circumvention of the AIA and to ensure effective protection of those living in a Member State, the latest amendments to the draft text of the AIA proposed by the current EU Presidency (the Compromise Text) seek the expansion of its territorial scope. Another key point raised in the Compromise Text on scope is that the AIA should not undermine research and development and should respect freedom of science.

The following are the suggested amendments to Article 2(1) in the Compromise Text, setting out to whom the Act is applicable:

  • Providers placing on the market or putting into service AI systems in the EU, irrespective of whether those providers are physically present or established within the EU or in a third country 
  • Users of AI systems who are physically present or established within the EU 
  • Providers and users of AI systems who are physically present or established in a third country, where the output produced by the system is used in the EU 
  • Importers and distributors of AI systems 
  • Product manufacturers placing on the market or putting into service an AI system together with their product and under their own name or trademark 
  • Authorised representatives of providers, which are established in the Union

Penalties  

Member States are tasked with setting out rules on penalties, including administrative fines, applicable to infringements. These penalties shall be “effective, proportionate, and dissuasive.” Notably, the AIA pays due regard to SMEs and start-ups by requesting that Member States take into particular account the interests of small-scale providers and start-ups, including their economic viability.

The AIA penalty threshold structure resembles the penalty structure set out in the General Data Protection Regulation. Administrative fines apply as follows:

I. Up to €30,000,000 or, if the offender is a company, up to 6% of its total worldwide annual turnover for the preceding financial year, whichever is higher, for:

a) non-compliance with the prohibition of the AI practices laid down in Article 5, and

b) non-compliance of the AI system with the requirements laid down in Article 10 (data and data governance)

II. Up to €20,000,000 or, if the offender is a company, up to 4% of its total worldwide annual turnover for the preceding financial year, whichever is higher, for non-compliance of AI systems with any other requirement under the AIA, apart from Article 5 and Article 10.

III. Up to €10,000,000 or, if the offender is a company, up to 2% of its total worldwide annual turnover for the preceding financial year, whichever is higher, for supplying incorrect, incomplete, or false information to notified bodies and national authorities. 

In line with the sliding scale of risks introduced by the AIA, the highest-level fine will apply to prohibited systems classified as an unacceptable risk in the AIA, and for data governance breaches.
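Each tier above reduces to the same rule: the applicable cap for a company is whichever is higher of a fixed euro amount or a percentage of worldwide annual turnover. The sketch below illustrates this calculation; the function name and the tier numbering are our own labels, not terms from the draft text, and the actual fine imposed within a cap would depend on the circumstances of the infringement.

```python
def fine_cap(tier: int, annual_turnover_eur: float) -> float:
    """Illustrative upper bound of an AIA administrative fine for a
    company: the higher of the fixed cap or the turnover percentage.
    Tier numbers I-III from the draft text are mapped to 1-3 here."""
    tiers = {
        1: (30_000_000, 0.06),  # Article 5 prohibitions / Article 10 breaches
        2: (20_000_000, 0.04),  # any other requirement of the AIA
        3: (10_000_000, 0.02),  # incorrect, incomplete or false information
    }
    fixed_cap, pct = tiers[tier]
    return max(fixed_cap, pct * annual_turnover_eur)

# For a company with EUR 1bn worldwide annual turnover, the tier I cap is
# 6% of turnover (EUR 60m), which exceeds the EUR 30m fixed amount:
print(fine_cap(1, 1_000_000_000))  # → 60000000.0
```

For smaller undertakings the fixed amount dominates: at EUR 100m turnover, 2% is only EUR 2m, so the tier III cap remains the fixed EUR 10m.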

Conclusion 

The key point is that each Member State will be responsible for enforcing the AIA. The penalty structure is set on a sliding scale based on the risk system introduced by the AIA. The proposed text has yet to be adopted and may be modified further when under consideration by the European Parliament. The latest amendments proposed by the current EU Presidency are promising in that they recommend that the EU Commission regularly report to the European Parliament on the need for changes to the list of AI techniques and high-risk AI systems.