Here we are, right after Christmas, and ready to start afresh for 2023.

What did you wish to find under your Christmas tree?

Those of you who replied “the final version of the AI Act”: well, you were probably disappointed. And it isn’t your (or Santa’s!) fault.

Not getting your AI Act for Christmas isn’t about whether you were naughty or nice: regulating AI is no easy task. It requires ensuring safety and the protection of fundamental rights without stifling innovation. Indeed, as soon as the proposal for a regulation of the European Parliament and of the Council laying down harmonized rules on AI (AI Act) was issued on 21 April 2021, it launched a debate, and many amendments have been discussed, all aimed at striking the right balance: regulating without over-regulating.

It’s no coincidence that when, on 6 December 2022, the Council adopted its common position (general approach) on the AI Act, the Czech Deputy Prime Minister for Digitalization and Minister of Regional Development, Ivan Bartoš, stressed: “Artificial Intelligence is of paramount importance for our future. Today, we managed to achieve a delicate balance which will boost innovation and uptake of artificial intelligence technology across Europe. With all the benefits it presents, on the one hand, and full respect of the fundamental rights of our citizens, on the other.”

The debate is nevertheless still far from resolved. According to recent news, the European Parliament has already started its analysis and review of the Council’s general approach and is ready to propose and discuss further amendments (under the trilogue process), notably on the criteria for determining whether an AI system is high-risk and on the Commission’s powers to revise those criteria; provisions on general-purpose AI systems will be discussed at a later stage.

The recent proposal for a directive of the European Parliament and of the Council on adapting noncontractual civil liability rules to artificial intelligence (AI Liability Directive), issued on 28 September 2022, will also have to be examined and agreed upon by the European institutions.

The AI Liability Directive is intended to ensure that people harmed by AI systems enjoy the same level of protection as in cases that don’t involve AI systems (thereby also contributing to strengthening trust in AI and encouraging AI uptake in the EU). It provides for rebuttable presumptions and disclosure obligations that ease (without reversing) the burden of proof, while not exposing providers, operators and users of AI systems to higher liability risks that could hamper innovation.

Even though these tools (i.e. rebuttable presumptions and disclosure obligations) were chosen as the least interventionist option, concerns have already been raised as to whether they can, in practice, strike an effective balance between the interests of victims of harm related to AI systems and the interests of businesses active in the sector.

We have been discussing the AI Act and the AI Liability Directive with clients and friends, who did not hide their fear of getting lost in a sea of regulations, given that the EU is currently issuing numerous rules of paramount importance for the digital world (the DMA, the DSA, the DGA and the Data Act, for starters). So, with all these rules in play, why also stay abreast of developments on the AI Act?

Don’t worry: We’ve got you covered. We’ve prepared our list of the three things every business should know about how the EU is regulating AI and our three reasons why it is crucial for businesses to understand how the EU is regulating AI.

Spoiler alert: Whether or not your business falls within the scope of the AI Act, the European regulation of AI will still be relevant for you because, much as happened with the GDPR, it will likely become a model for legislation in other jurisdictions. This is in fact the very first reason to monitor developments in the EU regulation of AI, which may also have an impact well beyond the EU’s borders.

Stay tuned if you want to stay in the loop on AI in the EU (and on many other things!) and, for any further concerns, questions or curiosity about regulating AI in the EU, don’t hesitate to contact us!

Three things you need to know about how the EU is regulating AI …

1. What is the relationship between the AI Act and the AI Liability Directive?

The AI Act and the AI Liability Directive support, complement and reinforce each other.

As clarified in the explanatory memorandum to the AI Liability Directive, “safety and liability are two sides of the same coin.”

Indeed, the AI Act guards the safety side, providing for safety-oriented rules aimed at reducing risks and preventing damage, as well as requiring the establishment of a quality management system, various documentation, a conformity assessment procedure, registration obligations, cooperation and information duties, human oversight mechanisms and a post-market monitoring system.

As risks cannot be eliminated in their entirety, liability rules are needed to ensure that individuals can obtain effective compensation in the event of damage caused by AI systems. Such liability rules can be found in the AI Liability Directive, which aims to provide tools to overcome the difficulties faced when trying to prove the causal link between the fault of the defendant and the output of the AI system causing damage.

Difficulties in determining liability may arise from the complexity, autonomy and opacity of certain AI systems as, due to such features, explaining the inner functioning of the AI system may be very difficult in practice (the “black box effect”).

2. What is the approach taken by the EU to regulate AI?

The AI Act follows a risk-based approach, which classifies AI systems into (now) five categories:

  • Prohibited AI systems
  • High-risk AI systems
  • Low-risk AI systems
  • Minimal-risk AI systems
  • General-purpose AI systems (newly added).

As the risks increase, so do the measures to be taken: The highest level of risk triggers a ban on the use of the AI systems, while for less risky AI systems the focus is on transparency obligations aimed at ensuring that users are aware that they are interacting with an AI system (and not with a human being).

Most obligations of the AI Act apply to high-risk AI systems. According to the general approach of the Council, high-risk AI systems are products or safety components of products that are subject to a third-party conformity assessment before being placed on the market or put into service, or they are AI systems intended to be used for certain purposes identified by the AI Act.

Moreover, general-purpose AI systems that can be used as high-risk AI systems, or as components of high-risk AI systems, are subject to obligations similar to those provided for high-risk AI systems.

As for the AI Liability Directive, it does not introduce new substantive liability rules, nor does it aim at harmonizing general aspects of civil liability (e.g. the definitions of fault and causality). Those aspects will remain regulated in different ways by national laws.

The proposed AI Liability Directive introduces, under certain conditions, a rebuttable presumption of a causal link between the defendant’s fault and the output of the AI system (or its failure to produce an output). In very brief terms, the national court can presume that the fault caused the output (or the failure to produce an output) that gave rise to the damage if the claimant proves that (i) someone was at fault for not complying with a duty of care relevant to the damage, (ii) the output of the AI system, or the failure of the AI system to provide an output, gave rise to the damage, and (iii) it is reasonably likely, based on the circumstances of the case, that the fault influenced the output or the failure of the AI system to produce the output.

The AI Liability Directive also empowers national courts to order the disclosure of information about high-risk AI systems where the injured party has made all proportionate attempts to gather the relevant evidence to support the claim for compensation.

3. When will the AI Act be applicable?

Adoption of the general approach allows the Council, once the European Parliament adopts its own position, to enter into negotiations with the Parliament with a view to reaching an agreement on the proposed regulation. This will still take some time.

Once the final version is published in the Official Journal, the AI Act will apply in all EU member states 36 months after its entry into force, except for (i) the rules on AI governance systems at the European and national levels and (ii) the rules on penalties for breaches of the AI Act, both of which will apply 12 months after entry into force. Specific provisions describe when the AI Act will, or will not, apply to AI systems already placed on the market or put into service before its date of application.

...and three reasons why you need to know how the EU is regulating AI

1. It’s more likely than not that your business will fall within the scope of application of the AI Act.

The AI Act will apply to providers of AI systems (and their authorized representatives), importers, distributors and users, as well as to product manufacturers that place an AI system on the market, or put one into service, together with their product and under their own name or trademark.

Consequently, the AI Act will affect almost anyone who places an AI system on the market or puts one into service in the EU (whether physically present or established in the EU or in a third country), or who uses the output of an AI system in the EU.

The obligations that providers must comply with will vary based on the level of risk carried by the AI system. Consistent with the risk-based approach embraced by the AI Act, the higher the risk, the stricter the requirements.

Most rules will apply to high-risk AI systems and their providers, but there are also provisions that will apply to providers and users of low- or minimal-risk AI systems. Moreover, the criteria for determining when an AI system is high-risk may change over time, as the Commission is empowered to adopt delegated acts adding further high-risk AI systems, provided that certain conditions are met.

Additionally, users may become subject to the obligations of providers, under certain conditions.

2. Fines for noncompliance with the AI Act will be even higher than under the GDPR.

Although the AI Act empowers member states to lay down their own rules on penalties—including administrative fines—for infringements of the AI Act, it also provides for certain administrative fines for the most severe breaches:

  • Fines of up to €30 million or, if the offender is a company, up to 6 percent of its total worldwide annual turnover for the preceding financial year, whichever is higher, will apply in cases of noncompliance with the prohibition of artificial intelligence practices (e.g. placing on the market or putting into service a prohibited AI system);
  • Fines of up to €20 million or, if the offender is a company, up to 4 percent of its total worldwide annual turnover for the preceding financial year, whichever is higher, will apply in cases of noncompliance with certain obligations for high-risk AI systems and general-purpose AI systems, as well as with the transparency obligations for low- or minimal-risk AI systems;
  • Fines of up to €10 million or, if the offender is a company, up to 2 percent of its total worldwide annual turnover for the preceding financial year, whichever is higher, will apply in cases where incorrect, incomplete or misleading information is provided to notified bodies and national competent authorities in reply to a request.

For small and medium-sized enterprises and startups, the fines will follow the same risk tiers as for large companies but will be capped at up to 3 percent, 2 percent and 1 percent respectively, depending on the severity of the offense.
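To make the “whichever is higher” mechanism concrete, here is a minimal, purely illustrative Python sketch (the function name and the example turnover figure are ours, not taken from the AI Act):

    def fine_ceiling(fixed_cap_eur: float, turnover_share: float, worldwide_turnover_eur: float) -> float:
        # The maximum administrative fine is the fixed cap or the
        # turnover-based amount, whichever is higher.
        return max(fixed_cap_eur, turnover_share * worldwide_turnover_eur)

    # Prohibited-practice tier: EUR 30 million or 6 percent of worldwide
    # annual turnover. For a hypothetical company with EUR 1 billion in
    # worldwide annual turnover:
    print(fine_ceiling(30_000_000, 0.06, 1_000_000_000))  # 60000000.0, i.e. EUR 60 million

In other words, for large companies the turnover-based percentage becomes the binding ceiling as soon as worldwide turnover is high enough for the percentage to exceed the fixed cap.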

3. Noncompliance with the AI Act increases the risk that your business will be held liable for damage caused by the AI system.

Among other things, the AI Liability Directive uses the same definitions as the AI Act and makes the documentation and transparency requirements of the AI Act operational for liability purposes through the right to disclosure of information: failure to comply with a disclosure order triggers the rebuttable presumption of noncompliance with a relevant duty of care.

Furthermore, failure to comply with the requirements of the AI Act for high-risk AI systems will constitute an important element triggering the rebuttable presumptions provided under the AI Liability Directive.

This piece is based on the Council’s general approach on the AI Act of 6 December 2022 and on the proposal for the AI Liability Directive of 28 September 2022.