The proposed AI Liability Directive will modernise the EU civil liability framework, laying down uniform rules around civil liability for damage caused by AI systems.

Once in place, the proposed AI Liability Directive (“Directive”)1 will adapt and harmonise certain non-contractual civil liability rules where the damage caused involves the use of AI systems. It aims to:

  • ensure that victims of damage caused by an AI system obtain protection equivalent to that available to victims of damage where no AI system is involved;
  • reduce legal uncertainty for businesses developing or using AI regarding their potential exposure to liability, particularly in cross-border cases; and
  • prevent the emergence of fragmented AI-specific adaptations of national civil liability rules.

The AI Liability Directive complements the upcoming Artificial Intelligence Act (“AI Act”), which is currently making its way through the EU legislative process.2 That legislation aims to reduce risk and prevent damage associated with AI systems. The AI Liability Directive steps in when that damage materialises.

What is an AI system?

The definition of an AI system will come from the upcoming AI Act and is not yet finalised. A recently proposed iteration describes a system that is designed to operate with elements of autonomy and that, on the basis of machine and/or human-provided data and inputs, infers how to achieve a given set of objectives using machine learning and/or logic- and knowledge-based approaches. It produces system-generated outputs such as content, predictions, recommendations or decisions, influencing the environments with which it interacts.3

Why is the Directive needed?

Currently, when a person seeks compensation for damage, Member States’ general fault-based liability rules usually require that person to prove a negligent or intentionally damaging act or omission (“fault”) by the person potentially liable for that damage, as well as a causal link between that fault and the relevant damage.

However, when AI is interposed between the relevant act or omission and the damage, the characteristics of certain AI systems, such as opacity, autonomous behaviour and complexity, can make it very difficult, if not impossible, for the injured person to meet the required burden of proof.

It may be very difficult to prove that a specific input, for which the potentially liable person is responsible, caused a specific AI system output, which led to the damage.

Ultimately, if the challenges posed by AI make it too difficult to obtain redress for damage, there will be no effective access to justice. This, in turn, could lower societal acceptance of and trust in AI and impede the transition to the digital economy. AI use is also seen as a critical enabler for reaching the sustainability goals of the European Green Deal.

What will the Directive do?

The Directive will ease the burden of proof on an injured party by introducing a rebuttable “presumption of causality” in respect of the damage concerned, once certain conditions are satisfied. The application of the presumption may vary depending on the circumstances of the case.

In addition, there will be harmonised rules on the preservation and disclosure of evidence by providers or users of high-risk AI systems.4 These obligations can extend to non-parties where the plaintiff has already made all proportionate attempts to gather the relevant evidence from the defendant.

Disclosure will be subject to appropriate safeguards to protect sensitive information, such as trade secrets. Any such court orders should be “necessary and proportionate” in the circumstances; blanket requests for information will be impermissible.

Where a defendant is subject to such an order but does not comply, the court should presume non-compliance with the relevant duty of care that the requested evidence was intended to prove. The defendant should be able to rebut that presumption.

The Directive follows a minimum harmonisation approach in this respect: plaintiffs will still be able to invoke more favourable rules of national law, where available.

Where will the Directive apply?

The new rules will apply to claims brought by any natural or legal person against any person whose fault influenced the AI system that caused the damage. The damage can be of any type recognised under national law, including damage resulting from discrimination or a breach of fundamental rights such as privacy.5 Subrogated claims and representative actions will also be possible.6

What will the Directive not do?

The Directive will not harmonise general aspects of civil liability, which may be regulated in different ways across Member States. These include, for example, the definition of fault or causality, the different types of damage that give rise to claims for damages, the distribution of liability among multiple tortfeasors, contributory conduct, the calculation of damages and limitation periods.

The Directive does not harmonise national laws in relation to the burden or standard of proof, except where it lays down certain presumptions, as set out above.

It will not affect any rights an injured person may have under national rules implementing the Product Liability Directive. Nor will it cut across liability rules in the transport sector, the Digital Services Act or the GDPR. The Directive does not apply to criminal liability.

Transposition and future developments

As things stand, Member States will have two years to transpose the finalised Directive. It will only apply to damage that occurs after the date of transposition.

This may not be the last legislative intervention as regards AI liability. Following stakeholder consultation, the Commission has set out its plans for a staged approach. The Directive is seen as the first “minimally invasive” stage. A second stage will involve assessing the need for more stringent or extensive measures.

To this end, the Directive provides for a monitoring programme to provide the Commission with information on incidents involving AI systems. A targeted review will assess whether, having regard to certain factors such as risk, additional measures such as a strict liability regime or mandatory insurance for operators should be put in place.

Comment

The proposed legislation will now move through the EU legislative process and may be amended along the way, particularly given its interaction with and reliance on the AI Act. While Member State implementation still looks some distance off, those potentially caught by the legislation, such as providers and users of AI systems, should have it on their radar now.