On 28 September 2022, the European Commission published its draft for a new Directive on liability issues related to the use of AI systems. Once implemented into Member State law, this will create new rights for users of AI systems (including consumers, businesses and government agencies) to bring claims for compensation against providers of AI systems where they allege they have suffered damage. As we go on to explain, this new civil liability regime for AI systems also seeks to make it easier for such claims to succeed, for example by reversing the burden of proof in certain circumstances.
The draft Directive is part of a broader package of measures – including a reform of the Product Liability Directive (see our blog post here) – aimed at adapting liability rules to the digital age. The new Directive is particularly closely related to the proposed AI Act, published in April 2021, filling in some of the gaps regarding liability for the use of AI systems.
Scope and rationale
The new Directive is a minimum harmonisation measure, which seeks to create minimum rights for users of AI systems to bring non-contractual civil liability claims for damage caused by using AI systems. The draft Directive expressly provides that Member States may choose to go further.
The Commission’s view is that the measures are required in order to improve legal certainty for all stakeholders, and to address what the Commission considers to be inequalities between claimants and defendants in cases involving complex AI systems.
- New evidence rights for claimants. The draft Directive requires that Member States allow “potential claimants” to require providers of high-risk AI systems to disclose any evidence in their possession that relates to (among other things) training, validation and testing records, technical documentation and protocols, quality management and any corrective actions as defined in the AI Act.
- Presumption of breach in cases of document destruction or non-disclosure. Where a defendant fails to disclose evidence that it is required to share, or where this has not been preserved, then, according to the draft Directive, the relevant court must presume that the defendant failed to comply with the duty of care “that the evidence requested was intended to prove”. In other words, there is a (rebuttable) presumption that the defendant breached the relevant duty.
- Reversal of the burden of proof where breach is established. The draft Directive also requires Member States to ensure that their courts “presume the causal relationship” between (i) any non-compliance by the defendant with a duty of care pursuant to the AI Act and/or any other Union or national law which is directly intended to protect against the damage that occurred, on the one hand, and (ii) the output of the AI system in question, or its failure to produce that output, on the other. This presumption – again, rebuttable by the defendant – would apply in cases where the functioning of the AI system would otherwise need to be explained in order to establish the causal link. Although the drafting of the relevant provisions may well improve as the draft Directive goes through the legislative process, the intention appears to be to relieve the claimant of the burden of explaining how an AI system produced the result that it did, where it has been shown (either by the claimant, or as a result of the presumption that will apply in cases of document destruction or non-disclosure) that the defendant breached a relevant duty of care.
Interplay with other pieces of EU legislation
Alongside this initiative, the EU is currently negotiating other pieces of legislation which may overlap with some of the key provisions of this proposal. While the Directive is intended to accompany the AI Act as a framework laying down common rules for claiming compensation for damage caused by an AI system, key definitions such as “AI system”, “provider” and “user” will need to be consistent with the definitions agreed under the AI Act.
Close cooperation between the co-legislators will be key to ensuring legal certainty and consistency with other closely interlinked pieces of legislation, such as the General Product Safety Regulation (see our blog post here), which aims to ensure that only safe products are placed on the market and is currently under inter-institutional negotiation. In addition, this initiative is not intended to affect the exemptions from liability and the due diligence obligations laid down in the Digital Services Act. The requirements around evidence preservation will also create potential conflicts with the GDPR requirements on data minimisation: providers must retain enough data to avoid the presumption of breach, while at the same time ensuring that their data storage is not excessive.
On the same day as this draft Directive was announced, the Commission also announced its proposal to reform the Product Liability Directive, which imposes no-fault civil liability on producers of consumer products (whether or not they are AI-enabled). For certain products, the no-fault regime will operate alongside the civil liability regime proposed in this draft Directive – which has the potential to make compensation claims relating to those products very complex indeed.
What does it mean for businesses?
The draft Directive has been highly anticipated, following widespread criticism that this key area had been carved out of the AI Act. If adopted in its current form, it is intended to make it significantly easier for users of AI systems (including high-risk AI systems) to obtain compensation in cases where harm is alleged to have been caused by such a system. The introduction of new information rights and of new “rebuttable presumptions” as to breach and causation has the potential to tilt the balance firmly in claimants’ favour. That said, as a minimum harmonisation measure, the draft Directive will leave it to Member States’ existing civil liability systems and procedural rules to determine other matters that are likely to be key to the success or failure of any claim, such as the standard of proof required to establish breach of duty, the types of damage recoverable by claimants, and the causal link between the output of the AI system and any damage allegedly suffered.
The new disclosure obligation will place a premium on the retention of relevant documentation and information by businesses that market AI-enabled products, systems or services. Failure to comply with a court’s order to produce this information may lead to presumptions of breach, which in turn may be used to presume causation.
Overall, the draft Directive is doubtless intended to encourage businesses that market AI-enabled products, systems or services to comply with the provisions of Europe’s wider AI regulatory framework, but combined with other, pro-claimant measures (such as the Representative Actions Directive), there must be a risk that it will encourage unnecessary litigation and damage innovation.
The draft Directive – which will be negotiated in parallel to the AI Act – still needs to pass through the European legislative process. This means that the European Parliament and the Council, representing the 27 Member States, will now scrutinise the proposal and propose amendments.
We expect the legislative process to take 12 to 18 months. The finalisation of this text will depend on the outcome of the negotiations around the AI Act, which are currently far from complete and may continue until at least mid-2023.
Once the final text is agreed by the EU institutions, it would need to be transposed into national law by the EU Member States within two years. Five years after its entry into force, the Commission will assess the need for no-fault liability rules for AI-related claims. While companies should be looking at this initiative now, full compliance will likely not be required until mid-2025 or the beginning of 2026.
With the kind help of Katharina-Isabelle Prenzel (Research Assistant).
Czech Commissioner for Values and Transparency Věra Jourová said, “With today's proposal on AI civil liability we give customers tools for remedies [...] so that they have the same level of protection as with traditional technologies [...].”