In the first of our Laws for a Digital Future segment of our Digital Transformation Series, Barry Scannell analyses new laws pertaining to artificial intelligence technologies and outlines what businesses need to consider in the coming months and years.
The AI Act is an EU Regulation that introduces a regulatory framework for artificial intelligence (AI). The AI Act will regulate the development and use of "high-risk" AI systems by establishing rules and obligations for developers, deployers and users of AI technologies, and will place an outright ban on other AI systems that are harmful to humans. It is estimated that the legislation will affect up to 35% of AI systems used in Europe, applying to everything from banking and healthcare systems to toy safety, HR monitoring, aviation safety and emotionally manipulative technologies. Obligations on businesses will be determined by the category of risk triggered by the relevant AI system - be that "unacceptable risk", "high risk" or "minimal risk".
The Act applies to a wide range of users and providers of AI systems. Its definition of AI is purposefully broad and distinguishes AI from classic IT, and there are ongoing discussions in the EU Parliament on whether a definition of General Purpose AI systems should be included. The Act is deliberately drafted to be as technologically neutral and future-proof as possible, and it is likely to have as large an impact on providers as the GDPR had. Fines for infringements of the prohibited practices can reach €30m or 6% of the provider's or user's global turnover.
The AI Act is expected to enter into force in late 2023 or early 2024, followed by an additional transitional period of eighteen months. As a result, the AI Act is not anticipated to apply to operators before the second half of 2024 at the earliest. The AI Act forms part of the European approach to AI, which centres on excellence and trust: it aims to boost research and industrial capacity while safeguarding fundamental rights.
Impact on Businesses
William Fry's clients, as market leaders and innovators in cutting-edge technology, will first need to assess whether the systems they provide or use fall within the proposed legislation's definition of AI. They will then need to carry out risk assessments to ascertain whether any of those AI systems are prohibited or high-risk. Clients who are providing or using high-risk AI systems will need to put in place a compliance regime, which should include:
- regular formal risk assessments;
- data protection impact assessments; and
- onerous record-keeping requirements.
The AI Liability Directive
On 28 September 2022, the Commission published its Proposal for an Artificial Intelligence Liability Directive (AILD). The purpose of the AILD is to set down uniform rules for certain areas of non-contractual civil liability for damage caused where AI systems are involved.
Current liability rules do not cater for the complex nature of AI and the inherent difficulties associated with trying to identify liability.
Of note, the proposal contains a rebuttable presumption of a causal link between a defendant's fault and damage caused by an AI system. The purpose of this provision is to give affected consumers an easier and more effective avenue to claim compensation.
Further, under Article 3(1) of the proposed AILD, a court has the power to order the provider of a high-risk AI system to disclose relevant evidence about that system where it is alleged to have caused damage.
Impact on Businesses
This substantially increases the liability risk for William Fry's clients that incorporate AI systems into their products and/or services. If damage is allegedly caused by a system incorporating AI, there will be a rebuttable presumption that the AI system caused the damage: rather than the victim having to prove causation, the deployer or owner of the AI system will have to prove that the system did not cause the damage. Clients providing products or services incorporating AI systems will therefore need to reconsider their contracts, particularly in relation to warranties, indemnities, and caps on and exclusions from liability. Insurers may also need to reassess how they insure businesses incorporating AI systems.
Product Liability Directive
On 28 September 2022, the Commission proposed a new Product Liability Directive (PLD). The purpose of the PLD is to modernise the rules and provide an effective EU-level compensation system for those who suffer physical injury or damage to property as a result of defective products. The PLD aims to protect natural persons regardless of whether the defective product was manufactured inside or outside the EU, by ensuring that there is always a business based within the EU that can be held liable.
The PLD takes into account changes in how products are produced, distributed and operated since the introduction of the first EU PLD in 1985 (Directive 85/374/EEC). It expands the concept of 'product' to include "digital manufacturing files and software", meaning the PLD now covers all digital products, and its rules have been modified to work for new and emerging technologies, including cybersecurity vulnerabilities and updates to software and AI systems.
The PLD also covers products in the circular economy (e.g. reused, refurbished or remanufactured products).
Impact on Businesses
As with the AI Liability Directive, this legislation will affect William Fry's clients particularly in relation to how they contract with other parties. Cybersecurity issues and failure to provide necessary software updates will be treated as "defects" for the purposes of product liability cases. This will need to be reflected in clients' indemnity clauses, liability clauses, insurance coverage, warranties and exclusion clauses, and clients will need to reassess their risk management in light of this new legislation.