Following multiple amendments and discussions, the EU Member States – the Council of the EU – approved a compromise version of the proposed Artificial Intelligence Regulation (AI Act) on December 6, 2022.

Once adopted, the AI Act will be the first horizontal EU legislation to regulate AI systems, introducing rules for the safe and trustworthy placing on the EU market of products with an AI component. The Regulation's extraterritorial scope (i.e., it applies to providers and users outside the EU whenever the output produced by the system is used in the EU) and its exceptionally high fines (up to €30 million or up to 6% of the company's total worldwide annual turnover for the preceding financial year, whichever is higher) are expected to shape regulatory requirements beyond EU borders, as has been the case with the EU General Data Protection Regulation (GDPR).

The first proposal for an AI Act was published by the European Commission (Commission) in April 2021.

Current version

The current version of the AI Act will next have to be adopted by the European Parliament (Parliament).

Below we examine the main changes and the points that may particularly impact life sciences companies:

  • Scope of AI systems: One topic of debate has been which systems will fall under the definition of “AI systems.” The Council is in favor of a narrower definition, which may be further specified by the Commission so as not to capture all types of “traditional” software. The proposed definition includes systems with an “element of autonomy.” However, this component of the definition is not qualified or quantified and leaves room for interpretation and legal uncertainty.
  • General purpose AI systems: These are systems that may be used for a number of different purposes and may be integrated into high-risk AI systems or environments. The current version of the AI Act does not clarify what obligations manufacturers of such AI systems would be subject to, or the extent to which they would share these obligations with the developers of the high-risk AI systems into which they are integrated. This will be of direct relevance to developers and users of such AI systems, for example in personalised drug development and patient engagement apps, which may be based on general purpose AI systems but then licensed for specific uses and integrated as high-risk AI systems.
  • Conformity assessment and harmonized standards: The original AI Act proposed that high-risk systems be subject to conformity assessment and envisaged the adoption of harmonized standards to support compliance. This key part of the AI Act has been maintained in the current version. Industry is particularly concerned about this aspect due to the potential for overlap with existing conformity assessment rules, e.g., for medical devices. In addition, the current absence of harmonized standards against which a developer may demonstrate compliance, and the lack of Notified Bodies designated for AI conformity assessments, are further hurdles for developers of such systems. The current proposal envisages only a three-year transitional phase, a timeframe that recently proved too short for proper implementation of the EU Medical Devices Regulation. The shortage of Notified Body capacity would only worsen if Notified Bodies had to dedicate their already limited resources to new conformity assessment services. Providers and users of high-risk AI systems are advised to closely monitor these developments and, to the extent possible, to build safeguards into their contractual arrangements with providers and developers of AI systems.
  • Regulatory sandboxes: The AI Act proposes the possibility of setting up controlled environments for the development, training, testing and validation of innovative AI systems in real-world conditions, so-called "sandboxes." Industry and regulators alike have identified these as crucial for innovation, ensuring proportionate obligations and legal certainty for developers while reducing costs and improving stakeholder engagement. It has been suggested that there should be reduced or even no liability for solutions put through the sandboxes at Member State level, and a more lenient approach overall. This is aimed in particular at assisting SMEs and start-ups, the main innovators in this space.
  • Supervision and coordination: The AI Act leaves enforcement to national surveillance authorities, while overall supervision would rest with an AI Board, which the latest draft gives greater autonomy and a stronger role. The AI Board would be primarily tasked with ensuring the consistent application and enforcement of the AI Act across EU Member States and is intended to be agile and able to interact with stakeholders promptly. This emphasis on improved stakeholder engagement ties in with the desire to make the sandboxes functional and help drive AI innovation in the EU.

Despite these uncertainties about its final form, the AI Act is fast approaching. Companies deploying AI systems are encouraged to familiarize themselves with the proposal and consider how their current uses would be classified and what regulatory rules would apply to them. In addition, a practical approach to the implementation of AI rules and data governance will increase data integrity and allow for an agile transition once the final AI Act is adopted.

Companies should also consider the draft AI Liability Directive, which includes provisions on the responsibilities and obligations of actors in the AI supply chain.

Next steps

The Parliament is scheduled to vote on the draft AI Act by the end of March 2023. Following this vote, discussions between the Member States, the Parliament and the Commission (the so-called trilogue) are expected to commence in April. If this timeline is met, the final AI Act should be adopted by the end of 2023.