Introduction
Necessity of legal assessment
Definitions and qualifications
Legal overlap and blind spots
Tailormade risk assessment


Introduction

Artificial intelligence (AI) systems are being applied in a growing number of disciplines, one of which is healthcare. When creating AI systems, developers focus first on their functionality and effectiveness. However, another challenge that developers may face is legal uncertainty.

This uncertainty is, to some degree, addressed in the proposal for an Artificial Intelligence Act published on 21 April 2021 (the AI Act).(1) Although the AI Act does not specifically address AI in healthcare, its general and broad scope means that it will also apply to medical devices to which the EU Medical Devices Regulation (MDR) applies. Currently, the AI Act is being discussed in the European Parliament's committees.(2)

This article is part of a series on developing medical AI systems.

Necessity of legal assessment

While developing an AI system dedicated to medical purposes, developers may encounter various risks. An obvious risk relates to non-compliance: the manufacturer or the entities placing the AI system on the market may face penalties or restrictions from authorities. Additionally, the AI system itself may constitute a risk to patients, for example via a missed diagnosis of a life-threatening condition (eg, in cardiology) or a false diagnosis resulting in the treatment of healthy patients.

Another risk is the liability risk and its division between developers, entities placing an AI system on the market, physicians using the AI system and other parties involved. Such liability risk is not clearly addressed by either the current legislation or the AI Act proposal. However, a new legislative proposal addressing non-contractual civil liability was published in September 2022.(3) This may fill an important regulatory gap, as misconduct related to an AI system in the healthcare sector may:

  • harm people's lives and health;
  • undermine the trust of users in the AI system; and
  • raise questions on constitutional and human rights.

Overall, people may be discouraged from using AI systems, and the development of, and future investments in, AI systems in general may be affected. Therefore, these new rules on non-contractual civil liability should be taken into account when developing AI medical devices, and a close legal assessment of the current legislation and guidelines is necessary early on in order to exclude risks as far as possible.

Definitions and qualifications

While developing an (AI) system for use in the healthcare sector, developers should first consider whether the respective software or system falls within the definition of a medical device. Irrespective of the answer, it also needs to be considered whether such software or system qualifies as an AI system.

Software as medical device
The MDR(4) directly applies to software that is used for specific medical purposes. Software qualifies as a medical device if the processing, analysis, creation or modification of medical information by the software is governed by a medically intended purpose.(5)

The MDR itself does not contain a legal definition of software. However, the European Commission has issued a series of guidance documents providing aid in the qualification of software as a medical device.(6) In one such guidance document, software is defined as "a set of instructions that processes input data and creates output data".(7) Input and output data are defined as well. In practice, the intended purpose and the type of use of the software in the medical environment are also crucial. If the software has a more general purpose, even when used in a healthcare setting, or if the software is intended for lifestyle and wellbeing purposes, it is not a medical device.(8)

If the software directly controls a (hardware) medical device (eg, radiotherapy treatment software), provides immediate decision-triggering information (eg, blood glucose meter software) or provides support for healthcare professionals (eg, electrocardiogram interpretation software), the software will likely be treated as a medical device or part of a medical device. Further, if the software enables images to be searched for a specific purpose, namely to support a diagnosis made by a physician or a proposed treatment, such software is also likely to qualify as a medical device. Similarly, if the software enhances the search or display of an image analysis, it may form the basis for a decision on a specific treatment, and the above applies.

However, not all software used within healthcare is to be qualified as a medical device. Altering the presentation of data for cosmetic or compatibility purposes (eg, library purposes) does not fall within the scope of the MDR and thus such software does not qualify as a medical device.

The European Commission has prepared a decision tree that helps with this classification(9) (a simplified illustrative sketch follows this list). The developer should assess whether:

  • the system falls under the software definition provided in the guidance;
  • the software falls within at least one of the following categories:
    • a device pursuant to Annex XVI of the MDR;
    • an "accessory" for a medical device according to article 2(2) of the MDR and/or the EU In Vitro Device Regulation (IVDR); and
    • software driving or influencing the use of a hardware medical device;
  • the software performs an action regarding data other than simply storage, archive, communication or search; and
  • the action is for the benefit of individual patients.
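
  The decision logic of this checklist can be expressed, in simplified form, as the following illustrative Python sketch. The function and field names are our own shorthand and do not appear in the guidance; the actual decision tree contains further branches, so a positive result only indicates that a full MDR qualification review is needed.

    # Simplified sketch of the qualification checklist for software under the MDR.
    # All names are our own shorthand; a positive result only signals that a full
    # legal qualification review is needed.

    from dataclasses import dataclass

    @dataclass
    class SoftwareProfile:
        is_software: bool                   # meets the guidance definition of software
        is_annex_xvi_device: bool           # device pursuant to Annex XVI of the MDR
        is_accessory: bool                  # "accessory" under article 2(2) of the MDR/IVDR
        drives_hardware_device: bool        # drives or influences a hardware medical device
        action_beyond_storage: bool         # more than storage, archive, communication or search
        benefits_individual_patients: bool  # action is for the benefit of individual patients

    def may_qualify_as_medical_device(profile: SoftwareProfile) -> bool:
        """Mirror the checklist above as cumulative checks."""
        if not profile.is_software:
            return False
        if not (profile.is_annex_xvi_device
                or profile.is_accessory
                or profile.drives_hardware_device):
            return False
        if not profile.action_beyond_storage:
            return False
        return profile.benefits_individual_patients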

Software as AI
Regardless of the conclusion of the MDR assessment and regardless of whether the software qualifies as a medical device, an additional separate assessment is required under the AI Act.

What developers understand as AI may not be the same as the new legal definition included in the AI Act proposal. According to the proposal, an AI system is characterised by three elements:

  • receiving data and input (machine and/or human-based);
  • using learning, reasoning or modelling techniques, or other approaches specified in the proposal, to infer how to achieve a given set of human-defined objectives;(10) and
  • generating output in the form of content (generative AI systems), predictions, recommendations or decisions, which influence the environments with which it interacts.(11)

It is notable that the AI definition pursuant to the AI Act also covers rule-based algorithms that, by most technical definitions, would not be considered AI – for example, algorithms that have a well-defined sequence of rules based on which a decision, forecast or recommendation is made. This may also cover systems that, for example, calculate an amount of social security payments or future grades on an admission exam.
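
  As a purely hypothetical illustration of such a rule-based system, the following short function contains no machine learning at all, yet it receives input, applies human-defined rules and outputs a recommendation, which could bring it within the proposal's broad definition of an AI system. The thresholds and names are invented for illustration only.

    # Hypothetical rule-based recommendation: no machine learning is involved, yet
    # the function receives input, applies human-defined rules and outputs a
    # recommendation. All thresholds and names are invented for illustration only.

    def follow_up_recommendation(age: int, systolic_bp: int, chest_pain: bool) -> str:
        if chest_pain and (age > 60 or systolic_bp > 180):
            return "urgent cardiology referral"
        if systolic_bp > 160:
            return "follow-up within one week"
        return "routine care"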

Legal overlap and blind spots

AI systems may fall within the scope of the MDR and of the AI Act. A challenge will be that the MDR was passed in 2017 and thus lacks the mechanisms to address the dynamic nature and continuous development of medical AI technologies, as well as the identification of algorithm biases. The latest AI Act proposal, however, was published in 2021 and thus was at least influenced by the events and developments of recent years.

Producer responsibility and self-assessment
The lack of a specific regulatory solution for AI in healthcare makes it harder for developers to ensure that the AI system will not end up being non-compliant due to the results it produces, such as biased output. If the legislation focuses only on penalising the results, developers and producers are left with the responsibility of ensuring that the approach and measures they implement result in AI systems that work in a compliant manner. Self-assessment checklists may be helpful in this regard.(12)

Continuous learning
One of the great advantages of (machine learning) AI systems is their ability to continuously learn. However, this may also be a legal challenge. Certification, assessments and other elements are supposed to ensure that the product is safe and properly controlled, stable over time and not continuously changing. This is why the proposed rules refer to AI systems using technologies involving the training of models with data. Such systems must be developed based on training, validation and testing data sets that meet certain quality criteria regarding, among other things, the following(13) (an illustrative documentation sketch follows this list):

  • the relevant design choices;
  • a prior assessment of the availability, quantity and suitability of the data sets that are needed;
  • the outcome of an examination regarding possible biases; and
  • the identification of any possible data gaps or shortcomings, and how those gaps and shortcomings can be addressed.
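
  One pragmatic way of preparing for these requirements is to record the corresponding information alongside each training, validation and testing data set. The following record structure is a sketch by way of illustration only; the field names paraphrase the criteria listed above and are not terms defined in the AI Act.

    # Illustrative record for documenting data set governance information that
    # corresponds to the quality criteria listed above. Field names are our own
    # paraphrase, not legal terms from the AI Act proposal.

    from dataclasses import dataclass, field
    from typing import List

    @dataclass
    class DatasetGovernanceRecord:
        dataset_name: str
        intended_use: str             # training, validation or testing
        design_choices: str           # relevant design choices
        availability_assessment: str  # availability, quantity and suitability of the data
        bias_examination_outcome: str # outcome of the examination regarding possible biases
        identified_gaps: List[str] = field(default_factory=list)
        gap_mitigations: List[str] = field(default_factory=list)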

For high-risk AI systems that continue to learn after being placed on the market or put into service, changes to the AI system and its performance that have been predetermined by the developer at the moment of the initial conformity assessment, and that are part of the information contained in the technical documentation, are not qualified as substantial modifications and therefore do not require an update of the conformity assessment.(14)

Tailormade risk assessment

Both the MDR and the AI Act require an assessment before a medical AI device is placed on the market. The crucial point is the risk assessment. The two regulations take different approaches, which should be taken into account in parallel.

MDR risk assessment
According to the MDR, manufacturers shall establish, document, implement and maintain a system for risk management,(15) as well as draw up and keep up to date the technical documentation for the respective devices.(16) This is part of the certification process.

A central part of the risk assessment is the classification of medical devices into classes I, IIa, IIb and III.(17) Most crucial for AI systems are the rules dedicated to software. Software intended to provide information that is used to take decisions with diagnostic or therapeutic purposes is generally classified as class IIa. However, if such decisions have an impact that may cause death or an irreversible deterioration of a person's state of health, the software is class III; if they may cause a serious deterioration of a person's state of health or a surgical intervention, it is class IIb.

Software intended to monitor physiological processes is classified as either class IIa or IIb; where vital physiological parameters are monitored and variations could result in immediate danger to the patient, the higher class IIb applies. All other software is classified as class I.(18) Software that drives a device or influences the use of a device falls within the same class as that device. If the software is independent of any other device, it is classified in its own right.(19)
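
  As a rough illustration only, the software-specific classification logic described above can be summarised as follows. The parameter names and result values are our own shorthand for rule 11 of Annex VIII; any real classification must be made against the full wording of the MDR.

    # Simplified illustration of the software classification logic of rule 11
    # (Annex VIII of the MDR). Parameter names and values are our own shorthand;
    # an actual classification must follow the full text of the regulation.

    from enum import Enum

    class MDRClass(Enum):
        I = "I"
        IIA = "IIa"
        IIB = "IIb"
        III = "III"

    def classify_software(
        informs_diagnostic_or_therapeutic_decisions: bool,
        may_cause_death_or_irreversible_deterioration: bool,
        may_cause_serious_deterioration_or_surgery: bool,
        monitors_physiological_processes: bool,
        vital_parameters_with_immediate_danger: bool,
    ) -> MDRClass:
        if informs_diagnostic_or_therapeutic_decisions:
            if may_cause_death_or_irreversible_deterioration:
                return MDRClass.III
            if may_cause_serious_deterioration_or_surgery:
                return MDRClass.IIB
            return MDRClass.IIA
        if monitors_physiological_processes:
            return MDRClass.IIB if vital_parameters_with_immediate_danger else MDRClass.IIA
        return MDRClass.I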

AI Act risk assessment
Risk assessment is a central measure introduced in the AI Act proposal. AI systems are divided according to the level of associated risk into:

  • unacceptable risk;
  • high risk;
  • limited risk; and
  • minimal risk.

Such classifications are not connected to the MDR classes. Therefore, the developer needs to perform separate assessments under the MDR and under the AI Act.

According to the AI Act, AI systems that constitute an unacceptable risk are prohibited. These are, for example, technologies that deploy subliminal techniques beyond a person's consciousness with the objective of materially distorting a person's behaviour in a manner that causes that person physical or psychological harm.(20)

High-risk systems require additional safeguards. If an AI system is required to undergo a third-party conformity assessment with a view to placing it on the market or putting it into service pursuant to the MDR, it will automatically qualify as a high-risk AI system.(21) For this purpose, the AI Act makes a clear reference to the MDR.

Even if the AI system is not qualified as a medical device, it might be considered high risk if it is (see the illustrative screening sketch after this list):

  • a biometric identification system intended to be used for the "real-time" and "post" remote biometric identification of natural persons without their agreement; or
  • an AI system intended to be used to dispatch, or to establish priority in the dispatching of emergency first response services, including by firefighters and medical aid.(22)
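
  Taking the triggers discussed in this section together, a developer's first screening for the high-risk category can be sketched as follows. The flags and the function name are purely illustrative and do not replace an assessment against Annex II and Annex III of the proposal.

    # Illustrative first screening for the AI Act's high-risk category, based on
    # the triggers discussed above. Flags and function name are our own; a real
    # assessment must be made against Annex II and Annex III of the proposal.

    def is_likely_high_risk(
        requires_third_party_conformity_assessment_under_mdr: bool,
        is_remote_biometric_identification: bool,
        dispatches_or_prioritises_emergency_services: bool,
    ) -> bool:
        return (
            requires_third_party_conformity_assessment_under_mdr
            or is_remote_biometric_identification
            or dispatches_or_prioritises_emergency_services
        )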

For further information on this topic please contact Jowita Prokop or Malte Scheel at Eversheds Sutherland LLP by telephone (+49 89 54565 0) or email ([email protected] or [email protected]). The Eversheds Sutherland LLP website can be accessed at www.eversheds-sutherland.com.

Endnotes

(1) Proposal for a regulation of the European Parliament and of the Council laying down harmonised rules on artificial intelligence (Artificial Intelligence Act) and amending certain union legislative acts (COM/2021/206 final). The presidency published a modified text in November 2021. In May 2022, the European Parliament passed the resolution on artificial intelligence in a digital age (European Parliament resolution of 3 May 2022 on artificial intelligence in a digital age (2020/2266(INI)).

(2) The Internal Market and Consumer Protection and the Civil Liberties, Justice and Home Affairs committees.

(3) Proposal for a Directive of the European Parliament and of the Council on adapting non-contractual civil liability rules to artificial intelligence, COM/2022/496 final.

(4) Regulation (EU) 2017/745 of the European Parliament and of the Council of 5 April 2017 on medical devices.

(5) Article 2(1) of the MDR.

(6) For example, see:

  • the infographic "Is your software a Medical Device?" of March 2021;
  • the MDCG 2020-1 "Guidance on clinical evaluation (MDR) / Performance evaluation (IVDR) of medical device software" of March 2020; and
  • the MDCG 2019-16 rev 1 "Guidance on cybersecurity for medical devices" of December 2019.

(7) The MDCG 2019-11 "Qualification and classification of software - Regulation (EU) 2017/745 and Regulation (EU) 2017/746" of October 2019, p 5.

(8) Recital 19 of the MDR.

(9) Infographic "Is your software a Medical Device?" of March 2021.

(10) Annex 1 of the AI Act.

(11) Article 3 of the AI Act.

(12) For example, see:

  • Future AI, Assessment Checklist; and
  • Independent High-Level Expert Group on Artificial intelligence set up by the European Commission, The Assessment List for Trustworthy Artificial Intelligence (ALTAI) for Self Assessment, 17 July 2020.

(13) Article 10 of the AI Act.

(14) Article 43(4) of the AI Act.

(15) Article 10(2) and Annex I, section 3 of the MDR.

(16) Article 10(5) of the MDR.

(17) Article 51(1) of the MDR.

(18) Annex VIII No. 6.3, rule No. 11 of the MDR.

(19) Annex VIII No. 3.3 of the MDR.

(20) Article 5(1)(a) of the AI Act.

(21) Article 6(1) and Annex II of the AI Act.

(22) Annex III of the AI Act.