In April 2021, the European Commission presented its plans for regulating artificial intelligence (“AI” for short) within the EU and published a first draft of the so-called “Artificial Intelligence Act” (“AIA”). With this draft legislation, the Commission takes a horizontal regulatory approach to AI, which applies equally to all sectors of industry. In addition to widely known applications of AI, such as autonomous driving, AI medical devices are also covered by the AIA. In contrast to autonomous driving, however, a comprehensive regulatory framework has already existed for medical devices at the European level for some time in the form of the European Medical Device Regulation (“MDR”). Since the MDR also covers AI medical devices, there is now a considerable overlap with the AIA in this area. This overlap would not in itself present a problem, but it is unfortunately coupled with the fact that the European legislator has failed to align the two regulations in such a way as to avoid contradictions. As a result, the AIA in its current form creates considerable legal uncertainty.

Almost all AI medical devices are classified as high-risk

Article 6 in conjunction with Annex II No. 11 AIA stipulates that all AI medical devices that must undergo a conformity assessment procedure by a Notified Body are classified as a “high-risk AI system” within the meaning of the AIA. However, this overlooks the fact that AI medical devices usually take the form of software, which is mostly assigned to class IIa or higher (Annex VIII, Chapter III, Rule 11 MDR). As a result of this classification, almost all software used in medicine must undergo a conformity assessment procedure, meaning that almost all AI medical devices qualify as “high-risk AI systems” within the meaning of the AIA.

This has significant consequences for manufacturers. For example, in the case of high-risk AI systems it is necessary to introduce an additional risk management system (Art. 9 AIA). Furthermore, it is necessary to comply with stricter requirements regarding technical documentation (Art. 11 AIA), record-keeping (Art. 12 AIA), human oversight of the product (Art. 14 AIA) and cybersecurity (Art. 15 AIA).

The (almost) blanket classification of all AI medical devices as high-risk AI systems therefore does not seem appropriate. Recital 31 of the AIA makes it clear that just because a (medical) product is classified as “high-risk” within the meaning of the AIA, this does not automatically mean that the product also poses a high risk in the context of its use. It is precisely this circumstance that should be taken into greater account. For example, it does not seem reasonable to impose the same (high-risk) requirements under the AIA on a manufacturer whose product merely evaluates data independently as on a manufacturer whose product performs independent surgical procedures on patients and is therefore far more dangerous in use.

There is a threat of double post-market control for AI medical devices

Medical device manufacturers are also subject to strict post-market controls with regard to health-related risks of their products within the framework of the vigilance system of the MDR. Manufacturers are subject to extensive monitoring obligations with regard to the safety, quality and performance of their products, which are also supervised by the competent authority. In case of non-conformity of the products, the worst-case scenario is a recall by the manufacturers themselves (Art. 10 (12) MDR) or by the competent authorities (Art. 95 (4) MDR).

Manufacturers whose products fall within the scope of the AIA (e.g. manufacturers of AI medical devices) are also subject to strict post-market controls. According to Art. 65 in conjunction with Art. 67 (1) AIA, the powers of the market surveillance authorities again extend to a request to recall the product. Manufacturers of AI medical devices thus face double post-market control (under both the AIA and the MDR), with a recall threatened as the ultimate sanction in each case. In view of the enormously high safety standard of the MDR’s vigilance system, however, such double control does not seem objectively justified.

In addition, the preconditions for official intervention also differ. In contrast to the MDR, which requires an “unacceptable risk to health or safety” for a recall (Art. 94 (1) MDR), the AIA allows the authority to order a recall if “aspects of the protection of public interests” are affected (Art. 67 (1) AIA). The threshold for an official intervention is therefore set significantly lower under the AIA than it is under the MDR. There is consequently a risk that the (recall) system of the MDR, which is specifically tailored to the patients’ need for protection, may be circumvented with the help of the AIA.


In addition to the problems just outlined, the draft AIA raises a whole series of other questions that have not yet been fully answered. However, the EU has already announced that it will present an (improved) draft regulation at the end of November 2021. It therefore remains to be seen whether this will bring the hoped-for clarity to the area of AI medical devices as well, or whether the current legal uncertainties will persist.