In this final instalment of our series of blogs on the European Commission’s plans for AI and data, announced on 19 February 2020, we discuss some potential effects on companies in the digital health sector. As discussed in our previous blog posts (here, here and here), the papers published by the European Commission cover broad concepts and apply generally — but, in places, they specifically mention healthcare and medical devices.
The Commission recognizes the important role that AI and big data analysis can play in improving healthcare, but also notes the specific risks that could arise given the effects that such new technologies may have on individuals’ health, safety, and fundamental rights. The Commission also notes that existing EU legislation already affords a high level of protection for individuals, including through medical devices laws and data protection laws. The Commission’s proposals therefore focus on addressing the gap between these existing rules and the residual risks that remain in respect of new technologies. Note that the Commission’s proposals in the White Paper on AI are open for public consultation until 19 May 2020.
Each of the Commission’s White Paper on AI, the Report on safety and liability implications of AI, IoT and Robotics, and the Communication on a European strategy for data contains proposals that are relevant to digital health, including in particular:
- potential conformity assessment requirements on digital health products that incorporate AI; and
- support for a “Common European Health Data Space”.
Conformity assessment for “high-risk” AI applications
One of the principal elements of the Commission’s White Paper on AI (“White Paper”) is the proposal to introduce conformity assessment requirements for certain “high-risk” AI applications. The two cumulative conditions for determining whether an AI application is “high-risk” are the sector in which it is deployed and the manner in which it is deployed. A digital health application that includes an AI component operates in the healthcare sector (specifically identified as a sector that is likely to be considered high-risk), and if it is capable of affecting the physical health of an individual, it would likely be considered high-risk. The Commission proposes that such high-risk AI applications should be subject to pre-marketing conformity assessment requirements, and potentially also to ongoing monitoring and ex-post controls.
The Commission acknowledges that, in respect of medical devices, legislation in the form of the Medical Devices Regulation already contains specific provisions governing the safety of medical devices — including software as a medical device. The Regulation does not expressly mention AI components but does address certain aspects of digital technologies, such as interoperability and compatibility, and the standards that must be achieved as part of the conformity assessment procedure for such technologies. Further, medical device-specific cybersecurity guidance exists at the EU level, and addresses additional topics such as automated decisions and cyber threats. However, the legal requirements that the Commission suggests should be included in an AI conformity assessment go beyond these requirements. These requirements include:
- Requirements on training data. Requirement to train AI on datasets that are sufficiently broad and representative to cover all relevant scenarios and avoid dangerous situations.
- Requirements on record-keeping and data sets. Requirement to keep accurate records regarding the dataset used to train and test AI systems; documentation on the programming and training techniques used to build, test and validate the AI systems; and, in some cases, the training datasets themselves.
- Requirements on transparency. Requirement to provide clear information on the AI system’s capabilities, limitations, the purposes for which it is intended, the conditions under which it should function, and the expected level of accuracy.
- Robustness and accuracy. Requirements to ensure that AI systems are robust, accurate and able to deal with errors or inconsistencies during all phases of their life cycle — a particular challenge for evolving or self-learning AI systems.
- Human oversight. Requirements to ensure that there is an appropriate level of human oversight over the AI system, though the Commission does not provide clear views on what would constitute an “appropriate” level of human oversight for digital health applications.
For products like medical devices that are already subject to pre-marketing conformity assessment requirements, the Commission states that the AI conformity assessment should form part of the existing mechanism. But as the Commission acknowledges, not all of the requirements above may be suitable for verification in a conformity assessment. Medical device manufacturers may also not be best placed to comply with some of the requirements, particularly if they are deploying AI technologies developed by third parties. It is therefore not clear whether or how the Commission will update the existing medical devices rules to take AI into account. The Commission is now seeking industry input on the proposals in the White Paper, through a public consultation that is open until 19 May 2020. Please contact the Covington team for a more detailed analysis of these proposals or to input into the consultation on the White Paper.
Common European Health Data Space
In its Communication on a European strategy for data, the Commission emphasizes the importance of data to realize the EU’s potential in the digital economy. In particular, the Commission recognizes the challenges that exist for organizations to effectively share health data for research, innovation and to improve patient care — including both regulatory and technical challenges. To tackle these challenges, the Commission proposes the creation of a “common European health data space”.
The Commission proposes to take the following actions (among others):
- facilitate the establishment of a Code of Conduct for processing personal data in the health sector (in accordance with GDPR Article 40);
- scale up cross-border exchange of health data such as electronic health records, genomic information (for at least 10 million people by 2025), and digital health images through secure federated repositories in compliance with the GDPR;
- support the development of national electronic health records and interoperability of health data through the application of the Electronic Health Record Exchange Format;
- start cross-border exchanges of data through the eHealth Digital Service Infrastructure (eHDSI) of electronic patient summaries, ePrescriptions, medical images, laboratory results and discharge reports; and
- support big data projects promoted by a network of regulators.
Through these actions, the Commission hopes to support the prevention, diagnosis, and treatment of diseases (particularly cancer, rare diseases, and common and complex diseases), as well as foster research and innovation in this field.