Information Law analysis: In the fourth article in a six-part series exploring the NHS long-term plan, Dr Nathalie Moreno discusses digitalisation plans for the NHS and the potential problems which might arise in connection with data protection.

The government says it wants the NHS to move to ‘Digital First’ within the next ten years. What does that mean for service users, and is the timeframe achievable?

A shift to ‘Digital First’ represents a utopia for healthcare professionals and patients alike—straightforward and unhindered access to healthcare, quick and accurate diagnoses and enhanced treatment. The feasibility of such a shift, however, depends on tapping the data well of the UK’s healthcare service within the confines of data protection regimes, conducting a mass digitisation of existing patient histories and encouraging a shift in public attitudes towards digital care.

The NHS’ current use of technology focuses on enhancing patient access to healthcare, enabling people to manage their own health and improving diagnoses and treatment, through, for instance, online GP consultations, the electronic prescriptions service and the large variety of mobile apps available via the NHS apps library.

Through its long-term plan, the NHS hopes to build on this foundation by bringing technology further into the spotlight, introducing, for instance, longer and richer face-to-face GP consultations via video and changes to primary care and outpatient services, such as tiered escalation based on the needs of patients. Under this vision, it would ultimately be the norm for an artificial intelligence (AI) interface to form the first point of contact for patients, in place of a human doctor.

The extent to which a shift to a Digital First model is feasible, however, depends largely on two things: the ability to collect and collate comprehensive patient data, and the ability to navigate data protection regimes successfully while collecting and using that data. Despite being the largest single healthcare organisation in the world, the NHS does not yet hold datasets diverse and comprehensive enough to serve as a fruitful foundation for developing AI algorithms powered by patient data, anonymised or not.

What are the data protection and privacy implications of the objective that within five years every patient will be able to access a GP digitally where appropriate and opt for a virtual outpatient appointment?

A cohesive and digitised database of patient data is vital to the successful operation of a Digital First model, for instance the virtual GP consultations that are a central tenet of the long-term plan’s digital chapter. Consultations with digital GPs require on-demand access to patients’ medical histories and records, regardless of location. This would mean that patient records, which under the General Data Protection Regulation (EU) 2016/679 (GDPR) constitute ‘special category data’, would be accessible on individuals’ smart devices, predominantly via the NHS app. The sharing of data in this way will be affected by data protection law in two key ways.

Firstly, service providers must ensure that their identity verification processes meet the highest standards of security and reliability, and that access is only ever granted to those entitled to it lawfully. As per the NHS guidance on Data Security and Protection Requirements (published in advance of the GDPR), NHS organisations must ensure that any supplier of IT systems that may process personal data holds the appropriate security certification, as the NHS (as data controller) remains responsible for the lawful processing of that personal data. This is echoed in the GDPR, which calls for appropriate technical and organisational measures to be implemented to safeguard personal data. The NHS will also need to continue its efforts in relation to monitoring cyber activity, and its response to cyber-attacks or data breaches will be more closely scrutinised by regulatory bodies, such as the Information Commissioner’s Office, when deciding on fines.

Secondly, on-demand access to patient data and the development of AI on a diverse data set necessarily entail the sharing of patients’ personal data across the UK, giving rise to serious privacy risks. It must therefore be a central focus of software development to ensure that all NHS systems can receive data from, and share data with, other systems (whether internal or external) in a recognised interoperable format. This underlines the need for interoperability to remain a top priority for the NHS, alongside enhanced technical and security measures and in-depth training for the healthcare staff involved in providing digital healthcare services.
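By way of illustration only, the sketch below shows one way a patient record might be expressed in an HL7 FHIR-style JSON structure, the kind of recognised interoperable format commonly referenced in NHS interoperability work. The snippet is an assumption for explanatory purposes, not something prescribed by the long-term plan, and all values are invented.

```python
import json

# Minimal sketch of a patient record expressed as an HL7 FHIR-style
# "Patient" resource, i.e. a common interoperable format that any
# compliant system can parse. All values are invented for illustration.
patient_resource = {
    "resourceType": "Patient",
    "identifier": [
        {
            "system": "https://fhir.nhs.uk/Id/nhs-number",  # NHS number namespace used in UK FHIR profiles
            "value": "9000000009",  # dummy value, not a real NHS number
        }
    ],
    "name": [{"family": "Example", "given": ["Pat"]}],
    "birthDate": "1980-01-01",
}

# Serialising to JSON yields a payload that internal and external systems
# can exchange without bespoke, per-system translation layers.
print(json.dumps(patient_resource, indent=2))
```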

To satisfy the need for transparency under the GDPR, the NHS must continue to keep patients informed of whom their personal data is shared with and ensure that such sharing only takes place in accordance with the permissions granted by the individuals to whom the data relates. Additionally, the NHS must make clear to patients how they, as data subjects, can exercise their rights under the GDPR over their personal data. The Digital First model aims to give patients more control over their health, but this control must extend to their health data. Ultimately, AI powered by anonymised patient data is the future of the NHS.

How do limitations on use of personal data impact on artificial intelligence learning and technological product evolution?

Patients are, understandably, more comfortable with sharing data which has been anonymised, particularly in the wake of high-profile incidents such as the care.data project and the 2017 data breach (as a result of which 150,000 patients’ data was unlawfully shared). In response, the NHS has introduced systems such as De-ID, which provides an automated and standardised way for NHS teams to remove the identifying elements of patient data, rendering it safer for use across the system.
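To make the idea of removing identifying elements concrete, the sketch below shows one generic way a record might be pseudonymised before reuse. It is a hypothetical illustration only, not a description of how De-ID itself works; the field names, key and values are all assumptions.

```python
import hmac
import hashlib

# Hypothetical illustration of pseudonymisation (not the actual De-ID
# implementation): direct identifiers are dropped and the NHS number is
# replaced with a keyed hash, so records can still be linked across
# datasets without revealing who they belong to.
SECRET_KEY = b"replace-with-a-securely-managed-key"  # assumed to be held separately from the data

def pseudonymise(record: dict) -> dict:
    token = hmac.new(SECRET_KEY, record["nhs_number"].encode(), hashlib.sha256).hexdigest()
    return {
        "patient_token": token,                        # stable pseudonym used for linkage
        "year_of_birth": record["date_of_birth"][:4],  # generalised to reduce identifiability
        "diagnosis_code": record["diagnosis_code"],    # clinical content retained for analysis
        # name, address and full date of birth are deliberately not carried over
    }

example = {
    "nhs_number": "9000000009",
    "name": "Pat Example",
    "date_of_birth": "1980-01-01",
    "diagnosis_code": "C71.9",
}
print(pseudonymise(example))
```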

However, we must consider the extent to which meaningful analysis can be extracted from patient data which has been pseudonymised or anonymised. Anonymising data completely, such that there is zero probability of identification, may mean that the data is no longer ‘personal data’ and thus not subject to data protection rules. Yet where the link between the data and the individual is completely severed, the information may, in the context of health data and medical research, amount to meaningless ‘noise’ rather than valuable insight. Without quality data sets, it is an uphill struggle to teach AI to operate accurately.

The NHS must, therefore, strike a balance between the utility of the data for research and AI development and the degree to which the data is anonymised so that it can be used lawfully. Getting this balance right would allow AI learning and technological products to keep developing while still observing the applicable data protection regimes. The same tension arises with open data: utilising patient data as open data would doubtless require anonymisation, but it could nonetheless help to inform and enhance AI learning and the development of technological products.
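One common yardstick for that balance, not referred to in the article but offered here as an illustrative assumption, is k-anonymity: data is released only if every combination of quasi-identifiers is shared by at least k records, so individuals are harder to single out while the data remains analytically useful.

```python
from collections import Counter

# Illustrative k-anonymity check: a dataset is k-anonymous if every
# combination of quasi-identifiers (here age band and postcode area)
# appears in at least k records.
def is_k_anonymous(records, quasi_identifiers, k):
    groups = Counter(tuple(r[q] for q in quasi_identifiers) for r in records)
    return all(count >= k for count in groups.values())

sample = [
    {"age_band": "40-49", "postcode_area": "SW1", "diagnosis": "E11"},
    {"age_band": "40-49", "postcode_area": "SW1", "diagnosis": "I10"},
    {"age_band": "50-59", "postcode_area": "N1",  "diagnosis": "J45"},
]

# False: the single record in the 50-59 / N1 group could be singled out,
# so the data would need further generalisation before release.
print(is_k_anonymous(sample, ("age_band", "postcode_area"), k=2))
```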

The national data opt-out observes the rules on the processing of personal and special category data by allowing patients to opt out of having their confidential health data shared for purposes other than their individual care, such as research and planning. Although necessary, it also poses a hurdle to obtaining the quality data required for successful developments in AI.

There is much reference to automation and the use of AI in the provision of care and the associated cost savings. But is it likely that there will be much, if any, saving once fail-safes and human monitoring are factored in, or is it more likely that much of the ‘virtual care’ will merely add a layer of admin or process for the patient before they reach a human physician?

Though the potential cost-saving benefits of AI in the provision of healthcare are often touted, automating a process goes beyond cost-saving: it saves time, for instance by allowing patients to be triaged more swiftly, tests to be conducted more efficiently and the results to be understood more quickly. Where a human doctor may require days to painstakingly mark out the perimeters of a brain tumour, AI can do so in a matter of minutes. Alongside saving time (which ultimately also contributes to cost savings), virtual care can also deliver diagnoses and treatment accurately and, some would argue, more precisely than human doctors. Though the focus of AI is currently on supplementing the care and services provided by humans, the long-term plan hopes that AI will significantly reduce the level of human monitoring that is required. AI-operated machines will be able to act quickly and accurately, thereby reducing the hoops through which patients must currently jump.

This article was originally published by LexisNexis.