The development and use of artificial intelligence (AI) in health, aged care and biotechnology is creating opportunities and benefits for health care providers and consumers. Presently, AI is being used in medical fields such as diagnostics, e-health and evidence-based medicine. In aged care, one of the greatest opportunities is for technology to provide efficiencies with respect to administrative or mundane tasks in order to enable staff to spend more quality face-to-face time with residents and clients.
However, a number of legal, regulatory, ethical and social issues have arisen with the use of AI in the health and aged care sectors. The issue is: can the law keep up with the pace?
Duty of care and negligence
Potential liability for injury caused to a resident or patient due to AI will depend on the circumstances of the adverse event. Parties who may be liable include:
- the treating clinician, such as the GP, who relied upon the technology;
- the developer of the algorithm;
- the programmer of the software; or
- the hospital or aged care provider.
Proving causation in negligence under civil liability legislation may be difficult when machine learning occurs in a multi-layered, fluid environment in which the machine itself influences the output. Answers may be complex and difficult to find given the legal, regulatory, ethical and social issues at play.
The use of AI in the clinical practice of healthcare may also present challenges regarding the duty of care of health practitioners. For example, to what extent will clinicians have a responsibility to educate their patients on the complexities of AI?
We are watching closely how the law of negligence and duty of care adapts to this new technology.
Product liability laws may also be relevant including consumer guarantees under the Australian Consumer Law.
Regulatory changes for software-based medical devices under the Therapeutic Goods Act
The Therapeutic Goods Act regulates software-based medical devices, including software that functions as a medical device in its own right and software that controls or interacts with a medical device.
On 25 February 2021, the Therapeutic Goods Administration (TGA) implemented reforms to the regulation of software-based medical devices, including new classification rules for software-based medical devices according to their potential to cause harm through the provision of incorrect information.
The changes include:
- clarifying the boundary of regulated software products (including ‘carve outs’);
- introducing new classification rules; and
- providing updates to the essential principles to more clearly express the requirements for software-based medical devices.
Certain software-based medical devices have been carved out from the scope of TGA regulation, through either exclusion or exemption:
- exclusion means that the devices are completely unregulated by the TGA; and
- exemption means that the TGA retains some oversight with respect to advertising and adverse event notification; however, registration of the device is not required.
Certain clinical decision support systems have been exempted.
Excluded products include:
- consumer health products involved in prevention, management and follow up that do not provide specific treatment or treatment suggestions;
- enabling technology for telehealth, healthcare or dispensing;
- digitisation of paper-based or other published clinical rules or data, including simple calculators and electronic patient records;
- population-based analytics; and
- laboratory information management systems.
In August 2021, the TGA published guidance entitled ‘Regulatory changes for software based medical devices’.
In Australia, the Therapeutic Goods Act 1989 (Cth) defines ‘therapeutic goods’ and ‘medical devices’ very broadly, particularly if therapeutic claims are made.
Section 41BD of the Act defines ‘medical device’ as:
- any instrument, apparatus, appliance, software, implant, reagent, material or other article (whether used alone or in combination, and including the software necessary for its proper application) intended, by the person under whose name it is or is to be supplied, to be used for human beings for the purpose of one or more of the following:
- diagnosis, prevention, monitoring, prediction, prognosis, treatment or alleviation of disease;
- diagnosis, monitoring, treatment, alleviation of or compensation for an injury or disability;
- investigation, replacement or modification of the anatomy or of a physiological or pathological process or state;
- control or support of conception;
- in vitro examination of a specimen derived from the human body for a specific medical purpose;
and that does not achieve its principal intended action in or on the human body by pharmacological, immunological or metabolic means, but that may be assisted in its function by such means.
According to TGA guidelines, software that has an intended medical purpose consistent with the definition of a medical device will be regulated as a medical device.
The term ‘Software as a Medical Device’ (SaMD) includes software that is an accessory to, or controls, a medical device. SaMD must be included on the Australian Register of Therapeutic Goods before it is supplied in Australia unless an exemption applies (such as a clinical trial).
Presently, the TGA regulates software under the existing medical device framework.
One of the main regulatory hurdles with registration of AI is that it is fluid and constantly changing, whereas the TGA review of medical devices is currently based upon a pre-market product at a fixed point in time. The traditional framework of medical device regulation is not designed for adaptive artificial intelligence and machine learning techniques.
There have been a number of working groups established to discuss ethical issues concerning the use of AI in healthcare.
In 2017, the World Health Organisation and its Collaborating Centre at the University of Miami organised an international consultation on the subject. A theme issue of the WHO Bulletin devoted to big data, machine learning and AI was published in 2020.
The European Group on Ethics in Science and New Technologies published a ‘Statement on Artificial Intelligence, Robotics and Autonomous Systems’ (the Statement) in March 2018.
Further, in February 2020, the European Commission published a report on the safety and liability implications of AI, the Internet of Things and robotics (the Report).
While the Report argued that ‘the existing Union and national liability laws are able to cope with emerging technologies’, it also identified some challenges raised by AI that require adjustments to the current regulatory framework.
Australia is not a member of the EU; however, its therapeutic goods regulation is more closely aligned with that of the EU than of the US.
The Statement proposed a set of basic principles and democratic prerequisites, based on the fundamental values laid down in the EU Treaties and in the EU Charter of Fundamental Rights. These principles and our commentary are set out below.
Human dignity: the principle of human dignity, understood as the recognition of the inherent human state of being worthy of respect, must not be violated by ‘autonomous’ techniques. It implies that there have to be (legal) limits to the ways in which people can be led to believe that they are dealing with human beings when in fact they are dealing with algorithms and smart machines.
Should we be transparent in telling people that they are interfacing with AI?
Autonomy: the principle of autonomy implies the freedom of human beings to set their own standards. Technology must respect the choice of humans as to when to delegate decisions and actions to machines.
What should we delegate to machines? Surely, the best care is the human touch and people should come first?
Responsibility: autonomous systems should only be developed and used in ways that serve the global social and environmental good. Applications of AI and robotics should not pose unacceptable risks of harm to human beings.
This is consistent with the principle that we should do no harm.
Justice, equity and solidarity: AI should contribute to global justice and equal access.
It is important to ensure equity of access so that the benefits of AI are not provided only to those countries or people who can pay for the technology.
Democracy: key decisions should be subject to democratic debate and public engagement.
The use of AI should be done in accordance with community expectations and standards.
Rule of law and accountability: the rule of law, access to justice and the rights of redress and a fair trial should provide the necessary framework for ensuring the observance of human rights standards and potential AI-specific regulation.
There should be adequate compensation for negligence.
Security, safety, bodily and mental integrity: safety and security of autonomous systems includes external safety for the environment and users, reliability and internal robustness (eg against hacking) and emotional safety with respect to human-machine interaction.
The use of AI in health care should be appropriately regulated to ensure that it is safe.
Data protection and privacy: autonomous systems must not interfere with the right to privacy of personal information and other human rights, including the right to live free from surveillance.
The protection of privacy and personal data is important.
Sustainability: AI technology must be in line with the human responsibility to ensure the sustainability of mankind and the environment for future generations.
There is a lack of Australian case law specific to negligence and AI in a health or aged care setting.
However, the ‘Watson for Oncology’ (WFO) clinical decision-support system in the US is an example of possible future challenges with the use of AI in healthcare. The WFO uses AI algorithms to assess medical records and assist physicians with selecting cancer treatments for their patients. This software recently received criticism after a news report alleged that the WFO provided ‘unsafe and incorrect treatment recommendations’. According to the news report, the WFO was fed hypothetical, or ‘synthetic’, patient data by doctors at the Memorial Sloan Kettering Cancer Center (MSK). As such, it was argued that the WFO was biased towards MSK’s treatment options.
The Chief Medical Officer of the developer of WFO, Dr Nathan Levitan, has since addressed these criticisms. According to Dr Levitan, the ‘unsafe and incorrect’ recommendations were identified by IBM’s quality management system and corrected before ever reaching a patient. Additionally, Dr Levitan argued that the use of ‘synthetic’ patient data is necessary to ensure that recommended treatment options reflect current practice.
An identified issue with AI is bias arising from the data used and the assumptions made by developers. Further, education is required for medical practitioners to understand the use and limitations of AI in health care. In addition, it is recommended that companies that develop AI for use in the health care sector have multi-disciplinary clinical governance committees to oversee the development of the product from a clinical point of view. Ultimately, it is the treating clinician’s responsibility to decide treatment options for patients using AI as a tool.
This article was written with the assistance of Lauren Krejci, Paralegal.