

The "fourth industrial revolution" (4IR) refers to the complex ecosystem encompassing the following, among others:

  • artificial intelligence (AI);
  • robotics;
  • the Internet of Things (IoT);
  • genetic engineering; and
  • quantum computing.

The 4IR thus blurs the lines separating the physical, digital and biological spheres of technology. There is no standard definition of "AI", but it has been described as "computing technologies that resemble processes associated with human intelligence, such as reasoning, learning and adaptation, sensory understanding and interaction".(1) In the healthcare context, there have been, and will continue to be, extensive gains and opportunities to solve existing challenges through the use of the technologies encompassed within the 4IR and AI.

In the South African context, AI in the healthcare setting has already been used in many ways. For example:

  • Envisionit Deep AI, which streamlines and improves medical imaging diagnosis for radiologists;
  • Phulukisa Health Solutions, including a mobile medical solution to reach underserved communities, with:
    • an automated triaging system through the cloud to reduce patient waiting time at clinics and to ensure that records are always accessible; and
  • automated screening systems for infectious diseases such as COVID-19, human immunodeficiency virus and tuberculosis, for lifestyle diseases such as diabetes and hypertension, and for vision and hearing problems;
  • Datawizzards, including a number of bespoke solutions such as DeepDoctor, a multilingual chatbot service for self-diagnosis on a smartphone; and
  • BroadReach Health, including Vantage, an AI-enabled platform that combines data with purposeful analytics and proven workflows so that healthcare workers can make the right decisions at the right time and execute those decisions at scale.

Limiting possible negative aspects of AI

However, care will need to be taken to limit the possible negative aspects of AI, which could, for example, result in inaccurate predictions, or even in discrimination against and stigmatisation of certain ethnic populations, as a result of gaps in the data used to train AI systems. Furthermore, legislation will need to be updated and regulation introduced so that 4IR technologies and AI systems are regulated in compliance with global best practice and the South African Constitution, including the Bill of Rights.

One legal aspect of AI that has recently been in the spotlight in South Africa is whether a patent may be granted where the inventor is an AI system. The first patent naming an AI system as inventor was recently granted in South Africa for a non-healthcare-related invention, but given the vast application of AI in the medical and healthcare context, the question of patentability is highly relevant. The grant has been widely criticised by patent experts in South Africa: the Patents Act uses the words "person" and "him" when referring to inventors, applicants or patentees, which has been taken to mean that an AI system, which is not a person, cannot be an inventor.

Additional issues arise from the difficulty of transferring an invention from an AI inventor to an applicant, since a person applying on behalf of an inventor is required to provide proof of their authority to do so, usually in the form of a written document. The uncertainty created by the grant of the patent highlights the need for the legislative and regulatory environment to be updated to take into account the impact of the 4IR and AI on technology development.

Guidelines for Good Practice in Telemedicine

The impact of the pandemic on the provision of telemedicine services, and the current legal framework in South Africa set out in the General Ethical Guidelines for Good Practice in Telemedicine (the guidelines) issued under the Health Professions Act (56/1974) in August 2014, have been reported and considered previously (for further details, please see "Telemedicine: time for an upgrade?"). Because the guidelines permitted only registered medical practitioners and health professionals, acting within their own delineated scopes of practice, to carry out acts falling within the scope of the medical profession, it was argued that any platform or medical device incorporating automated responses or AI programmes that purported to diagnose a patient, without a healthcare practitioner physically assessing and/or confirming the diagnosis, was prohibited.

Additionally, after the national lockdown began in March 2020, the guidelines on telemedicine for health practitioners were amended for the duration of the pandemic to permit "remote consultations with patients using telephonic or virtual platforms of consultation", but only where there was an existing healthcare practitioner-patient relationship. The guidelines were amended again in April 2020 to provide that such remote consultations could also be practised where there was no existing healthcare practitioner-patient relationship, but only if this was in the best clinical interests of the patient. It remains to be seen what form the guidelines will take after the pandemic, but in the interests of realising the benefits of the 4IR and AI technologies that are available and being developed, it is hoped that the guidelines will be aligned with international telemedicine regulations.


Moreover, the Protection of Personal Information Act (POPIA), which was enacted in 2013 but came into effect only on 31 July 2021 and has recently been in the spotlight in South Africa, is also highly relevant to 4IR and AI advances in healthcare technology. In particular, section 71(1) essentially provides that a person or "data subject" may not be subject to any decision:

  • that results in legal consequences or that affects the person to a substantial degree; and
  • that is based solely on the automated processing of personal information (which, by definition in the POPIA, includes information relating to the data subject's health, such as physical or mental health or wellbeing, disability and biometric information) intended to provide a profile of the person, including in relation to the person's health.

The exceptions provided in section 71(2) are that such automated decision-making is allowed:

  • for the purposes of executing or concluding a contract, provided that the request of the data subject in terms of the contract has been met, or appropriate measures have been taken to protect the data subject's legitimate interests; or
  • where the decision is governed by a code of conduct in which appropriate measures are laid down for protecting the lawful interests of data subjects.

The "appropriate measures" must give the data subject an opportunity to make representations about the automated decision, and require the responsible party to provide the data subject with sufficient information about the underlying logic of the automated processing so that such representations can be made.
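The combined effect of sections 71(1) and 71(2) can be sketched as a simple decision rule. The following is purely an illustrative simplification, not legal advice: the statutory tests are more nuanced, and all names and fields in this sketch are hypothetical.

```python
from dataclasses import dataclass


@dataclass
class AutomatedDecision:
    """Hypothetical representation of a decision about a data subject.

    All field names are illustrative and do not come from the POPIA itself.
    """
    legal_consequences_or_substantial_effect: bool  # first s 71(1) threshold
    solely_automated_profiling: bool                # second s 71(1) threshold
    contract_exception_with_measures: bool          # s 71(2)(a) exception
    governed_by_code_of_conduct: bool               # s 71(2)(b) exception


def is_permitted(decision: AutomatedDecision) -> bool:
    """Rough sketch of whether s 71 permits the decision."""
    # The s 71(1) prohibition only applies if BOTH thresholds are met:
    # the decision has legal consequences or a substantial effect, AND
    # it is based solely on automated processing intended to profile.
    if not (decision.legal_consequences_or_substantial_effect
            and decision.solely_automated_profiling):
        return True  # prohibition not triggered
    # Otherwise, one of the s 71(2) exceptions must apply.
    return (decision.contract_exception_with_measures
            or decision.governed_by_code_of_conduct)
```

On this reading, a fully automated AI diagnosis that substantially affects a patient, made outside any contract or applicable code of conduct, would fall foul of section 71; the same decision made under a code of conduct laying down appropriate measures would not.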

In general terms, a "responsible party" is defined in the POPIA as "a public or private body or any other person which, alone or in conjunction with others, determines the purpose of and means for processing personal information". This leads to uncertainty as to who the responsible party would be where an AI system itself determines the purpose of and means for processing personal information.

Also, "biometrics" is defined in the POPIA as "a technique of personal identification that is based on physical, physiological or behavioural characterisation including blood typing, fingerprinting, DNA analysis, retinal scanning and voice recognition". Genetic data and biometric data could include "health information", which is considered special personal information. In order to lawfully process special personal information, certain requirements must be adhered to (for further details on the legislative requirements in this regard, please see "Genetic information - a new resource to be mined?").


Naturally, there are also implications relating to a patient's legal recourse where AI makes a mistake with regard to processing, analysis or diagnosis based on the patient's data, and provision needs to be made in the appropriate legislation for legal liability in such an instance.

The old saying "with great power comes great responsibility" holds true for 4IR technologies and AI in the healthcare space, and who will ultimately hold that responsibility when the power rests with a machine will need to be considered carefully.

For further information on this topic please contact Joanne van Harmelen at ENSafrica by telephone (+27 21 410 2500) or email ([email protected]).


(1) For further information, please click here.