Opportunities for the application of artificial intelligence (AI) in health and life sciences continue to grow.
In February 2026, Australia’s Therapeutic Goods Administration (TGA) released new guidance on when and how software as a medical device (SaMD) that is, or uses, AI technology is regulated.
This new guidance applies to SaMDs, such as:[1]
- apps that use machine learning to review images on phones and diagnose melanoma,
- cloud-based analytics that can predict patient outcomes, and
- chatbots that employ large language models (LLMs) to apply clinical triage protocols to patients.
This new guidance follows the TGA’s final report[2] into the safe and responsible use of AI in healthcare, published in 2025, which called for increased regulatory guidance to support the implementation of AI.
The TGA’s message is familiar and clear: the application of therapeutic goods regulations is triggered by the manufacturer’s intended purpose, not the presence of AI features in the underlying technology. If software uses AI to influence clinical decisions or patient care, developers should expect to be in scope of the regulatory framework and be prepared to comply fully with the obligations of a sponsor and/or manufacturer of a medical device.
Key takeaways
Monitor intended purpose
A manufacturer’s intended purpose for a product is essential to determining whether it will be regulated as a medical device and, if so, the scope of that product’s approved use. Software, and particularly software reliant on AI algorithms, has the capacity to introduce new features that continuously shift performance and functionality. Manufacturers should monitor changes over time and seek regulatory approval before implementing updates that would either convert unregulated software into a regulated SaMD or change the intended purpose of an SaMD already included in the Australian Register of Therapeutic Goods (ARTG).
Managing off-label use
The functionality of certain AI-based SaMDs (such as LLM-based chatbots) is often reliant on user interactivity and inputs. If a manufacturer becomes aware that a medical device is being used outside its approved intended purpose (an off-label use), controls should be implemented to prevent further off-label use.
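By way of illustration only, one form such a control might take is a pre-filter that screens user inputs against the device’s approved intended purpose before they reach the model. The sketch below is a minimal, hypothetical example: the device purpose, the keyword heuristic, and all function names are assumptions for illustration, not part of the TGA’s guidance, and a production control would need far more robust intent detection.

```python
# Hypothetical guardrail for an LLM-based triage chatbot: refuse and log
# inputs that fall outside the device's approved intended purpose.
# Everything here (purpose, terms, function names) is illustrative only.

APPROVED_PURPOSE = "triage of adult patients presenting with chest pain"

# Topics the device is NOT approved for (illustrative list).
OUT_OF_SCOPE_TERMS = ("paediatric", "pregnancy", "medication dosing", "mental health")

def within_intended_purpose(user_input: str) -> bool:
    """Crude pre-filter: flag inputs that mention out-of-scope topics."""
    text = user_input.lower()
    return not any(term in text for term in OUT_OF_SCOPE_TERMS)

def log_off_label_attempt(user_input: str) -> None:
    # Record the attempt so it can feed post-market surveillance review.
    print(f"[off-label attempt logged] {user_input!r}")

def run_triage_model(user_input: str) -> str:
    # Placeholder for the approved triage model / LLM call.
    return "Proceeding with chest-pain triage questions..."

def handle_message(user_input: str) -> str:
    if not within_intended_purpose(user_input):
        log_off_label_attempt(user_input)
        return ("This service is approved only for " + APPROVED_PURPOSE
                + ". Please consult a healthcare professional for other concerns.")
    return run_triage_model(user_input)

print(handle_message("I have crushing chest pain"))
print(handle_message("What medication dosing is right for my child?"))
```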
Hold transparent evidence
Manufacturers must hold evidence that is sufficiently transparent to enable evaluation of safety and performance of their product in line with the essential principles set out in Schedule 1 of the Therapeutic Goods (Medical Devices) Regulations 2002 (Cth) (Essential Principles).[3] This evidence must include:
- documentation of the AI model’s alignment with the SaMD’s intended purpose,
- a description of the AI model and its training and testing phases,
- a demonstration of data representativeness and quality,
- evidence of risk management processes, and
- evidence to show how the device meets requirements for clinical evidence.
Synthetic data is allowed, but not a panacea
Synthetic data is data created through algorithms or simulations to mimic the characteristics of real data. It is often used to train and validate AI models where it is not possible to rely solely on real-world data, for example because real-world data is limited (as is the case with rare diseases) or because access to it is restricted. The TGA’s guidance confirms that synthetic data may support the training and validation of AI systems, provided there is a clear rationale for its use and a documented data generation methodology. However, synthetic data will generally not replace the clinical data necessary to satisfy clinical evidence requirements.
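As a purely illustrative sketch of what a documented data generation methodology might look like in practice, the example below fits a simple parametric model to a small (simulated) dataset and records how the synthetic records were produced. The dataset, feature names, and generation method are assumptions for illustration; the TGA guidance does not prescribe any particular technique.

```python
# Illustrative only: generate synthetic tabular data that mimics the
# statistical characteristics of a real dataset, and document the
# generation methodology. All data and parameters here are simulated.
import numpy as np

rng = np.random.default_rng(seed=42)

# Stand-in for a small real-world dataset (rows = patients, cols = features).
real_data = rng.normal(loc=[54.0, 130.0, 5.5], scale=[12.0, 15.0, 1.1], size=(200, 3))
feature_names = ["age_years", "systolic_bp_mmHg", "hba1c_percent"]

# Fit a simple parametric model: the empirical mean and covariance.
mean = real_data.mean(axis=0)
cov = np.cov(real_data, rowvar=False)

# Sample synthetic records preserving those first- and second-order statistics.
synthetic = rng.multivariate_normal(mean, cov, size=1000)

# Record the generation methodology alongside the data, as the guidance
# expects manufacturers to document.
methodology = {
    "rationale": "real-world records too scarce for model validation",
    "method": "multivariate normal fitted to empirical mean/covariance",
    "source_n": real_data.shape[0],
    "synthetic_n": synthetic.shape[0],
    "features": feature_names,
    "random_seed": 42,
}
print(methodology)
```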
Government oversight
The TGA is coordinating with the Australian Commission on Safety and Quality in Health Care and the Department of Health, Disability and Ageing to develop further guidance materials to ensure the safety and performance of AI in healthcare. The TGA recognises that any mandates must be balanced and must minimise regulatory burden.
Looking ahead
In practice, software developers will need to:
- correctly classify the intended purpose of AI-enabled software,
- ensure accompanying materials are aligned with that intended purpose,
- prevent scope creep by implementing rigorous change controls and managing off-label use,
- keep transparent evidence relating to the development and deployment of AI systems, and
- stay alert for further guidance published by the TGA and other government agencies.
