Simon Burns is a partner in the technology + digital practice at Gilbert + Tobin. Simon’s practice focuses on data- and technology-intensive transactions, particularly those involving emerging technology and associated businesses, such as artificial intelligence, blockchain, data commercialisation, payment systems, digital platforms and digital identity. This work includes advising boards and senior executives on risk and governance strategies around the deployment of novel and emerging technology.

Key clients include Westpac, the ASX, Transport for New South Wales, Australian Payments Plus and QBE Insurance.

Simon is recognised as a leading lawyer by Chambers and Partners, The Legal 500 and Best Lawyers.

Jen Bradley is a special counsel in the technology + digital practice at Gilbert + Tobin. Jen specialises in complex technology procurement and transformation projects, including IT and business process outsourcing, systems development and integration and other bespoke commercial arrangements. Her practice includes advising clients on associated consumer, cybersecurity, privacy and data protection compliance obligations, digital asset (including data) commercialisation and the adoption of emerging and disruptive technologies such as artificial intelligence, blockchain and the internet of things.


1 What is the current state of the law and regulation governing AI in your jurisdiction? How would you compare the level of regulation with that in other jurisdictions?

In Australia, there are currently no laws or regulations that expressly apply to AI. Instead, the Australian approach to governing AI has, to date, focused on the establishment of ethical principles and similar assurance frameworks around the adoption and implementation of AI. For example, the Australian Department of Industry, Science, Energy and Resources (in consultation with Australian stakeholders) developed the AI Ethics Principles, a set of principles that may be used by business or government when designing, developing, integrating or using AI systems. However, the principles are voluntary: organisations that develop or deploy AI systems are under no obligation to consider or comply with them.

That said, some state-level government entities are subject to mandatory AI policies. For example, New South Wales (NSW) government agencies must comply with the NSW Government AI Ethics Policy and AI Assurance Framework, which aim to provide practical guidance on the appropriate design, building and use of AI technology by the NSW government.

Despite the lack of specific AI regulation in Australia, organisations that develop or deploy AI systems need to consider a range of existing laws of general application that relate to, and impact on, different stages of the AI supply chain, from design and development through to the deployment and ongoing monitoring of AI systems. These include:

  • privacy laws, and most relevantly the Privacy Act 1988 (Cth) (Privacy Act), which will apply in respect of any personal information or sensitive information (as defined in the Privacy Act) that is ingested into or used by an AI solution;
  • cybersecurity regulations, which apply in respect of the security of the AI solution. These may include the Privacy Act referred to above and, in some cases, the Security of Critical Infrastructure Act 2018 (Cth). In addition, sector-specific regulation may apply, particularly in the financial services sector and to organisations regulated by the Australian Prudential Regulation Authority (APRA);
  • anti-discrimination laws that may be engaged depending on the outcome or function of the AI solution;
  • product liability and consumer protection laws that include various statutory guarantees, including that products supplied to consumers are of acceptable quality and reasonably fit for purpose;
  • tort law, particularly negligence, in circumstances where a duty of care is found to be owed to persons who use or are affected by the AI solution;
  • laws of confidentiality, which can arise in equity and in contract. Organisations that develop and deploy AI solutions must ensure that any data ingested into the AI system is used in compliance with any duties of confidence the organisation owes in respect of that data;
  • surveillance legislation, particularly with respect to any facial recognition technology; and
  • directors’ duties and, in particular, the duties of due care and skill, that include obligations on company boards in relation to their oversight of a company’s development and deployment of AI solutions, and associated risk management.

In addition, a range of use-case- or sector-specific regulation may be engaged depending on the particular AI use case, such as product safety legislation or additional regulation in the financial services sector, particularly for Australian Financial Services Licensees and APRA-regulated institutions (for example, consideration of operational risk arising from AI solutions as part of Prudential Standard CPS 220 (Risk Management)).

Overall, we consider that the state of regulation of AI in Australia is largely similar to that in many large international jurisdictions, although we note that there is potentially less imminent regulatory reform on the agenda than in the UK, the European Union and Canada.

2 Has the government released a national strategy on AI? Are there any national efforts to create data sharing arrangements?

The Australian government has a range of published strategy documents and associated national efforts with respect to AI. Most prominently, these include:

  • ‘Australia’s AI Action Plan’ (June 2021), which nominates key actions against four focus areas: lifting AI adoption to create jobs and boost productivity; growing and attracting talent; utilising cutting-edge AI; and making Australia a global leader in responsible and inclusive AI.
  • the ‘Artificial Intelligence Roadmap’ (November 2019), co-developed by the Australian Government and CSIRO’s Data61. The roadmap identifies three potential areas of AI specialisation for Australia: (1) health, ageing and disability; (2) cities, towns and infrastructure; and (3) natural resources and the environment.
  • Commonwealth Government grants of A$44 million to establish four AI and Digital Capability Centres under its AI Action Plan. The centres will connect small and medium-sized enterprises with AI equipment, tools, expertise and training.
  • Australia’s Digital Economy Strategy 2030, which is a broader strategy for the digital economy in Australia, but also specifically addresses AI as a key technology that will shape Australia’s future to 2030 and receive government investment.

As part of the AI Action Plan, the Commonwealth Government indicated that it was also considering regulation of AI or AI-related matters within the broader review of the Privacy Act that has been ongoing in Australia since December 2019, in particular around automated decision-making. The Privacy Act Review Report, published by the Australian Attorney-General in early 2023, included a number of proposals relevant to AI, including enhanced transparency with respect to substantially automated decisions that have a legal or similarly significant effect on an individual’s rights, and mandatory privacy impact assessments for high-risk uses of personal information, which would capture many AI use cases.

In relation to data sharing arrangements, there are various governmental data sharing regulations that aim to support the sharing and accessibility of public sector data. The latest of these is the Data Availability and Transparency Act 2022 (Cth), which establishes the ‘DATA Scheme’ under which Commonwealth bodies are authorised to share public sector data with accredited users, being Commonwealth, state and territory bodies, as well as universities. Similar regimes also exist in various states and territories, such as in New South Wales (under the Data Sharing (Government Sector) Act 2015 (NSW)), Victoria (under the Data Sharing Act 2017 (Vic)) and South Australia (under the Public Sector (Data Sharing) Act 2016 (SA)).

In addition, there are various Commonwealth, State and Territory ‘open data’ policies that aim to promote sharing of data.

Finally, in addition to these frameworks for the sharing of public sector data, the Commonwealth has introduced the ‘Consumer Data Right’ regime under the Competition and Consumer Act 2010 (Cth), which gives consumers a data portability right to direct that consumer data be shared with themselves or with accredited data recipients. The regime initially focused on the banking sector but is expanding to the energy sector and potentially others.

3 What is the government policy and strategy for managing the ethical and human rights issues raised by the deployment of AI?

As briefly mentioned above, Australia’s AI Action Plan nominates leadership in responsible and inclusive AI as one of its four focus areas. The core element of this focus area is progression of the implementation of Australia’s AI Ethics Principles.

The AI Ethics Principles are voluntary and do not constitute binding legislation. There are eight principles, covering human, societal and environmental wellbeing; human-centred values; fairness; privacy protection and security; reliability and safety; transparency and explainability; contestability; and accountability.

As a voluntary framework, this is effectively a ‘self-regulation’ approach to managing the ethical and human rights issues raised by the development of AI. That said, as outlined in question 1 above, laws of general application, such as the Privacy Act and anti-discrimination legislation, may still be engaged by AI solutions.

In addition, the Australian Human Rights Commission published its ‘Human Rights and Technology’ Final Report in 2021. The report includes a detailed review of the potential impact of AI solutions on human rights, as well as some of the legal and regulatory considerations and reform options available in connection with use of AI.

4 What is the government policy and strategy for managing the national security and trade implications of AI? Are there any trade restrictions that may apply to AI-based products?

National security and trade implications for AI are managed primarily through the Foreign Acquisitions and Takeovers Act 1975 (Cth). This regime regulates foreign investment in critical infrastructure and other sensitive assets that may impact on national interests or security.

One of the key areas is ‘critical technologies’, being current and emerging technologies that have the capacity to significantly enhance, or pose a risk to, Australia’s national interest. Where there is any military or intelligence use, a mandatory approval regime applies. Outside of this, foreign investment in any artificial intelligence solution is subject to a voluntary approval regime, albeit one that is strongly encouraged by the Foreign Investment Review Board (FIRB). The same voluntary approval regime is encouraged where there is foreign investment in a business that has access to sensitive personal information (including medical, financial or genetic information) of more than 100,000 Australian residents. Where there is access to personal information collected by the Australian Defence Force or an associated agency, a mandatory approval regime applies.

In addition to the above, the Security of Critical Infrastructure Act 2018 (Cth) regulates cybersecurity-related matters for various ‘critical infrastructure assets’ across a broad range of sectors covering most of the economy, including health, finance, transport, food and grocery, data storage and processing, telecommunications, ports, water and energy.

5 How are AI-related data protection and privacy issues being addressed? Have these issues affected data sharing arrangements in any way?

AI-related data protection and privacy issues are primarily addressed through the federal Privacy Act. The Privacy Act governs the collection, use, storage and disclosure of personal information and sensitive information (such as health information and biometric information) (together, Personal Information).

Consequently, in the context of AI solutions, the Privacy Act will be engaged where the solution ingests or otherwise uses any Personal Information. The key requirements under the Privacy Act in relation to AI solutions include that an organisation:

  • must only collect Personal Information where the information is reasonably necessary for the organisation’s functions or activities and, in relation to sensitive information, where the relevant individual has provided consent;
  • must only use or disclose Personal Information for the primary purpose for which it was collected, or for a secondary purpose where the individual has consented or where that secondary purpose is one the individual would reasonably expect and is related (or, in the case of sensitive information, directly related) to the primary purpose (among some other circumstances); and
  • must destroy or de-identify Personal Information that is no longer needed for a permitted purpose.

Similar principles are implemented under the various State and Territory-based privacy laws that regulate State and Territory bodies.

As a result of the above restrictions that apply to Personal Information, data sharing arrangements in Australia, particularly between unrelated organisations, are often based on de-identified data sets.
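By way of illustration only, the following is a minimal sketch (in Python, with hypothetical field names) of the kind of pre-processing step often applied before a data set is shared. It is not a compliance recipe: under OAIC guidance, de-identification turns on managing re-identification risk in context, and replacing identifiers with a salted hash produces pseudonymised data, which may not by itself amount to de-identification.

```python
import hashlib

# Hypothetical field names, for illustration only. Pseudonymisation via a
# salted hash is not, by itself, de-identification for Privacy Act purposes.
DIRECT_IDENTIFIERS = {"name", "email", "phone", "address"}

def prepare_for_sharing(record: dict, salt: str) -> dict:
    """Drop direct identifiers and replace the record key with a salted hash."""
    cleaned = {k: v for k, v in record.items()
               if k not in DIRECT_IDENTIFIERS and k != "customer_id"}
    # One-way pseudonym so the sharing party can still link related records.
    cleaned["pseudonym"] = hashlib.sha256(
        (salt + record["customer_id"]).encode()
    ).hexdigest()
    return cleaned

record = {"customer_id": "C-1001", "name": "Jane Citizen",
          "email": "jane@example.com", "postcode": "2000", "spend": 412.50}
print(prepare_for_sharing(record, salt="per-arrangement-secret"))
```

In practice, residual quasi-identifiers (such as postcode) would also need to be assessed against re-identification risk before sharing.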

6 How are government authorities enforcing and monitoring compliance with AI legislation, regulations and practice guidance? Which entities are issuing and enforcing regulations, strategies and frameworks with respect to AI?

Given the lack of AI-specific regulation, there are limited examples of enforcement and compliance activity taken specifically in respect of AI. However, there is an increased level of regulatory focus and enforcement activity in respect of facial recognition systems, with this enforcement action being undertaken by the Office of the Australian Information Commissioner (OAIC), the regulator responsible for the Privacy Act.

There are two notable and recent actions of the OAIC in respect of facial recognition systems:

  • Clearview AI – Clearview AI is the provider of a facial recognition app that allows users to upload photos of an individual’s face and have them matched to other photos of that individual’s face appearing on the internet. The app then provides a link to the source photo, enabling the user to identify the relevant individual. To perform the matching, Clearview AI collated a database of images by ‘scraping’ them from various websites, including social media platforms. In July 2020, the OAIC, together with the UK’s Information Commissioner’s Office (ICO), commenced a joint investigation into the personal information handling practices of Clearview AI. In November 2021, the Australian Information Commissioner and Privacy Commissioner (Commissioner) found that Clearview AI’s conduct had breached the Privacy Act in five respects, including because Clearview AI collected Australians’ sensitive information without consent and did not take reasonable steps to implement practices, procedures and systems to ensure compliance with the Privacy Act.
  • 7-Eleven – 7-Eleven is a chain of retail convenience stores throughout Australia. Between June 2020 and August 2021, 7-Eleven customers were asked to complete surveys about their in-store experience on tablets set up within stores. The tablets had inbuilt cameras that captured facial images of survey respondents. The OAIC found that 7-Eleven had contravened the Privacy Act in two respects, including because 7-Eleven collected individuals’ sensitive information without consent and failed to take reasonable steps to notify individuals about the facts and circumstances of the collection of that information.

7 Has your jurisdiction participated in any international frameworks for AI?

Australia has contributed to various international frameworks and initiatives for AI. In May 2019, Australia was one of 42 countries to sign the Organisation for Economic Co-operation and Development’s (OECD) Principles on Artificial Intelligence, agreeing to uphold principles that aim to ensure responsible stewardship of AI and international cooperation on trustworthy AI. Australia is also a member of the OECD Working Party on Artificial Intelligence Governance, which was established in March 2022.

Additionally, Australia is a founding member of the Global Partnership on AI (GPAI). The GPAI is composed of experts from science, industry, civil society, international organisations and government, who have been nominated to the forum by member countries. Four key themes guide the work of the GPAI working groups: responsible AI, data governance, the future of work, and innovation and commercialisation.

Australia has also been involved in a range of global standards development activities in relation to AI, including:

  • the ISO and IEC Joint Technical Committee 1 subcommittee (ISO/IEC JTC 1/SC 42), which was established to focus on standards development for AI systems and to which Australia has contributed through its Mirror Committee IT-043;
  • the Institute of Electrical and Electronics Engineers (IEEE), through independent experts; and
  • the International Electrotechnical Commission (IEC) and the Open Community for Ethics in Autonomous and Intelligent Systems (OCEANIS), through the Standards Australia Mirror Committee and the IEC National Mirror Committee.

These activities are in line with one of the overarching goals of Standards Australia’s final report on the development of AI standards, ‘An Artificial Intelligence Standards Roadmap: Making Australia’s Voice Heard’, which stated that Australia should ensure it can effectively influence AI standards development globally.

8 What have been the most noteworthy AI-related developments over the past year in your jurisdiction?

In 2022, a test case run in Australia determined that AI systems cannot be named as inventors for the purposes of Australian patents. The decision came about after the Australian Commissioner of Patents rejected a patent application made by scientist Dr Thaler, which nominated an AI system, the ‘Device for Autonomous Bootstrapping of Unified Sentience’ (DABUS), as the inventor. Dr Thaler successfully challenged the Commissioner’s decision at first instance in the Federal Court; however, that decision was overturned on appeal by the Full Federal Court. An application for special leave to appeal was refused by Australia’s highest court, the High Court of Australia.

As a result, Australia’s intellectual property laws will reflect the decision of the Full Federal Court that AI systems cannot be considered inventors for the purposes of patents unless and until lawmakers reform Australian patent laws or an appropriate case is brought before the courts to retest the decision.

9 Which industry sectors have seen the most development in AI-based products and services in your jurisdiction?

Health has been identified by the Australian government throughout its AI strategies as a high-potential area of AI specialisation for Australia. This supports Australia’s current National Digital Health Strategy and is based on health being an existing strength and area of competitive advantage for Australia, as well as an area that presents opportunities to resolve significant problems facing our society. Further, it reflects the Australian government’s substantial investments in this area. In 2020, the Australian government committed A$19 million in grants across five health-related AI projects being conducted by Australian research centres and universities, including AI systems to detect eye and cardiovascular disease and breast cancer, improve mental healthcare and treat neurological diseases. These kinds of government investment can only be expected to increase. In July 2020, the Medical Research Future Fund, a research fund set up by the Australian government for health and medical research projects, grew to A$20 billion, and one of the fund’s priorities for 2022–2024 is data, digital health and artificial intelligence.

That is not to say that investment in AI is new. Australian government agencies have been conducting significant research projects in health and AI for a long period. The Commonwealth Scientific and Industrial Research Organisation (CSIRO), an Australian government agency responsible for scientific research, has been involved in AI and machine learning projects with the leading national digital health research facility, the Australian e-Health Research Centre (AEHRC). Most of the technologies and research projects undertaken by the AEHRC employ some type of AI or machine learning, including across medical imaging, genomics, data analytics, health services and health data interoperability research. These investments have led to the development of a range of AI-based products and services that are already in use, including, for example, a patient admission prediction tool developed by the AEHRC, which was designed to improve hospital wait times by predicting the number of patients that may present to emergency departments in specific time periods (including in the next hour, the next week or during holiday periods such as Christmas), allowing hospitals to better forecast demand and manage capacity.

While government investment in digital health initiatives involving AI is significant, so too is investment from the private sector. As of 2021, Harrison.ai, a Sydney-based healthcare technology company, was reported to have raised over A$158 million in private investment over the preceding few years. Harrison.ai has also formed a range of joint ventures and partnerships with large medical companies, including Sonic Healthcare and I-MED Radiology (among others). Harrison.ai’s first product to receive regulatory approval was an AI tool developed with I-MED to detect clinical findings in chest X-rays. Public and private partnerships on AI projects are also common in Australia. Maxwell Plus, an Australian startup, received A$1.1 million from the Australian government to develop and commercialise technology using its AI platform to identify early-stage Alzheimer’s disease, together with the CSIRO, I-MED and Austin Health.

10 Are there any pending or proposed legislative or regulatory initiatives in relation to AI?

Model laws to regulate facial recognition solutions have been proposed by the Human Technology Institute at the University of Technology Sydney. Further, the Australian Human Rights Commission made various recommendations in its final report on Human Rights and Technology for the introduction of legislation to encourage legal accountability for both government and private sector use of AI (among other recommendations for the update of existing laws). However, at the time of writing, there has been no confirmed government reform in this regard.

We also note our comments above in relation to the Privacy Act review, which is currently being undertaken, and the reforms proposed in the Privacy Act Review Report, including in relation to automated decision-making systems. The report also foreshadows further consultation on AI more broadly, including in respect of facial recognition.

11 What best practices would you recommend to assess and manage risks arising in the deployment of AI?

Best practice in relation to the assessment and management of risks associated with the deployment of AI requires recognition that, despite the lack of specific AI regulation, a broad range of regulatory considerations applies at all stages of the development and deployment life cycle, from design and data ingestion to deployment and use.

Of course, we recommend adoption of Australia’s AI Ethics Principles and framework, and application of operational AI assurance and risk frameworks (akin to those set out in CPS 220 (Risk Management) or its proposed replacement CPS 230, which are prudential standards issued by APRA), even for those entities not regulated by APRA. These prudential standards provide useful frameworks for considering AI-specific risks.

Applying good risk management to AI requires organisations to: establish risk appetite statements for AI systems; acknowledge a degree of inherent uncertainty with respect to data and output quality and reliability; establish methodologies to measure and assess these risks; ensure there are appropriately skilled and experienced resources who understand the data being utilised and how the AI solution is utilising that data; and implement systems, processes and tools to check and verify data and AI outputs, both during initial development and testing and on an ongoing basis after implementation and productive use. Ongoing verification is particularly important for machine learning solutions, where outputs and results can evolve over time, as shown in the sketch below.
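To make the ongoing-verification point concrete, the following is a minimal sketch in Python. It assumes a stream of numeric model scores and a baseline captured during initial testing; the threshold and figures are illustrative only, not a prescribed standard.

```python
import statistics

# Illustrative baseline captured during initial development and testing.
BASELINE_MEAN = 0.42   # mean model score at validation time (hypothetical)
BASELINE_STDEV = 0.11  # spread of scores at validation time (hypothetical)
ALERT_Z = 3.0          # alert when the live mean drifts this far from baseline

def check_drift(live_scores: list[float]) -> bool:
    """Flag when the live output distribution drifts from the validated baseline."""
    live_mean = statistics.fmean(live_scores)
    std_error = BASELINE_STDEV / len(live_scores) ** 0.5
    z = abs(live_mean - BASELINE_MEAN) / std_error
    if z > ALERT_Z:
        # Escalate for human review, per the organisation's risk framework.
        print(f"Drift alert: live mean {live_mean:.3f} vs baseline {BASELINE_MEAN}")
        return True
    return False

# Example: a recent window of model scores trending above the baseline.
check_drift([0.61, 0.58, 0.65, 0.57, 0.63, 0.60, 0.59, 0.62, 0.64, 0.66])
```

The design point is that monitoring is continuous and tied back to human escalation, rather than a one-off check at go-live.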

However, we consider that, ultimately, best practice requires a human-centric approach to AI systems and associated risks, one that appreciates the potential impact on, and harm to, individuals and society as a result of poorly designed, deployed or managed AI systems.


The Inside Track

What skills and experiences have helped you to navigate AI issues as a lawyer?

At Gilbert + Tobin, we are strong believers that technology lawyers need to understand the technology as well as the law. Indeed, many of our lawyers have backgrounds or degrees in information technology, science or similar fields.

Further, and more particularly with respect to AI issues, it is important to understand, and keep at the front of mind, the broader social and human rights considerations relevant to AI systems. What makes AI unique is its huge potential for social and economic good, but also the huge risk of material harm. As a firm, our lawyers pride themselves on being able to understand both the opportunities and the risks, and on balancing commercial interests with social interests.

We’ve also partnered with the Human Technology Institute at the University of Technology Sydney, which is doing important work in helping organisations take a human-centred approach to new technologies such as AI.

Which areas of AI development are you most excited about and which do you think will offer the greatest opportunities?

Digital health and the application of AI to health research, health assessments and screening and ultimately treatment or recovery presents a huge opportunity for society.

What do you see as the greatest challenges facing both developers and society as a whole in relation to the deployment of AI?

The challenge is in the blind spots. There are tools and processes for developers to address bias in data and other similar risks when they are aware of them. However, with AI, the greatest challenge lies in the unknown unknowns. Consequently, it is extremely important to acknowledge the likely existence of these blind spots and not to be overconfident in any deployment.

Related to this, if there are too many large-scale AI failures (eg, because organisations are overconfident), this will erode social and consumer trust and set the industry back years. This is why it is incumbent on the industry (and regulators) to lean into the challenges associated with ethical and trustworthy AI and to help each other manage these risks in a way that benefits us all.