Artificial Intelligence (AI) is becoming an increasingly important part of everyday life, and its effects are rippling through virtually every industry, both globally and within Australia. Even creative artistic endeavours, once thought comparatively immune from AI interference, are grappling with the consequences of AI models that can closely mimic human artistry. Nowhere has the impact of AI been more significant, nor the challenges posed by AI more apparent, than in the music industry.

With AI tools able to create songs that sound almost indistinguishable from original music (see, for example, the case of David Guetta playing a track featuring an AI-generated “voice” of Eminem), musicians are no doubt wondering what legal avenues they may have to prevent AI-generated music from using or exploiting their work, or alternatively, what risks they may face if they choose to use AI-generator tools themselves.

In this article, we look at the challenges posed by AI-generated music, in particular the possible causes of action under Australian intellectual property law that a musician may have (or be exposed to) in relation to an AI-generated song which replicates their style, lyrics, and/or instrumentation. While the novelty of the area means there are no definitive answers, these questions will clearly be important in shaping the future of Australian music and the Australian music industry.

Potential legal issues

Australian law does not recognise a standalone right in a person’s voice. As such, a musician seeking to resist an AI-generated song that uses their work will have to rely upon other areas of the law which are relatively untested in the context of AI-generated works.

  1. Copyright – output song

    If the AI-generated song reproduces a substantial part of the artist’s copyright work, then the song is likely to constitute copyright infringement under Australian law. The original copyright work could be a sound recording, lyrics, or sheet music (ie, the instrumental melody). In each case, provided that a “substantial part” of the original work is taken, judged by quality rather than quantity (meaning that an important part of the music, such as the ‘hook’, could be protected even if it is a comparatively small part of the overall song), the artist may be able to rely upon their copyright protection.

    Importantly, however, copyright does not protect a musical style or ‘feel’, or a person’s voice. As such, if the AI-generated song is a ‘sound-alike’ piece designed to sound similar to the artist’s music without directly reproducing any of their work, then the artist may be out of luck in terms of copyright protection (subject to the issue of machine learning considered below).

    There may also be defences to copyright infringement available in relation to certain AI-generated music, such as where the song is a parody.

    The copyright owner may also face a practical difficulty in deciding whom to sue for infringement. The most likely candidate is the person who provided the input prompts for the AI tool to generate the output song. However, it could also be those who publish or sell the output song, or who do any other act covered by the copyright owner’s exclusive rights.

    There is also the potential for providers of AI-generator tools to be liable for secondary infringement by authorising the infringing acts of their users. However, a primary act of infringement must first be established before any question of authorisation can arise. It is common for the terms of service of AI-generator tools to require users to warrant that they will not infringe intellectual property rights, and some AI-generator tools have functions to detect inputs that are protected by copyright.

  2. Copyright – input learning

    Where an AI is tasked with creating a song that sounds like an existing (human) artist, this is generally accomplished by ‘feeding’ the AI a large amount of that artist’s music and having the AI learn the musician’s style. This raises the novel question of whether the training process itself infringes copyright in the original music which is fed to the AI (as distinct from the ‘output’ song ultimately produced by the AI).

    We won’t know the answer to this question until it is considered by the Australian courts (either in the context of music or in other similar contexts, such as AI models trained on written material). The answer is likely to depend firstly upon the exact process of machine learning and whether it involves a substantial reproduction of the original work, and secondly upon whether any defences are available.

    There is also likely to be difficulty in establishing which copyright materials (and the extent of those materials) were used by the AI-generator tool. This inability to understand how a machine-learning model makes its decisions is often referred to as the “black box problem”. In the US case of Getty Images (US), Inc v Stability AI, Inc, it was clear that an AI-generator tool had been trained on Getty Images’ stock photos (the subject of copyright) because the output contained a modified version of Getty Images’ watermark.
  3. Trade mark

    If the musician has a registered trade mark for their stage name and/or a song name, then they may be able to prevent others from using that name as a trade mark (meaning as a ‘badge of origin’ to distinguish goods and services from those of other traders). For example, an album could not be released under the name of an existing artist, band, or album that is the subject of a trade mark, even if it contained AI-generated songs made to sound like that artist or band.

    However, in the context of AI-generated music, this is likely to be a relatively narrow protection. An allegation of trade mark infringement would not prevent the AI-generated song from being disseminated; it would merely limit the way in which the song could be described. The perennial question as to whether use of a stage name or song title is ‘use as a trade mark’ will also be relevant.
  4. Consumer law

    If the use of the musician’s work implies that the AI-generated song is associated in some way with the original artist (for example, if consumers would think that the AI-generated song was in fact a song from the original artist), then the musician may have a cause of action for misleading or deceptive conduct under the Australian Consumer Law.

    Ironically, the better the AI is at replicating the sound of the original artist, the more likely this cause of action would be to succeed. If the AI was ineffective, so that anyone listening to the song would know that it was AI-generated rather than an original work, then this cause of action is unlikely to succeed. On the other hand, if the AI was so effective in mimicking the sound of the original artist that listeners would believe it was a new song by them, then this cause of action may be available.

    This would also depend to some extent on how the AI-generated song was presented to consumers of music. For example, if the song was presented as an original song from the musician, then the musician may have recourse under the consumer law.
  5. Other causes of action

    AI-generated music may, depending on the circumstances of its creation and dissemination, also raise issues under other areas of law, such as fraud, defamation, and privacy laws. Further, if the music is posted on social media sites, it may breach the terms of service of those sites.

Ownership of AI-generated music

A separate, but equally challenging, legal question is who (if anyone) owns copyright in a song generated by AI. There are several parties who may make a claim to copyright ownership, including the artist or artists whose music was used to train the AI, the creator of the AI, or even the AI itself.

Yet again, we won’t know with any certainty until such a case comes before the Australian courts. In the meantime, however, it seems likely that the answer will be that no one owns copyright in AI-generated music, based on an analogous ruling in the context of Australian patent law which found that patents can only be granted for inventions made by human inventors. Of course, it is possible that the relevant factual matrix, the circumstances of a particular case, or nuances in the language of the Copyright Act compared to the Patents Act may lead to a different result.

Future challenges and opportunities

The proliferation of AI works, including music, will undoubtedly become an increasingly relevant issue to the Australian legal system. To the extent that these works are created without the consent of the musicians whose music is used to ‘train’ the AI models, and particularly where the AI-generated music is indistinguishable from original music, this poses a very real existential threat to the music industry as we know it.

With this threat comes an opportunity for both the legal system and the Australian music industry to adapt. For example, the 2023 Writers Guild of America strike resulted in a new agreement that imposed strict limitations on the use of AI to write scripts, while leaving open the question of training AI on existing scripts (a highly relevant issue given that writers such as George RR Martin are currently involved in legal disputes with AI companies over the use of their work to train AI models).

As for the legal industry, it seems only a matter of time before this issue comes squarely before the Australian courts, whether in the context of music or another creative industry (AI can already generate paintings, comic strips, and novels). Such a case would raise very interesting questions of statutory interpretation and could ultimately prompt legislative reform to help define the place of machine-generated art under Australian law.