With the recent news that a finance worker in Hong Kong was conned out of $39 million of company funds after scammers used AI software to imitate the worker's senior colleagues in a "deep fake" video, deep fakes are rapidly becoming a very real concern. Last month Taylor Swift became a victim of AI-generated pornography after graphic images of the singer emerged on X, where they were viewed more than 45 million times. The consequences of deep fakes can be significant, and not just for those in the public eye.

The law surrounding AI is complex, as different infringements can arise depending on whether you are considering what goes into an AI model or what comes out of it. In this update we look at where the law stands on deep fakes from a consent perspective.

What is a Deep Fake?

A deep fake is content (images, audio or video) created by AI. The "deep" comes from the fact that the content is produced by a kind of machine learning called "deep" learning. Deep learning is when a computer is taught to process data and information in a manner similar to the human brain.

Photoshopping an image is nothing new. Deep fakes are of greater concern, however, because one of the algorithms used in creating deep fake content is trained to detect when content is fake and when it is not, and the content is refined until that detection fails. The short point is that deep fakes are extremely realistic.

What’s the current law?

In Australia, there is no specific legislation protecting individuals against deep fakes, and the law is presently deficient. It comes as no surprise that technology is evolving faster than the law can keep up. Firstly, thanks to VPNs, it is often impossible to identify who is responsible for distributing the content. Secondly, the law does not prevent the creation of non-consensual deep fakes. Finally, any Court proceedings to protect the interests of the victim are likely to re-traumatise the victim, as well as being incredibly costly.

On both a criminal and civil level, the non-consensual sharing of intimate images, including digitally altered images, can be a criminal offence as well as a breach of the Online Safety Act. The law in Australia does not, however, penalise the creation and possession of deep fakes.

The problem in the legislation is its use of the word "capture" rather than "create": AI imagery is created, not captured. While the distribution of such images will be caught by the relevant legislation, the act of creating and retaining non-consensual deep fake pornography is not properly addressed.