Introduction
What are they?
Danger of deepfakes
What can be done?
Comment

Introduction

In March 2022, a video circulated on the internet appearing to show President Zelenskyy announcing Ukraine's surrender to Russia. No sooner had the video found its way into social media feeds and news bulletins than it was debunked as fake: a so-called "deepfake".

To some, this episode barely registered on their radar, or was dismissed as a failed Kremlin gimmick. Besides, as deepfakes go, it was a particularly unsophisticated one. Few would have been convinced by the video's authenticity. To others, the video (and its dissemination) signalled something much more dangerous: the latest shot across the bow of democracy.

To them, it was confirmation that deepfakes were no longer the little-known phenomenon confined to the digital backwaters of internet message boards and pornography websites, left to fester by a political class that was either unaware of their existence or unable to comprehend their potency. Now they had emerged into the mainstream and threatened to become one of the most effective and dangerous weapons of informational warfare.

Unsurprisingly, the episode prompted the inevitable calls for action that greet technological innovations which have the ability to subvert institutions but which are not yet fully understood: legislate or regulate.

Often, such clamour is premature: the United Kingdom has plenty of laws as things stand. But in the case of deepfakes, the law may well need an update.

What are they?

At their most basic, deepfakes are artificially produced audio or video clips in which an individual's (usually a celebrity's) image or voice is "cloned" such that it can be manipulated to say or do whatever the deepfake creator wants.

The creators of deepfakes employ artificial intelligence by feeding a deep-learning system a dataset of images or voice recordings of the subject in order to produce the digital clone. Simultaneously, a separate system tests this clone against the original material to detect any flaws and further hone its likeness – an adversarial arrangement known in the field as a generative adversarial network (GAN).

The result is an uncanny reproduction of the original subject that is not a copy of any one image, but instead an amalgamation of them all. And it can be manipulated at the creator's whim.
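
For the technically minded, the adversarial process described above can be sketched in a few lines of code. What follows is a deliberately minimal PyTorch illustration of a generative adversarial network training loop: the network sizes, hyperparameters and random stand-in data are invented for illustration only and bear no resemblance to the far larger systems used to produce convincing deepfakes.

    import torch
    import torch.nn as nn

    # One network (the "generator") learns to produce fakes from random
    # noise; a second network (the "discriminator") learns to tell those
    # fakes apart from genuine samples. Each pass through the loop
    # sharpens both; this is the adversarial dynamic described above.

    LATENT, FEATURES = 16, 64  # toy sizes, for illustration only

    generator = nn.Sequential(
        nn.Linear(LATENT, 128), nn.ReLU(),
        nn.Linear(128, FEATURES),
    )
    discriminator = nn.Sequential(
        nn.Linear(FEATURES, 128), nn.ReLU(),
        nn.Linear(128, 1),  # a single "real vs fake" score
    )

    loss_fn = nn.BCEWithLogitsLoss()
    g_opt = torch.optim.Adam(generator.parameters(), lr=2e-4)
    d_opt = torch.optim.Adam(discriminator.parameters(), lr=2e-4)

    for step in range(200):
        # Random tensors stand in for a batch of genuine images or
        # voice features of the subject.
        real = torch.randn(32, FEATURES)
        fake = generator(torch.randn(32, LATENT))

        # 1. Train the discriminator to separate real from fake.
        d_opt.zero_grad()
        d_loss = (loss_fn(discriminator(real), torch.ones(32, 1)) +
                  loss_fn(discriminator(fake.detach()), torch.zeros(32, 1)))
        d_loss.backward()
        d_opt.step()

        # 2. Train the generator to fool the discriminator.
        g_opt.zero_grad()
        g_loss = loss_fn(discriminator(fake), torch.ones(32, 1))
        g_loss.backward()
        g_opt.step()

The "separate system" referred to above is the discriminator: as training progresses, the generator's output becomes progressively harder to distinguish from the genuine material.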

Danger of deepfakes

The attraction of deepfakes is simple: it is possible to make the subject do or say anything – literally putting words into the mouth of another person. What the subject is made to say or do is another matter: it can range from the trivial to the malign. Deepfakes are particularly dangerous in the world of politics given the nature of information wars. Whereas a scammer needs to produce a highly sophisticated deepfake to convince the audience that the cloned celebrity actually said particular words, the propagandist need not aim so high. A rudimentary deepfake, such as the one featuring President Zelenskyy, can do just as much damage.

That is because for those engaged in informational wars, it is not critical to deceive the public into believing the authenticity of the deepfake. Just as important is simply sowing a seed of doubt: the hope is that the video goes viral, is perhaps picked up by news bulletins and is even swiftly debunked. In this way, the public is made aware that doctored videos are being disseminated across the internet and is primed to treat any video they watch in future with scepticism.

This phenomenon has been referred to by academics as the "liar's dividend": saturate the internet with sufficient misinformation and disinformation that nothing will be believed and everything can be questioned. It is in this context that the Russian ambassador to the United Kingdom can tell the BBC that the independently verified CCTV images from the massacre at Bucha were computer-generated as part of a video game.

Before it is thought that this is exclusively the tool of the Kremlin and fellow autocratic regimes, it should be noted that in the last major elections in both the United Kingdom and the United States, politicians were prepared to share altered and distorted videos of their political opponents. In the United Kingdom, for example, a video of the shadow Brexit Secretary and Labour-leadership hopeful, Sir Keir Starmer, was crudely manipulated so as to make it appear that he stumbled when asked a question on daytime television. It was later posted by the official Twitter account of the Conservative Campaign Headquarters press office.

What can be done?

Unsurprisingly, there have been calls for action in the United Kingdom and abroad. In the run-up to the 2020 US presidential election, Facebook, Google and Twitter all announced steps to remove or label potentially harmful or misleading deepfakes. At the same time, engineers are using the very same technology that deepfakes rely on to detect and remove them. Very recently, Google prevented the use of one of its Google Research products, Colab, for the purpose of creating deepfakes.
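
By way of illustration, a detector of this kind is, at its core, a classifier trained to separate genuine material from synthetic material. The sketch below is again a purely hypothetical PyTorch illustration: it assumes each video frame has already been reduced to a numerical feature vector (random tensors stand in for these), whereas real-world detectors rely on far more elaborate feature extraction and carefully labelled training data.

    import torch
    import torch.nn as nn

    # A deepfake detector reduced to its essentials: a binary classifier
    # trained on feature vectors labelled "real" (1) or "fake" (0).

    FEATURES = 64  # hypothetical size of a per-frame feature vector

    detector = nn.Sequential(
        nn.Linear(FEATURES, 128), nn.ReLU(),
        nn.Linear(128, 1),  # logit: high means "real", low means "fake"
    )
    loss_fn = nn.BCEWithLogitsLoss()
    optimiser = torch.optim.Adam(detector.parameters(), lr=1e-3)

    for step in range(100):
        real = torch.randn(32, FEATURES)        # stand-in genuine frames
        fake = torch.randn(32, FEATURES) + 0.5  # stand-in deepfake frames
        features = torch.cat([real, fake])
        labels = torch.cat([torch.ones(32, 1), torch.zeros(32, 1)])

        optimiser.zero_grad()
        loss = loss_fn(detector(features), labels)
        loss.backward()
        optimiser.step()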

This game of digital whack-a-mole will go some way to stop the proliferation of dangerous deepfakes. But what about the law? What can the UK government do beyond merely leaning on the tech companies to do more? And what about the victim of a deepfake – what recourse is open in the United Kingdom to the person who never said those words or did those things?

It seems natural to suppose that there must be UK laws that can protect against malicious deepfakes. Analysed one way, they feel akin to a form of identity theft. However, just as in the case of identity theft, it is not the stealing of identity per se that brings liability, but the fraudulent actions that follow – so, in the case of deepfakes, it is necessary to identify an appropriate cause of action that arises from their production or proliferation to which a claimant can resort. This is not always an easy task, particularly in the law of England and Wales.

Intellectual property
The first step might be to analyse deepfakes from an IP perspective. At first blush, given that the debate takes place in the realm of the production of audio or video works, it might be thought that UK copyright law could come to the aid of the subject of the deepfake. However, on closer inspection, problems quickly emerge. For example, the subject of the deepfake will in many cases own the copyright in neither the base image onto which their face is placed nor the original images of their face which contributed to the production of the digital clone.

Furthermore, even if they did have a claim in copyright in any of the images contained within the deepfake video, the creator may be able to rely on a defence of fair dealing in England, for example, on the grounds of parody or caricature.

What about passing-off, another member of the IP family tree? Rihanna famously successfully sued Topshop for passing off in the United Kingdom when it used her image in one of its advertising campaigns without her permission.

Would a celebrity suing a deepfake creator for using their image without permission be any different? Perhaps not, but as a matter of law the deepfake would have to be sufficiently convincing that it would reasonably lead a consumer to think that the relevant celebrity had somehow endorsed the product in question. And even if that were the case, an action in passing off or false endorsement would be limited to those with the most famous of faces, as a critical element of a successful claim is that the claimant's image is known to be used to endorse or sponsor products.

Privacy or defamation
If UK IP law is not the most obvious fit, what about laws relating to defamation and the privacy of the individual?

The courts of England and Wales have traditionally been very reluctant to recognise any general freestanding right to privacy, preferring instead to rely on a patchwork of laws and judgments, particularly since the passing of the Human Rights Act 1998.

Under English law, the best course of action would likely be for a subject of a deepfake to seek to bring a claim in defamation. But, of course, that brings difficulties that are inherent in all cases of defamation – namely, having to demonstrate that:

  • the deepfake is, in fact, defamatory;
  • the deepfake has caused the subject serious harm; and
  • no defences are available to its creator.

This is quite apart from the practical difficulty of identifying the creator and the questions of jurisdiction that might well arise.

Comment

English law is not, therefore, well equipped to deal with deepfakes. The patchwork of laws that exists probably does not quite extend far enough to capture the peculiarities of this new technology.

One answer may be simply to require tech companies to do more to stop the proliferation of deepfakes. Indeed, on 16 June 2022 the European Commission published its strengthened Code of Practice on Disinformation, which has precisely that aim, with the threat of financial penalties for tech companies that fail to take sufficient action.

This model of oversight is also reflected in the Online Safety Bill proposed by the UK government, which puts the onus on tech companies, on pain of considerable fines, to police the darkest corners of their platforms. However, it is notable that a recent report by the Digital, Culture, Media and Sport Committee criticised the draft bill for failing adequately to address the "insidious" problem of deepfakes.

The other answer, hinted at by the committee, is to introduce new primary legislation: an anti-deepfake law to address the problem before it gets out of control.

Such new laws are starting to emerge in other jurisdictions. The United Kingdom will be watching keenly to see whether to adopt a legislative clone of such laws for itself. But it will just as equally be watching to see whether, in fact, these laws reflect the nature of deepfakes themselves: seemingly innocent and well-meaning at the beginning, but ultimately replete with unintended, harmful consequences.

For further information on this topic please contact Jack Kennedy at Wiggin by telephone (+44 20 7612 9612) or email ([email protected]). The Wiggin website can be accessed at www.wiggin.co.uk.