The word ‘deepfake’ is quickly gaining traction. But what exactly is it? A blend of ‘deep learning’ and ‘fake’, it refers to the artificial production, manipulation or modification of data to create a false representation of an existing person, object, place, or entity.
What’s the problem?
Media manipulation is not a new phenomenon, particularly in the entertainment industry. However, recent use of deepfake technology takes it a notch lower, deeper, as it were. Troublingly, it enables the unprecedented and more visually convincing spread of misinformation, the manipulation of public opinion, the creation of malicious or harmful content, and scamming.
Are there any laws which regulate deepfakes?
One would assume that the knight in shining armour leading the charge to defend victims of artificial manipulation of their image would be the newly proposed Artificial Intelligence Act (AI Act). Through it, the European Union (EU) hopes to harmonise the rules and obligations governing the development, market placement, and use of AI systems among member states.
The proposed AI Act will not bar the use of deepfakes outright, but attempts to regulate them through transparency obligations placed on the creators, who will be obliged to “disclose that the content has been artificially generated or manipulated”.
Whilst this piece of legislation can be viewed as a promising first step, certain issues persist. Enforceability against malicious content creators, especially those operating from outside the EU, remains an issue. Furthermore, it is not yet clear whether those who create a malicious deepfake in a personal capacity (as opposed to a professional one) will be subject to such transparency obligations.
Another relevant legal framework is the General Data Protection Regulation (GDPR). Given that content is frequently created using an individual’s personal data, including their image and voice, the content creator would be subject to GDPR obligations. Significantly, the processing of personal data requires a legal basis, such as the informed consent of the person depicted in the deepfake.
Although it may seem promising for victims, the reality is that relying on the GDPR for legal protection can be a rather meandering legal route. As with the AI Act, the most challenging obstacle remains enforceability, as in most cases it is very difficult to identify and hold accountable malicious deepfake creators.
The Digital Services Act (DSA) is another important legal framework, allowing individuals to notify an online platform of any content which they deem to be illegal. It is then up to the platform to take the necessary action. Recent discussions in the European Parliament focused on the introduction of a specific section in the DSA which would oblige very large online platforms that become aware of malicious deepfakes to provide a visible indication informing platform users of such malicious content.
The DSA’s key stumbling block is that it is not clear what a platform should deem to be ‘illegal’ and consequently remove. Furthermore, instead of proactively prohibiting the production of malicious deepfakes, this Act adopts a rather reactive approach, seeking only to eliminate illegal content that already exists.
Ok, so are we protected?
The bottom line is that while on paper it seems that individuals have ample legal safeguards against malicious deepfakes, in practice they may not be easily enforceable. Pending further breakthroughs on this front, it is of utmost importance to take the necessary precautions, including the following:
Be cautious of content that seems too good to be true.
Keep an eye out for clues that the content may be a deepfake (such as strange distortions or artefacts in the video or audio), bearing in mind that technology is bound to improve and such giveaways will most likely be less evident.
Be careful before sharing deepfake content, as it may contribute to its spread and help to legitimise it.
This article was first published in the Times of Malta.