Given that the GDPR is five years old, you would think the law relating to data would be settled. However, there is a cluster of new laws coming from the European Union (EU), and it can be tricky to understand what they all mean. We set out some of the big takeaways, from a data protection point of view, from the following three pieces of legislation:
- The Digital Markets Act (DMA)
- The Digital Services Act (DSA)
- The AI Act
The Digital Markets Act
The DMA entered into force on 1 November 2022 and began to apply six months later, i.e. on 2 May 2023.
However, the core obligations under the DMA will apply only six months after an entity is designated as a “gatekeeper”. Therefore, we expect that the DMA will become fully applicable only around March 2024.
The DMA is part of a package of measures proposed by the European Commission to introduce more competition in digital markets and protect consumers while promoting data mobility and interoperability.
The DMA is principally a competition law, not a data protection law, but it will impact how the largest companies can handle data.
Who is the DMA targeting?
The DMA targets so-called “gatekeepers”, i.e. corporate groups that have a significant impact on the internal market, so its impact is only likely to be felt by the big technology companies.
Big tech companies will be designated as “gatekeepers” by the European Commission and once designated, gatekeepers will be subject to an additional level of regulation over and above other companies.
For example, they will be restricted in their ability to share data across their services without user consent.
Conversely, gatekeepers will also be obliged to share additional information with advertisers around how their ads perform, i.e. enhanced transparency.
Key takeaway from the DMA
The DMA will not directly impact most businesses, given their size, but it will have consequences for how the bigger technology players handle data in Europe.
The Digital Services Act
Another piece of legislation is the Digital Services Act, or the DSA, which came into force on 16 November 2022.
The DSA establishes legal rules for online platforms operating in the EU, including:
- Social media platforms
- Online marketplaces, and
- Search engines
The DSA defines an “online platform” technically as “a hosting service that, at the request of a recipient of the service, stores and disseminates information to the public”.
Broadly, the DSA seeks to make online platforms more accountable for the content they host and to strengthen user rights and protections. The DSA’s objectives are to:
- Effectively combat illegal activities
- Reinforce the fundamental rights of individuals, and
- Improve the free movement of services within the EU
However, it also contains several provisions which directly impact data protection considerations.
Some of those particularly noteworthy obligations include:
- Bans on targeted advertising on online platforms by profiling children or based on special categories of personal data such as ethnicity, political views or sexual orientation;
- New obligations for the protection of minors on any platform in the EU – the DSA contains a new open-ended obligation to “put in place appropriate and proportionate measures to ensure a high level of privacy, safety, and security of minors, on their service” (emphasis added);
- Rules on recommender systems, i.e. systems that decide what content is visible to the recipient and in which order, using parameters set by the platform. There will be an obligation to set out in plain and intelligible language, in the online platform’s terms and conditions, the main parameters used in their recommender systems, as well as any options for the recipients of the service to modify or influence those main parameters. Very large online platforms and search engines, i.e. those with more than 45 million average monthly active users in the EU, have an obligation to provide at least one option which is not based on profiling.
Key takeaway from the DSA
If you are an online platform there are additional data protection obligations to be aware of in the DSA. These obligations increase if you are a very large online platform or search engine.
Note, however, that not all online businesses will fall under the DSA’s definition of “online platform”, so it is important to assess, on a case-by-case basis, which definition, if any, applies to you.
The AI Act
It is important to note that, currently, artificial intelligence (AI) is principally regulated by the GDPR. In other words, the current setup is that the use of personal data to develop or deploy AI is subject to regulation, whereas AI itself is only indirectly regulated.
The AI Act will change this.
It is a ground-breaking piece of legislation dedicated to regulating AI systems. Its framework has more in common with EU general product safety laws, and the AI Act will regulate all types of products containing AI, including:
- Medical devices
- Biometric tech
- Recruitment tech
- Education tech, and
- Insurance tech
Who are the targets?
The scope of application of the AI Act will be very broad.
Deploying or using qualifying AI systems in the EU will trigger compliance regardless of the location or group structure of the provider.
Other actors in the AI economic chain, like importers, distributors and deployers, will be subject to regulation also.
Qualifying AI systems will be regulated on a sliding scale based on the risk posed by the intended use of that AI. The rules establish obligations for providers and deployers depending on the level of risk the AI system can generate.
- Firstly, AI systems posing an unacceptable level of risk to people’s safety will be prohibited outright. Examples include social scoring and real-time remote biometric identification.
- The second category, high-risk AI, will generate the most change for providers of AI systems.
- This is the real focus of the AI Act, and “high-risk” AI will be subject to extensive regulation.
- The basic idea is that certain types of “high-risk” AI will be subject to a certification and pre-approval process before they can be launched in the EU.
- Providers of AI systems in this category, including recruitment and insurance AI systems, will be subject to a conformity assessment regime involving CE marking, pre-market assessments and post-market monitoring.
- Compliance will be based on a list of seven detailed requirements, which will likely cause many providers to significantly adjust their engineering processes and product procedures to ensure compliance.
- This conformity assessment regime will be very new to most tech companies.
- Deepfakes and chatbots will be subject to transparency obligations, while the remaining limited-risk and minimal-risk categories will largely fall outside the regulation.
Of interest to many companies seeking to integrate AI such as ChatGPT and Anthropic’s Claude into their platforms is the fact that foundation models, generative AI and “general purpose” AI will also be regulated. However, it is the providers of these types of AI that will bear the vast majority of the compliance obligations.
What does this mean in practice?
One of the big unknowns at this stage is how many use cases will fall within the scope of “high-risk” AI, and this threshold question will define the overall impact of the legislation.
This is because high risk AI will be regulated like physical hardware.
From a practical perspective, this is a huge shift from the way in which software is developed and deployed. Usually, cutting-edge software is released in “beta” form as a “minimum viable product” and is then iterated on, sometimes again and again, based on user feedback. In contrast, hardware needs to work as soon as it is released.
This means two things in practice.
- First, high-risk AI will need to undergo a conformity assessment and receive the equivalent of a CE marking before it can be deployed.
- Second, extensive documentation and safety requirements will apply in relation to high-risk AI systems. For example, there will need to be detailed risk management processes and data governance practices around the training data, to avoid introducing bias into the system. There will also be an “explainability” requirement, i.e. the AI must be “designed and developed in such a way to ensure that their operation is sufficiently transparent to enable users to interpret the system’s output and use it appropriately”.
Important dates to note
Unlike the DMA and the DSA, the AI Act has not yet been adopted. However, it is being reported that, apart from narrow State and police derogations for the use of remote biometric monitoring, the vast majority of the key points are now agreed as we approach the final lap – the Trilogue negotiations, i.e. an informal interinstitutional negotiation bringing together representatives of the European Parliament, the Council of the European Union and the European Commission.
Current indications are that it will be adopted later this year and there will be a 2-year implementation period. Assuming that is correct, these obligations will not bite until 2025 or 2026.
Key takeaways from the AI Act
The position on most aspects of the AI Act is now known.
The regulation of technology products under a market conformity assessment regime will significantly disrupt the tech industry.
If you use or are considering using AI then you should monitor the various developments in the AI Act and get “plugged in” to the various obligations that could apply to you, especially if you are using a “high risk” AI.
At the same time, data protection issues and contractual considerations should be a core part of any “AI strategy” or system.
There are several new laws in this area which go beyond the pure prism of privacy and are perhaps better described as regulating technology more generally.
It is important to remember that even though these three separate pieces of legislation have distinct objectives and goals which at first glance may not seem “data protection” in nature, they impose additional, and often burdensome, data protection obligations which must be considered.