European Council adopts Digital Services Act
European Union adopts NIS2 Directive
New liability rules for AI products
EU supervisory authorities warn World Cup visitors to pay close attention to digital security
Council of the European Union approves next version of AI Act
This article covers key EU privacy and cybersecurity regulatory and legal developments from October 2022 to December 2022.
European Council adopts Digital Services Act
Following on from the European Parliament's adoption of the Digital Services Act (DSA) earlier this year, the European Council formally adopted the DSA on 4 October 2022. The DSA aims to create a safer and more accountable online environment by imposing obligations on providers of online intermediary services (including online marketplaces and social media) and increasing obligations around transparency and accountability.
The European Council explains that:
[a]mongst other things, the DSA:
- lays down special obligations for online marketplaces in order to combat the online sale of illegal products and services;
- introduces measures to counter illegal content online and obligations for platforms to react quickly, while respecting fundamental rights;
- better protects minors online by prohibiting platforms from using targeted advertising based on the use of minors' personal data as defined in EU law;
- imposes certain limits on the presentation of advertising and on the use of sensitive personal data for targeted advertising, including gender, race and religion;
- bans misleading interfaces known as 'dark patterns' and practices aimed at misleading users.
Larger providers with "significant societal impact" are subject to stricter rules under the DSA and may face a shorter transitional period than smaller providers.
European Union adopts NIS2 Directive
On 10 November 2022, the EU Parliament and the Council of the European Union each adopted the second EU Network and Information Security (NIS) Directive (the NIS2 Directive). The NIS2 Directive will replace the existing NIS Directive and is intended to address deficiencies therein, including by:
- expanding its scope to apply to more sectors (based on their criticality for the economy and society) and introducing a size cap rule to identify regulated entities;
- removing the distinction between operators of essential services and digital service providers and replacing this with a new system that will classify in-scope entities as essential or important, with a different regime for each category;
- requiring in-scope organisations to address cybersecurity risk in their supply chains;
- establishing the European Cyber Crises Liaison Organisation Network to support coordinated management of large-scale cybersecurity incidents and crises; and
- aligning with sector specific legislation including the Digital Operational Resilience Act and the directive on the resilience of critical entities.
The NIS2 Directive will enter into force 20 days after its publication in the Official Journal of the European Union. EU member states will then have 21 months to transpose it into national law. Organisations that fall within the scope of both the NIS2 Directive and the UK NIS regime will then be required to comply with divergent rules.
New liability rules for AI products
On 19 October 2022, the European Commission published two proposals aimed at updating product liability rules for the digital age: the Product Liability Directive (PLD) and the Artificial Intelligence (AI) Liability Directive (AILD). These two directives are also said to align with the objectives of the European Commission's White Paper on AI and the 2021 AI Act proposal, setting out a framework for excellence and trust in AI.
PLD
The PLD adapts existing product liability rules to address compensation for defective products emerging from new digital technologies, such as smart products and AI. The PLD confirms that victims can claim compensation if software or AI systems cause damage, including personal injury. The proposed rules also apply to manufacturers and other businesses that modify or upgrade products already on the market.
AILD
The AILD seeks to ensure that victims of harm caused by AI technology can obtain compensation in the same way as victims harmed in any other circumstances. The new rules introduce two main changes to achieve this:
- The victims' "burden of proof" will be eased by a "presumption of causality". Where a victim shows that a manufacturer was at fault for not complying with an obligation relevant to the harm caused, and proves a causal link with the AI system's performance, courts can presume that the non-compliance caused the damage.
- Victims will have a right of access to evidence by obtaining a court order to disclose relevant and necessary evidence about high-risk AI systems (subject to appropriate safeguards).
Impact and actions
If approved, the PLD will, as well as clarifying that liability exists for damage caused by AI, require companies to disclose evidence that a claimant needs to prove their case in court, addressing the information gap between consumers and manufacturers. Non-EU manufacturers must also have an EU-based representative from whom consumers can seek compensation. Under the AILD, providers of AI systems can be held liable for harm caused by AI technology where victims can prove a "causal link" between the damage caused and the provider's failure to comply with certain obligations.
EU supervisory authorities warn World Cup visitors to pay close attention to digital security
Individuals visiting Qatar for the World Cup were required to install a number of applications on their mobile phones. These included a COVID-19 tracking app (Ehteraz) and the official World Cup app (Hayya). Privacy regulators from several European countries warned that the apps were likely to collect information about users without their knowledge. For instance, the apps collected data on users' call history and the apps installed on the phone, and may have had access to users' photos.
Council of the European Union approves next version of AI Act
With organisations increasingly relying on AI technology, EU regulators are turning their attention to regulating AI effectively, seeking to realise its benefits while giving individuals confidence that AI is being deployed appropriately and lawfully.
The European Commission's AI Act proposal has undergone further changes following review by EU member states. The Council of the European Union approved a compromise version of the Act on 6 December 2022. The European Parliament is expected to vote on the draft by the end of March 2023, with a view to adopting the Act by the end of 2023.
For further information on this topic please contact Paula Barrett or Jonathan Palmer at Eversheds Sutherland by telephone (+44 20 7919 4500) or email ([email protected] or [email protected]). The Eversheds Sutherland website can be accessed at www.eversheds-sutherland.com.