
With organisations increasingly relying on artificial intelligence (AI) technology, EU regulators are turning their attention to the effective regulation of AI, aiming to realise its benefits while giving individuals confidence that AI is being deployed appropriately and lawfully.

The European Commission's AI Act proposal has undergone further changes following review by EU member states. The Council of the European Union approved a compromise version of the Act on 6 December 2022. The European Parliament is expected to vote on the draft by the end of March 2023, with a view to adopting the Act by the end of 2023.

The Act is expected to set the framework for the regulation of AI both within and outside the European Union. Much like the EU General Data Protection Regulation (GDPR) in terms of impact, the Act will have extra-territorial scope, extending to providers and users outside the European Union where the output is used in the European Union. The Act is anticipated to become a benchmark AI law that other jurisdictions may look to when developing their own laws (much as the EU GDPR has become a standard on which some other countries' laws are heavily based).

Member states are given authority to set rules on penalties, including administrative fines, applicable to infringements of the Act. The Act requires penalties to be effective, proportionate and dissuasive, while taking into particular account the size, interests and economic viability of small and medium-sized enterprise (SME) providers, including start-ups. The Act does lay down fixed penalties for certain infringements, the highest fine being €30,000,000 or 6% of a company's total worldwide annual turnover (3% in the case of an SME or start-up) for non-compliance with the prohibited AI practices laid down in article 5. The proportionate caps for SMEs suggest a willingness by the Commission to support innovation, while the huge potential fines for certain infringements show how dissuasive enforcement action is intended to be.

Key changes

The compromise text outlines a number of changes since the first draft, including the following.

Narrower scope of AI systems
In order to ensure that the definition of an "AI system" provides sufficiently clear criteria for distinguishing AI from more classical software systems, the compromise text narrows the definition to systems developed through machine learning and/or logic-based approaches that generate predictions, recommendations or decisions. The Act states that this definition is intended to be flexible enough to accommodate future developments in technology. The text makes clear that an AI system can be designed to operate with varying levels of autonomy and some human input; however, a system that uses rules defined solely by natural persons to automatically execute operations should not be considered an AI system.

Extension of prohibited AI practices
The compromise text extends the prohibition on using AI for social scoring to private actors (so the prohibition cannot be circumvented by outsourcing social scoring to third-party contractors). The prohibition on exploiting the vulnerabilities of a specific group of persons has also been extended to cover persons who are vulnerable due to their social or economic situation. It is worth noting that non-compliance with these prohibitions is subject to the highest possible fines under the Act.

Classification of high-risk systems
AI systems that are not likely to cause "serious fundamental rights violations" or other significant risks fall outside the classification of a high-risk system. When classifying an AI system as high risk, the significance of its output to the relevant action or decision is to be taken into account, in particular whether the output is purely accessory to that action or decision.

Clarification of responsibilities of "provider" of AI systems
The compromise text includes changes clarifying the allocation of responsibilities and roles. Articles 13 and 14 set out certain information that providers must supply to enable users to understand and use the system appropriately, including the contact details of the provider of the system, its intended purpose and the human oversight measures in place to facilitate the interpretation of the system's outputs. A new article 23(a) specifies when a natural or legal person is a "provider" of an AI system and indicates more clearly the situations in which other actors in the value chain are obliged to take on the responsibilities of a "provider".

Support for innovation
The provisions concerning measures in support of innovation have been substantially modified in the compromise text in an effort to achieve the Act's objective of creating an innovation-friendly legal framework and to promote evidence-based regulatory learning. The Act clarifies that AI regulatory sandboxes, which establish a controlled environment for the development, testing and validation of innovative AI systems under the direct supervision and guidance of the national competent authorities, should also allow for the testing of innovative AI systems in real-world conditions. This is intended to let organisations innovate without regulatory risk during the development and testing stages.

Support for smaller companies
In order to alleviate the administrative burden for smaller companies, article 55 of the compromise text includes a list of actions to be undertaken by the Commission to support such operators, including providing SMEs with priority access to regulatory sandboxes and establishing dedicated channels of communication with SMEs.

Supervision and guidance
The Act establishes a European Artificial Intelligence (EAI) Board, composed of one representative per member state. The compromise text provides greater autonomy to the EAI Board, whose objective is to advise and assist the Commission and the member states in order to facilitate the consistent and effective application of the Act, including by cooperating with market surveillance authorities and the Commission on changes required to the Act and the development of relevant guidance. In what will be welcome news to AI providers, new article 58a obliges the Commission to produce guidance on the application of the Act.


It is anticipated, if desired timelines are met, that the AI Act will be adopted by the end of 2023. Industry commentators predict there may be a two-year grace period following adoption (in the same way as for the EU GDPR).

Organisations deploying AI systems may now wish to consider the AI Act (in its current form), in particular the responsibilities of providers with respect to those systems, in order to develop their processes accordingly. In a similar vein to the "privacy by design" requirement of the EU GDPR, the AI Act (if passed into law) will mean that AI system providers need to bear these obligations in mind when developing AI systems.

This proposed new law is a timely reminder that organisations that are subject to EU (and UK) privacy laws already have to ensure that, among other things, they:

  • carry out impact assessments for high-risk processing;
  • consider data subject rights; and
  • are accountable.

All of the privacy law principles already apply to how personal data is used and processed through the lens of AI. The AI Act will, if it becomes law, add to the protection afforded to data subjects and ensure tighter regulation (not limited to data privacy) while seeking to strike a balance that encourages innovation and enjoyment of the benefits of AI.

For further information on this topic please contact Lorna Doggett or Carolyn Sullivan at Eversheds Sutherland by telephone (+44 20 7919 4500) or email ([email protected] or [email protected]). The Eversheds Sutherland website can be accessed at