The EU Commission has published proposals for the regulation of artificial intelligence (AI). There's a lot to digest, but here we set out some of the key points contained in the proposed regulations and their accompanying documents, including why they are relevant in the UK.
The proposed objectives of the regulations are to:
- ensure that AI systems placed on the Union market and used are safe and respect existing law on fundamental rights and Union values;
- ensure legal certainty to facilitate investment and innovation in AI;
- enhance governance and effective enforcement of existing law on fundamental rights and safety requirements applicable to AI systems;
- facilitate the development of a single market for lawful, safe and trustworthy AI applications and prevent market fragmentation.
However, the regulations should be seen as one part of a wider EU effort, alongside its Co-ordinated Plan with Member States, to strengthen AI uptake, investment and innovation.
The regulations have been drafted with the above in mind.
How does the proposed regulation define AI?
Defining AI is tricky, and various definitions are in use across industry and among regulators. For its regulations, the EU recognises that the definition needs to be specific enough to provide certainty but flexible enough to accommodate technological developments. As a result, AI is defined as:
"software that is developed with one or more [specified] techniques and approaches [e.g. machine learning] and can, for a given set of human-defined objectives, generate outputs such as content, predictions, recommendations, or decisions influencing the environments they interact with"
This is different to the definition of AI adopted by the EU Parliament in October 2020 for its framework for ethical AI (which we wrote about here). Whether anything turns on the difference is not immediately apparent, but it does illustrate the difficulty of pinning down a definition.
To whom and what would the proposed regulations apply?
The regulations would apply to:
- providers placing on the market or putting into service AI systems in the Union, irrespective of whether those providers are established within the Union or in a third country;
- users of AI systems located within the Union;
- providers and users of AI systems that are located in a third country, where the output produced by the system is used in the Union.
The regulations would not apply to:
- AI systems developed or used exclusively for military purposes;
- public authorities in third countries or international organisations using AI in the framework of international agreements for law enforcement or judicial co-operation; or
- the application of the EU's forthcoming Digital Services Act.
What do the regulations propose?
The regulations take a risk-based approach, imposing different requirements on different types and uses of AI. There's a lot of detail in the proposed regulations, but in summary (in the words of the EU press release):
Unacceptable risk: AI systems considered a clear threat to the safety, livelihoods and rights of people will be banned. This includes AI systems or applications that manipulate human behaviour to circumvent users' free will (e.g. toys using voice assistance encouraging dangerous behaviour of minors) and systems that allow ‘social scoring' by governments.
High-risk: AI systems identified as high-risk include AI technology used in:
- Critical infrastructures (e.g. transport), that could put the life and health of citizens at risk;
- Educational or vocational training, that may determine the access to education and professional course of someone's life (e.g. scoring of exams);
- Safety components of products (e.g. AI application in robot-assisted surgery);
- Employment, workers management and access to self-employment (e.g. CV-sorting software for recruitment procedures);
- Essential private and public services (e.g. credit scoring denying citizens opportunity to obtain a loan);
- Law enforcement that may interfere with people's fundamental rights (e.g. evaluation of the reliability of evidence).
High-risk AI systems will be subject to strict obligations before they can be put on the market:
- Adequate risk assessment and mitigation systems;
- High quality of the datasets feeding the system to minimise risks and discriminatory outcomes;
- Logging of activity to ensure traceability of results;
- Detailed documentation providing all information necessary on the system and its purpose for authorities to assess its compliance;
- Clear and adequate information to the user;
- Appropriate human oversight measures to minimise risk;
- High level of robustness, security and accuracy.
Limited risk, i.e. AI systems with specific transparency obligations: When using AI systems such as chatbots, users should be aware that they are interacting with a machine so they can take an informed decision to continue or step back.
Minimal risk: The legal proposal allows the free use of applications such as AI-enabled video games or spam filters. The vast majority of AI systems fall into this category. The draft Regulation does not intervene here, as these AI systems represent only minimal or no risk for citizens' rights or safety.
The regulations also propose potentially significant fines for non-compliance: for the most serious breaches, up to €30 million or 6% of total worldwide annual turnover, whichever is higher.
What relevance does this have in the UK after Brexit?
As a result of Brexit, the regulations would not apply directly in the UK. However, they will still have an impact, for at least two reasons:
- the regulations apply to providers of AI systems who are based outside the EU but place AI systems on the market, or put them into service, in the Union. This is to prevent those inside the EU from contracting out high-risk AI systems to providers outside the Union while the risk of harm to EU citizens continues.
- there has been debate in the UK about legislation and regulation of AI, whether for specific sectors or across sectors. The EU's objectives are similar to those being discussed in the UK (for example, the UK AI Council's call for a National AI Strategy, which we wrote about). UK governments and regulators will look to the EU regulations when deciding whether legislation or regulation is required and what it may look like.
Much of what the regulations propose may already be happening in practice in some form. For example, the requirements for high-risk AI systems include detail about the required risk management systems, data governance, technical documentation, and record keeping. However, some of the regulations may require AI developers and users to rethink what they are doing. For example, "high-risk AI systems shall be designed and developed in such a way to ensure that their operation is sufficiently transparent to enable users to interpret the system’s output and use it appropriately."
In any event, should the regulations ultimately proceed and take effect, users and providers of AI systems will want to ensure compliance given the significant penalties for non-compliance. Given the detail of the proposed regulations, and the frequent complexity of AI systems, thought on compliance is likely to be required sooner rather than later.
Europe fit for the Digital Age: Commission proposes new rules and actions for excellence and trust in Artificial Intelligence