A shorter version of this article was originally published on ComputerWeekly.com on 3 March 2020.


The new President of the European Commission (EC), Ursula von der Leyen, promised to put forward legislation “for a coordinated European approach on the human and ethical implications of artificial intelligence” within 100 days of taking office on 1 December 2019. On 19 February 2020 the EC published for consultation its white paper “On Artificial Intelligence – A European approach to excellence and trust”, giving us a clear view of the substantial changes the EC has in mind.

Law and regulation are generally technology agnostic: there are laws and regulations which apply to technology, but most are not specific to it. The same is true for artificial intelligence (AI) solutions, used in this article as an umbrella term for a wide range of algorithm-based technologies that solve complex tasks, often tasks which until recently required human intelligence. There are laws and regulations which apply to AI, but for most types of AI that businesses are deploying, the applicable laws and regulations are not specific to AI.

As the UK exits the EU and the EC brings forward a new framework for AI regulation, the UK finds itself at a crossroads. Will it choose to follow the new European approach and bring in laws targeted specifically at AI, or go its own way? First, a look at the current state of AI regulation in the UK.

Where are we now?

As the importance of AI to the future of business, work and government has become apparent, regulators and law makers in the UK have paid increasing attention to how existing laws should apply to AI being deployed by businesses and government.

Personal Data and AI

In the UK, the most active area of law and regulation relevant to the types of AI being deployed by businesses has been the use of data relating to identifiable individuals, “personal data” in the language of the key European and UK legislation, the General Data Protection Regulation (GDPR) and the Data Protection Act 2018 (DPA).

In December 2019 the Information Commissioner’s Office (ICO) and the Alan Turing Institute, the UK’s national institute for data science and artificial intelligence, published detailed guidance on explaining AI decisions. Being able to explain how an AI decision is made is key to being able to demonstrate that a business is complying with its obligations under GDPR, DPA and the Equality Act 2010 (EA 2010).

The obligations of transparency, accountability and non-discrimination set out in the GDPR, the DPA and the EA 2010 are not specific to AI, but they will be relevant for any AI implementation which uses personal data to make decisions, predictions or inferences about individuals.

The ICO has published separate guidance making it clear that the ICO expects businesses intending to use AI in relation to personal data to first carry out a data protection impact assessment whenever the use of AI is combined with certain other criteria, and has issued a formal opinion on the use of live facial recognition AI technology by law enforcement in public places.

Under the GDPR, individuals have a right not to be subject to a solely automated decision which produces legal or similarly significant effects, subject to certain exceptions. Whilst the relevant article of the GDPR does not refer expressly to AI, it does apply to AI processes.

AI and the Public Sector

Additional guidance to support the public sector’s lawful use of AI is also now being called for. The UK Parliament’s Committee on Standards in Public Life published a report on 10 February 2020 asking for guidance on how public bodies should best comply with the EA 2010 when using AI. In the same report it asked the government to clarify how the multiple AI ethics frameworks published over the past two years by the G20, the EC and others should be reconciled and applied by the public sector.

This is part of the growing awareness of AI and a desire to ensure existing laws can be understood and applied in the AI context.

Autonomous Vehicles

One type of AI in relation to which new laws and regulations are being considered is autonomous vehicles. Following on from the Automated and Electric Vehicles Act 2018, the Law Commission is carrying out a three-year project to help ensure the UK’s laws are ready for the arrival of self-driving vehicles.

In its first consultation, which began in 2018, the Law Commission focused on questions of safety assurance, ongoing monitoring and maintenance, and the need to adapt the rules of the road for use by AI. The second consultation is currently underway and focuses on services which will use automated vehicles to supply road journeys to individuals without a human driver. A third consultation is due later in 2020, which will draw on responses to the first two papers and set out detailed proposals for the way forward, before a final report is issued in 2021 setting out recommendations on all issues, which the UK government will then consider.

These developments will be of interest to vehicle manufacturers, transport service providers, technology companies involved in developing AI for autonomous vehicles and businesses which depend heavily on road haulage, although most businesses will not be directly impacted by the proposed legislative changes.

Where We Are

Other than as outlined above, AI is treated by the law like any other type of software or product. No AI-specific laws or regulations apply, and no regulator as yet has sole responsibility for governing uses of AI. This is broadly aligned with the approach taken by the EU to date, but the EU’s approach appears to be about to change.

The EC has been paying close attention to AI for a number of years, but the election of a new EC President does seem to have triggered a desire to take more direct action.

Under the previous regime of Jean-Claude Juncker, the EC was supportive of the development of AI and keen to ensure EU citizens had trust in AI through its “Coordinated Plan on AI” and the “Ethics Guidelines for Trustworthy Artificial Intelligence”, published in April 2019. The ethics guidelines do not relate to the application of particular legislation, nor do they have the force of law. Accordingly, whilst many businesses took note of the guidance and made use of the assessment guidelines, these did not have the profile of a regulatory change.

It seems that Ursula von der Leyen wants the EU to enact that regulatory change, and for member states to take a more hands-on approach to regulating AI. As well as suggesting targeted changes to the European liability framework, the EC white paper published on 19 February 2020 consults on a possible new AI-specific regulatory framework, which would impose significant additional legal requirements in relation to the development and use of “high-risk” AI.

High-Risk AI

The EC is proposing that a set of binding requirements would apply to developers and users of high-risk AI. In order to distinguish between high-risk and low-risk AI, a list of high-risk sectors would be identified (such as healthcare and transport), along with a more abstract definition of high-risk use. The latter would focus on AI which produces legal effects for individuals or companies, poses a risk of injury, death or significant damage, or has other effects which cannot reasonably be avoided.

The AI would have to satisfy both the sector and use criteria in order to be considered high-risk. For example, an AI system which is used in the healthcare sector but relates to booking appointments would not be caught, as it would not be sufficiently high-risk to justify intervention. There would also be exceptional purposes treated as high-risk irrespective of sector, such as the use of AI in recruitment processes or for remote biometric identification.

The EC has also set out its suggestions for the types of mandatory legal requirements which would apply to high-risk AI, and these are extensive:

  • Training data – ensure the AI is trained on data sets that are sufficiently broad to cover all relevant scenarios, and sufficiently representative to avoid discriminatory outcomes.
  • Data and record keeping – keep for a reasonable period of time accurate records of the training data and the programming and training methodologies, processes and techniques used to build, test and validate the AI. Such data would need to be made available to a regulator on request.
  • Information provision – provide a description of the AI’s capabilities and limitations, including the expected level of accuracy. Inform individuals when they are interacting with AI and not a human.
  • Robustness and accuracy – build the AI so that it will correctly reflect the indicated level of accuracy throughout its life cycle and generate reproducible outcomes, and ensure the AI is resilient against overt attacks and other attempts to manipulate its data or algorithms.
  • Human oversight – this would vary by case, but could include requiring AI outputs to be reviewable by humans, or monitored by humans and deactivated if the human identifies an issue.
  • Specific requirements – additional obligations would apply to certain uses of AI, such as use of AI to enable remote biometric identification.

Enforcement and Governance

Perhaps the most surprising recommendation in the white paper is the EC’s proposed enforcement regime. The EC is suggesting that, to ensure high-risk AI meets the mandatory requirements, a “prior conformity assessment” should be carried out. This could include procedures for testing, inspection or certification, and checks of the algorithms and of the data sets used during development. Additional ongoing monitoring may also be mandated, where appropriate. The conformity assessments would be carried out by notified bodies, identified by each member state.

The introduction of a need for prior regulatory consent for high-risk AI would be a substantial change from the current, hands-off, approach. Developers and users of high-risk AI would have to weigh carefully the additional costs of compliance with the regulatory regime, which could be substantial, against the business potential of their high-risk AI products and services.

In terms of governance, the EC suggests that member states should be required to appoint an authority responsible for monitoring the overall application and enforcement of the regulatory framework for AI. This would be similar to the network of data protection authorities, represented in the UK by the ICO.

Voluntary Labelling

For AI which is not high-risk, the EC has set out the option of introducing a voluntary labelling scheme. Businesses opting in would have to comply with certain requirements and, in return, would receive the right to use a quality label, signalling that their products meet the relevant European requirements.

Geographical Reach

Unsurprisingly, the EC is suggesting that the new regulatory regime would apply to everyone providing AI-enabled products or services in the EU, irrespective of their country of origin. From the EC’s point of view this means that anyone wishing to access Europe’s markets will need to comply; there is no advantage to being outside the EU if you are servicing or selling to EU customers. It will also extend European influence beyond the EU’s borders, as non-EU companies will have to comply if they wish to serve EU customers.

It is clear from the white paper that the EC is still at an early stage in its regulatory journey. The outline proposed is high level and would need substantial further consultation and development before it becomes legislation. By the time the EC brings forward its full proposals for AI regulation, it seems almost certain that the UK Brexit transition period will have ended, and the UK will not be bound to follow the EU approach.

This puts the UK in an interesting position. Rather than starting out aligned with Europe and later choosing whether to diverge from the EU position or maintain the status quo, the UK will have to decide at the outset whether it wishes to follow the EU or to maintain a separate regime.

Many of the points raised by the EC’s white paper had already been considered to some extent by the UK’s House of Lords Select Committee on Artificial Intelligence, which summarised its views in its report “AI in the UK: ready, willing and able?” in April 2018. The committee concluded that, at that stage, blanket AI regulation would be inappropriate and that existing regulators, such as the ICO, were best placed to consider the impact of AI on their sectors of expertise.

More recent insight into the possible thinking of the UK government can be found in a February 2019 blog post by Dominic Cummings, written before he took up his current role as Chief Special Adviser to the Prime Minister. He wrote:

“In many areas, the EU regulates to help the worst sort of giant corporate looters defending their position against entrepreneurs. Post-Brexit Britain will be outside this jurisdiction and able to make faster and better decisions about regulating technology like genomics, AI and robotics. Prediction: just as Insiders now talk of how we ‘dodged a bullet’ in staying out of the euro, within ~10 years Insiders will talk about being outside the Charter/ECJ and the EU’s regulation of data/AI in similar terms (assuming Brexit happens and UK politicians even try to do something other than copy the EU’s rules).” (emphasis added)

If this blog post reflects the views of the current government, with its substantial majority, the UK and the EU will take very different approaches to AI regulation. The EU’s direction of travel is in part mapped out. Although much more detail is required, the EC has expressed a strong desire to establish a regime of national regulators overseeing the enforcement of mandatory requirements and prior clearances for high-risk AI.

In contrast, the findings of the House of Lords Select Committee and the thoughts of those at the top of the UK’s government indicate a reluctance to put in place EU-style rules and a general regulator for AI. UK companies wishing to sell AI-related products or services into the EU would have to comply with the new European regime, but there could be an advantage to UK companies in being able to develop AI products and services and launch them in the UK market, before expanding successful operations into the EU (and at that point complying with the EU rules).

In this context the UK government might well decide that nothing has changed since the House of Lords Select Committee report and that no further general regulation for AI is needed. This would not prevent it from continuing to progress the reforms required to facilitate the introduction of autonomous vehicles, or from encouraging the issuing of guidance to help businesses and the public sector apply existing laws to their use of AI, or from making minor changes to ensure that the UK’s matrix of laws and regulations continues to make sense in the context of AI.

Confirmation from the UK government, setting out its intentions for AI regulation for this five year parliament, would be immensely helpful for developers of AI and businesses wishing to take advantage of the opportunities AI offers. At the moment it seems that whilst the EU proceeds on its newly set course, the UK is likely to take its own route.