On 18 July 2022, the United Kingdom (UK) government set out its new proposals for regulating the use of artificial intelligence (AI) technologies while promoting innovation, boosting public trust, and protecting data. The proposals reflect a less centralised and more risk-based approach than in the EU’s draft AI Act.

The proposals coincide with the introduction to Parliament of the Data Protection and Digital Information Bill, which includes measures to use AI responsibly while reducing compliance burdens on businesses to boost the economy.

What is considered ‘AI’?

One of the key challenges of regulating the use of AI is undoubtedly the pace of technological advancement in this area. The UK government’s current proposal is to set out the core characteristics of AI, whilst allowing regulators to set out more detailed definitions according to their specific sectors. The government is of the opinion that it should regulate the use of AI, not the technology itself. The idea is that this approach should future-proof the UK’s AI regulation without hindering innovation.

Two core characteristics have been identified in the proposals:

  • “Adaptiveness”: meaning that AI systems often partially operate on the basis of instructions that have not been expressly programmed with human intent, having instead been learnt through training data; and
  • “Autonomy”: meaning that AI systems often automate complex cognitive tasks, operating with a high degree of independence from human direction.

Self-driving car control systems and natural language processing were used as examples of systems that meet the above characteristics.

Cross-sectoral principles for AI regulation

The government has set out six cross-sectoral principles which will apply to all actors throughout the AI lifecycle. These principles will then be interpreted and implemented in practice by existing regulators, such as Ofcom or the Competition and Markets Authority.

Regulators will be encouraged to consider “lighter touch” options, such as guidance, voluntary measures or creating sandbox environments before AI technology is introduced into the wider market.

The principles aim to offer a steer to regulators and assist them in adopting a proportionate and risk-based approach to AI regulation:

  1. Ensure that AI is used safely.
  2. Ensure that AI is technically secure and functions as designed.
  3. Make sure that AI is appropriately transparent and explainable.
  4. Embed considerations of fairness into AI.
  5. Define legal persons’ responsibility for AI governance.
  6. Clarify routes to redress or contestability.

Sectoral divergence and diverging from the EU

The risk-based, more flexible approach of the UK government’s proposal does have the potential to be business and innovation friendly. However, delegating powers to various regulators across different domains and sectors could cause unintended complexities, making the regulatory landscape potentially more difficult for businesses to navigate. It is not uncommon for international businesses to have a presence in more than one sector, nor to be the recipient of AI services in one sector and the provider in another. Collaboration and clear allocation of responsibilities amongst the various regulators will no doubt be key to the success of this proposal.

Some regulators have already identified areas of focus within the AI space. Earlier in July, the UK Information Commissioner highlighted the risks of using AI tools to screen job applicants or applications for financial support, and announced the launch of investigations into such use. The Commissioner plans to publish refreshed guidance for AI developers on ensuring that AI systems treat people and their information fairly. The Financial Conduct Authority has similarly emphasised its intention to explore ethics and bias in algorithms and AI.

The proposal also diverges from the draft EU AI Act, which sets out more detailed obligations on the different players within the AI lifecycle. Whilst the UK government’s proposals signal the government’s willingness to attract AI-driven businesses, it is important to note that the draft EU AI Act has extra-territorial scope and remains relevant to UK-based businesses.

What’s next

The proposal has been launched simultaneously with a call for evidence closing on 26 September 2022, in which industry experts, academics and civil society organisations focusing on technology can share their views on putting the proposals into practice. A white paper is expected in late 2022.