This is the third article in the DACB "AI Explainer" series focusing on the United Kingdom's approach to AI regulation and looking, in particular, at the important takeaways from the recent AI Safety Summit at Bletchley Park.

As explained in our second article, in March 2023 the UK Government published its White Paper titled "AI regulation: a pro-innovation approach", setting out a principles-based, sector-led approach to regulating AI. The UK Government closed its consultation period on the White Paper on 21 June 2023. This article examines in further detail the UK Government's approach to AI regulation and the proposed next steps following the White Paper's consultation period.

Our key takeaways provide a snapshot of recent developments in the UK AI regulation space and, as expected given the rapid advancement of AI, it has been a very busy few months.

KEY TAKEAWAYS

  1. The UK Government's pro-innovation, principles-based, sector-led approach to AI regulation has industry and cross-government support – However, the UK Government needs to respond by the end of 2023 to several questions on the practical implementation of its proposal following the White Paper's consultation period.
  2. The Interim Reports of the Department for Science, Innovation and Technology (DSIT) and the Department for Culture, Media and Sport (DCMS) in August 2023 (together the Reports) have raised questions – Specifically, questions on the ability of the current regulators to perform their proposed roles in regulating AI in their sectors. The existing regulators have cited a lack of training, funding, staff numbers and coordination as potential challenges.
  3. It is unlikely that any new UK AI legislation could now be enacted before 2025 – This is due to the absence of the suggested "tightly focused" AI Bill from the King's Speech on 7 November. However, see our comments on the new Private Member's Bill below.
  4. The Bletchley Declaration of 1 November 2023 was a diplomatic success for the UK Government – Emerging from the AI Safety Summit, the Bletchley Declaration is intended to be the foundation of international cooperation and collaboration on AI safety.
  5. A Private Member's Bill, the Artificial Intelligence (Regulation) Bill (the Bill), was introduced into the House of Lords on 22 November 2023 – The Bill is an attempt to establish, by statutory instrument, a framework for producing more detailed AI regulation. It is worth noting that Private Members' Bills rarely become law but, due to the media attention they often receive, they can encourage the Government to take action.
  6. On 27 November 2023 the UK published the first global guidelines to ensure the secure development of AI technology – The guidelines are the first of their kind to be agreed globally.

The following expert analysis considers the recent key milestones in more detail and explains what DACB predicts will happen next in the UK.

1. The March 2023 White Paper: the UK Government's Initial Stance

Our second article covered the UK Government's initial approach to AI regulation and its White Paper. By way of a brief reminder, the pro-innovation, principles-based proposal does not create a single new AI regulator to govern the development and use of AI in the UK. Instead, existing regulators, as experts in regulating their individual sectors, will take on this role. These regulators include the Information Commissioner's Office (ICO), the Financial Conduct Authority (FCA), the Competition and Markets Authority (CMA) and the Office of Communications (Ofcom).

The following five principles from the White Paper set out the parameters which the UK Government will expect regulators to enforce in their sector:

  1. Safety, security and robustness;
  2. Appropriate transparency and "explainability";
  3. Fairness;
  4. Accountability and governance; and
  5. Contestability and redress.

These five principles are not expected to be statutory (however DSIT has questioned this approach - see our comments below). Existing regulators will be empowered to issue guidance regarding interpretation of and compliance with the five principles.

The UK Government's reasoning for taking this approach, rather than introducing AI-specific legislation, is that it will:

i. create a regulatory framework which is adaptable in the face of rapidly evolving technology; and

ii. avoid a scenario whereby the remit of existing regulators is undermined by new legislation.

On the first of these points, the Online Safety Act is often cited as an example of legislation which took too long to implement (it was five years in the making), leading legal practitioners to question its relevance once enacted because the technology had moved on.

In addition to introducing the White Paper, the UK Government formed the "AI Foundation Model Taskforce" (renamed the "Frontier AI Taskforce" as explained in section 4 below), whose remit is to carry out research on AI safety and assist in the development of international safety and security standards. The UK Government has also proposed an AI regulatory sandbox designed to help developers ensure products are safe, transparent and ethical.

Since publishing the White Paper, the UK Government has received over 400 responses from regulators, industry, academia and civil society. The Reports have supported and criticised the UK Government's approach in equal measure and will likely influence the Government's response to the consultation, which is due to be published later this year.

2. The Department for Science, Innovation and Technology (DSIT) Interim Committee Report, 31 August 2023

This Interim Report was an important contribution to the ongoing debate around AI regulation in the UK. The DSIT report highlighted that "there was a growing imperative to ensure governance and regulatory frameworks are not left irretrievably behind".

It outlined the twelve challenges of AI governance which the UK Government's approach needs to address:

  1. Bias: AI can introduce unacceptable bias against minorities;
  2. Privacy: AI can allow individuals to be identified and personal information to be used without authorisation;
  3. Misrepresentation: AI can generate material which deliberately misrepresents individuals' behaviours, opinions or character;
  4. Access to data: Only a few organisations hold the very large data sets required for the most powerful AI systems;
  5. Access to compute: Similarly to (4), only a limited number of organisations have access to the significant computing power required for the development of powerful AI systems;
  6. Black box: There can be a lack of transparency as certain AI models cannot explain why they produce a particular result;
  7. Open source: Open source code can promote transparency and innovation. Maintaining proprietary code may concentrate market power amongst leading AI developers and allow potentially harmful code to remain private;
  8. Intellectual property: Where an AI model or tool uses third party content, the IP rights of the content providers must be determined and enforced;
  9. Liability: If AI models and tools are used to do harm, the liability of developers or providers must be established;
  10. Employment: AI's disruption to the job market must be anticipated and managed;
  11. International coordination: As AI is a global technology its regulation must be an international undertaking;
  12. Existential challenge: The potential threat to human life and national security posed by AI needs to be understood and mitigated.

Although generally supportive of the UK Government's White Paper, the DSIT Interim Report noted the UK's approach was already at risk of falling behind the rapid pace of AI development. In DSIT's view, the need for a clear AI governance framework was imperative and a "tightly focused" AI Bill should be put forward in the next King's Speech. As readers will likely be aware, on 7 November this recommendation was not followed. This new session of Parliament would have been the last opportunity before the General Election for the UK to legislate on the governance of AI, and following the election it is unlikely any new legislation could be enacted until 2025. Given the recent progress in the EU and the US, with the EU AI Act and the US Executive Order, critics could argue the UK is being left behind. Will the new Private Member's Bill have an impact? We will address this later.

The DSIT Interim Report highlighted the importance of regulatory capacity and coordination. The work of the Digital Regulation Cooperation Forum (DRCF), which brings together the CMA, ICO, FCA and Ofcom, was seen as an example of best practice. There is a suggestion that an expanded version of the DRCF should be considered to coordinate approaches between all regulators.

The establishment of the Frontier AI Taskforce was praised, but the Report recommended that the UK Government carry out a gap analysis of the UK's regulators covering resourcing, capacity and coordination, and assess whether existing regulators need new powers to implement and enforce the White Paper's principles.

3. The Department for Culture, Media and Sport (DCMS) Committee Interim Report: Connected Tech: Protecting Creative Rights from AI, 30 August 2023

At the same time, the DCMS Interim Report provided commentary on the UK Government's approach to AI regulation, focusing on the risks AI poses to the creative industries. DCMS highlighted the challenges AI technology presents to intellectual property rightsholders, both in the use of copyrighted material as "inputs" with which AI can be trained and developed, and in the "outputs" of AI, i.e. AI-generated works.

The Interim Report explained that the main concern of the creative industries was the Intellectual Property Office's (IPO) proposal to extend the exception to copyright infringement for text and data mining (TDM). Currently, the exception provides that making a copy of a work does not infringe copyright where this is done for the purpose of text and data mining for non-commercial research. In order to make the UK an attractive jurisdiction for AI development, the IPO proposed a new copyright and database exception which would allow TDM "for any purpose".

The UK Government has indicated it will not proceed with this proposal. However, the DCMS Report calls on the UK Government to review how licensing schemes can be introduced and how creatives can secure transparency and redress if they suspect AI developers are wrongfully using their works in AI development. DCMS has asked the UK Government for a substantive update on its direction in managing the impact of AI on the creative industries by the end of 2023.

DCMS, whilst supporting the UK Government's "sensible proposals for regulating AI", has identified outstanding weaknesses which require clarification. Its main concerns are:

  • a lack of skills amongst the regulators which do not currently regulate the technology sector – there needs to be a plan to provide upskilling and resourcing for non-digital sector regulators; and
  • a lack of coordination between the regulators and the UK Government – the UK Government should establish a discrete AI regulation coordination unit to ensure coherent working and enable robust stakeholder engagement.

4. UK Artificial Intelligence Policy Update: Statement made by Michelle Donelan on 19 September 2023

In her Statement to the House of Commons, Michelle Donelan reported on several developments in the UK Government's AI policy and addressed a number of the concerns raised in the Reports.

She highlighted the actions DSIT has taken since the publication of the White Paper:

  • Renaming the Foundation Model Taskforce as the Frontier AI Taskforce. This Taskforce has £100 million of funding, and its responsibilities include evaluating risks at the frontier of AI, i.e. AI systems which could pose significant risks to public safety and global security;
  • Creating a Central AI Risk Function within DSIT which will identify, measure and monitor existing and emerging AI risks using expertise from across UK Government, industry, and academia. This will enable a holistic assessment of risks as well as identifying any potential gaps in the UK Government's approach; and
  • Examining ways to improve coordination and clarity across the regulatory landscape, including working with the DRCF to pilot a multi-regulator advisory service for AI and digital innovators, to be known as the DRCF AI and Digital Hub. This will provide tailored support to help innovators navigate the AI and wider digital regulatory landscape.

5. The AI Safety Summit, Bletchley Park, 1-2 November 2023

The AI Safety Summit held at Bletchley Park at the beginning of November has been hailed as a diplomatic breakthrough and a "generational moment", and it produced the Bletchley Declaration. The Declaration was signed by over 25 countries (including several African countries, Saudi Arabia, the EU, the US and, notably, China) and represents a new international effort to unlock the benefits of AI. It also recognises the urgent need to understand and collectively manage the potential risks presented by AI, and it is positioned to be the foundation for the safe and responsible development and deployment of AI at an international level.

The Summit focused on "Frontier AI" (high risk AI systems which are highly capable, general-purpose AI models, including foundation models, that could perform a wide variety of tasks). The key agreements underpinning the Bletchley Declaration were:

  • The countries agreed to support the first-of-its-kind State of the Science Report, which will help build international consensus on the capabilities and risks of frontier AI. The UK has commissioned Yoshua Bengio (the Turing Award-winning AI academic) to chair the report.
  • The UK has established the world's first AI Safety Institute as its contribution to the joint statement on AI safety testing. This statement sets out that world leaders, and those developing frontier AI systems, recognise the need to collaborate on testing the next generation of AI models.
  • A commitment that international collaboration will continue, with the Republic of Korea hosting a mini virtual summit on AI within the next six months and France hosting the next in-person summit a year from now.

Although each of the countries at the AI Safety Summit is adopting a different approach to AI regulation, they have all agreed on the importance of international collaboration.

6. The Private Member's Bill

The Artificial Intelligence (Regulation) Bill was introduced into the House of Lords and is now on its second reading. Its key provisions:

  • Define AI as: "technology enabling the programming or training of a device or software to (a) perceive environments through the use of data; (b) interpret data using automated processing designed to approximate cognitive abilities; and (c) make recommendations, predictions or decisions; with a view to achieving a specific objective".
  • Create a new AI Authority which performs a supervisory function over the existing regulators. Rather than being the sole regulator for AI, its role will include gap analysis of regulatory responsibilities, coordinating reviews of relevant legislation, assessing and monitoring risks arising from AI, and accrediting independent AI auditors.
  • Establish principles for (a) the regulation of AI; (b) businesses developing, deploying or using AI; and (c) AI and its applications. The principles closely follow the five principles set out in the White Paper.
  • Introduce a requirement for any business which develops, deploys or uses AI to have a designated AI officer.
  • Introduce transparency obligations. Any person involved in training AI must supply the AI Authority with a record of all third-party data and IP used, and consents received. Customers must be given clear and unambiguous health warnings where products or services involve AI, as well as the opportunity to opt out.

How far this Private Member's Bill progresses remains to be seen. Such bills rarely get enacted and there may be limited parliamentary time to debate it, depending on the timing of the general election. However, it does demonstrate that there is further support both for the recommendations in the Interim Reports and for codifying the principles contained in the White Paper in law.

7. Guidelines for Secure AI System Development

On 27 November the UK published the first global guidelines to ensure the secure development of AI technology, the first of their kind to be agreed globally. This initiative was led by GCHQ's National Cyber Security Centre and developed with the US's Cybersecurity and Infrastructure Security Agency, industry experts and other international agencies. It is further evidence of the continuing global collaboration on AI established at the AI Safety Summit.

The Guidelines advise developers on the security of AI systems at every stage of the development process and aim to help ensure that AI systems are designed, developed and deployed securely. This is known as a 'secure by design' approach. Agencies from 18 countries have confirmed that they will endorse the new UK-developed guidelines, reinforcing the UK's leadership in AI safety.

8. Next steps

Whilst the success of the AI Safety Summit and the global endorsement of the new Guidelines for Secure AI System Development have confirmed the UK Government's place as a leader on the international AI stage, if its ambitions on UK regulation are to be realised, it may well need to move with greater urgency in enacting the legislative powers which DSIT and the Private Member's Bill propose.

Currently, the success of the Government's regulatory proposal depends on the successful implementation of the cross-sector regulatory system. The Government will need to address the concerns raised in the Reports about insufficient funding, expertise and staff numbers, and the need for a central body to assist in cross-regulator coordination. Further, it will need to consider whether the DRCF can step up and expand to coordinate a multi-regulatory approach to AI regulation across all sectors going forward.

We now await the UK Government's response on many of the key points in the development of AI regulation. It is due to respond to the DSIT Interim Report and the White Paper consultation by the end of this year. Additionally, the DCMS Interim Report calls for the UK Government to provide a substantive update on its direction in managing the impact of AI on the creative industries by the end of 2023.

The UK Government has a considerable amount to do by the end of the year and we will be watching this space with interest. Without any further certainty, UK organisations should continue to follow current relevant regulatory guidance, such as the ICO's "AI and data protection risk" toolkit. More details of the approach of the UK regulators will follow in the next article in our AI Explainer series.