The last few months have been a busy time for AI regulation - from the US Executive Order, to the UK’s AI Safety Summit (see blog) and a private member's bill. Most recently, headlines have suggested that plans to agree the EU AI Act before Christmas could be thwarted by a lack of consensus over regulating foundation models.

To help keep track of these various developments, I’ve pulled together some high level thoughts on where we currently are in the UK and EU.

EU AI Act:

There is a push to get the EU AI Act agreed at the next political trilogue on 6th December. The EU was quick off the blocks when it published its proposals for an AI Regulation back in 2021, but could it now be falling behind on the global stage given progress in other jurisdictions such as the US and China? There is certainly political desire to regulate - AI risks remain high on the political agenda. However, there are concerns that if the law is not agreed before Christmas, momentum may be lost. Next year’s European Parliament elections also mean timing will start to get tight if trilogue negotiations run into next year.

However, some big issues remain far from agreed. Headlines suggest, for example, that the rules on how to regulate generative AI/foundation models are proving particularly difficult and could scupper agreement of the AI Act more generally. Reports suggest that France, backed by Germany and Italy, is worried that the European Parliament’s proposals in this area (see blog) would stifle innovation. They are therefore advocating mandatory self-regulation (in the form of a code of conduct). However, members of the European Parliament have said they cannot accept this approach. We understand that the Spanish presidency (on behalf of EU countries) has just proposed a new compromise position to try to reach agreement, which will be discussed this Friday. All eyes are therefore on 6th December…

UK Developments:

In the UK, we have been waiting for the Government to start ticking off the to-do list it set itself in its March white paper on AI regulation (see our blog for more details). In the meantime, it has hosted the first global AI Safety Summit, which led to the Bletchley Declaration (the first international statement on frontier AI), and responded to the 12 AI governance challenges set out in a House of Commons Select Committee interim report. In this response it confirms (amongst other things):

  • It will provide an updated regulatory approach to AI, and its response to the consultation that accompanied the AI white paper “in due course” – this was originally expected by September.
  • That a central AI risk function designed to monitor AI risks (which was another action set out in the white paper) has been established within DSIT (the Department of Science, Innovation and Technology).
  • There will be no AI-specific legislation introduced immediately – DSIT will, however, work with other government departments to develop the UK’s regulatory approach. Interestingly, while the Government has no plans to introduce new legislation, this has not stopped a private member's bill (the Artificial Intelligence (Regulation) Bill) being introduced to Parliament on 22 November. It passed its first reading in the House of Lords on the same day. While it is not common for private members' bills to become law, if it did make it through the legislative process, it would (in particular) create an AI Authority to monitor risks, accredit auditors and ensure relevant regulators take account of AI. It would also impose obligations on those who develop, deploy or use AI (including generative AI) – for example, they would need to designate an AI officer.
  • That it has agreed the terms of reference of the AI Safety Institute (which was formerly the Frontier AI Taskforce). These confirm it will enable safe and reliable development of advanced AI systems.
  • That it remains committed to international engagement, continuing its involvement in the G7 Hiroshima AI Process and future AI safety summits (planned for next year in South Korea and France).

The ICO also launched a consultation yesterday on the guidance and toolkits available to organisations on the topic of AI - a reminder of the importance of keeping up with existing laws and guidance, as well as monitoring potential new rules coming down the line.