Q&A
What is the current state of the law and regulation governing AI in your jurisdiction? How would you compare the level of regulation with that in other jurisdictions?
The UK currently has no specific legislation dedicated solely to AI. The UK’s legal landscape governing AI is instead a combination of existing legislation, regulatory guidance and ongoing policy developments, including the UK General Data Protection Regulation (UK GDPR), the Data Protection Act 2018 and the sector-specific regimes explained below. It does not yet address some of the specific risks AI poses.
Currently, AI technologies and products are covered by existing data protection, intellectual property, online safety, contract, consumer protection, consumer rights, product safety and competition laws.
The Online Safety Act 2023 aims to make the UK the ‘safest place in the world to be online’. The government has announced that AI chatbots and any search results they generate will come under its remit, affecting companies that embed AI into their services.
Product-specific legislation, such as the Medical Devices Regulations 2002 and the Electrical Equipment (Safety) Regulations 2016, applies to products that include integrated AI.
Where AI is used in financial services, the Financial Services and Markets Act 2023 sets out the regulatory framework, which extends to the use of AI by regulated firms.
On 27 July 2022, the Financial Conduct Authority (FCA) published final rules and guidance for a new Consumer Duty, which it considers provides the tools needed to mitigate the risks of AI algorithms causing harm through discrimination, bias or loss of control. AI systems will contravene the Consumer Duty if they exploit behavioural biases, thereby creating harmful price discrimination.
The FCA’s Digital Sandbox and AI Sandbox support AI innovation by offering access to synthetic data and collaboration tools. In May 2024, the Medicines and Healthcare products Regulatory Agency launched AI Airlock, its new regulatory sandbox for AI as a medical device.
The FCA’s Senior Managers and Certification Regime (SMCR) provides a framework for accountability in the use of AI, ensuring that senior managers are responsible for their firms’ activities.
UK IP and copyright law was not designed with AI in mind. Under the UK Copyright, Designs and Patents Act 1988, copyright protection is generally granted to original works created by a human author. AI systems do not currently qualify as human authors. Section 9(3) states that for computer-generated works, the author is considered to be the person who made the arrangements necessary for the creation of the work.
AI-related inventions can be patented if they meet the standard criteria of novelty, inventive step, and industrial applicability. However, the inventor must be a natural person, so AI cannot be listed as an inventor on a patent application. The human who contributed to the inventive process or the entity employing them would be named as the inventor.
In March 2023, the UK government published a white paper on AI regulation, ‘A pro-innovation approach to AI regulation’, which sets out five principles for regulators to interpret and apply within their remits:
- safety, security and robustness;
- appropriate transparency and explainability;
- fairness;
- accountability and governance; and
- contestability and redress.
On 27 October 2023, the UK government published a policy paper titled ‘Emerging processes for frontier AI safety’. This sets out AI safety policies that AI organisations can adopt to increase transparency. While it does not prescribe or mandate which AI safety policies must be adopted, it offers guidance on what good practice looks like.
The UK government pledged better AI governance and regulation following the November 2023 UK AI Safety Summit. The Summit also resulted in the formation of the AI Safety Institute, the first state-backed organisation focused on advanced AI safety in the public interest.
The 17 July 2024 King’s Speech did not address AI beyond stating that the UK government will ‘seek to establish the appropriate legislation to place requirements on those working to develop the most powerful artificial intelligence models’.
The UK currently has limited case law related specifically to AI.
Although the UK is not in the EU and is not directly bound by it, the EU AI Act remains one of the most important pieces of AI legislation affecting businesses operating in the UK. Any UK-based AI business wanting to target (or have as a client) any EU citizen must comply with the EU AI Act to maintain market access, including adhering to the EU’s requirements for AI systems, which may include risk assessments, transparency obligations and data governance standards. Non-compliance could result in restricted access to the EU market or penalties.
Compared with other jurisdictions, the UK currently has a low level of AI regulation, particularly relative to the neighbouring European Union since the EU AI Act.
With the EU AI Act, the EU takes a more prescriptive and detailed approach to AI regulation than the UK. EU AI regulation includes more specific obligations for AI systems’ transparency, accountability and data quality.
The UK’s approach to AI regulation currently requires greater interpretation by firms. Regulatory sandboxes and innovation hubs provide a form of exemption by allowing firms to test AI technologies in a controlled environment, potentially reducing regulatory burdens during the innovation phase.
Has the government released a national strategy on AI? Are there any national efforts to create data sharing arrangements?
The UK government released its National AI Strategy in September 2021, outlining its vision for the UK to become an ‘AI and science superpower’. It aims to prepare the UK for the 10 years to 2031 and to position the UK as a leader in AI research, innovation and deployment by focusing on three key pillars:
- investing in the long-term needs of the AI ecosystem to ensure the UK has the necessary people, data, computing, finance, skills, infrastructure and innovation environment to support AI development, including investment in research and development and collaboration between academia, industry and government;
- ensuring AI benefits all business sectors and geographic regions of the UK; and
- governing AI effectively through a principles-based, pro-innovation regulatory structure that keeps pace with fast-moving AI developments, balances innovation against safety and ethical considerations, and engages with international partners to shape global AI standards.
The strategy also highlights the importance of addressing ethical and societal issues related to AI, such as bias, transparency and accountability, and emphasises the need for public trust and confidence in AI technologies.
There are national efforts in the UK to create data-sharing arrangements, particularly to combat economic crime and enhance corporate transparency.
The UK government published its Data Sharing Governance Framework in 2022. The framework does not explicitly mention data sharing in relation to AI but emphasises its importance in areas including delivering good public services, developing and evaluating policy, and managing government operations.
The Economic Crime and Corporate Transparency Act 2023 facilitates data sharing among businesses to prevent, detect and investigate economic crime. Key provisions include:
- Information sharing for economic crime prevention: provisions introduced into the Proceeds of Crime Act 2002 enable information sharing between certain businesses to prevent, detect and investigate economic crime.
- Disapplication of civil liability: civil liability for breaches of confidentiality is disapplied where information is shared for the specified purpose of preventing economic crime, ensuring businesses can share information without fear of being sued for breaching confidentiality, provided the sharing is for the intended purpose.
- Safeguards: measures to prevent misuse of shared data.
- Voluntary participation: participation in these data-sharing arrangements is voluntary and does not replace existing obligations, such as submitting Suspicious Activity Reports.
The National Fraud Database also allows businesses to share information to prevent and detect fraud.
The 2024 King’s Speech did not refer to data. However, briefing notes referred to a new Digital Information and Smart Data Bill after the previous Data Protection and Digital Information Bill was dropped.
What is the government policy and strategy for managing the ethical and human rights issues raised by the deployment of AI?
The UK’s National AI Strategy includes a commitment to addressing ethical and human rights issues, including algorithmic bias, discrimination and privacy intrusion. It mentions:
- investing in AI governance and regulatory frameworks;
- promoting diversity and inclusion in AI development; and
- enhancing public trust through transparency and public engagement.
As part of the strategy, the UK government has said that it will collaborate with key global actors and partners to promote the responsible development and deployment of AI.
In its 2023 AI Regulation White Paper, the UK government proposed to focus on the following areas relating to the ethical use of AI:
- safety, security and robustness;
- appropriate transparency and explainability;
- fairness;
- accountability and governance; and
- contestability and redress.
The UK’s data protection laws, including the UK GDPR, the Data Protection Act 2018 and the Privacy and Electronic Communications Regulations, apply equally to AI systems. The UK GDPR and the Data Protection Act 2018 require AI-based decision-making platforms to ensure transparency and human oversight, and give individuals the right to challenge decisions made by AI systems that significantly affect them. However, UK data regulations do not address risks such as a lack of transparency around consent, uncertainty over how data may be used in future by more powerful technology, and the risk of further de-anonymisation of data.
Existing laws, such as the Equality Act 2010, protect against discrimination based on protected characteristics such as age, gender, race and disability. This protection applies equally to discrimination resulting from AI output.
The AI Council and the Centre for Data Ethics and Innovation (CDEI) advise the government on AI-related issues and work to ensure that data-driven technologies are used in a way that is ethical and benefits society. The CDEI has also published recommendations for organisations on responsibly developing and deploying AI systems.
The CDEI AI Barometer report, published in 2020, highlights key risks associated with adopting AI irresponsibly, particularly the potential for detrimental effects on human rights.
Regarding the ethical development of AI, the UK government is taking several soft-law steps:
- Encouraging transparency and explainability in AI systems to ensure that decisions made by algorithms can be understood and, where necessary, challenged. Notable examples include the government’s publication of the ‘AI Sector Deal’ and the establishment of the CDEI, which has conducted reviews and published reports on how to improve transparency and explainability. The Committee on Standards in Public Life’s report ‘AI and Public Standards’ likewise provided recommendations on how public sector organisations can make transparent decisions when using AI systems.
- Working to mitigate algorithmic bias through research and collaboration with industry and academia and by developing guidelines for fair and unbiased AI systems.
- Encouraging efforts to ensure that AI development teams are diverse and inclusive, to help reduce bias in AI systems. The AI Council, an independent expert committee that advises the UK government, published its ‘AI Roadmap’ in January 2021, which encouraged the development of diverse AI teams by supporting initiatives to increase the participation of under-represented groups in AI-related education and careers. It also promoted the design and testing of AI systems using diverse datasets to minimise bias and improve fairness.
- Encouraging the use of impact assessments, such as Data Protection Impact Assessments (DPIAs), Equality Impact Assessments and Human Rights Impact Assessments (HRIAs), to evaluate the potential human rights implications of AI systems and to identify and address any adverse effects on individuals or groups.
- Seeking input from the public and stakeholders to understand AI's societal impact and ensure that human rights are protected.
- Participating in international discussions on AI ethics and regulation, working with other countries and organisations to develop global standards and best practices. For instance, the UK is a member of the Organisation for Economic Co-operation and Development (OECD) and works with other member countries to measure the impact of AI and help create policies around its responsible development and use. The UK is also a founding member of the Global Partnership on AI (GPAI), an international initiative that brings together experts from science, industry, civil society and government to promote the responsible use and development of AI.
