The European Commission recently published its proposal for the regulation of artificial intelligence, or AI, (Regulation), which will subject high-risk AI systems to strict obligations before they can be placed on the market, with the potential for steep fines for non-compliance. Despite the UK’s departure from the EU, the Regulation is significant, and its extra-territorial reach means many UK-based businesses stand to be affected.
At its heart, the Regulation aims to protect the privacy of EU citizens – a common theme in EU legislation, including the GDPR. Its effects will be wide-ranging, capturing both providers and commercial users of AI systems based within the EU, as well as those using AI systems to service or deal with individuals in the EU. Providers will be caught by the Regulation regardless of whether they place systems on the market or put them into service for use in their own business. This includes those outside the EU who use AI systems to process data about EU citizens. As such, its impact extends to businesses far beyond the borders of the bloc.
Significant additional operational burdens
The Regulation categorises AI into different tiers and prescribes various audit and governance requirements according to the nature of the AI used. Systems that manipulate human behaviour or allow social scoring by public authorities are classed as ‘prohibited practices’, while ‘high risk’ systems will require risk assessments and mitigation processes to be established. All AI systems will be subject to transparency and governance requirements.
Altogether, the Regulation has the potential to create significant operational burdens for the parties caught by it. In a world where the state of the art is constantly advancing, some parties may be unable to use AI as they currently intend. This may impact business planning and research and development strategies, particularly in relation to the Internet of Things and Big Data.
Aside from those tech companies developing AI systems themselves, data-rich sectors stand to be most affected. In healthcare, for example, it is expected that AI will be worth over $61 billion within the next decade. Internet of Things applications, such as intelligent transportation network management and smart cars, are also inherently data-rich and deploy AI. Activities such as consumer behaviour modelling for retail and consumer finance applications will also be impacted.
Governance and enforcement
The Regulation will establish a new European Artificial Intelligence Board (EAIB) to be chaired by the European Commission, with representation from each national supervisory authority as well as the European Data Protection Board (EDPB). The EAIB’s functions include sharing best practices, laying down harmonised technical standards and issuing opinions and interpretive guidance on implementing the Regulation.
The proposed consequences of non-compliance are significant. The Regulation anticipates penalties of up to €30m or 6% of total worldwide annual turnover for the most serious infringements, such as selling prohibited AI systems or failure to comply with data training requirements. All other offences under the Regulation will be subject to GDPR-scale penalties of up to €20m or 4% of annual worldwide turnover.
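To illustrate how the two-limb caps interact in practice, the sketch below computes the maximum possible fine for each tier. It assumes, as under the GDPR’s penalty structure, that the higher of the fixed amount and the turnover percentage applies; the function name and figures are illustrative only and the final text of the Regulation may change.

```python
def fine_cap(turnover_eur: float, most_serious: bool = True) -> float:
    """Illustrative upper bound on fines under the draft Regulation.

    Most serious infringements: up to EUR 30m or 6% of total worldwide
    annual turnover; other offences: up to EUR 20m or 4%. Assumes the
    higher of the two limbs applies (as in the GDPR's penalty model).
    """
    fixed_cap = 30_000_000 if most_serious else 20_000_000
    pct = 0.06 if most_serious else 0.04
    return max(fixed_cap, pct * turnover_eur)

# A company with EUR 2bn worldwide turnover: 6% (EUR 120m) exceeds
# the EUR 30m fixed cap, so the turnover limb sets the ceiling.
print(fine_cap(2_000_000_000))        # 120000000.0
# A smaller company with EUR 100m turnover: 6% is only EUR 6m,
# so the EUR 30m fixed cap governs instead.
print(fine_cap(100_000_000))          # 30000000
```

The point of the two-limb structure is that the fixed amount acts as a floor for smaller businesses, while the turnover percentage scales the exposure of large multinationals.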
Balancing individuals’ rights with commercial innovation
The Regulation also raises a perennial tension: balancing freedom of expression against the rights and freedoms of individuals. In this case, commercial expression in the form of technological innovation, and the benefits it may bring to a variety of stakeholders, must be weighed against the possible risks that AI applications may pose to personal privacy, rights and freedoms.
Will the proposed Regulation slow down technological development? Perhaps, but some commentators argue it doesn’t go far enough. The Regulation treats the high-profile algorithms used in social media, search engines, online retailers, app stores, mobile apps and mobile operating systems as ‘standard’ risk only. They are not subject to the stricter governance requirements for ‘high-risk’ practices, despite the fact that these algorithms have contributed to growing concern surrounding the use of AI.
Parties such as Amnesty International also believe the ‘high-risk’ practices should be subject to tighter restrictions, with an EU-wide ban on any AI that violates an individual’s fundamental human rights. They argue that biometric mass-surveillance practices and automated recognition of sensitive traits violate those rights and exacerbate discrimination against minority groups.
Planning for compliance
Once the Regulation is finalised, companies operating in, or providing services to, the EU will need to audit their systems for any AI processes. Compliance will require companies to understand which category such processes fall under and the resultant requirements for their continued use.
However, it’s still early days. The Regulation is in draft form and the European Commission has invited responses from those affected by the proposed rules. Considerable debate is expected to follow in the EU legislature, including around core principles such as the definition of AI. Its passage into law may therefore be slow and, once passed, it’s likely that a grace period for implementation will apply.
The domestic outlook
At this stage it is unclear what approach the UK legislature will take in respect of AI, or whether any equivalent legislation may be introduced to bring us closer to the Regulation’s requirements in the domestic setting. It is understood, however, that the ICO and UK government will be responding to the European Commission’s impact assessment for the Regulation.
AI and the protection of individuals in the digital space are, however, already matters of scrutiny closer to home. Last summer the ICO published a new AI auditing framework, which provides guidance and good-practice recommendations to help organisations comply with existing data protection regulation when using AI. This month, the UK government also published its Online Safety Bill (Bill), which will require digital service providers to take action against harmful content while also demonstrating a commitment to protecting democratically important content. It will be interesting to see how the Bill plays out compared with the Regulation’s categorisation of social media AI algorithms as only ‘standard’ risk.