In April 2021, the European Commission set out its proposal for a legal framework in a new Artificial Intelligence Act (“AIA”), in an effort to balance promoting the uptake of AI with addressing the associated risks. We wrote in more detail about the EU’s proposals to regulate AI here.

The AIA attracted considerable interest in April, and here we note some of the key events that have happened since and what can be expected next.

A quick recap - the AIA proposal

In April 2021 the European Commission proposed the AIA to address the risks associated with AI, and to establish conditions for the development and use of AI that is high in quality, performance and trustworthiness.

The regulations would apply to (with a limited number of exceptions):

  • providers placing on the market or putting into service AI systems in the Union, irrespective of whether those providers are established within the Union or in a third country;
  • users of AI systems located within the Union;
  • providers and users of AI systems that are located in a third country, where the output produced by the system is used in the Union.

Under the initial proposal, the Commission set out a classification system for AI, with different requirements and obligations under a 'risk-based approach'.

AI systems presenting 'unacceptable' risks would be prohibited.

AI systems presenting 'high' risk would be authorised, but subject to a set of requirements and obligations in order to gain access to the EU market.

AI systems presenting 'low or minimal risk' would be subject to lighter transparency obligations.

The AIA lists some of the types and uses of AI products and the risk categories under which they fall. However, those definitions and categories may change as the AIA progresses before coming into force.

What has happened since?

In September 2021 the European Economic and Social Committee (EESC) published its opinion on the draft legislation, indicating that, in its view, there is room for improvement, specifically surrounding: the scope, definition and clarity of the prohibited AI practices; the implications of the categorisations; the risk-mitigating effect of the requirements for high-risk AI; the enforceability of the AIA; and its relation to existing regulation and other recent regulatory proposals.

Most notably, the EESC recommended that:

  • the AIA provide for certain decisions to remain the prerogative of humans, particularly where decisions have a moral component and legal implications or a societal impact;
  • third-party assessments be made obligatory for all high-risk AI (as opposed to the suggested self-assessments);
  • a complaints and redress mechanism be included for organisations and citizens that have suffered harm from any AI system, practice or use falling within the scope of the AIA.

So far, the Council has welcomed regulation in this area. However, the Senates of both Poland and the Czech Republic have written to the European Commission raising concerns over the permitted use of biometric identification systems, which may include facial recognition, in public spaces under the AIA. Given the implications for human rights and freedoms, they have called for a stricter approach than that currently set out in the AIA proposal.

The European Data Protection Supervisor (EDPS), which is set to become the new AI regulator for the EU public administration under the AIA, has called for a moratorium on the use of remote biometric identification systems in publicly accessible spaces. These are not limited to facial recognition: the EDPS advocates a stricter approach to automated recognition in public spaces of human features - such as faces, but also gait, fingerprints, DNA, voice, keystrokes and other biometric or behavioural signals - whether used in a commercial or administrative context, or for law enforcement purposes. The EDPS considers that these concerns have not been addressed by the Commission.

Where we are now and what next

Now ready for its first reading in Parliament, the proposal will move through the usual ordinary European legislative procedure, being subject to scrutiny by the European Parliament and by member states in the Council of the European Union. There are potentially multiple rounds of amendments.

The European Parliament Overview provides a useful guide to the next steps. Of particular interest is that there has been a steady decline in the number of proposals reaching a third reading; in 2014-2019 (the most recent available timeframe), most proposals were passed at the first reading, with a few at the second.

It is still likely to be several years, and multiple stages, before the final wording is agreed and the AIA is adopted. By comparison, the General Data Protection Regulation took more than four years to move from proposal to adoption. Even after it is adopted, the AIA will be subject to a two-year implementation period before it comes into force.

However, it is important to engage with the debates around the AIA, as it is likely to have an impact before it comes into force. As we noted in our article when the AIA proposals were published, much of what the AIA proposes may already be happening in practice in some form. Further, the AIA is intended to influence global standards, just as the GDPR has. As to how the AIA will change over the coming years, watch this space.

This article was written by Tom Whittaker and Eve Jenkins.
