On Wednesday 21 April, the European Commission published its much-anticipated Draft Regulation for AI. Will these proposed rules hinder the ongoing development and potential of AI or are they exactly what’s needed to prevent misuse and protect individuals?

The ‘Proposal for a Regulation on a European approach for Artificial intelligence’ (the Draft Regulation) follows the European Commission’s white paper on ‘high risk’ applications of AI, published in February 2020. A previous draft was leaked the week before it was published.

The current Draft Regulation aims to address the often complex challenges of AI technology, including regulating its use, preventing bias and discrimination, and balancing companies’ use of AI against the needs and fundamental rights of individuals. At the same time, it seeks to encourage the use and growth of AI.

The proposals would prohibit the use of artificial intelligence for certain purposes and regulate its use in other areas, subject to exceptions such as specific criminal investigations and counter-terrorism. The Draft also incorporates significant penalties for violations – up to 6% of global annual turnover or €30 million, whichever is greater.

Defining AI

There is no globally accepted definition of AI, and the European Commission has already voiced its desire to take the lead in establishing one. The Draft Regulation applies the same definition of an AI system as that used in the EC’s proposal for a Machinery Regulation (adopted on the same day as this Draft Regulation).

Article 3(1) of the Draft Regulation defines AI as “software that is developed with one or more of the techniques and approaches listed in Annex 1 and can, for a given set of human-defined objectives, generate outputs such as content, predictions, recommendations, or decisions influencing the environments they interact with.”

According to Annex 1, AI comprises:

“(a) Machine learning approaches, including supervised, unsupervised and reinforcement learning, using a wide variety of methods including deep learning;

(b) Logic and knowledge-based approaches, including knowledge representation, inductive (logic) programming, knowledge bases, inference and deductive engines, (symbolic) reasoning and expert systems;

(c) Statistical approaches, Bayesian estimation, search and optimization methods.”

Prohibited practices

Article 5 of the Draft Regulation sets out the AI uses that the EU seeks to prohibit. These include discriminatory use, the use of real-time remote biometric identification systems for law enforcement purposes, and AI that “deploys subliminal techniques beyond a person’s consciousness in order to materially distort a person’s behaviour.”

Two levels of risk

Rather than apply a blanket rule, the Draft Regulation divides non-prohibited uses of AI into ‘normal’ AI and ‘high risk’ AI categories.

Examples of high risk AI are likely to include some aspects of self-driving cars (especially as the technology becomes more widely used) and, as the Draft Regulation puts it, AI systems that could be associated with the “injury or death of a person or the damage of property.”

High risk AI will be subject to risk assessments and various compliance requirements, for example: risk management (including the testing of data and training of staff), record-keeping, and registration of each AI system on a European Commission-managed database. Such a centralised database of high-risk AI is likely to be controversial, especially as it’s unclear what form this would take. Weapons/military use is specifically excluded from the definition of high-risk AI, which is likely to raise international political concerns.

AI in the workplace

It’s no secret that HR departments and recruiters frequently use AI to evaluate potential workers and assess task prioritisation. The Draft Regulation aims to limit this by classing such use as ‘high risk’ and imposing certain safeguards.

The European Commission’s likely intention is that this classification will limit AI’s potential for discrimination in the workplace and its impact on employees’ privacy. Yet this provision is already being criticised because of the ‘self-assessment’ element. Essentially, the employer will determine whether their use of AI for recruitment and HR conforms to the rules. In practice, this means the Draft Regulation may offer more flexibility to the employer but not provide the protection it seeks to give the employee.

Requiring transparency

The Draft Regulation requires that individuals be told (unless it is obvious) that they are interacting with an AI system. The European Commission’s probable objective is to increase transparency and reduce the risk to consumers. However, the Draft has far-reaching exceptions to this requirement, not only for safeguarding public security but also covering satire and parody. So, again, this proposal may not have the desired impact, assuming it actually becomes law.

Clear objectives, but are they practical?

The European Commission’s objectives with the Draft Regulation are clear. In its own words, it sets out to:

  • ensure that AI systems placed on the Union market and used are safe and respect existing law on fundamental rights and Union values
  • ensure legal certainty to facilitate investment and innovation in AI
  • enhance governance and effective enforcement of existing law on fundamental rights and safety requirements applicable to AI systems
  • facilitate the development of a single market for lawful, safe and trustworthy AI applications and prevent market fragmentation.

Whether the Draft Regulation can actually achieve this remains to be seen, but there is plenty of time for its provisions to be debated. It is still in the early stages of the European Union’s legislative process and must now go through the European Parliament and the Council before it can become law – a process which could take several years.

You can access a copy of the Draft Regulation here.