Over the coming months we will look at opportunities and challenges for those looking to create and use algorithmic decision systems, and implement AI solutions, in the public sector and criminal justice system

In recent years there has been lively discussion about artificial intelligence revolutionising the way we work and live our lives. In its policy paper on the AI Sector Deal, the UK government predicted that the development of AI technology could have the same dramatic impact on society as the invention of the printing press. The anticipation of a new AI era, and the increased deployment of algorithmic decision systems, has been accompanied by wariness about how such technology will be used and regulated. In June 2018 the European Commission created a High-Level Expert Group on Artificial Intelligence with a remit to work on ethics guidelines, and in late 2018 the UK launched the AI-focused Centre for Data Ethics and Innovation (CDEI). The Chair of that Centre, Roger Taylor, said that it was established because the government had “woken up to the fact that algorithmic decision making systems, artificial intelligence… present novel problems and they are testing, to destruction possibly, our current regulatory arrangements”.

Over the coming months our Public Law and Criminal Litigation teams will look at opportunities and challenges for those looking to create and use algorithmic decision systems, and implement AI solutions, in the public sector and criminal justice system. We will consider the existing legal framework and how it may develop further to deal with these methods of decision making. In this first post we will briefly look at the technology, how it is already being utilised in the public sector, and areas of focus for later blog posts.

AI and Algorithmic Decision Making

Algorithmic decision systems are “specific types of algorithms that focus on decision making”. They can include systems that utilise artificial intelligence (explained further below) and systems that analyse data in different ways. Some algorithmic decision systems are classified as “semi-automatic”: they assist humans, who remain empowered to make the final decision. Other systems are “fully automatic” and require no human input at all before a decision is made.

There is no universally accepted definition of artificial intelligence. The Government Office for Science defines AI as “the analysis of data to model some aspect of the world. Inferences from these models are then used to predict and anticipate possible future events”. The ICO differentiates this from normal data analysis by explaining that “AI programs don’t linearly analyse data in the way they were originally programmed. Instead they learn from the data in order to respond intelligently to new data and adapt their outputs accordingly”. In other words, the decision making process is not simply pre-programmed by a human but is itself analysed and modified. It would be possible to predict a decision made using normal automated data analysis by understanding how a program dealt with the different variables. In contrast, when AI is used it would not be possible to make such a prediction in the same way, because the program or system is tasked not only with analysing the data but also with determining the method of analysis.
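The distinction can be illustrated with a deliberately simplified sketch. The code below is hypothetical and does not reflect any real system: the first function applies a rule fixed in advance by a human, so its output for any input can be predicted by reading the code; the second derives its rule from historical data, so its behaviour depends on the data it was given rather than on logic a programmer wrote out.

```python
# Illustrative sketch only — hypothetical data and rules, not any real
# public sector system.

# Pre-programmed rule: the decision logic is fixed by a human in advance.
# Anyone reading the code can predict the outcome for any input.
def fixed_rule(prior_offences: int) -> str:
    return "high risk" if prior_offences >= 3 else "low risk"

# Data-driven rule: the threshold is derived ("learned") from historical
# records, so the decision logic depends on the data supplied to the system.
def learn_threshold(records: list[tuple[int, bool]]) -> float:
    # records: (prior_offences, reoffended) pairs
    reoffender_counts = [n for n, reoffended in records if reoffended]
    other_counts = [n for n, reoffended in records if not reoffended]

    def avg(xs: list[int]) -> float:
        return sum(xs) / len(xs)

    # A crude learned boundary: the midpoint between the two group averages.
    return (avg(reoffender_counts) + avg(other_counts)) / 2

# Hypothetical historical data.
history = [(5, True), (4, True), (6, True), (1, False), (0, False), (2, False)]
threshold = learn_threshold(history)

def learned_rule(prior_offences: int) -> str:
    return "high risk" if prior_offences >= threshold else "low risk"
```

With different historical data, `learned_rule` would behave differently even though not a line of its code changed; that is the sense in which the method of analysis, not just the analysis itself, is shaped by the data. Real machine learning systems replace this toy midpoint calculation with far more complex models, which is what makes their decisions hard to predict or explain.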

The use of AI and Algorithmic Decision Making in the Public Sector and Criminal Justice System

Governments in the UK, Europe and throughout the world have already begun using AI technology. The government guide to using artificial intelligence in the public sector explains how the Driver and Vehicle Standards Agency has used AI to help ensure that MOT standards remain high, and the Department for International Development (DFID) has used it to help developing countries better understand their population distribution. In Europe, the Danish government agency tasked with handling benefits has been using AI, and it has been used to improve efficiency at the Swedish Land Registry.

Controversially, AI and other forms of algorithmic decision-making have also been deployed in the field of criminal justice, particularly in the United States. In some states judges are using automated systems that generate scores indicating the likelihood of re-offending when determining whether bail should be granted, and when determining prison sentences for convicted criminals. In the UK its use has been much more limited. Durham Constabulary has developed, with Cambridge University, the Harm Assessment Risk Tool (HART), a machine learning system which analyses 34 categories of data to predict the likelihood of reoffending. The tool is not, however, used to determine bail or sentencing decisions, but only to inform the selection of candidates for a rehabilitation programme.

The European Parliament’s Panel for the Future of Science and Technology described algorithmic decision-making as being used in “energy, education, healthcare, transportation, the judicial system and security”.

Concluding Thoughts and Areas of Focus

Different algorithmic decision systems, including those employing AI, are already impacting on a wide variety of government work. The technology can be used to improve efficiency and effectiveness, saving valuable time and resources, but it also has the capability to affect individual rights. Given the range of the technology and its potential effects, it is unsurprising that it presents challenges both for public sector organisations and for those affected by decisions made, or influenced, by it.