This is an issue that is central to the National Data Strategy which was launched for consultation on 9 September 2020. In response to the consultation, the Ada Lovelace Institute, the Centre for Public Data, the Institute for Government, the Open Data Institute, and the Royal Statistical Society have published a report setting out their findings from events they held to discuss the Strategy with practitioners and experts.

The report is worth a read in full but there are two notable practical suggestions.

The first is for public registers of algorithmic decision-making to improve transparency. We also wrote recently about the widespread use of algorithmic decision-making by local councils in the UK; the fact that the Guardian had to rely on freedom of information requests suggests that it is not straightforward to get an overview of how public bodies are using algorithmic decision-making. There are calls for similar registers in other areas, such as the Law Society's recommendation for a public register of algorithms used in the criminal justice system.

How to ensure that algorithmic decision-making by public bodies is transparent is an issue being grappled with internationally. The report notes that the cities of Amsterdam and Helsinki have launched algorithmic registries detailing how those city governments use algorithms to deliver public services, down to the level of the data sets used to train models and descriptions of how an algorithm is used. "In Amsterdam the registry is indirectly linked to new standard clauses in procurement contracts that impose a duty of cooperation on the vendor, to provide the municipality with all the information that may be required in order to explain how an algorithmic system works."

The second is for algorithmic impact assessments, akin to the impact assessments already carried out for data protection and human rights. Such assessments are seen as positive because they require a risk assessment of the system as a whole at an early stage. The three areas could be rolled into a single impact assessment, and publication could be required to allow for transparency over what the risks are and how they are managed.

It will be interesting to read responses to the Strategy's consultation to see what other practical proposals are made for ensuring transparency and accountability in algorithmic decision-making by public bodies. As we noted when the MIT Schwarzman College of Computing kicked off an AI Policy Forum to develop frameworks and tools for governments and companies to make practical decisions about AI policy, there is a need for the growing body of principles related to AI and automated decision-making to be turned into practice.

The Ada Lovelace Institute organised a policy roundtable to stimulate discussion on the fourth pillar of the National Data Strategy, "Responsibility: driving safe and trusted use of data".

The aim was to discuss what is missing from the strategy and to develop practical recommendations for the implementation phase, so as to ensure that data is used responsibly and in the interests of people and society.

The report is available at: http://theodi.org/wp-content/uploads/2020/11/Getting-data-rightperspectives-on-the-UK-National-Data-Strategy-2020.pdf