Governments are increasingly making use of AI tools to help with decision-making, but they face a key challenge: ensuring fairness and maintaining public trust.

Automated decision-making frees up time and resources, two commodities that many government departments lack. The increased availability of AI decision-making tools allows government decisions to be delegated to algorithms, particularly in the resource-intensive areas of local government work that directly impact individual citizens: identifying children in need of additional care, rating schools’ performance, selecting food outlets for a health and safety inspection, calculating fire risks and predicting crime.

Algorithms are useful. They save time, they save money and they can produce better outcomes. But the uptake of automated decision-making is tricky for governments, which are under even greater pressure than companies to maintain accountability and transparency, and to ensure that citizens trust that the decisions being made about them are fair. Things can sometimes go wrong: a study last year, for example, reported that an algorithm widely used in the US to predict criminal re-offending was exhibiting unintentional racial bias.
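
One way such bias is typically detected is by comparing a model's error rates across demographic groups, for example how often it wrongly flags people who do not in fact go on to re-offend. Below is a minimal sketch of such an audit; the `false_positive_rates` helper and the data are invented for illustration and are not drawn from the study mentioned above.

```python
# A minimal sketch of a disparate-impact check: given a classifier's
# predictions and the actual outcomes, compare false positive rates
# across demographic groups. All data here is made up for illustration.
from collections import defaultdict

def false_positive_rates(records):
    """records: iterable of (group, predicted_reoffend, actually_reoffended)."""
    fp = defaultdict(int)   # predicted to re-offend but did not
    neg = defaultdict(int)  # everyone who did not re-offend
    for group, predicted, actual in records:
        if not actual:
            neg[group] += 1
            if predicted:
                fp[group] += 1
    return {g: fp[g] / neg[g] for g in neg if neg[g]}

# Hypothetical audit data: (group, model said "high risk", re-offended)
sample = [
    ("A", True, False), ("A", False, False), ("A", True, True),
    ("B", True, False), ("B", True, False), ("B", False, False),
]
print(false_positive_rates(sample))  # e.g. {'A': 0.5, 'B': 0.67}
```

If the rates diverge substantially between groups, the model's mistakes fall disproportionately on one group, even when no demographic attribute is used as an input.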

Even when they don’t go wrong, automated decision-making tools can be a headache for local government: how can the decisions reached by such tools be explained to citizens? Do citizens trust AI to make good decisions? And do the government officials commissioning and deploying these tools understand them well enough to decide whether a system is worth investing in?

The challenge for governments is therefore to harness the enormous potential of these new technologies without alienating the people they serve. Last year saw the start of a wave of innovative legislative proposals in the US designed to enhance accountability in these technologies, a first sign that governments are working towards a solution.

Avoiding a black-box government: local government efforts in the US

New York City’s local government has been at the forefront of this algorithmic accountability movement. Last year, the city council introduced a new law requiring that algorithms used in council decision-making be properly assessed, so that an explanation of their decision-making processes can be produced. In some cases, technical information such as source code may also be released to the public.

The city council representative who proposed the law, James Vacca, has said the legislation was introduced “not to prevent city agencies from taking advantage of cutting-edge tools, but to ensure that when they do, they remain accountable to the public” – a motto to live by for any modern administration trying to innovate while retaining the public’s trust. New York Mayor de Blasio has established a task force to investigate the use of AI in the city’s administration, which is expected to publish its first findings at the end of this year.

Other US governments have followed. Earlier this year, Washington State proposed a bill to regulate the use of AI in government and to increase transparency in automated decision-making. The bill envisages that government departments using AI tools must publish algorithmic accountability reports, giving the public an understanding of how decisions are made.

Algorithmic accountability in the EU

The EU is also grappling with these issues. The GDPR, a privacy law that came into force in all EU countries last year, introduced new accountability requirements. Under the GDPR, when automated decisions about individuals are made without a human involved in the outcome, the individuals may need to be provided with “meaningful information about the logic involved”. This new legal requirement has generated a great deal of debate, both technical and legal. Explaining how an AI system reached a decision in an easily intelligible way is currently a hot topic for academic researchers and legal practitioners, yet it remains unclear when, and indeed whether, a solution will become available. Users and developers of AI in the EU will be keeping a close eye on the other side of the pond, to see how city and state governments in the US explain these new decision-making tools to their citizens.
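
What “meaningful information about the logic involved” might look like in practice is easiest to see for simple models. The sketch below assumes a hypothetical linear scoring model; the weights and feature names are invented for illustration. It reports each feature's signed contribution to one individual decision. For more complex models, post-hoc techniques such as LIME and SHAP aim to approximate a similar breakdown, which is one reason explainability remains an active research area.

```python
# A minimal sketch of per-decision explanation for a simple linear
# scoring model. The model, its weights and the applicant data are
# all hypothetical, invented purely for illustration.

WEIGHTS = {"income": 0.4, "missed_payments": -1.5, "years_at_address": 0.2}
BIAS = 0.1

def explain_decision(applicant):
    """Return the score and each feature's signed contribution to it."""
    contributions = {
        feature: WEIGHTS[feature] * applicant[feature] for feature in WEIGHTS
    }
    score = BIAS + sum(contributions.values())
    return score, contributions

score, reasons = explain_decision(
    {"income": 3.0, "missed_payments": 2, "years_at_address": 5}
)
print(f"score = {score:.2f}")
# List the factors that pushed the decision down first, then up
for feature, contribution in sorted(reasons.items(), key=lambda kv: kv[1]):
    print(f"  {feature}: {contribution:+.2f}")
```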