The German Data Ethics Commission has published proposals for risk-based regulation of algorithms. This is just one of many legislative and regulatory proposals and reflects concern about the influence technology has over society. But how do you “regulate” an algorithm? We set out a range of remedies and consider whether they would work in practice.

What is an algorithm?

An algorithm is just a computer program: a series of instructions given to a computer telling it to process data in a particular way. Algorithms can be broadly split into two categories:

  • Traditional algorithms: These are created by a human writing out lines of code. The code is a direct manifestation of the way the programmer wants the computer to operate. However, these algorithms are not always simple and are often created by large teams over many years. For example, since work on the Linux kernel started in 1991, thousands of programmers have developed and improved this project which now runs to over 12 million lines of code. This is orders of magnitude more complex than any contract.
  • Artificial intelligence: There has been significant growth in the use of powerful new algorithms using artificial intelligence. This typically involves providing a computer with vast quantities of data and letting the computer “learn” from that data. This technology sometimes involves the use of a “reward function” which is used by the computer to assess its performance and learn or reinforce the actions that have the most successful outcomes. These algorithms typically operate inside a “black box”, making it very difficult, if not impossible, to understand their internal workings.
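To make the idea of a “reward function” a little more concrete, the sketch below shows a toy learning loop in Python. The states, actions and reward values are purely illustrative assumptions rather than anything drawn from a real system: the reward function encodes what the designer counts as a successful outcome, and the loop simply reinforces whichever actions earn the most reward.

```python
import random

# A minimal sketch of learning from a reward function, assuming a toy setting
# with three states and two actions. Everything here is illustrative.

STATES = (0, 1, 2)
ACTIONS = ("left", "right")

def reward(state, action):
    # The reward function encodes what counts as a "successful outcome";
    # here the designer has (arbitrarily) decided that "right" in state 2 is success.
    return 1.0 if (state, action) == (2, "right") else 0.0

value = {(s, a): 0.0 for s in STATES for a in ACTIONS}  # learned estimates
alpha = 0.1  # learning rate

for _ in range(10_000):
    s = random.choice(STATES)
    a = random.choice(ACTIONS)
    # Nudge the estimate towards the observed reward, reinforcing well-rewarded actions.
    value[(s, a)] += alpha * (reward(s, a) - value[(s, a)])

# After training, (2, "right") carries the highest learned value, i.e. the behaviour
# the reward function was designed to encourage.
```

Even at this toy scale, the learned values are just numbers: they record which actions scored well without explaining why, which is part of what makes larger systems feel like a “black box”.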

Algorithms and society – Why does it matter?

The impact of algorithms on society is significant and will become more pronounced as more and more decisions are delegated by man to machine.

Some impacts are obvious because of a direct human interaction with the algorithm. For example, artificial intelligence is increasingly used to determine if you can get a job. It will scan and assess your CV and can also assess your performance during a video interview. Do you use positive words? Do you smile enough? This is not science fiction; this technology is already in widespread use in the U.S.

In other cases, algorithms are used behind the scenes by businesses and governments and have a more invisible effect. For example, they may determine whether you are eligible for a mortgage, decide what offers or advertisements you see, or set the price you pay for goods and services online.

Finally, algorithms have an important effect on our core democratic values. The explosion in the availability of information on the internet has resulted in a dramatic shift from an information economy to an attention economy. The ability to determine what a person’s valuable attention is focused on has a very powerful impact on their perception of the world. That focus is determined by algorithms that sift, select and promote content based on whatever priorities are hardwired into them, which are likely to be based largely on maximising user interaction.

Providing the proper regulatory framework for these algorithms will be vital to ensure their impact is fair, transparent and socially acceptable.

Algorithmic remedies

The regulatory focus on algorithms has been increasing. The Opinion from Germany’s Data Ethics Commission is interesting and provides one of the most structured proposals to date. It suggests risk-based tiering of regulation, as summarised below, and that these proposals should be implemented as a new directly effective EU Regulation on Algorithmic Standards. In summary, the Commission suggests:

  • Ban (Level 5): Algorithms would be banned where they have “untenable” potential for harm.
  • “Always on” oversight (Level 4): Supervisory institutions would have “always on” oversight of algorithms with serious potential for harm.
  • Other measures (Levels 2-3): Algorithms with potential for lesser harm could be subject to a range of controls including ex-ante approval, transparency obligations, risk assessment, audit or notification obligations to supervisory authorities.
  • No extra regulation (Level 1): Algorithms with zero or negligible prospect of harm should not be subject to extra measures.

Fundamental challenges

These are superficially attractive measures, but how do you “regulate” an algorithm in practice? An algorithm is fundamentally different to a human being. It is not intelligent as such and does not share the same desires and motivations as a human. This means there is no digital equivalent to the “smoke-filled room” in which the algorithm will incriminate itself.

Instead, many regulatory remedies will attach to the code itself or the operation of the algorithm. This raises some difficult issues.

  • Complexity: Most algorithms of interest will be difficult to review and understand. If artificial intelligence is used, the algorithm is likely to operate in a “black box” with its inner workings largely incomprehensible to a human. Even algorithms written using traditional programming techniques will implement complex underlying models and be written in technical code that is accessible only to those with high levels of technical skill.
  • Dynamic: Algorithms are updated frequently to improve their performance and to respond to changes in the environment in which they are used (for example, to counter misuse). This could involve tweaking and tuning aspects of the algorithm, or replacing it with a completely new algorithm. In either event, any remedy needs to be flexible enough to allow rapid changes in the underlying technology.
  • Dynamic environment and data sensitivity: Even where an algorithm remains the same, the environment in which it is used is likely to change dramatically over time. The algorithm may well be chaotic, which would make mapping its inputs to outputs difficult, as even small changes to the initial conditions can lead to large differences in its output (a short sketch illustrating this sensitivity follows this list).
  • Confidentiality and gaming: Finally, the algorithm will be highly confidential and organisations will be very sensitive to it being disclosed to a regulator or being subject to more public scrutiny. This is largely to protect the intellectual property rights in the algorithm, but is also to stop “gaming” of the algorithm. For example, a detailed technical explanation of the algorithms used in search engines could be used by third parties to try to trick the system into giving their website a higher ranking.
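The sensitivity point above can be illustrated with a textbook example. The sketch below uses the logistic map, a standard chaotic system (not any deployed algorithm), to show how two inputs differing by one part in ten million produce entirely different outputs after a few dozen iterations.

```python
# A minimal sketch of sensitivity to initial conditions, using the logistic map.
# Purely illustrative; not drawn from any real, deployed algorithm.

def iterate(x0, r=3.9, steps=50):
    x = x0
    for _ in range(steps):
        x = r * x * (1 - x)  # the logistic map, chaotic at r = 3.9
    return x

print(iterate(0.5000000))  # two starting points differing by one part in ten million...
print(iterate(0.5000001))  # ...end up far apart after 50 iterations
```

If a regulator can only observe inputs and outputs, this kind of sensitivity makes it hard to build a reliable map of how the algorithm behaves.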

A toolkit of strawman remedies

With these challenges in mind, we set out a range of strawman remedies and consider the extent to which they are already available or workable. This assumes that it is necessary to regulate the monkey and not the organ grinder. Where there is evidence the organisation deploying the algorithm intended it to have particular adverse effects, such as to discriminate or collude with competitors, detailed technical inspection of the algorithm may not be necessary.

Compliance: The least intrusive remedies would be to impose duties on organisations using algorithms to take steps to ensure they are used safely and lawfully. It would be up to the organisation how to implement that duty:

  • Risk assessment. The organisation could be required to conduct a risk assessment on the algorithm before using it in the live environment. Under the EU General Data Protection Regulation, organisations must conduct a data protection impact assessment on new processing where it is potentially high risk. This risk assessment could be backed up by a code of practice or seal. For example, the German Data Ethics Commission suggests that algorithms could be subject to voluntary or mandatory quality seals.
  • X by design. There could be requirements to build compliance into the algorithm by design. For example, training and educating the technical development team on their compliance duties and ensuring compliance is hard-coded into the specification for the algorithm. The GDPR contains specific obligations to ensure “privacy by design”. Competition regulators have latched onto this idea, advocating “anti-trust compliance by design”.
  • Transparency and explainability. To help build trust, organisations could be obliged to disclose when they are using algorithms and to explain the operation of the algorithm. Again, the GDPR contains specific obligations to give individuals “meaningful information about the logic involved” in certain algorithms. While this is superficially attractive, regulators accept that detailed technical information would not be practical or useful. Accordingly, this appears to require only a broad and non-specific description. The German Data Ethics Commission goes further and suggests express labelling of the algorithm, for example to help users distinguish between a minimal risk “Level 1” algorithm and a “Level 4” algorithm with the potential to cause serious harm.
  • Human-in-the-loop. There might be certain types of decisions that either should not be taken by an algorithm or ought to be subject to human review. One example is lethal autonomous weapons. This technology raises profound questions as to whether the decision to take human life can ever be delegated to a machine. At a more mundane level, the GDPR gives individuals the right to ask for a human to review certain types of automated decisions taken by an algorithm.
  • Optional application. Alternatively, regulators might order that users have the option to disapply the algorithm. The Filter Bubble Transparency Act is a bipartisan Bill introduced in the U.S. that would require large-scale internet platforms to give users the option of a “bubble-free” view in which the information they see is not algorithmically customised.
  • Practical safeguards. There are a range of other specific controls that regulators might want to mandate. For example, MiFID II contains specific controls on the use of algorithmic trading such as testing and a proper authorisation process before the algorithm goes live.

Technical: Regulators might want to take more direct action and lift up the hood to supervise the technical operation of an algorithm. We consider some options below, though few are likely to be practical.

  • Opening the black box. Regulators might, in theory, want direct access to the code used in the algorithm. However, this will rarely be practical given the complexity and sensitivity of the code, and the frequency with which that code will be updated.
  • Specifying the reward function. Similarly, regulators could seek direct access to the reward function used to train an AI algorithm, which should determine the algorithm’s desired behaviour. Regulators might want to place some controls over reward functions, for example to ensure they do not encourage collusion with competitors (or, from a common-sense perspective, that they do not encourage the AI to replicate itself). However, reward functions are still complex, difficult to interpret and do not always effectively dictate the algorithm’s operation.
  • Circuit-breakers and kill switches. Organisations could be required to apply controls to the algorithm’s operation to prevent abnormal behaviour. This might trigger an alert or even shut down the algorithm. For example, MiFID II requires any algorithmic trading programme to be subject to trading limits and the installation of a “kill switch” should it go rogue.
  • Systematic test cases. Finally, a regulator might want close oversight over what goes into and comes out of the black box. There have been numerous small-scale and academic studies, but regulators might want more extensive testing capabilities, creating very large sets of test cases (as you would as part of normal software testing) and then running those test cases over the algorithm. For example, when testing for pricing discrimination online, regulators could try millions of variations based on geographic location, purchase history, operating system, referral means etc. to build a better picture of the algorithm’s operation (a sketch of this kind of test-case generation follows this list).
  • Document and log. Alternatively, the German Data Ethics Commission suggests express obligations to document and log the operation of an algorithm. This could help regulators verify its operation and carry out an algorithmic post-mortem if it creates serious problems.
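As a rough sketch of what systematic test cases might look like in practice, the snippet below enumerates combinations of user characteristics and records the price quoted for each. The profile fields and the get_quoted_price() function are hypothetical placeholders; in practice the interface would depend on whatever access the regulator is given to the system under test.

```python
# A minimal sketch of systematic test-case generation for a pricing algorithm.
# All field names are illustrative; get_quoted_price() is a hypothetical stand-in
# for whatever access mechanism (API, sandbox, instrumented site) is agreed.

from itertools import product

locations = ["London", "Berlin", "Warsaw"]
operating_systems = ["iOS", "Android", "Windows"]
referrers = ["search_engine", "price_comparison", "direct"]
purchase_histories = ["new_customer", "frequent_buyer"]

def get_quoted_price(profile):
    return 100.0  # dummy value so the sketch runs standalone

test_cases = [
    {"location": loc, "os": os_, "referrer": ref, "history": hist}
    for loc, os_, ref, hist in product(
        locations, operating_systems, referrers, purchase_histories
    )
]

results = [(case, get_quoted_price(case)) for case in test_cases]
```

Comparing the recorded prices across profiles would show which input characteristics, if any, drive systematic price differences; scaling the same idea to millions of variations is largely a matter of compute and access.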

Supervisory and structural: Finally, there is the potential for far reaching supervisory and structural remedies to control the use of algorithms.

  • Ex-ante notification and approval. Organisations could be placed under a duty to notify and obtain approval for certain types of high-risk algorithms. The GDPR already requires regulatory approval for high-risk processing that raises risks that cannot be mitigated. The challenge with this approach is to clearly identify which activities require this approval, how rigorous that approval should be and how that approval would work with algorithms that change dynamically.
  • Skilled person review. Regulators could be given the power to order a skilled person to undertake a review of the algorithm. The UK Financial Conduct Authority already has this power and can require a regulated firm to be reviewed by a third party with specialist skills. Those reviews could extend to the operation of algorithms, and the Financial Conduct Authority already has a panel of experts on technology and information management matters who can act as skilled persons.
  • Independent monitoring authority. There might be limited situations in which a regulator would want to mandate ongoing supervision of the use of the algorithm by an independent body. Independent compliance monitors have been imposed in other contexts, for example in relation to money laundering breaches by banks. However, there is no precedent in relation to algorithms.
  • Structural separation. A more radical suggestion would be some form of structural separation. The part of the organisation responsible for managing the algorithm could be placed in a separate business unit, with clearly defined objectives and incentives, and made independent from upstream and downstream parts of the business. There is some precedent in regulated industries, such as EU telecoms regulation, which contains a “functional separation” remedy aimed at separating wholesale functions and departments from the incumbent’s retail business. However, this is a complex and onerous remedy, often requiring a substantial effort by both the organisation and the regulator.
  • Ban. Finally, there may be some types of algorithm that are not socially acceptable. Facial recognition has been banned in some US cities, like San Francisco, and there are tentative proposals to impose a moratorium on its use in the UK. Given the fluid nature of technology, these bans require careful thought. The UK proposals would ban the operation of “equipment incorporating automated facial recognition technology capable of biometrically analysing those present in any public place”; on that drafting, taking a picture on your iPhone could become a criminal offence.

Algorithms are attracting the attention of multiple regulators and legislative bodies around the world. However, more could be done to co-ordinate this work and to conduct a systematic analysis of the challenges of regulation and thus help ensure algorithms are used in a fair, transparent and socially acceptable manner.

The German Data Ethics Commission proposals are here.