On 1 August 2019, the UK FCA published an article titled "Artificial Intelligence (AI) in the Boardroom"1 in its Insight series of opinion and analysis. Its author, Magnus Falk, a senior technology adviser at the FCA who used to be the UK Government's Deputy Chief Technology Officer, flags that the advent of AI systems means that Boards and senior managers of regulated firms must take business responsibility for the major challenges and issues raised by AI.

These issues cover AI ethics, liability, transparency, accountability and explainability. The article frames them as critical business issues which should not be left solely to a firm's quants or its Ops & Tech teams - instead, they require genuine understanding and engagement among Boards and senior managers, who should explicitly debate and determine their position and approach to these aspects of AI projects.

The FCA published this AI article on the same day as it launched two new webpages2 on its forthcoming 9 December 2019 extension of the Senior Managers and Certification Regime (SM&CR) to solo-regulated firms.

AI Ethics

Regulators, supervisors, firms and politicians are still debating governance around the ethical use of AI. On 8 April 2019, the EU Commission's Independent High-Level Expert Group on AI published their Ethics Guidelines for Trustworthy AI3 after receiving more than 500 responses to their 2018 consultation. The new President of the EU Commission, Ursula von der Leyen, announced in her 16 July 2019 candidacy pitch "My agenda for Europe"4 that - as well as prioritising EU investment in AI - in her first 100 days in office she would put forward legislation for a coordinated EU approach on the ethical implications of AI.

On 16 July 2019, the FCA announced5 the launch of a joint year-long project with the UK's Alan Turing Institute around the use of AI in financial services, aiming to analyse ethical questions and focus on considerations of transparency and explainability as ways to address them. At an international level, the FCA is also leading a workstream on machine learning and AI for the International Organization of Securities Commissions (IOSCO), exploring issues around trust and ethics and what a framework for financial services might look like6.

The FCA Insight article notes that the financial services sector has created new governance processes, responsibilities and activities such as model validation to oversee algorithms used to determine pricing and automate execution, but that AI raises questions which are too nuanced and complex to be dealt with simply by clear lines of accountability. While firms in other sectors have set up ethics committees, the FCA warns Boards of regulated entities to dedicate "time and serious effort" to identifying, analysing and determining ethical questions around their use of AI.

AI Explainability

The great supercomputer Deep Thought in Douglas Adams' radio show The Hitchhiker's Guide to the Galaxy famously spent 7.5 million years cogitating over the answer to Life, the Universe, and Everything – and came up with "42", an answer so inexplicable that an even more powerful computer had to be designed to work out what the question had actually been.

The FCA similarly notes that some of the coding approaches used in AI can produce outcomes which are difficult to explain. The AI section of its 2019 research agenda7 focuses on issues concerning the explainability of decisions made on the basis of black box algorithms, as well as on ethical questions around algorithmic bias.

Another recent FCA Insight article titled "Explaining why the computer says no"8 concluded that the ultimate target should be "sufficient" explainability, calibrated by stakeholder and by context. It noted that senior managers in banks with accountability for the algorithmic models used by their line of business will need to demand an increased level of understanding and ensure that appropriate testing and controls have been implemented. Regulators will want to see evidence that effective accountability is in place, including evidence that a degree of interpretability appropriate to the use case and the stakeholders concerned is being provided, so that the predictions of a machine learning model can be shown not to be driven by statistical quirks or odd data inputs. This will also help address ethical concerns around algorithmic bias.

The FCA's Magnus Falk argues that Board members of regulated entities should stringently probe what "sufficient" explainability means for them and for clients, and show the confidence and integrity to admit when they themselves do not fully understand any aspect of their firm’s use of AI.

AI Transparency

Boards may need to set the approach to, and level of detail of, their firms' client communications around when and how AI is used to make decisions for or about clients, including whether to require prior client consent to their personal data being fed into the firm's algorithms. The FCA article notes that a Board's approach to transparency will reflect the values of the organisation and affect its reputation among its clients; it is therefore no longer a simple control issue but a critical business decision requiring Board-level input.

AI Liability

The FCA article finally highlights that Boards will need to monitor the potential for their firm to take on increased liability when offering client services involving AI, since the use of AI can alter the liability profile of traditional business models.

AI Governance: prudential regulators' concerns

On 25 July 2019 the Dutch Central Bank (DNB) published9 Guidelines for the use of AI in financial services and launched its six "SAFEST" principles for regulated firms to use AI responsibly, framing the soundness, accountability, fairness, ethics, skills and transparency aspects of the AI they develop or use. The DNB notes that, as AI applications increasingly inform a firm's decisions, and as their potential consequences for the firm and its clients grow, the responsibility and accountability standards that regulators apply to their use will become more stringent. As part of its supervision of financial institutions, the DNB will critically assess the potential impact of firms' AI applications.

A speech on 4 June 2019 by the Bank of England's Executive Director of UK Deposit Takers Supervision, James Proudman, titled "Managing Machines: the governance of artificial intelligence"10, focused on the increasingly important strategic issue of how Boards of regulated financial services firms should govern the introduction of artificial intelligence. He noted that it is a mantra among banking regulators that governance failings are the root cause of almost all prudential failures, making this a topic of increasing concern to prudential regulators.

Mr Proudman cited a recent Bank of England survey of AI deployment among regulated firms, in which firms reported that, properly used, AI and machine learning would lower risks, for example in anti-money laundering, know-your-customer and credit risk assessment. But there was also acknowledgement that, incorrectly used, AI could give rise to new and complex risk types, implying new challenges for Boards and senior managers. He noted three governance challenges for Boards:

  • Data: the introduction of AI and machine learning poses significant challenges around the proper use of data, suggesting that Boards must prioritise the governance, modelling and testing of data, and question whether the outcomes derived from it are correct.
  • Accountability: the introduction of AI and machine learning transforms the role of human incentives in delivering good or bad outcomes, meaning that Boards should continue to focus on oversight of human incentives and accountabilities within AI-centric systems.
  • AI transition execution risks: acceleration in the rate of introduction of AI and machine learning will create increased execution risks during the transition, which will need to be overseen. Boards must consider the range of skill sets required to mitigate these risks at Board level and among senior management, as well as changes in control functions and risk structures. Transition may also create complex interdependencies between parts of firms usually treated as largely separate, many of which can only be brought together at, or near, the top of the organisation.

To cite the Bank of England by way of conclusion:

"In a more automated, fast-moving world of AI and machine learning, Boards – not just regulators – will need to consider and be on top of these issues. Firms will need to consider how to allocate individual responsibilities, including under the Senior Managers Regime."11