On 16 October 2019, the Financial Conduct Authority (FCA) and the Bank of England (BoE) published a research note summarising the findings of a joint survey (Survey) completed by 106 industry respondents. The Survey was conducted to provide an indication of the current use of machine learning (ML) in the UK financial services industry.

The questions covered (i) the types of ML currently used, (ii) the business areas into which ML applications had been integrated in practice, and (iii) the maturity of the ML applications in use. The Survey also examined the ML models firms currently had in place, including, but not limited to, their safeguards, risk identification processes and the types of data used.

Main Findings

  • ML is increasingly being used in UK financial services: The Survey found that two-thirds of respondents had already implemented some form of ML application in their offerings or business models, with the median firm using ML in two business areas. This use of ML was expected to increase two-fold over the next three years.
  • In many cases, ML has moved beyond the initial development phase: The FCA and BoE's findings suggest that ML applications have begun to enter more “mature” stages of development, with one-third of applications in use across multiple areas of a firm's business. The most developed examples of ML deployment were found in the banking and insurance sectors.
  • From front office to back office, ML is now used across a range of business areas: The Survey responses highlight that ML applications are most actively used in anti-money laundering and fraud detection, although many respondents also noted their use in customer services and marketing. Other notable business areas where ML applications were found to be used include (i) credit risk management, (ii) trade pricing and execution, and (iii) general insurance pricing and underwriting.
  • Regulation is not seen as an unjustified barrier: The research note reported that, although firms did not see current regulation as an “unjustified barrier” to ML deployment, they indicated that further regulatory guidance would be helpful. The biggest barriers to further use of ML were found to be internal to firms, namely legacy IT systems and data limitations.
  • Firms thought that ML does not necessarily create new risks: Respondents considered the risks associated with ML applications not to be “new”. Rather, firms were concerned that ML could amplify existing risks if model validation and governance frameworks were unable to keep pace with the speed of technological development.
  • Firms validate ML applications before and after deployment: The most common forms of validation were found to be outcome-focused monitoring and testing against benchmarks. That being said, many firms noted that validation frameworks would still need to be developed and improved in line with the “nature, scale and complexity” of ML applications.
  • Firms use a variety of safeguards to manage the risks associated with ML: The Survey found that the most common forms of safeguards implemented by firms were alert systems and ‘human-in-the-loop’ mechanisms, with more than 60% of respondents using these methods. The report also noted varying levels of further safeguards being implemented, such as (i) back-up systems, (ii) ‘guardrails’, and (iii) kill switches.
  • Most firms design and build their ML applications in-house: The majority of respondents developed their ML applications in-house, although some were found to outsource the underlying platforms and infrastructure, such as cloud computing, to third-party providers.
  • Firms typically apply their existing model risk management framework to ML applications: Although most firms applied their existing model risk management frameworks to ML applications, many accepted that these frameworks would need to evolve to match the “increasing maturity and sophistication” of ML applications. The BoE reached the same conclusion in its response to the Future of Finance report.

Next Steps

What is clear from the Survey is that financial institutions are aware of the possibilities and applications of ML and that, as the technology develops further, it is likely to form a key part of the future delivery of financial services to customers. That being said, the risks inherent in ML should not be overlooked, and UK regulators are taking a cautious approach to ensure that firms have safeguards in place to identify, understand and manage those risks.

In light of the findings of the Survey, the FCA and BoE have stated their intention to explore further ML-related policy and have announced plans to establish a public-private working group on Artificial Intelligence to address some of the questions raised by the Survey. The research note also raises the possibility of repeating the Survey in 2020.