The PRA and FCA have published a feedback statement on artificial intelligence (AI) and machine learning, with responses from across the financial services industry, including insurance. This follows an earlier Discussion Paper (DP 5/22). Although the statement does not contain any policy proposals, some of the feedback in the paper is an interesting reflection of the industry’s views on this evolving area. Some of the key points include:
- A principles-based or risk-based approach to AI is likely to be appropriate, with a focus on its specific characteristics and risks. The use of high-level principles will give firms and regulators the ability to adapt to technological developments.
- AI capability is likely to continue changing rapidly, and regulators will need to be alive to this, for example by maintaining “live” regulatory guidance, i.e. regularly updated guidance with real-life examples.
- Greater coordination and alignment between different regulators, both domestic and international, in relation to AI and data risks would be very helpful.
- The key focus of regulation should be consumer outcomes, especially with regard to fairness and other ethical aspects. This would be in line with existing regulation.
- Existing firm governance structures, such as the Senior Managers and Certification Regime, are sufficient to address AI risks.
Comments on the risks of AI included the following:
- The speed and scale of AI could give rise to new systemic risks, such as the emergence of new forms of market manipulation, the use of deepfakes for misinformation, third-party AI models producing convergent behaviour (including digital collusion or herding), and the amplification of flash crashes or automated market disruptions.
- A risk for firms is insufficient skills and experience within firms to support the level of oversight needed to ensure effective risk management.
- Novel challenges from AI might include: the risk of its use for fraud and money laundering; the difficulty of determining whether AI models have been compromised by cyber attacks; and the fact that the risks of generative AI are not properly understood, while consumers may nevertheless rely on GenAI as a source of financial information.