On 8 November 2023, Neil Robson, a partner in the Financial Markets and Funds practice, moderated a panel discussion on the regulatory divide for artificial intelligence (AI) across the UK, EU and US, with a focus on providing legal and compliance insights for the asset management industry. Following the event, we reflect on the following key takeaways from the panel discussion:

  1. Approach to AI regulation. The EU, UK and US are taking different approaches to the regulation of AI. The EU is legislating to implement a prescriptive, rules-based approach to AI governance, with clear guidelines and definitions, although the proposed EU AI Act has yet to become law. The UK is taking a more pragmatic, principles-based approach, with UK regulators exploring how AI can be "slotted into" existing frameworks. By contrast, US regulators are focusing on restricting certain uses of AI.
  2. AI within asset management. AI is already being used within the asset management industry. Research suggests that approximately: (i) one third of market participants use AI to detect fraud in transactions and payments; (ii) 17% use AI for compliance; and (iii) 21% use AI when conducting algorithmic trading. Other uses of AI in the asset management industry include portfolio optimisation and condensing high volumes of research.
  3. Key regulatory and compliance challenges. Key challenges when deploying AI in asset management include: (i) possible disinformation, "hallucination" and bias in the information generated; (ii) lack of clarity regarding record retention, e.g., the scope of documents to be retained; (iii) uncertainty as to who ought to be accountable for the deployment and management of AI within the business; (iv) difficulty identifying the "right" type of AI tools to be used, and the relevant contractual safeguards; (v) security risks surrounding the information being fed into AI systems; and (vi) the possibility of breaching copyright and data privacy laws.
  4. The AI Safety Summit 2023. The recent AI Safety Summit held at Bletchley Park in the UK focused on regulating AI in a "safe way". Key outcomes of the Summit included the signing of the Bletchley Declaration, in which the participating countries agreed to work together to ensure AI would be designed, developed, deployed and used in a manner that is safe for humans.
  5. UK governance of AI. The UK Financial Conduct Authority and the Bank of England’s feedback statement on AI and machine learning dated 26 October 2023 suggested that existing firm governance structures and regulatory frameworks (such as the Senior Managers and Certification Regime) are sufficient to address AI risks.
  6. There is still a long way to go for AI, but the technology is here to stay! Firms should continue to minimise AI risks by conducting staff training and implementing policies and procedures. It will be interesting to follow how the regulation of AI develops across the UK, EU and the US, and how firms manage any divergence and nuances across such regulation.