COMPLIANCE RISKS BEHIND THE APPLICATION OF AI IN FINANCIAL SERVICES
Regulators, as well as major players in financial services, have identified the risks behind AI applications: Bank of America requires that any "AI system that makes a judgement about a customer needs to be able to explain itself". The Monetary Authority of Singapore issued principles in 2018 to promote fairness, ethics, accountability and transparency (FEAT) in the use of AI and data analytics in the financial sector. And FINMA already noted in Circular 2013/8: "Supervised institutions must document the key features of their algorithmic trading strategies in a way that third parties can understand."
Another minefield is the GDPR (DSGVO). A number of its provisions have to be observed in AI projects, and AI is increasingly on the radar of data protection authorities and consumer protection organisations.
Further regulations are on the agenda of legislators.
COMPANIES ARE FACED WITH A VARIETY OF QUESTIONS:
- What does the regulatory environment for AI projects look like? What duties do corporations have?
- What are the legal risks behind AI applications? How can these risks be minimized?
- How can you explain algorithms, both internally and to your customers?
- What points should be considered when AI software is purchased from third parties?
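The explainability demanded above (an AI system "needs to be able to explain itself") can be illustrated with a minimal sketch: a linear scoring model whose per-feature contributions can be shown to the customer alongside the decision. The feature names, weights, and threshold below are hypothetical, chosen purely for illustration; real credit models and their regulatory documentation are far more involved.

```python
# Minimal sketch of an explainable scoring decision.
# All feature names, weights, and the threshold are hypothetical.
WEIGHTS = {"income": 0.4, "years_employed": 0.3, "open_defaults": -0.8}
THRESHOLD = 0.5

def score_with_explanation(applicant):
    """Return a decision plus each feature's signed contribution to it."""
    contributions = {f: WEIGHTS[f] * applicant[f] for f in WEIGHTS}
    total = sum(contributions.values())
    decision = "approve" if total >= THRESHOLD else "decline"
    return decision, contributions

decision, why = score_with_explanation(
    {"income": 2.0, "years_employed": 1.0, "open_defaults": 1}
)
# The breakdown, sorted by impact, is what can be disclosed to the customer:
for feature, c in sorted(why.items(), key=lambda kv: abs(kv[1]), reverse=True):
    print(f"{feature}: {c:+.2f}")
print("decision:", decision)
```

With a linear model the explanation is exact; for black-box models, the same disclosure obligation is typically met with post-hoc attribution techniques, which is one reason model choice itself becomes a compliance question.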