The Big Data Challenge

Earlier this month, we highlighted that there will be an increased focus by the Data Protection Commission (DPC) on ensuring privacy by design and default during 2020. From the regulator’s perspective, innovation is encouraged, provided it is done in an accountable, ethical and fair way. Already, with the codification of privacy impact assessments and accountability under the GDPR, we are seeing clients realise that key decision-makers in the organisation ought to be fully briefed on how risks are mitigated in Big Data and artificial intelligence (AI)-enabled technology projects.

As we reported back in June, the promise of new technology, such as AI-based automation and enhancement, has turned many businesses into data hoarders. That approach may not work for long, as the EU recently urged European businesses to capitalise on their vast data resources.[1] The use of AI solutions is one way to do so. The holy grail for many financial services (FS) firms, in particular, is the single customer view: mine the data for knowledge and automate delivery, in order to enhance the relationship with the customer. This can be achieved, but not without large databases of quality data and very sophisticated technology solutions. Investment in both continues to grow.

Legislating for AI

Much of the focus in terms of regulatory and compliance risk in relation to AI-enabled innovation is on privacy. But that lens is too narrow. Indeed, current EU legislation on data protection, competition and consumer protection does not define ‘big data’ clearly, or at all, which arguably creates a regulatory blind spot that will need to be addressed.

The EU is advocating the need to prepare for the socio-economic changes brought by AI and to ensure an appropriate ethical and legal framework for it. Last year, the EU Commission published the Ethics Guidelines for Trustworthy Artificial Intelligence and the Report on liability for Artificial Intelligence and other emerging technologies. Expect more during 2020 from the Expert Group on Liability & New Technologies, particularly on whether existing legal frameworks, such as the product liability regime, are fit for purpose when it comes to AI-enabled products.

The state of country-level regulatory activity in relation to AI currently varies across Europe. However, regulators and legislators in most countries recognise the importance of AI and have started formulating their policies.

In the US, the White House issued guidelines for federal agencies on how to approach AI regulation, which emphasise the need for a proportionate approach in which less regulation may be preferred:

“Fostering innovation and growth [of AI] through forbearing from new regulations may be appropriate”.

The recently proposed Algorithmic Accountability Act 2019, while still in draft, is an example of a concrete legislative attempt in the US to address concerns about the ethical and accountable use of AI. It would require bias and security impact assessments (thereby going beyond privacy concerns) to be conducted by a wide range of players before implementing new AI-based products and services.


Much of the regulatory activity internationally is being informed by the OECD’s values of trustworthy AI, which include: AI driving inclusive and sustainable growth, diversity and fairness, transparency, security, and accountability.[3]


Regulatory Guidance in the Financial Services Sector

In relation to financial services, we think the distinction between privacy and more ‘general’ regulation will become increasingly blurred due to greater co-operation and convergence between the two regulatory competencies in relation to AI-enabled innovation.

As the old adage goes, you will get out of [AI] what you put into it. Bad, unstructured data makes for unsatisfactory results and, sometimes, additional risk.

The Central Bank of Ireland (CBI) has publicly stated that there is a significant challenge with data sourcing and management amongst firms it regulates:


“Firms need to have a single source of their key data if they are to rely on it for critical intelligence and decision-making. Those that manage this transition best are likely to be the firms that survive and thrive.” [4]


In other words, banks and other financial services firms need to invest substantially more in their technology and data capabilities. Effective management of technology risk requires IT systems that are integrated and up to date. Data cannot be captured, interrogated and exploited in business and operational silos. The governance, legal and risk management structures and skill sets in financial services firms should not be playing catch-up with technological innovation.

The CBI is not alone in expressing such views. Just like the potential of AI technology, the regulatory challenges and perspectives around AI are also borderless.


The Monetary Authority of Singapore (MAS) has confirmed that, when used responsibly and effectively, AI has significant potential to improve business processes, mitigate risks and facilitate stronger decision-making. It worked closely with the Personal Data Protection Commission (PDPC) to develop a set of principles for firms to use in their internal governance structures to govern the use of technologies that assist or replace human decision-making.[5] The principles are built around four key concepts. Below, we give a brief outline of each, an example of how failure to adhere to such principles might manifest itself when using AI in financial services, and a possible risk mitigant:

The MAS confirmed that existing risk management models need some refinement to deal with the challenges of AI-enabled innovation.

In October 2019, the Basel Committee on Banking Supervision’s (BCBS) Supervision and Implementation Group (SIG) held a workshop on the use of AI and machine learning (ML) in the banking sector, at which participants publicly confirmed a view that AI models may amplify traditional model risks for banks.[6] A key area of risk highlighted by participants was the quantity and quality of vast data sets, data access and engagement with third parties that use or store data. This also raises the challenge of effective third-party vendor management in support of an AI strategy.


The European Banking Authority has gone a step further. It considers credit scoring a clear example of where AI-enabled manipulation of data can deliver real benefits for banks. It highlighted the need to manage legal, conduct and reputational risk, but also noted that the risk could be higher if external providers are involved in such AI-enabled solutions than it is for services developed in-house. It also flagged that ICT change and security risk may increase, as ICT systems would need to develop to be more open to different data sources or technology providers and to allow more agility in the use of data. Therefore, the procurement, diligence, negotiation and contract management strategy within organisations should support any AI innovation strategy. In Part 2, we will take a closer look at third-party vendor-supported AI.

Financial services regulators are promoting principles and giving guidance through public statements to set the expectation that firms’ governance models must be fit for purpose when applied to AI-enabled innovation. In time, these regulators will become more proactive in asking firms to demonstrate that they fully understand their data assets, and to explain how that data is exploited and how the associated risk is mitigated when using AI-enabled technologies. Financial services firms should develop a coherent AI strategy now, in a way that anticipates how they will answer that question when it inevitably comes.