FCA Chair hints that new regulation addressing data ethics in the FinTech space may be on the horizon.
Will societies of the future be ruled by algocracy, in which algorithms decide how humans are governed? Charles Randell, Chair of the Financial Conduct Authority (FCA) and Payment Systems Regulator, addressed how to avoid this hypothetical scenario in a wide-ranging speech that he delivered on 11 July 2018 in London.
Contributing Factors to an Algocracy
According to Randell, the following three conditions could collectively give rise to a future algocracy:
- If a small number of major corporations were to hold the largest datasets for a significant number of individuals (as is currently the case)
- Continuing vast and rapid improvements in artificial intelligence and machine learning that allow firms to mine Big Data sets with greater ease and speed
- Further developments in behavioural science allowing firms to target their sales efforts by exploiting consumers’ decision-making biases
Threats to the Free Market
Randell argued that the existing, liberal paradigm of the free market, which stands on the principle of caveat emptor, may not be equipped to deal with the advent of Big Data in such circumstances. The principle of caveat emptor assumes that consumers are capable of making good choices if they are provided with fair disclosure, and that it is therefore reasonable to hold the consumer responsible for their decisions. However, the problem with applying this principle to the online world is two-fold:
- Consumers’ ability to make informed decisions is constrained by the evolving pace of technology and the complexity of the use cases of Big Data.
- Expecting consumers to read the plethora of terms and policies that they face on a daily basis is not feasible.
Randell quoted a striking statistic from a study conducted 10 years ago in the United States. At that time, researchers estimated that it would take the typical online service user approximately 250 hours every year to actually read the privacy policies presented before clicking ‘Agree’. Given the growth of online services in the intervening decade, this figure would be far larger if the study were repeated today.
Randell also questioned the implications of considering individuals merely as data points. He provided a handful of real-world examples, including a New York Times report finding that some credit card companies in the US began reducing cardholders’ credit limits when marriage guidance counselling charges appeared on credit card bills (seemingly based on the fact that the breakdown of marital relationships correlates with debt default). He also pointed to a number of media reports earlier this year that claimed price comparison websites provided significantly higher quotes for insurance premiums to people whose names suggested that they were members of ethnic minority groups.
Randell called on society as a whole, and policy makers in particular, to “think about how to mitigate the risk that an algocracy exacerbates social exclusion and worsens access to financial services in the way that it identifies the most profitable or the most risky customers”. He went on to define three pillars of good innovation (i.e., innovation that is sustainable, and which benefits society in the long run), to which financial services firms should adhere: purpose, people, and trust.
According to Randell, a firm’s purpose should be to create long-term value rather than simply to maximise revenue by exploiting consumer bias. He argued that the latter approach is fundamentally short-sighted, and is therefore likely to be short-lived. Randell expressed the view that people must remain at the centre of a firm’s purpose and governance; furthermore, people (rather than machines) must, ultimately, be accountable for whether outcomes are ethically acceptable. Finally, according to Randell, financial services firms must create and maintain trust by remaining part of the communities they serve, understanding society’s views of the fair use of personal data, and clearly articulating their own values, so that consumers understand and accept a firm’s approach to using consumer data. He queried whether all businesses should have a data charter, and if so, whether the requirement should be enforced by means of voluntary codes of practice or by regulation.
In conclusion, Randell stated that Big Data presents the UK financial services industry with an opportunity to take a leadership role in promoting innovative technology and effective regulation, and to contribute to developing new standards for data ethics. Although Randell’s speech provided a thought-provoking look at the use of Big Data in the financial services industry, the underlying message could apply across sectors.
Promisingly, the FCA once again appears to be leading explorations of innovative technology, as well as the development of meaningful guidance for financial services providers. Randell’s comments align with Information Commissioner Elizabeth Denham’s remarks at the Alan Turing Institute on 23 March 2018. Denham acknowledged the importance of algorithms, as well as the interplay of law, technology, and data ethics that allows people to use such algorithms in everyday life.
The fact that both regulators are recognising the opportunities offered by artificial intelligence and similar technologies is encouraging. Furthermore, both regulators are speaking openly about how such use cases can be implemented within the framework of regulation, and about the challenges of doing so. One message is clear: operating in a black box is not an option. Trust, and therefore transparency, are fundamental to ensuring that Big Data facilitates a bright future. Indeed, the ICO has publicly announced its intent to launch a sandbox modelled on the FCA’s Project Innovate Sandbox.
Therefore, while organisations may still lack a clear answer as to how they can apply Big Data in a legal, transparent, and ethical way, they can take reassurance that UK regulators will support this endeavour.