In a recent speech[i] delivered at “Fintech and the New Financial Landscape” in Philadelphia, Federal Reserve Board Governor Lael Brainard discussed how technology is changing the financial landscape and the lessons being learned about artificial intelligence (AI) in financial services. This article provides key takeaways from the speech for practitioners, with an emphasis on compliance considerations relating to the use of AI in the provision of financial services.
Financial services firms are devoting increasing amounts of money, attention, and time to developing and using AI approaches. This trend is driven by several significant factors, including the increased accessibility of the three key components of AI – algorithms, processing power, and big data. Perhaps equally significant, Governor Brainard highlighted the main reasons underlying this heightened interest in AI approaches. Some of these reasons are competitive in nature, such as the view among financial services firms that AI approaches could offer cost efficiencies without compromising output and performance, and that the greater degree of automation AI approaches provide (e.g., requiring less human input and therefore presenting less opportunity for human error) may foster greater accuracy in processing. Financial services firms also may see opportunities that AI – especially its potential for better predictive power – may provide in their efforts to improve investment performance or expand credit access.
We are already seeing the ways in which AI is and will continue to impact the banking sector. These impacts range from combining expanded consumer data sets with new algorithms in credit underwriting and insurance pricing to the use of chatbots (rather than live operators) to provide assistance and financial advice to consumers. As it relates to compliance and risk management by banks, Governor Brainard highlighted how AI approaches are already being used by some financial institutions in areas like fraud detection, capital optimization, and portfolio management.
The use of AI in financial services is still in its early stages but is receiving increased attention and regulatory scrutiny. Regulators are aware of and paying attention to the application of AI in the financial services industry. Governor Brainard highlighted the efforts of the Fintech working group, which is working across the Federal Reserve System to develop a better and more nuanced understanding of the potential implications of AI for financial services, especially as they relate to regulatory and supervisory responsibilities. Based on Governor Brainard’s comments, it appears that regulators are open to designing regulation and supervision in a way that ensures appropriate risk mitigation while fostering an environment that is receptive to “responsible innovations” that could provide important benefits to consumers, small businesses, and the marketplace. Regulators also seem sensitive to the importance of ensuring a level playing field, so that banks and non-banks alike can pursue “responsible innovations.”
The regulators are first looking to existing laws, regulations, guidance, and supervisory approaches as they begin to evaluate the appropriate regulatory and supervisory approach for AI uses. Perhaps most significantly, Governor Brainard’s speech indicates that the regulators have concluded that certain laws, regulations, and guidance appear particularly relevant and well suited to the use of AI tools. These include the Federal Reserve’s “Guidance on Model Risk Management” (SR Letter 11-7), which underscores from a safety and soundness perspective the importance of ensuring that the use of models (which include complex algorithms like AI) is appropriately analyzed, monitored, and challenged (as necessary) at all stages in the lifecycle – including development, implementation, and use. This guidance emphasizes the supervisory expectation to conduct sound independent reviews of such modeling. Also of note is Governor Brainard’s comment that the Federal Reserve’s regulatory guidance on vendor risk management (SR 13-19/CA 13-21), combined with the prudential regulators’ guidance on technology service providers, “could be expected to apply as well to AI-based tools or services that are externally sourced.”[ii] As Governor Brainard commented, most banks will need to rely on non-bank vendors in order to take advantage of AI approaches, including
chatbots, anti-money-laundering/know your customer compliance products, and new credit evaluation tools. In addition, Governor Brainard suggested that financial institutions should mirror the regulators’ risk-based supervisory approach in their use of different AI approaches. According to Governor Brainard, “the level of scrutiny should be commensurate with the potential risk posed by the approach, tool, model, or process used,” which highlights her view that financial institutions should apply greater scrutiny, review, and oversight to AI tools they use “for major decisions or that could have a material impact on consumers, compliance, or safety and soundness.”[iii] In particular, the regulators expect financial institutions “to apply robust analysis and prudent risk management and controls to AI tools, as they do in other areas, as well as to monitor potential changes and ongoing developments.”[iv]
For those financial institutions doing business in the consumer finance space, it is important to be aware of fair lending and other consumer protection risks and to ensure compliance with fair lending and other consumer protection laws. While AI may offer consumer benefits, such as increasing access to credit by using non-traditional criteria in the credit underwriting process and reaching into “unbanked” and “underbanked” communities, its use does not come without risk. Financial institutions should not assume that AI approaches are without bias simply because of their automation. As Governor Brainard stated, “[a]lgorithms and models reflect the goals and perspectives of those who develop them as well as the data that trains them and, as a result, AI tools can reflect or ‘learn’ the biases of the society in which they were created.”[v] The requirements contained in the Equal Credit Opportunity Act (ECOA) and the Fair Credit Reporting Act (FCRA) for creditors to provide notice of the factors involved in taking actions that are adverse or unfavorable to the consumer are intended to accomplish several key policy objectives: they provide greater transparency in the underwriting process, promote fair lending by requiring creditors to explain their decision-making process, and give consumers important information to improve their credit position. Keeping these requirements in mind, it is also important to remember that the complexity and opacity of some AI tools may make it difficult to explain credit decisions to consumers.
While not all potential consequences of AI are knowable at this time, financial institutions should remain vigilant and adopt sound controls now to prevent and mitigate current and possible future problems. As Governor Brainard points out, the history of banking instructs us that new products and processes are an area where problems and risks often arise. The same goes for AI. There have been examples of AI approaches not functioning as expected, which signals that this is not a risk- or error-free area, and that things can and sometimes do go wrong. “It is important for firms to recognize the possible pitfalls and employ sound controls now to prevent and mitigate possible future problems,” stated Governor Brainard.
Like the rest of us, the regulators are still learning how AI tools can be used in the banking sector, but they are open to conversation and feedback. Technological change and innovation are occurring at a fast pace. As is commonly said, laws and regulations (and the regulators themselves) struggle to keep up with advances in technology, and this may be especially so at the intersection of fintech and banking. Balancing stability and innovation is not always an easy task. But the regulators seem open to discussion and feedback concerning the AI approaches and other innovations that banks and other financial services firms are exploring, and how such approaches and innovations may intersect with existing laws, regulations, guidance, supervisory approaches, and policy interests.