Financial institutions and financial technology companies are increasingly using artificial intelligence (“AI”), including machine learning, to offer and deliver products and services to consumers. The U.S. Consumer Financial Protection Bureau (the “CFPB”) continues to issue legal interpretations and policy guidance on AI and machine learning innovations. These pronouncements portend increasing supervisory scrutiny of, and enforcement actions based on, the AI and machine learning algorithms used by companies subject to the CFPB’s supervision.

The CFPB reoriented its approach to innovation in May 2022 by creating a new Office of Competition and Innovation to replace the Office of Innovation and Operation Catalyst and by eliminating its No-Action Letter and Compliance Assistance Sandbox programs. The CFPB terminated a no-action letter at the request of the recipient, in part, ostensibly to facilitate time-sensitive AI innovation, and issued guidance clarifying how the CFPB views the interplay between AI and the Equal Credit Opportunity Act (“ECOA”) and its implementing regulation.

This Reed Smith commentary describes the major elements of, and context for, the CFPB’s most recent AI actions. It concludes with suggested steps that companies subject to the jurisdiction of the CFPB or the federal banking and credit union regulators (the “federal financial institution regulators” or the “regulators”) should consider taking to diminish or avoid adverse actions related to their use of AI, including machine learning algorithms.

I. The CFPB and Federal Regulators Focus on Impacts of AI

Financial institutions and financial technology companies (“financial entities”) often employ some form of AI, machine learning or data analytics in connection with credit underwriting, risk management, cybersecurity, customer experience and service, fraud detection, reporting of suspicious transactions, and more. The COVID-19 pandemic accelerated the adoption and use of digital and AI-based applications and catapulted AI into the center of business operations. AI has emerged as an important part of the core business strategies of financial entities. For example, financial entities are increasingly using AI in their credit underwriting processes, and the size and complexity of the data sets and machine learning models used to refine underwriting decisions have grown in tandem.

In a Request for Information (“RFI”), the CFPB and the other federal financial institution regulators expressed support for responsible innovation that identifies and manages the risks associated with new technologies. Through the RFI and other regulatory and supervisory efforts, the regulators have gathered substantial information on the purposes for which financial entities use AI, the governance, risk management, and controls those entities have adopted, and the challenges they face in developing, adopting, and managing AI. The regulators continue to deepen their understanding of financial entities’ use of AI, including machine learning, in all aspects of the development and delivery of financial products and services. They are also continuing to develop their views on the potential benefits and risks that AI and machine learning pose to financial entities, their customers and employees, and the regulators’ own supervisory and regulatory oversight.

Financial entities’ investment in AI, machine learning and data analytics offers an important path to delivering better products and services to customers, producing innovations and business ventures that can be reconfigured quickly to keep pace with customer demands and market developments. In both retail and commercial contexts, these technologies can improve customer options and access to financial services, reduce costs for financial entities and their customers, and improve financial entities’ efficiency and profitability.

Although the federal financial institution regulators have expressed support for the potential benefits of using AI and machine learning, at the same time, the regulators see the potential for increased model, operational, internal control, information security, cyber, and third-party risks to financial entities. In addition to these safety and soundness risks, the regulators see the potential for AI to pose risks to financial entities and their customers regarding potential violations of privacy laws, unlawful discrimination, and unfair or deceptive acts or practices under the Consumer Financial Protection Act or the Federal Trade Commission Act.

In this regard, during an October 2021 joint press conference with the Department of Justice and the Office of the Comptroller of the Currency announcing an AI-based enforcement action, the Director of the CFPB stated his views on the possibility of bias in AI algorithms:

We should never assume that algorithms will be free of bias. If we want to move toward a society where each of us has equal opportunities, we need to investigate whether discriminatory black box models are undermining that goal.

Similarly, in February 2022, the CFPB outlined options to ensure that computer models used to help determine home valuations are accurate and fair. In announcing the proposed standards to help protect homebuyers and homeowners during the valuation process, the Director of the CFPB stated:

It is tempting to think that machines crunching numbers can take bias out of the equation, but they can’t. . . . This initiative is one of many steps we are taking to ensure that in-person and algorithmic appraisals are fairer and more accurate.

II. Recent CFPB Actions Regarding AI

Recent CFPB actions demonstrate that the CFPB continues to intensify its focus on financial technology, AI and machine learning. In May 2022, the CFPB adopted a new approach to innovation in consumer finance, announcing the creation of the Office of Competition and Innovation to replace the Office of Innovation and Operation Catalyst. In making this announcement, the Director of the CFPB explained:

Competition is one of the best forms of motivation. It can help companies innovate and make their products better, and their customers happier. [The CFPB] will be looking at ways to clear obstacles and pave the path to help people have more options and more easily make choices that are best for their needs.

Concurrent with this announcement, the CFPB eliminated its No-Action Letter and Compliance Assistance Sandbox programs because, according to the accompanying CFPB press release, these programs “proved to be ineffective and . . . some firms participating in these programs made public statements indicating that the [CFPB] had conferred benefits upon them that the [CFPB] expressly did not.”

Shortly thereafter, at the request of Upstart Network (“Upstart”), the CFPB terminated the second three-year no-action letter it issued to Upstart in late 2020 to enable the use of AI for pricing and underwriting consumer loans. In April 2022, Upstart had sought a modification of its no-action letter to add new variables to its AI algorithm, and the CFPB had requested more time to consider the changes. According to the CFPB, Upstart requested early termination of its no-action letter in order to make time-sensitive model changes that would not be possible if the CFPB conducted “an appropriate level of monitoring and review” pursuant to the no-action letter.

Finally, the CFPB issued Consumer Financial Protection Circular 2022-03, entitled “Adverse action notification requirements in connection with credit decisions based on complex algorithms” (the “CFPB Circular”), which clarified the application of ECOA to the use of technology and AI algorithms in credit decisions. The CFPB Circular applies the requirements of ECOA and its implementing Regulation B to all credit decisions, regardless of the technology used to make them. Adverse action notices provided to an applicant must include the reasons for the adverse action, even when underwriting algorithms and AI are used.

The CFPB Circular affirms that ECOA does not permit the use of “black-box” models or complex algorithms for credit decisions “when doing so means [creditors] cannot provide the specific and accurate reasons for adverse action.” A creditor violates ECOA and Regulation B if its use of technology or algorithms leaves it unable to provide a statement of the specific, principal reasons for an adverse action. In short, if a creditor’s technology for evaluating applications is too complex to understand, that complexity is not a defense to non-compliance with the adverse action notice requirements. The CFPB also notes that its scrutiny of “black-box” models and algorithms extends beyond adverse action notices, referencing its recent spotlight on automated valuation models.

III. Implications of the CFPB’s Actions and What’s Next?

The recent CFPB actions described in this Reed Smith commentary, together with prior CFPB actions and announcements, demonstrate that the CFPB is intensely focused on misuses of AI in connection with financial entities’ offering and provision of consumer financial products and services. The CFPB’s focus may well result in increased supervisory and legal scrutiny and a greater number of enforcement actions going forward.

The CFPB Circular and other announcements regarding the application of ECOA and Regulation B, from application through the term of a loan, all suggest that the CFPB plans to conduct examinations and pursue enforcement actions for acts and omissions that constitute violations. Lenders that rely on AI and machine learning should ensure that their ECOA policies and procedures address the use of AI and machine learning with respect to adverse action notices, and should make any changes to their models and algorithms needed to identify the reasons for adverse action determinations. It is important to conduct a careful, deliberate review and to test decisions on loan applications for unlawful discrimination and other violations of law. Model updates should be thoroughly evaluated and tested both before and after implementation.

Financial entities subject to the jurisdiction of the CFPB should evaluate the laws and rules, supervisory guidance and announcements that may be relevant to AI to help understand and address potential problems in their legal, compliance and risk environment. In the Appendix to their March 2021 RFI, the federal financial institution regulators identified a comprehensive set of laws and regulations, supervisory guidance and statements, examination manuals, procedures and other resources that may be relevant to AI. While the regulators’ RFI list is not exhaustive, it is a good starting point because it covers existing laws and regulations relating to both safety and soundness standards and consumer protection.

Many of the laws and rules in the regulators’ RFI list apply to processes and tools that a financial entity uses generally, and in connection with the entity’s use of AI, machine learning and data analytics. Future regulatory efforts may provide more granular explanatory guidance, such as around the factors that the agencies may believe create risks in using alternative data sets as compared with traditional data sets in marketing, pricing and underwriting.

Financial entities should review their uses of AI against the enforcement actions the CFPB and other regulators have taken that involve the misuse of AI. These enforcement actions are available on each regulator’s public website.

Next, we could see additional rules, interpretations, supervisory guidance, announcements and enforcement actions that describe how the CFPB and the other federal financial institution regulators view AI and machine learning. Financial entities should monitor, review and integrate into their business operations the key elements of these pronouncements, as appropriate.