Artificial intelligence remains a regulatory focus in 2021, as we predicted at the start of the year. From the FTC to the EU and beyond, regulators of all kinds are paying attention to companies’ use of these tools. Most recently, five US federal agencies are seeking input on how financial institutions use AI tools. Comments from stakeholders are due by June 1, 2021.

These financial agencies acknowledge the benefits of AI, noting that AI tools have the potential to augment business decision-making and enhance services available to consumers and businesses. Financial services firms, they note, use these tools for tasks like flagging suspicious transactions and personalizing customer services. While there are certainly benefits, the agencies recognize that the use of AI tools is not without risks, including operational vulnerabilities, cyber threats, heightened consumer protection issues, and privacy concerns.

The agencies are seeking industry input on a variety of topics to give them a more complete picture of current AI use, as part of their effort to determine the appropriate levels and types of governance, risk management, and controls over these tools. Areas of input sought include:

  • Explainability
  • Data quality and data processing
  • Overfitting (i.e., when an algorithm “learns” idiosyncratic patterns in the training data that are not representative of the population as a whole)
  • Cybersecurity
  • Dynamic updating
  • AI use by community institutions
  • Use of AI developed or provided by third parties
  • Fair lending
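The overfitting concern above can be made concrete with a minimal sketch, assuming NumPy is available: an overly flexible model (here, a degree-9 polynomial fit to only 10 noisy data points) matches its training data almost perfectly, yet generalizes worse than a simple model that matches the true underlying relationship.

```python
import numpy as np

rng = np.random.default_rng(0)

def make_data(n):
    # True relationship is linear; the noise is the "idiosyncratic pattern"
    x = rng.uniform(0, 1, n)
    y = 2 * x + rng.normal(0, 0.1, n)
    return x, y

x_train, y_train = make_data(10)
x_test, y_test = make_data(100)

def mse(coeffs, x, y):
    # Mean squared error of a fitted polynomial on data (x, y)
    return float(np.mean((np.polyval(coeffs, x) - y) ** 2))

simple = np.polyfit(x_train, y_train, deg=1)    # matches the true structure
flexible = np.polyfit(x_train, y_train, deg=9)  # enough capacity to memorize noise

train_err_simple = mse(simple, x_train, y_train)
test_err_simple = mse(simple, x_test, y_test)
train_err_flexible = mse(flexible, x_train, y_train)
test_err_flexible = mse(flexible, x_test, y_test)

# The flexible model "wins" on the training data (near-zero error)
# but performs worse on fresh data drawn from the same process.
print(train_err_flexible, train_err_simple)
print(test_err_flexible, test_err_simple)
```

In a lending context, the analogous failure is a credit model that memorizes quirks of its historical training data and then misjudges new applicants, which is why the agencies flag overfitting alongside data quality.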

While signaling that they may issue more guidance, the agencies point out that many existing regulations already govern the use of AI tools, including the Fair Credit Reporting Act, Section 5 of the FTC Act, and Sections 501 and 505(b) of the Gramm-Leach-Bliley Act. Similarly, several previously issued general guidance documents can give direction on the use of AI tools.