On February 12, the Task Force on Artificial Intelligence of the House of Representatives Committee on Financial Services conducted a hearing titled “Equitable Algorithms: Examining Ways to Reduce AI Bias in Financial Services.” The purpose of this hearing, as articulated in the opening remarks of the Committee Chair, was to assess fairness and transparency in the use of algorithms in the financial services industry. The panelists were public interest advocates, academics, and legal professionals working in the technology space.

The comments at the hearing revolved around two major themes: first, how to define ‘fairness’ and the consequences of choosing a definition, and second, how bias can result from using algorithms and how regulators might correct that bias. The panelists and Congressional representatives also made some general suggestions about appropriate regulations and remedies Congress could implement to ensure fair algorithms.

On the first theme of defining fairness, the panelists explained that fairness can carry a real cost, because constraining an algorithm to satisfy a fairness criterion can make it less accurate; deciding when and why to accept that loss of accuracy is therefore a key issue. They observed that there are multiple definitions of fairness and proposed a few, all of them fairly abstract. The panelists also discussed the tradeoffs involved in choosing among those definitions, opining that a definition that improves fairness along one dimension (for instance, race) may reduce fairness along another (for instance, gender). There was also a dialogue on who should participate in the development of an algorithm to ensure the right tradeoffs are made, with the panelists arguing that the communities affected by the algorithm should be included in that discussion at every stage.
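
To make that tradeoff concrete, the sketch below computes two commonly cited fairness definitions, demographic parity and equal opportunity, on a small invented loan-approval dataset. Neither the metric names nor the data come from the hearing; they are assumptions used purely to illustrate how a single model can look fair under one definition and unfair under another.

```python
# Illustrative sketch only: the hearing did not name specific fairness metrics.
# Demographic parity and equal opportunity are shown here on toy data.

def approval_rate(decisions):
    """Share of applicants approved (decision == 1)."""
    return sum(decisions) / len(decisions)

def true_positive_rate(decisions, outcomes):
    """Share of creditworthy applicants (outcome == 1) who were approved."""
    approved_among_creditworthy = [d for d, y in zip(decisions, outcomes) if y == 1]
    return sum(approved_among_creditworthy) / len(approved_among_creditworthy)

# Toy data: two groups, each with model decisions and actual repayment outcomes.
group_a = {"decisions": [1, 1, 0, 1, 0, 1], "outcomes": [1, 1, 0, 1, 1, 0]}
group_b = {"decisions": [1, 0, 0, 1, 0, 0], "outcomes": [1, 1, 0, 1, 0, 0]}

# Demographic parity: approval rates should be similar across groups.
parity_gap = approval_rate(group_a["decisions"]) - approval_rate(group_b["decisions"])

# Equal opportunity: true positive rates should be similar across groups.
opportunity_gap = (true_positive_rate(group_a["decisions"], group_a["outcomes"])
                   - true_positive_rate(group_b["decisions"], group_b["outcomes"]))

print(f"Demographic parity gap: {parity_gap:.2f}")   # ~0.33: large disparity
print(f"Equal opportunity gap:  {opportunity_gap:.2f}")  # ~0.08: small disparity
```

In this toy example the model nearly satisfies equal opportunity while clearly failing demographic parity, which is why the choice of definition itself carries consequences.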

On the second theme, the Congressional representatives tried to pinpoint where bias enters an algorithmic system and thus where regulation should be directed: at the input data, at the algorithm itself, at the outputs, or at the human decision makers who use the output. The consensus among the panelists appeared to be that the whole system should be considered, but they placed particular emphasis on the outputs. That view seemed to stem in part from a belief that an algorithm's inputs cannot be effectively regulated, so regulators should instead examine and constrain its outputs. The panelists also argued for regulating the conduct of the individuals who develop and use algorithms.
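
As a rough illustration of an output-focused check, the sketch below compares approval rates between two applicant groups and flags the result when the ratio falls below a threshold. The 0.8 cutoff echoes the "four-fifths" rule used in other regulatory contexts and, like the figures in the example, is an assumption rather than anything endorsed at the hearing.

```python
# Hypothetical output audit: inspect the algorithm's decisions, not its inputs.

def adverse_impact_ratio(approved_protected, total_protected,
                         approved_reference, total_reference):
    """Ratio of the protected group's approval rate to the reference group's."""
    protected_rate = approved_protected / total_protected
    reference_rate = approved_reference / total_reference
    return protected_rate / reference_rate

ratio = adverse_impact_ratio(approved_protected=180, total_protected=400,
                             approved_reference=300, total_reference=500)

if ratio < 0.8:  # hypothetical audit threshold, not a standard from the hearing
    print(f"Flag for review: adverse impact ratio {ratio:.2f} is below 0.8")
else:
    print(f"No flag: adverse impact ratio {ratio:.2f}")
```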

The panelists also opined that the technological capability to create fair algorithms already exists, although they noted that such capabilities have so far been deployed in relatively few critical products at the major technology companies.

While no one suggested specific policy actions, the panelists and Congressional representatives offered some general thoughts on appropriate regulations to ensure fair algorithms, including:

  • Strengthening the current regulatory framework, both for consumer finance regulations specifically and for regulatory agencies generally.
  • Requiring disgorgement of profits made through the use of an unfair algorithm. The panelists noted, however, that this remedy would be useful only if disclosure of unfair algorithms were mandated or regulators had better tools for detecting them.
  • Requiring the maintenance of records documenting an algorithm's development, operation, and evolution (through machine learning) sufficient to allow it to be audited.
  • Allowing market forces to ensure fairness through arbitrage or other economic mechanisms.

The hearing concluded without any overarching decisions or conclusions. The discussion did suggest, however, that lawmakers are weighing how to define fairness in the context of algorithms and artificial intelligence, and where regulation should be applied to ensure it. Given the preliminary nature of the discussion, developments in this space will be worth watching.