On May 7, 2021, the US House of Representatives Task Force on Artificial Intelligence (AI) held a hearing on “Equitable Algorithms: How Human-Centered AI can Address Systemic Racism and Racial Justice in Housing and Financial Services.”1 It was the latest among several federal, state and international initiatives calling for fair, transparent and accountable AI in the financial and consumer sectors, and urging all AI actors (developers, manufacturers, users and regulators of AI systems) to address inequitable outcomes. This hearing focused on ways the public and private sectors can use AI to address systemic racism and optimize fairness.

Representative Bill Foster, D-Illinois, who chairs the House AI Task Force, is a PhD physicist and businessman in his 11th year in Congress. He opened the hearing by asking the panel how the US can get to a place where AI is used as a tool to automate fairness and racial equity. He explained that AI algorithms – pre-coded sets of instructions and calculations executed automatically to enable computers to make decisions – are inherently neutral. They generally work by building a model from historical data and using that model to predict future outcomes. He expressed the view that while AI algorithms are in and of themselves neutral, they can reflect the biases of their data and their developers. AI developers, however, are not bound to repeat the past and can instruct the algorithms to balance their decision-making to promote fairness and equity across protected classes of people.
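Foster's point – that a neutral rule can still transmit historical bias – can be illustrated with a deliberately simple, hypothetical sketch (the groups, scores and cutoff below are invented for illustration, not drawn from the hearing): a cutoff that never looks at group membership still produces unequal approval rates when the underlying scores themselves reflect historical disparities.

```python
# Hypothetical illustration: a group-blind approval rule applied to data
# shaped by historical disparities still yields unequal outcomes.

# Invented credit scores for two applicant groups; group_b's scores are
# lower on average for historical reasons external to the rule itself.
scores_a = [680, 650, 640, 610, 590]
scores_b = [640, 610, 590, 570, 550]

CUTOFF = 620  # the rule never looks at group membership

def approval_rate(scores, cutoff):
    """Fraction of applicants at or above the cutoff."""
    return sum(s >= cutoff for s in scores) / len(scores)

print(approval_rate(scores_a, CUTOFF))  # 0.6
print(approval_rate(scores_b, CUTOFF))  # 0.2
```

The rule is "neutral" in the sense Foster described – it applies one cutoff to everyone – yet the outcomes diverge because the input data carries the disparity forward.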

The hearing panelists were Stephen Hayes of Relman Colfax PLLC, Melissa Koide of FinRegLab, Lisa Rice of the National Fair Housing Alliance, Dave Girouard of Upstart and Kareem Saleh of FairPlay AI. During the hearing, speakers noted the power and pervasiveness of AI algorithms in all sectors, and touched on a variety of real-world examples of how algorithms can determine significant events in people’s lives. For example, AI is already used to determine whether someone can: have access to housing; get a living wage job; access quality credit; get released on bail after an arrest; serve a prison sentence; receive needed healthcare when sick; and be able to refinance a home mortgage.

Witnesses presented data showing the disparities in the US credit system. For example, according to an FDIC survey, nearly 30 percent of Black and Hispanic American households (compared to 16 percent of White and Asian households) lack the mainstream banking and/or prepaid accounts likely to be reported to credit bureaus, accounts that cannot be scored using the most widely adopted credit scoring models.2 Just 45 percent of Americans have access to bank-quality credit, yet 83 percent of Americans have never defaulted on a loan.3 Testimony was given on how, over the past three years, an AI lending platform that has used non-traditional data and advanced AI models to predict credit risk is yielding positive results by increasing lending and lowering interest rates for minority populations.4

Speakers also explained how AI can reinforce disparities, and they called for remedial governmental measures to bring financial regulations into the 21st century. Given their scalable power and feedback-loop effects, speakers noted how AI algorithms built on bias-laden data can amplify discriminatory outcomes if they are not controlled. Witnesses urged the development of more representative and robust data sets from non-traditional sources; the rigorous testing of models by industry and regulatory agencies to detect discriminatory outcomes against protected classes; the training of all those involved in AI – including developers, financial institutions, housing providers, tech companies, insurers and regulators – in alternative AI methods, so they are better able to identify bias and build solutions; and the clarification by Congress and regulatory agencies that discrimination is illegal across markets, including those that have historically been unregulated.
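The outcome testing the witnesses described can take many forms. One widely used screen – the "four-fifths rule" from US employment law, offered here only as a hypothetical sketch with invented numbers, not as what any witness proposed – compares each group's approval rate to that of the most-favored group and flags ratios below 0.8:

```python
# Hypothetical sketch of one common disparate-impact screen, the
# "four-fifths rule": compare each group's approval rate to the
# most-favored group's rate; a ratio below 0.8 is a red flag.

def adverse_impact_ratios(approvals_by_group):
    """Map group -> approval-rate ratio vs. the most-favored group.

    approvals_by_group maps group -> (approved_count, applicant_count).
    """
    rates = {g: a / n for g, (a, n) in approvals_by_group.items()}
    best = max(rates.values())
    return {g: r / best for g, r in rates.items()}

# Invented approval counts produced by a model under audit
outcomes = {"group_1": (60, 100), "group_2": (30, 100)}
ratios = adverse_impact_ratios(outcomes)
flagged = sorted(g for g, r in ratios.items() if r < 0.8)
print(ratios)   # {'group_1': 1.0, 'group_2': 0.5}
print(flagged)  # ['group_2']
```

A screen like this only detects a disparity; deciding whether the disparity is unlawful, and how to remediate it, is the legal and policy question the hearing addressed.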

The House AI Task Force hearing follows other recent US federal and state activity focused on bias in AI. Other developments include:

  • FTC Business Alert on AI: On April 19, 2021, the Federal Trade Commission (FTC)5 published a business alert, “Aiming for truth, fairness, and equity in your company’s use of AI.”6 The alert recalled the FTC’s decades of experience enforcing three powerful laws that now regulate developers and users of AI: Section 5 of the FTC Act; the Fair Credit Reporting Act; and the Equal Credit Opportunity Act. The alert gave prescriptive advice, such as “Start with the right foundation” by using representative data sets, and “Watch out for discriminatory outcomes,” by testing the algorithms for forbidden outcomes. Importantly, the alert warned AI developers not to overpromise on what their algorithms can deliver, particularly if they claim to deliver unbiased results using algorithms that were built on data lacking in gender and racial diversity. “The result may be deception, discrimination – and an FTC enforcement action.”7
  • RFI on the Use of AI by Financial Institutions: On March 31, 2021, the Consumer Financial Protection Bureau (CFPB) and four federal banking agencies issued a Request for Information (RFI) on the use of AI by financial institutions.8 Recognizing the growing use of AI by financial institutions, the agencies asked financial institutions to respond by June 1st on the beneficial uses and challenges they face in using AI, as well as on the appropriate governance, risk management and other controls required by the use of AI models. Among the 17 questions asked were:
    • How do financial institutions identify and manage the risks related to AI explainability?
    • How do financial institutions using AI manage the risks related to data quality and data processing?
    • What are the risks that AI can be biased and/or result in discrimination on prohibited bases?
  • Racial Equity in AI as an FTC Priority: Shortly after Commissioner Slaughter was named Acting Chair of the FTC by President Biden, she gave an important speech to the Future of Privacy Forum on her top two priorities for the FTC.9 One of those areas is racial equity – “righting the wrongs of over four hundred years of racial injustice” – by focusing on algorithmic discrimination. Chair Slaughter pointedly noted that she has asked her staff to “actively investigate biased and discriminatory algorithms,” and also noted that the agency will “redouble [its] efforts to identify law violations” in faulty facial recognition systems. This speech followed on Commissioner Slaughter’s January 2020 remarks to the UCLA School of Law10 in which she explained the role that faulty AI algorithms and proxy discrimination play in maintaining economic injustice.
  • Insurance Commissioners’ Principles on AI: In the summer of 2020, the National Association of Insurance Commissioners (NAIC), a voluntary association of state insurance supervisors from all 50 US states, Puerto Rico and the District of Columbia, unanimously adopted Principles on Artificial Intelligence to guide state insurance regulators on how to regulate the use of AI in the insurance industry in the US. The document calls on all AI actors to promote and uphold the principles, including that AI should be fair and ethical and avoid proxy discrimination against protected classes. The principles “do not carry the weight of law or impose any legal liability,” but constitute a statement of regulatory policy and “should be used to assist regulators and NAIC committees addressing insurance-specific AI applications.” At the same time, the NAIC formed a special executive committee on Race and Insurance, whose focus is to examine which current practices and barriers in the insurance sector – including the use of AI in insurance – potentially disadvantage minorities and historically underrepresented groups.11 The work of this committee is ongoing.

Recent International Developments

EU’s Artificial Intelligence Regulation: On April 21, 2021, the European Commission published its groundbreaking “Proposal for a Regulation on a European Approach for Artificial Intelligence” (Regulation).12 The 108-page proposal sets out harmonized rules for the development, placement on the market, and use of AI systems in the European Union (EU) following a proportionate risk-based approach. Member States are required to lay down rules on penalties for infringements of the Regulation, with fines at a level of the higher of 30,000,000 euros or 6 percent of worldwide annual turnover for certain offenses, including the carrying out of a prohibited AI practice. Additionally, the Regulation imposes obligations to report serious incidents and malfunctioning of AI systems that constitutes a breach of obligations under EU law.

The Regulation has vast implications for industries across numerous sectors around the world, especially given the current emphasis by businesses on digitalization and personalization, the breadth of the Regulation’s definition of “AI Systems” to which it applies, and its extra-territorial reach beyond the EU. The Regulation would have application, in different ways and with differing obligations, to developers, distributors and/or users of these AI systems. The Regulation lays down rules for the placing on the market, the putting into service and the use of AI systems in the EU; prohibits certain AI practices; sets out specific requirements for high-risk AI systems and obligations for operators of such systems; harmonizes transparency rules and obligations for AI systems intended to interact with natural persons, emotion recognition systems and biometric categorization systems, and AI systems used to generate or manipulate image, audio or video content; and establishes rules on market monitoring and surveillance.

The Regulation is designed to apply to AI systems operated or used in the EU but has extra-territorial reach. As such, although an EU regulation, the legislation will be of relevance to suppliers and manufacturers of AI systems based outside the EU, as its scope extends to providers placing AI systems on the EU market or putting AI systems into service in the EU, regardless of where they are based, as well as to providers and users of AI systems located outside the EU where the output produced by the system is used in the EU. A provider established outside the EU will (unless it has an importer) be required to appoint an EU-based authorized representative for the purpose of the Regulation. As with the GDPR, the Regulation is expected to have a wide-reaching impact in shaping the legislative landscape for AI across the globe and, to some extent, to shape how customers approach adopting AI and how suppliers design their AI products and services in a legally compliant way. The Regulation will serve as a “high bar” by which industries can measure their approach to designing, selling, licensing and/or embedding ethical AI solutions, regardless of location. “High risk” AI systems will have to comply with a set of horizontal mandatory requirements for trustworthy AI and follow conformity assessment procedures before those systems can be placed on the EU market.

The Regulation calls out regulated credit institutions specifically. For AI systems provided or used by regulated credit institutions, the Regulation proposes designating the authorities responsible for supervising the EU’s financial services legislation as the competent authorities for supervising the Regulation’s requirements. This is intended to ensure coherent enforcement of the obligations under the Regulation and the EU’s financial services legislation, under which AI systems are already to some extent implicitly regulated in relation to the internal governance systems of credit institutions. To further enhance consistency, the conformity assessment procedure and some of providers’ procedural obligations under the Regulation are integrated into the procedures under Directive 2013/36/EU on access to the activity of credit institutions and their prudential supervision.

At a national level, EU Member States will have to designate one or more national competent authorities and, among them, the national supervisory authority, for the purpose of supervising the application and implementation of the Regulation. The European Data Protection Supervisor will act as the competent authority for the supervision of the EU institutions, agencies and bodies when they fall within the scope of the Regulation. The Regulation will apply two years following its entry into force, which will be on the 20th day following its publication in the Official Journal of the European Union (OJEU), although certain sections may come into force earlier.

Companies that are developing AI Systems or placing them onto the EU market and/or embedding AI solutions will need to grapple with the implications of the proposed Regulation and other relevant legislation and guidance in this area.

Whether and how the US regulatory system will keep pace with EU developments in regulating AI is unclear. But those companies developing and using AI systems in the US should expect more US governmental initiatives, including enforcement actions, aimed at regulating inequitable outcomes in AI models.