On April 19, 2021, the Federal Trade Commission (FTC or the Commission) published a blog post under “Tips & Advice” titled “Aiming for truth, fairness, and equity in your company’s use of AI.” In the post, the FTC describes some of the ways artificial intelligence (AI) is making a difference across applications and industries. But the FTC also highlights certain dangers of AI, such as unintended (or worse, intentional) discrimination based on race and other protected classes. The FTC offers guidance on how businesses can minimize these risks while reminding them of the Commission’s broad enforcement powers under section 5 of the FTC Act. Simultaneously, several financial regulatory agencies have requested information and opened a comment period for organizations to share their current uses of AI. The FTC and other regulators, backed by actions of the Biden administration, are making it clear that equity is a requirement when it comes to AI applications.
What is the problem?
Because AI and its underlying algorithms can – even inadvertently – produce unfair, deceptive, biased, or erroneous outcomes, federal agencies have taken different approaches to ensure that organizations are aware of these risks and are taking the steps necessary to limit or prevent them. Financial regulators including the Federal Reserve Board, the Consumer Financial Protection Bureau, the Federal Deposit Insurance Corporation, the National Credit Union Administration, and the Office of the Comptroller of the Currency have issued a Request for Information, giving organizations that benefit from AI the opportunity to help shape policy and regulation by submitting comments. The FTC is taking a more direct approach: while its blog post touts self-accountability, it also cites the Commission’s section 5 authority and signals a willingness to use that authority where the Commission feels it is warranted.
This problem is also a component of the Biden administration’s efforts to eliminate systemic racism. During his first week in office, President Biden signed executive orders intended to increase racial equity across a variety of sectors – freeing federal agencies to reinstate rules that strengthen antidiscrimination policies in lending and housing. This includes the use of AI by banks and the housing industry to predict creditworthiness.
Who is responsible?
AI is now widely used by businesses in many applications. Its varied uses across disparate industries raise the question of who might be held responsible for the failings of this new and ever-evolving technology. Does responsibility lie in the supply chain, with the programmers and software engineers? Or does it rest more squarely with the end user, who contracts for, purchases, and deploys the AI software in its processes? When it comes to AI software issues, regulators have previously targeted the end user, reasoning that the end user is ultimately responsible for the tool. Future enforcement actions could now also target the supply chain. While it may have been easier to attribute misuse of software tools to an end user, recent research has revealed that the disparate impact of seemingly race-neutral AI can result from more fundamental flaws.
How can negative outcomes be prevented?
Wherever the blame lies, it is becoming more apparent that the use of AI can and does go awry. Tainted data sets used to create or train the AI can lead to erroneous conclusions. Because programmers are human, their implicit biases can find their way into the software. Even the best intentions can produce a disparate impact, resulting in discrimination against protected classes. The FTC provides guidance on how businesses can minimize these risks. The first requirement is the use of “good data.” Incorporating good data hygiene, ensuring data completeness, and reviewing data sets from the start can reduce the likelihood of negative outcomes and lower risk for both the supply chain and the end user. Another essential risk-mitigation step is to build checks and quality-control measures into AI-controlled processes, as it is not always apparent at the data input phase what can or will trigger an adverse outcome. Organizations that contract for and purchase AI-powered software can move the needle on these improvements by requesting and reviewing quality control data.
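The FTC’s guidance does not prescribe any particular tooling, but the kind of quality-control check described above can be sketched in a few lines of code. The example below is a hypothetical illustration, not an FTC requirement: it computes per-group selection rates from model decisions and applies the four-fifths rule, a screening heuristic drawn from employment-selection practice that flags any group whose rate falls below 80% of the highest group’s rate. The function names and sample data are invented for illustration.

```python
from collections import defaultdict

def selection_rates(records):
    """Compute per-group approval rates from (group, approved) pairs."""
    totals = defaultdict(int)
    approvals = defaultdict(int)
    for group, approved in records:
        totals[group] += 1
        if approved:
            approvals[group] += 1
    return {g: approvals[g] / totals[g] for g in totals}

def four_fifths_check(rates, threshold=0.8):
    """Return True for each group whose selection rate is at least
    `threshold` (80% by default) of the highest group's rate; a False
    result is a signal to investigate, not a legal conclusion."""
    best = max(rates.values())
    return {g: (r / best) >= threshold for g, r in rates.items()}

# Hypothetical decisions: group "A" approved 3 of 4, group "B" 1 of 4.
records = [
    ("A", True), ("A", True), ("A", True), ("A", False),
    ("B", True), ("B", False), ("B", False), ("B", False),
]
rates = selection_rates(records)   # A: 0.75, B: 0.25
flags = four_fifths_check(rates)   # B fails: 0.25 / 0.75 < 0.8
```

A check like this is only one layer of the quality control the guidance contemplates; it examines outcomes after the fact and says nothing about whether the training data itself was complete or representative.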
The second requirement is to maintain organizational transparency and employ external checks or audits. An organization should take proactive steps to maintain transparency, such as opening source code to independent researchers. Employing external auditors and meeting independent standards could substantially aid in producing more equitable outcomes and help keep AI’s potential issues in check. The organization should also honestly assess its AI algorithm’s capabilities. Perfect and completely unbiased results are virtually impossible to achieve, and claims of such results are actually more likely to catch the eye of the Commission and invite an enforcement action.
What can result from poor conduct?
The FTC spells out in its guidance that it “has decades of experience with” section 5 of the FTC Act, the Fair Credit Reporting Act, and the Equal Credit Opportunity Act and that if organizations do not hold themselves accountable, the “FTC [will] do it for [them].” The FTC’s broad enforcement powers, coupled with the focus of the Biden administration, indicate that a strong federal response should be expected.
AI can be an important piece of a business’s technological toolkit. It can streamline decisions, provide insight, relieve resource strains, and improve functionality. It can also deceive, err, and produce unfair outcomes. If your business, like many others, is interested in providing information that may shape how financial regulators view liability over the creation, acquisition, and use of AI, your comments must be received by June 1, 2021. Regardless of whether comments are submitted, it is certain that the FTC and other regulatory bodies will be monitoring AI use carefully.