Seventh Hearing on Competition and Consumer Protection considers ethical, practical, and legal dimensions of artificial intelligence and machine learning.
On November 13 and 14, the Federal Trade Commission (FTC) held the seventh hearing in its series of nine planned Hearings on Competition and Consumer Protection in the 21st Century. In this hearing, the FTC invited comment on a broad range of artificial intelligence (AI) and machine learning topics, including how to identify and account for bias in consumer- and employee-facing algorithms and the risk of collusion between pricing algorithms. Panelists generally agreed that enhanced pricing algorithms and developments in machine learning will be deeply consequential for consumers, businesses, and regulators in the coming years. But the discussions revealed deep disagreements about what these changes portend for consumer protection and antitrust law, and how regulators and the legal system should adapt.
Hearing #7’s Big Idea: Is Antitrust Smart Enough for AI?
Hearing #7 focused on the economic and legal impact of AI and how regulators and policymakers should adjust consumer protection and antitrust enforcement to account for this impact.
AI is not a precise term. In its broadest sense, AI includes any technology in which a computer performs a task that historically required human intelligence, including decision-making, speech recognition, and visual perception. The two key AI-related concepts that panelists discussed at Hearing #7 were algorithms and machine learning. Very simply, algorithms are calculations performed by computers and networks that process data and deliver outputs. For example, an on-demand video platform will take a user’s viewing history (data) and process it with an algorithm to deliver recommendations to the user (outputs). A ride-share service will observe the locations of a customer, nearby drivers, and the destination (data) and use an algorithm to calculate the ride fare (output). Machine learning (and its subset, “deep learning”) refers to an algorithm that progressively improves performance on a specific task over time without being explicitly programmed for how to achieve those improvements.
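The ride-fare example above can be made concrete with a short sketch. This is purely illustrative (the function name, rates, and surge rule are all hypothetical, not any actual ride-share company’s method): observed data goes in, and a single output, the fare, comes out.

```python
# Hypothetical ride-fare algorithm: observed data (trip length, driver
# supply, rider demand) in, a single output (the fare) out.
# All rates and the surge rule are illustrative assumptions.

def estimate_fare(distance_km, duration_min, nearby_drivers, waiting_riders):
    BASE_FARE = 2.50   # flat fee per ride (hypothetical)
    PER_KM = 1.20      # distance component
    PER_MIN = 0.35     # time component

    # Simple surge rule: raise prices when demand outstrips driver supply,
    # capped at 3x the normal fare.
    demand_ratio = waiting_riders / max(nearby_drivers, 1)
    surge = min(1.0 + 0.25 * max(demand_ratio - 1.0, 0.0), 3.0)

    return round((BASE_FARE + PER_KM * distance_km + PER_MIN * duration_min) * surge, 2)

print(estimate_fare(distance_km=8.0, duration_min=20, nearby_drivers=5, waiting_riders=5))
print(estimate_fare(distance_km=8.0, duration_min=20, nearby_drivers=5, waiting_riders=15))
```

A machine-learning version of the same system would not hard-code the rates and surge rule; instead, it would adjust them over time based on observed outcomes, without being explicitly reprogrammed.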
Experts believe that as computational processing power continues to improve and consumers continue their migration to digital platforms for economic transactions, AI will play an increasingly important role in the economy. Jobs that humans have historically performed — e.g., driving taxis, diagnosing patients, assessing creditworthiness, setting prices for products — will become increasingly reliant on AI for analysis and for outputs. At Hearing #7, panelists and outside experts agreed that collaboration from stakeholders throughout the economy will be necessary to effectively optimize AI’s deployment and reduce risks to consumers. Many thought leaders — including the World Economic Forum’s Centre for the Fourth Industrial Revolution, of which Latham is a partner — are currently researching policy frameworks and governance protocols in this area. Does our increasing reliance on AI pose unique risks for consumers? And if so, are the current antitrust and consumer protection regimes flexible and robust enough to account for these risks?
Panelists and presenters at Hearing #7 grappled with these two questions across several key domains:
• Do pricing algorithms create an unreasonable risk for unfair discrimination against certain customers or employees?
• Does the use of algorithms in pricing applications create a risk of collusion or other anticompetitive effects?
• Should federal regulators issue prescriptive rules for pricing algorithms in competitive environments or implement certain screens to check whether an algorithm creates anticompetitive effects?
Key Remarks
• “When it comes to bias and fairness [in AI], perfection is not possible.” Jennifer Wortman Vaughan, Microsoft Research
Every day, companies make decisions that lead to the preferential treatment of some individuals over others. Credit rating agencies assess an individual’s creditworthiness by comparing data such as debt levels and asset ownership against those of other individuals; retailers offer promotions to individuals in certain geographic locations but not others; a private school interviews multiple candidates for one position. In each of these situations, a company will favor some individuals over others. As companies become increasingly reliant on AI to facilitate these decisions, experts worry that bias may adversely affect some individuals — particularly individuals belonging to historically disadvantaged groups.
While “perfection is not possible,” as Vaughan said, some panelists suggested methods companies can use to detect and mitigate some of the potential harms that may result from bias in algorithms. Nicol Turner-Lee, a Fellow at the Center for Technology Innovation at the Brookings Institution, noted that companies need to preemptively ensure that algorithms account for the “proper coloring of those folks that are going to be the subject or the targeted focus of algorithms.” Turner-Lee suggested that companies “start with the presumption” of bias rather than wait to assess the effects of the application. Other participants, like John Dickerson of the University of Maryland and Vaughan, focused on the data collection process as a source of bias. Vaughan noted that companies “should pay special attention to potential biases in the data source and data preparation process,” while Dickerson examined some of the many difficulties in using algorithms to “de-bias” a biased data set. Panelists generally agreed that these types of discrimination problems will continue to be a persistent cause for concern and require vigilance on the part of both regulators and algorithm implementers.
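One simple family of checks in this spirit (offered here as a hypothetical sketch, not any panelist’s method) is a “demographic parity” audit: before or after deployment, compare an algorithm’s favorable-outcome rates across groups and flag large gaps for human review. The group labels, sample data, and function names below are all illustrative.

```python
# Hypothetical demographic-parity audit: compare favorable-outcome rates
# (e.g., loan approvals) across groups in a decision log. Illustrative only.
from collections import defaultdict

def approval_rates(decisions):
    """decisions: list of (group, approved) pairs; returns per-group rates."""
    totals, approved = defaultdict(int), defaultdict(int)
    for group, ok in decisions:
        totals[group] += 1
        approved[group] += int(ok)
    return {g: approved[g] / totals[g] for g in totals}

def parity_gap(decisions):
    """Difference between the highest and lowest group approval rates."""
    rates = approval_rates(decisions)
    return max(rates.values()) - min(rates.values())

# Illustrative decision log: group "A" approved 3 of 4, group "B" 1 of 4.
sample = [("A", True), ("A", True), ("A", False), ("A", True),
          ("B", True), ("B", False), ("B", False), ("B", False)]
print(approval_rates(sample))  # A: 0.75, B: 0.25
print(parity_gap(sample))      # 0.5 gap: large enough to flag for review
```

A large gap does not by itself prove unlawful discrimination (the groups may differ on legitimate factors), which is exactly why panelists like Dickerson stressed that “de-biasing” is harder than running a single metric.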
• “If individuals can’t [collude] very well, would algorithms do this a lot better?” Kai-Uwe Kühn, Charles River Associates
As algorithms are increasingly entwined with companies’ price setting, panelists considered whether there is an increased risk that algorithms will collude or otherwise set prices at supra-competitive levels. Participants in the Algorithmic Collusion panel wrestled with the likelihood that algorithms will collude and how antitrust law should adapt to this risk. Some, like Ai Deng of Bates White, pushed back on the notion that algorithms pose a new or different collusion risk: “[Current evidence shows] a lack of support for the popular belief that any sort of learning algorithm that tries to maximize a firm’s profits would eventually learn to tacitly collude.” Deng and others emphasized that profit-maximizing algorithms are likely to intensify price competition, particularly in markets with more than two competitors. Kühn noted that achieving and maintaining tacit collusion is more difficult than many realize; it requires algorithms to employ equilibrium strategies in which no party defects by lowering prices. These panelists argued that instead of devoting resources to the relatively remote possibility that algorithms will tacitly collude, regulators should focus on the possibility that competitors will use algorithms to consciously collude.
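Kühn’s point about equilibrium strategies can be illustrated with a toy simulation (entirely hypothetical, not a model any panelist presented). Two sellers sustain a supra-competitive price only so long as neither defects; under a “grim trigger” strategy, a single price cut causes both to revert permanently to the competitive price, which is what makes tacit collusion fragile.

```python
# Toy "grim trigger" pricing simulation (illustrative only): each seller
# charges HIGH while no defection has been observed, and reverts to the
# competitive price LOW forever once either seller cuts its price.

HIGH, LOW = 10.0, 6.0  # hypothetical collusive and competitive prices

def simulate(periods, defect_at=None):
    """Return both sellers' price paths; seller 1 optionally defects once."""
    p1, p2 = [], []
    punished = False
    for t in range(periods):
        price1 = LOW if punished or t == defect_at else HIGH
        price2 = LOW if punished else HIGH
        p1.append(price1)
        p2.append(price2)
        if price1 < HIGH or price2 < HIGH:  # defection observed this period
            punished = True
    return p1, p2

print(simulate(5))               # both hold HIGH every period
print(simulate(5, defect_at=1))  # one cut in period 1 collapses prices
```

The collusive outcome survives only on the top path; any deviation, by a human or an algorithm, unravels it, which is why some panelists viewed self-taught tacit collusion as a relatively remote risk.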
Other panelists expressed concern about the risks of tacit collusion. Nicolas Petit of the University of Liège School of Law identified a “black box problem” that he argued will intensify as algorithms become more complex and firms’ reliance on algorithms increases. From this perspective, pricing decisions made by algorithms that incorporate machine learning could not be easily deciphered by programmers or regulators. In the future, it may be impossible to determine whether prices set by algorithms are supracompetitive because observers will not be able to discern how the prices were set.
• “Better laws and better regulations are possible and should be developed … but that will be woefully insufficient in the algorithmic era to keep up with the types of violations [made by algorithms].” Michael Kearns, University of Pennsylvania
Panelists at Hearing #7 expressed concern that current legal and regulatory frameworks are ill-equipped to stop or rein in consumer protection and antitrust violations committed by algorithms. Some participants advocated for regulators to issue prescriptive guidelines for algorithms. Rosa M. Abrantes-Metz of the Global Economics Group noted that regulators already have guidelines for communications among competitors and should therefore also issue guidelines for how algorithms behave. Others, such as Kühn, expressed skepticism about regulators’ ability to regulate using guidance or proactive screening for anticompetitive algorithms: “There are a couple of markets where the structure of the price setting in the market is very clear, but there are other markets — like retail markets — where price setting is complex.” Kühn continued, “Where the European Commission tried [to implement screens] it has generally failed.”
Panelists generally agreed that regulatory agencies can do more to investigate and address the potential anticompetitive effects of pricing algorithms. Many panelists suggested that the Department of Justice and the FTC hire additional technology specialists and develop their own AI-powered tools for detecting antitrust and consumer protection violations. Maurice Stucke of the University of Tennessee advocated for regulators to establish incubators to perform experiments and determine under what scenarios and market conditions tacit collusion is likely to occur, and to prevent those conditions from arising through merger policy. Stucke noted that the US has “a market power problem” and that oligopolistic markets facilitate collusion, including algorithmic collusion.
