Online pricing tools, including pricing algorithms, are increasingly prevalent and relied upon by retailers. The effective use of algorithms has the potential to make companies more efficient and reduce their costs, which, in turn, can lead to: a reduction in costs for consumers; better quality goods and services; more choices; and innovative new products. But algorithms also have the potential to “learn” collusive behavior and aid in the implementation of collusive agreements, thereby threatening harm to consumer welfare. This tension between risk and reward is already playing out in the public domain in high-profile cases involving Amazon, Google, and Uber. So, what should be the role of competition authorities and civil claimants in this constantly evolving world? How does the rise and sudden prevalence of algorithms challenge traditional antitrust concepts of agreement and collusion? And do antitrust enforcers have the tools they need to analyze and punish breaches of competition law when algorithms are involved? These are the key questions currently being asked of authorities and litigants throughout the world.
The US Department of Justice (“DOJ”), in collaboration with the Federal Trade Commission (“FTC”) and the European Commission (“EU Commission”), has recently made submissions in this respect to the Organization for Economic Co-operation and Development (the “OECD”), and there are varying degrees of confidence in the current tools available to combat anti-competitive online behavior. At the same time, various perspectives abound in the antitrust bar, many of which suggest a range of proposed defenses for corporate defendants in hypothetical scenarios or express pessimism about the ability of the antitrust laws to adequately address these new and unprecedented technological scenarios. We take a very different view, namely that the antitrust laws are robust and not in need of further tinkering simply because of the evolution of pricing systems. These laws reflect over a hundred years of complex anticompetitive agreement analysis, and have been no less vital over the last two decades of rapid technological change. In short, the present antitrust analytical frameworks can capably root out—and punish—collusion, even among robots. This article attempts to distill some of the recent dialogue and debate on the issue.
What is an algorithm?
An algorithm (at least in this context) is a piece of software that sets forth a process or set of rules to be followed in order to solve a particular problem. By following this process or set of rules, an algorithm can automatically make decisions depending on the data fed into it. The utility and effect of an algorithm depend on how it is designed and the quality of the input data. Algorithms generally follow human instruction, but an algorithm can also be programmed to amend its own decision-making rules to account for past experience, becoming a self-learning algorithm. Self-learning algorithms form the basis for technologies such as search engines and self-driving cars, and unsurprisingly pose the greatest threat to effective regulation because of their potential for unchecked anticompetitive behavior.
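To make the distinction concrete, the following is a toy sketch of a self-learning pricing rule of the general kind described above. It is purely illustrative, not any vendor's actual software: the class name, the candidate price grid, and the simple averaging update are all assumptions made for the example. The key point is that the decision rule itself changes with experience, without further human instruction.

```python
import random

class SelfLearningPricer:
    """Toy illustration of a self-learning pricing rule: the algorithm
    starts with no preference among candidate prices and gradually shifts
    toward whichever price has historically earned the most profit."""

    def __init__(self, candidate_prices, explore_rate=0.1):
        self.prices = list(candidate_prices)
        self.avg_profit = {p: 0.0 for p in self.prices}  # learned estimates
        self.trials = {p: 0 for p in self.prices}
        self.explore_rate = explore_rate  # fraction of "experiments"

    def choose_price(self):
        # Occasionally experiment with a random price (exploration);
        # otherwise exploit the price that has performed best so far.
        if random.random() < self.explore_rate:
            return random.choice(self.prices)
        return max(self.prices, key=lambda p: self.avg_profit[p])

    def record_outcome(self, price, profit):
        # Amend the decision rule based on past experience: update the
        # running average profit observed at this price.
        self.trials[price] += 1
        n = self.trials[price]
        self.avg_profit[price] += (profit - self.avg_profit[price]) / n
```

A rule this simple already "learns" from market feedback alone; more sophisticated versions of the same idea are what raise the tacit-collusion concerns discussed later in this article.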
How are algorithms used?
The capabilities of algorithms are constantly evolving. Algorithms can be designed to track online prices in the market; adjust prices instantaneously to undercut prices offered by competitors; tailor products or service offerings to consumers; or assist consumers to find the lowest price of a product or service. And, again, all of this can be automated and designed to reduce or eliminate the need for human assistance or supervision.
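The "undercut competitors" behavior described above can be sketched in a few lines. This is a hypothetical repricing rule written for illustration; the function name, the one-cent step, and the cost floor are assumptions, not a description of any real marketplace tool.

```python
def undercut_price(competitor_prices, floor_price, step=0.01):
    """Hypothetical automated repricing rule: match the lowest observed
    competitor price minus a small step, but never price below cost.

    competitor_prices: prices scraped from rival listings
    floor_price: the seller's own cost floor
    """
    if not competitor_prices:
        return floor_price  # no rivals observed; fall back to the floor
    target = min(competitor_prices) - step
    return max(round(target, 2), floor_price)
```

Run continuously against scraped listings, a rule like this adjusts prices instantaneously with no human supervision, which is precisely the feature that makes algorithmic pricing both efficient and, in the wrong configuration, a collusion risk.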
The EU Commission’s recent Preliminary Report on the E-commerce Sector Inquiry reported that two-thirds of retailers who track their competitors’ prices use automatic systems to do so. Of those companies employing such software, 78% subsequently adjusted their own prices. Another recent study found that over 500 sellers on the Amazon Marketplace are using algorithmic pricing.
What anticompetitive risks do algorithms pose for consumers?
Despite the potential for utility and cost benefits, algorithms pose threats to consumers. Algorithms can be used to spark, implement, and monitor vertical and horizontal anticompetitive restraints among companies. For example, algorithms can facilitate the monitoring of resellers who are unwilling to respect the resale price recommendations of their suppliers, or can monitor adherence to agreed-upon prices. They can also provide companies with automated mechanisms to signal price changes, implement parallel or common policies, and monitor and punish deviators from a price-fixing agreement.
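The monitoring mechanism described above requires almost no sophistication to automate, which is part of why enforcers take it seriously. The sketch below is illustrative only: the seller names, the 2% tolerance, and the function itself are hypothetical, chosen simply to show how little code "cartel policing" would take.

```python
def flag_deviators(observed_prices, agreed_price, tolerance=0.02):
    """Illustrative deviation monitor: flag any seller whose observed
    price falls more than `tolerance` (default 2%) below an agreed or
    recommended price. All inputs are hypothetical examples.

    observed_prices: mapping of seller name -> currently observed price
    agreed_price: the agreed-upon or recommended price being policed
    """
    threshold = agreed_price * (1 - tolerance)
    return [seller for seller, price in observed_prices.items()
            if price < threshold]
```

A script like this, run against scraped listings on a schedule, would surface "deviators" for punishment automatically, illustrating how an ordinary compliance-style tool becomes an instrument of collusion when pointed at an unlawful agreement.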
US enforcement agency and practitioner views on algorithms and competition law
The DOJ and FTC’s policy paper for the OECD, along with comments by FTC Commissioner Terrell McSweeny and acting FTC Chair Maureen Ohlhausen, suggest that US regulators are confident that existing antitrust regimes and principles can capably address harms associated with algorithms. Not everyone in the antitrust bar seems to agree, however; various commentators have advanced a different view, pointing out defenses that might exist for claims of algorithmic collusion, and/or otherwise proposing that the existing laws are insufficient to address collusion given these new technological scenarios.
Acting FTC Chair Ohlhausen’s recent remarks before the Concurrences Antitrust in the Financial Sector Conference expressed optimism: “From an antitrust perspective, the expanding use of algorithms raises familiar issues that are well within the existing canon.” Similarly, the DOJ and FTC’s paper submitted to the OECD echoed the same sentiment regarding algorithms’ impact on antitrust enforcement, analyzing scenarios and recent cases to show that the antitrust laws are equipped to handle such conduct.
The DOJ/FTC paper analyzed cases dating back to 1993 that demonstrated how technological developments do not preclude antitrust liability. Specifically, in the Airline Tariff Publishing Company case, the court declared that the use by competitors of a common computer system to establish or implement an illegal pricing agreement violated the US antitrust laws. The two US agencies also highlighted a 2015 case in which the DOJ brought criminal price-fixing charges against retailers who had agreed to use pricing algorithms on Amazon to eliminate competition among themselves. These cases reaffirmed, from the enforcers’ viewpoint, that price-fixing cartels aided by technology can be challenged successfully, regardless of the means by which they are implemented or operated. Acting FTC Chair Ohlhausen likewise noted: “[W]hether it’s calls, text messages, algorithms, or Morse code, the underlying legal rule is the same . . . agreements to set prices among competitors are always unlawful.”
Notwithstanding this confidence, US regulators are mindful of the challenges that pricing algorithms may present. FTC Commissioner McSweeny’s speech at Oxford on algorithms and coordinated effects noted that “[c]oncerns of algorithmic tacit collusion are still largely theoretical at this point . . . We have a lot to learn about the effects of pricing algorithms and artificial intelligence. Further research will contribute to better and more effective competition enforcement in this area.” Recognizing the need to better understand how algorithms and artificial intelligence software work, she noted that the FTC has created an Office of Technology, Research, and Investigation, which now includes technology specialists and computer scientists.
Some prominent US antitrust attorneys have expressed a contrary view: that the Sherman Act and a century of jurisprudence are likely ill-equipped to address this new technological frontier of pricing decisions made through automation, far removed from their human counterparts. In particular, a number of recent articles by practitioners have advanced the view that prosecutions and civil litigation will be hampered by evidentiary challenges. Smart robots may well find ways to cover their tracks.
The US antitrust authorities have stressed, however—and we agree—that there is already a rich body of case law concerning anticompetitive conduct carried out through agents and employees of the companies acting within the scope of their employment and authority. It is no defense to suggest that algorithms, programmed for autonomy, have learned and executed anticompetitive behavior unbeknownst to the corporation. The software is always a product of its programmers—who of course have the ability to (affirmatively) program compliance with the Sherman Act, even erring on the side of caution. Nor can the company’s decision not to supervise, monitor, or check for compliance on a continuing basis prevent antitrust liability; on the contrary, the decision to employ algorithmic pricing carries with it significant corporate responsibility, as with any decision to disperse pricing authority widely among employees (and permit inter-competitor ‘discussions,’ interactions, and information-sharing).
In short, the technology may be different, but the antitrust principles—and harms—remain the same.
EU views on algorithms and competition law
The EU Commission’s paper published for the OECD in June 2017, combined with recent comments from the Commission’s antitrust chief Margrethe Vestager, show a vigilance by the Commission towards algorithmic pricing.
Thus, in its paper prepared for the OECD, the Commission cautioned that the use of algorithms to facilitate collusion may breach competition law and signaled that such conduct can attract standard, and potentially increased, cartel fines. It also stressed that companies should ultimately be held responsible for the activities of any algorithm or pricing software they deploy, noting: “like an employee or an outside consultant working under a firm’s ‘direction or control,’ an algorithm remains under the firm’s control, and therefore the firm is liable for its actions.”
Some practices are simply an online version of long-established unlawful offline conduct and therefore should not pose challenges to enforcers. However, the Commission recognizes that more subtle practices featuring ‘tacit’ rather than ‘explicit’ collusion may prove more troublesome to identify and regulate. Tacit collusion through algorithms may or may not fall foul of EU competition law, as proving the basic conditions of tacit collusion in order to evidence anti-competitive behavior may be complicated and unclear. The Commission also warns that some of the more sophisticated tools used by companies to observe others’ pricing may not be caught by the wording set out in the EU’s law against anticompetitive communications.
Speaking at the 18th Bundeskartellamt IKK Conference in Berlin, Commissioner Vestager warned: “automated systems could be used to make price-fixing more effective. That may be good news for cartelists. But it’s very bad news for the rest of us . . . we need to make it very clear that companies can’t escape responsibility for collusion by hiding behind a computer program.”
In the same vein, the Court of Justice of the European Union (“CJEU”) made it clear in the recent Eturas decision that companies cannot escape liability where collusion has been achieved and executed through automated systems. In that case, the operator of a Lithuanian travel booking system sent an electronic message to its travel agents proposing to cap discounts at 3%. The CJEU held that travel agents who saw the message and did not distance themselves from the proposal could be liable.
Antitrust authorities in Europe also have been reviewing how the use of algorithmic technology by platforms such as Google, Facebook and Amazon affects consumer markets. Notably, on June 27, 2017, the EU Commission announced a fine of €2.42bn against Google for abusing its dominance as a search engine by giving an illegal advantage to its own comparison-shopping service. The Commission found that Google denied rival shopping comparison websites the opportunity to compete by demoting them within search results through the use of algorithms, thereby denying consumers a genuine choice. The Commission is also currently investigating whether four makers of electronic goods prevented retailers from setting their own prices by using software to make the policing of retailers’ prices more effective.
Online investigations have become a major focus of competition work at the Competition and Markets Authority (“CMA”) in the UK. Last year, the CMA fined online sellers of posters for automated price collusion on Amazon Marketplace. The CMA is also preparing a report on the impact of digital comparison tools on consumer behavior which will focus on the effect of website and smartphone application price-comparison tools on consumer choice.
The way forward
It remains to be seen how the current competition laws will be used to combat algorithmic collusion. Despite differing degrees of confidence in the existing antitrust laws to root out and punish anticompetitive agreements among humans or robots, there can be no disagreement about the importance of keeping pace with developments in the technological sphere. In order to ensure effective, balanced enforcement, authorities will need to understand the inner workings of algorithms and their impact upon business models and competition more broadly. Continued economic studies across various markets and sectors will assist in this understanding by providing vital information on when algorithms result in coordinated behavior, and under what conditions. It is particularly heartening to learn of the involvement of computer scientists in progressing our collective understanding of these issues.
We look forward to addressing these challenges as advocates and are buoyed by the measured calm reflected in recent enforcer remarks. On the US side, acting FTC Chair Ohlhausen has noted: “There is nothing inherently suspect about using computer algorithms to look carefully at the world around you . . . If conduct was unlawful before, using an algorithm to effectuate it will not magically transform it into lawful behaviour. Likewise, using algorithms in ways that do not offend traditional antitrust norms is unlikely to create novel liability scenarios.”
Similarly, for the EU, Commissioner Vestager has declared: “We certainly shouldn’t panic about the way algorithms are affecting markets. But we do need to keep a close eye on how algorithms are developing. We do need to keep talking about what we’ve learned from our experiences. So that when science fiction becomes reality, we’re ready to deal with it.”
Only time will tell whether the antitrust laws are up to the task and will keep pace with technology. Still, we expect to have some preliminary answers soon as these issues percolate up through the courts. Stay tuned for an update in 12-18 months.