Joint study by the Bundeskartellamt (German Cartel Office) and the Autorité de la concurrence (French Competition Authority) indicates need for higher compliance standards

Digital ubiquity, and the resulting rules, do not only concern tech giants. On the contrary, digital regulation is relevant for companies in all market sectors (see our recently published global study on regulation in digital markets – A Turning Point for Tech). The ever-growing volume of data, and the need to exploit it, inevitably involves artificial intelligence and, in particular, algorithms. On the one hand, algorithms have increasingly become key to making (digital) business models more successful, efficient and innovative by creating significant competitive advantages.

On the other hand, algorithms can also have a negative competitive impact, as they might be used by companies to facilitate collusion. Consequently, it is no surprise that antitrust authorities, especially in Europe, have their radars set, even though there are still very few precedents.

On 6 November 2019, the German Cartel Office and the French Competition Authority published a joint study on the competition risks resulting from the use of algorithms (the “joint study”), and a conference was held in Paris to present it. The joint study forms part of the two authorities’ broader collaboration under their “Algorithms and Competition” initiative. Published more than a year after the German Monopolies Commission’s report on “Algorithms and Collusion” and just one month after the bill for the digitalisation of German competition law, the joint study illustrates that the German Cartel Office is determined to become a pioneer in antitrust enforcement in digital markets. Other competition authorities are also tackling this issue: the UK Competition and Markets Authority published a working paper on pricing algorithms on 8 October 2018, and the Italian competition authority participated in the Paris conference.

Algorithms in the spotlight of antitrust authorities

Although general-use software is usually an innocuous product in terms of competition law, it does involve the heavy use of algorithms. It is therefore vital that these algorithms are set up in compliance with antitrust rules. Unsurprisingly, there have been heated debates, as well as plenty of academic research, on algorithms in general, their use in business and their assessment under competition law. Typically, the focus has been on the use of algorithms in a commercial context, i.e. on pricing (so-called “pricing algorithms”), as well as on the potential effects of algorithms on competition in general. By contrast, in their joint study, the German and French competition authorities take a broader approach: the joint study covers the whole concept and mechanics of algorithms, their various fields of application and their possible anti-competitive aspects. In particular, the joint study does not deal only with the specific (market-related) application of algorithms, for example in pricing, but also with the underlying data used as input for running algorithms and with the significance of algorithms in the context of machine learning, i.e. “artificial intelligence”.

Algorithms and collusion

The joint study covers the mechanics of algorithms as well as their scope of application, focusing on the potential competitive impact through collusion. The joint study addresses the following three scenarios relating to the use of algorithms:

Scenario 1 – Algorithms and their facilitating and monitoring role in “traditional” cartels

According to the joint study, a scenario which is particularly prone to cause antitrust violations is one in which algorithms are used to enforce or monitor already existing anti-competitive agreements. In such a situation, typically referred to as a “traditional cartel”, the algorithm in practice serves only as a supporting and monitoring tool, facilitating the implementation of the illegal agreement.

The joint study further notes that algorithms could serve as collusion tools in both horizontal and vertical agreements. A vertical scenario, for example, could be one where algorithms are used to detect price deviations under resale price maintenance agreements, thereby helping manufacturers enforce the illegal agreements.
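To make the monitoring role concrete, the deviation detection described above can be sketched in a few lines. This is a hypothetical, minimal illustration; the product names, prices and data structure are invented and do not come from the joint study:

```python
# Hypothetical sketch: how an algorithm might flag deviations from an
# agreed (and, under competition law, illegal) resale price floor.
# All product names and prices are invented for illustration.

AGREED_FLOOR = {"widget-a": 9.99, "widget-b": 24.50}

def flag_deviations(observed_prices):
    """Return retailers whose observed price undercuts the agreed floor."""
    deviations = []
    for retailer, product, price in observed_prices:
        floor = AGREED_FLOOR.get(product)
        if floor is not None and price < floor:
            deviations.append((retailer, product, price, floor))
    return deviations

observed = [
    ("shop-1", "widget-a", 9.99),   # at the floor: not flagged
    ("shop-2", "widget-a", 8.49),   # undercuts the floor: flagged
    ("shop-3", "widget-b", 24.50),  # at the floor: not flagged
]
print(flag_deviations(observed))
```

The point of the sketch is that nothing about the code itself is unlawful; it is the agreement the code polices that makes its use a cartel-facilitation tool.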

The joint study concludes that this first scenario does not raise specific antitrust issues as such practices could be sanctioned without any further consideration of the algorithms involved. However, the competition authorities point out that a deeper understanding of algorithms would allow them to analyse both efficiencies derived from such algorithms as well as the negative effects/aggravating circumstances resulting from the use of such algorithms.

Scenario 2 – Algorithm-driven collusion between competitors involving a third party

In this scenario, an external consultant or software developer provides the same algorithm, or coordinated algorithms, to competitors. The interesting aspect of this scenario is that there is no visible (or even intended) coordination between the competitors, yet coordinated behaviour nevertheless arises in practice. According to the case law of the European Court of Justice (VM Remonts and Eturas), whether an antitrust infringement exists depends predominantly on whether the undertakings involved were aware that the behaviour was unlawful, or at least could have foreseen the anti-competitive behaviour.

Scenario 3 – Collusion induced by the parallel use of individual algorithms

The third scenario mentioned in the joint study explores cases where competitors use different and independently designed algorithms, i.e. without any communication or coordination between the companies. However, as the joint study notes, this is not necessarily enough to exclude coordinated market behaviour. In particular, the fact that competitors are relying on pricing algorithms might lead to a higher degree of convergence between their market activities simply because of the computers’ interaction.
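The convergence effect described above can be illustrated with a toy simulation: two deliberately simple, independently designed pricing rules that only observe each other’s public prices end up at the same price without any communication. This is a hypothetical sketch; the rules and all numbers are invented for illustration only:

```python
# Hypothetical sketch: two independently designed pricing rules interact
# only through publicly observable prices, yet converge to the same price.
# The rules and numbers are invented for illustration.

def rule_a(rival_price, cost_floor=10.0):
    """Undercut the rival slightly, but never price below cost."""
    return max(rival_price - 0.10, cost_floor)

def rule_b(rival_price):
    """Simply match the rival's last observed price."""
    return rival_price

price_a, price_b = 15.0, 14.0
for _ in range(100):            # repeated rounds of observation and reaction
    price_a = rule_a(price_b)   # A reacts to B's public price
    price_b = rule_b(price_a)   # B reacts to A's public price

print(price_a, price_b)         # both prices settle at A's cost floor
```

Neither rule was designed with the other in mind, and neither exchanges any information beyond observing public prices, yet the repeated interaction produces stable, identical pricing.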

Further, the joint study raises the interesting point of whether algorithms, in and of themselves, could reach a level of tacit coordination that would resemble the explicit forms of traditional collusion. However, algorithmic communication is still unexplored territory and is therefore usually discussed within the context of self-learning “black box” algorithms.

Such “black box” algorithms raise a question of liability for companies. The joint study indicates that some authors consider that black box algorithms should be treated like a company’s employees, so that the company is liable for their introduction and use. Other authors, the joint study explains, consider that companies should be held liable for their algorithms only if they breached “a reasonable standard of care and foreseeability”; the joint study concludes that the competition authorities’ approaches may vary between these positions. However, during the conference held in Paris on 6 November 2019, Isabelle de Silva, President of the French competition authority, insisted that companies should consider themselves responsible for the algorithms they use, even when provided by third parties, thereby hinting that the French competition authority may lean towards a stringent approach to black box algorithms. This echoes the position taken a couple of years ago by the EU Commissioner for Competition, Margrethe Vestager, at a conference of the German Cartel Office: “companies can’t escape responsibility for collusion by hiding behind a computer program”.

Unsurprisingly, algorithmic collusion is already on the radar of antitrust authorities. However, its plausibility, its concrete technical implementation and the resulting implications and risks are yet to be assessed in more detail. It would certainly be of great interest to see whether parallel, uncoordinated market behaviour through self-learning algorithms would be deemed illegal, particularly given that conscious parallel behaviour is not prohibited. If algorithms are used in a way that enables a company to adjust its market behaviour to that of its competitors on the basis of publicly observable conduct, then, as the joint study points out, such behaviour should be categorised as market intelligence and not as illegal coordination.

Conclusion and outlook

Traditionally, collusive effects have been considered through the lens of game theory in terms of their economic significance for competition. Algorithms are the new ‘players’ in that ‘game’, and their use can greatly contribute to maintaining the stability of anti-competitive agreements. This new situation raises the question of how competition law should deal with the issue.

Although the study does not answer this question, it identifies a key problem associated with the use of algorithms: in the future, collusion might no longer depend on actual communication between competitors at all.

Overall, the study emphasises that current antitrust rules are flexible enough to deal with competition law violations caused by the use of algorithms. At both European and Member State level, competition authorities have indicated that they will not shy away from applying antitrust rules to novel “digital cases”. However, as the joint study points out, it is still impossible to predict how competition authorities will apply competition law in practice in such cases. Similarly, it cannot be ruled out that a new legal framework or new analytical tools will be needed in the foreseeable future to deal with these issues more effectively.

Practical considerations and digital antitrust compliance

So far, there is no established practice as to how compliance and liability standards should be assessed in the context of algorithms (for a more in-depth analysis of this problem see Marx/Ritz/Weller, ‘Liability for outsourced algorithmic collusion – A practical approximation’ in Concurrences Review No 2-2019).

However, the joint study illustrates once more that antitrust authorities, and in particular the French and German competition authorities, are seeking to achieve a higher level of expertise in the field of AI and algorithms through close cooperation, in terms of both research and enforcement priorities.

Companies should view this as an opportunity to adjust their antitrust compliance systems to the challenges of digitalisation. This is particularly so for compliance systems dealing with matters such as Big Data, algorithms and digital platforms.

In sum, companies should bear in mind the following when using pricing algorithms:

  • Do not coordinate with competitors in relation to the set-up of pricing algorithms, i.e. in relation to the type and/or nature of the algorithms or data inputs used. Such coordination should be avoided even if it is indirect, i.e. channelled through an external third-party IT service provider.
  • Ensure that the IT service providers (internal and external) are subject to strict compliance standards regarding the development and use of pricing algorithms (so-called Compliance by Design).
  • Involve IT colleagues in compliance programs and training sessions, put in place a reporting system through which the IT department would be regularly reporting current and future plans in relation to the use of algorithms, and ensure that legal teams have a good understanding of algorithms used.