Following Facebook’s decision to ban news outlets in Australia, which has since been reversed after the government made media code amendments, the issue of Big Tech regulation has once again been thrust into the public eye. Head of Media Disputes Emily Cox explores why it could be a long road ahead for Big Tech regulation in an article for Information Age. 

Five of the six most valuable companies in the world today are tech companies. Our heavy reliance on Big Tech services, which offer undeniable positives, has only increased during the pandemic, as we work, shop and play online.

The unchecked concentration of power in “digital monopolies” is, however, also having negative impacts on people and businesses, in the form of data practices, content and media control, and misuses of market power. Examples include Facebook’s Cambridge Analytica scandal, Covid-19 conspiracy theories running rampant on social media and Google’s preferential ranking of its comparison-shopping service in its search results pages.

This situation has developed because, as with the banks pre-financial crisis, governments and regulators did not fully understand these companies and their business models. But as negative impacts seemingly grow, governments are waking up to the need for regulation. It is no longer a question of if we should regulate Big Tech, but rather how we regulate.

Innovative thinking to regulate innovative industries

While China has nationalised many of its powerful tech companies, in the West we have relied on anti-trust laws, which are slow-moving and ill-equipped to deal with Big Tech — highlighting the need for innovative thinking on regulation.

The prevailing concern about regulation is how to realign consumer interests with the technology revolution without stifling innovation. However, there is a compelling argument that Big Tech has not innovated in some time. For example, Facebook’s acquisitions of ‘disruptors’ WhatsApp and Instagram prevented them from becoming competitors and stifled innovation. Similarly, Big Tech could have innovated on content moderation and online harms, but, absent any commercial imperative or regulatory pressure, it failed to do so to a satisfactory degree.

What is happening: a global view

Jurisdictions across the globe are seeking regulatory answers from different perspectives. The US has recently looked to the reinvigoration of anti-trust enforcement as the answer — examples include the Department of Justice’s anti-trust investigation into Google and the Federal Trade Commission’s anti-trust suit against Facebook. However, it remains to be seen what approach President Biden will adopt.

The EU is leading the way on regulating behaviour before it happens, rather than policing outcomes. It is formulating a raft of legislation, the flagship being the Digital Services Act package published last December. The package includes the Digital Markets Act, which regulates the behaviour of core platform services that act as “gatekeepers” between business users and their customers and enjoy a significant market position. It sets out clear do’s and don’ts, such as a prohibition on self-preferencing and obligations on data-sharing.

The accompanying Digital Services Act includes a range of transparency measures. For example, all online intermediaries must verify that the businesses using their services are genuine and adhere to traceability rules, to help track down sellers of illegal goods or services. Platforms must also be transparent about advertisements and the algorithms used to display them, and must inform the relevant authorities if they suspect a serious criminal offence, such as a threat to the life or safety of persons. The Terrorist Content Regulation will also empower EU authorities to issue orders requiring platforms to remove terrorist content within one hour, or face sanctions.

What action has the UK government taken so far?

The UK government has confirmed that it will establish a new pro-competition digital regulator, the Digital Markets Unit, in April 2021. This will work with a more principles-based framework, rather than a fixed black and white list of behaviours. The Digital Markets Taskforce’s advice to the government proposes individual, tailored rules for companies with ‘strategic market status’ (based on market power rather than valuation or revenue). The challenge will be to ensure that this flexible and collaborative approach to regulation does not come at the cost of legal certainty.

Another significant step will be the Online Safety Bill, set to be rolled out in 2021. This will put a statutory duty of care on social media platforms and search engines to protect all users from illegal content, with particular regard to children and harmful content.

Innovation is key to success

There are undoubtedly different approaches being taken across the globe when it comes to Big Tech regulation. Yet the most effective regulation — towards which others will converge in time — will be that which adopts a holistic approach, merging competition, data privacy and consumer protection laws. While there are promising signs in this direction, the EU’s clearance of Google’s data-driven acquisition of Fitbit suggests that a holistic approach is not a done deal.

To be successful, Big Tech regulation should be as innovative and nimble as the industry itself, ensuring that the industry’s ability to continue to innovate is not hamstrung by regulatory over-reach.

This article was published in Information Age on 24 February 2021.