Organisations face fines of up to 10% of annual global turnover or £18 million (whichever is greater) for failure to comply.
On 15 December 2020, the UK government published its full response to the Online Harms White Paper consultation, which sets out final proposals for the new regulatory regime. The response confirms that companies in scope will face a range of new obligations relating to both illegal and harmful content, in addition to the threat of significant fines and other sanctions in the event of non-compliance. The proposed regulatory framework will be introduced in 2021 in the form of the Online Safety Bill.
The response comes more than a year and a half after the Home Office and the Department for Digital, Culture, Media and Sport (DCMS) first published the Online Harms White Paper in April 2019, which proposed a new compliance and enforcement regime to tackle online harms. In February 2020, the government set out preliminary details of the proposed regulatory regime as an initial response to the white paper. For background to this consultation, see Latham’s previous blog posts (White Paper launch; government interim response).
Organisations and Services in Scope
The new regime will apply to companies whose services host user-generated content or facilitate interaction between users, one or more of whom is based in the UK, as well as to search engines. Services that play a purely functional role in enabling online activity will fall outside the scope of the regime.
The regime will apply to a wide range of social media sites, online marketplaces, video games, video sharing platforms, and search engines. Exemptions will apply; for example, business-to-business services and content published by a news publisher on its own site will be excluded. There will be further carve-outs for “low-risk businesses with limited functionality”, such as tools for providing product reviews on retailers’ websites, and comments sections on news publishers’ sites. Overall, the government anticipates fewer than three per cent of UK businesses will fall within the scope of the regime, although the basis of this conclusion remains unclear.
The regime will apply to both public communication channels and “services where users expect a greater degree of privacy”, such as online instant messaging services and closed social media groups. At this stage, the response has not detailed how organisations will be expected to balance the latter with their obligations under applicable privacy legislation and/or law protecting communications content and traffic data. This potential tension was picked up in various submissions to the preliminary consultation and, while the response indicates that codes of practice will confirm details of what actions companies are expected to take in the context of private communications, it notes that companies will need to “consider the impact on users’ privacy”.
Duty of Care
As anticipated, the regime will establish a “duty of care” between organisations and their users that will require companies to prevent the proliferation of illegal content and activity online. Although the precise formulation of the duty is not yet confirmed, all companies within the regime’s scope must ensure that appropriate safeguards are in place to protect users from harmful content and activity, proportionate to the level of risk posed to users by the relevant service and/or content.
The legislation will set out a general definition of “harmful content and activity” based on an objective standard: “content or activity which gives rise to a reasonably foreseeable risk of harm to individuals, and which has a significant impact on users or others”. Secondary legislation will also set out priority categories of pre-defined harmful content considered to pose the greatest risk to users. This list will include certain criminal offences (such as child sexual exploitation), in addition to certain harmful content affecting children (such as violence) and adults (such as content regarding eating disorders).
Ofcom will publish codes of practice to establish how organisations are able to fulfil the new legal duty of care and comply with other obligations under the new regime, such as responding to information requests from the regulator. In the meantime, the UK government has released interim codes of practice on terrorist content and activity online, as well as child abuse and exploitation.
In addition to the duty of care, companies will face a range of more specific responsibilities. These will include maintaining “effective and accessible” reporting and redress mechanisms, designed to give users the ability to challenge harmful content and activity they see online, a company’s compliance with the regime, or infringements of their rights (for example, in relation to specific content-removal decisions). Users will also be able to report their concerns to Ofcom as regulator.
Substantial Fines for Non-Compliance, with Ofcom to be Appointed as Independent Regulator
Ofcom will be appointed as the independent regulator to enforce the new regime. Its responsibilities will include setting clear safety standards within the existing duty of care, monitoring compliance with reporting requirements, and utilising substantial enforcement powers. Ofcom’s appointment means there will be two significant, independent regulators in the UK online space, given the existing role of the Information Commissioner’s Office in relation to data protection enforcement. Companies that exceed a specific revenue threshold will also need to register with Ofcom and pay an annual fee.
The regime provides that Ofcom will be able to levy substantial fines against companies of up to 10% of their annual global turnover or £18 million, whichever is greater. Ofcom will publish guidance on how penalties will be determined, based on its operating principles.
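As a purely illustrative sketch (not part of the government’s proposals, and the function name and figures below are assumptions drawn only from the cap described above), the maximum fine works out as the greater of the two amounts:

```python
def max_fine_gbp(annual_global_turnover_gbp: int) -> int:
    """Illustrative only: the proposed cap is the greater of 10% of a
    company's annual global turnover or a fixed GBP 18 million."""
    FIXED_CAP_GBP = 18_000_000
    return max(annual_global_turnover_gbp // 10, FIXED_CAP_GBP)

# A company with GBP 500m turnover: the 10% limb applies (GBP 50m).
print(max_fine_gbp(500_000_000))  # 50000000
# A company with GBP 100m turnover: the GBP 18m fixed limb applies.
print(max_fine_gbp(100_000_000))  # 18000000
```

In practice, any actual penalty would be set well below this cap in most cases, in line with the guidance Ofcom is expected to publish.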
The regulator will also be empowered to adopt “business disruption measures”, including, ultimately, blocking access to non-compliant online services in the UK by requiring providers to withdraw access to key services. The response notes that the government reserves its position on potential criminal sanctions for senior managers, although it has confirmed that any such sanctions will not be introduced during the first two years of the new regime.
Additionally, Ofcom will have a range of lower-level investigatory and enforcement powers, including broad rights to gather information, the ability to enter companies’ premises, and the power to require companies to commission reports from third-party “skilled persons” on specific issues of concern (which could include, for example, assessing the effectiveness of content moderation software).
While Ofcom will not investigate individual cases of alleged non-compliance, a “super complaints” function will be established to enable organisations acting on behalf of several users to raise systemic compliance concerns. The response confirms that there is no intention to introduce any new, independent resolution mechanism (such as an alternative dispute resolution forum), or to establish any new legal avenues for individuals to pursue action against companies.
Risk-Based Approach and Proportionality
The regulator will take a risk-based and proportionate approach to applying the new regime.
The regulatory framework will also establish a tiered approach to the treatment of services, with a view to protecting freedom of expression and avoiding disproportionate burdens on small businesses. The majority of services will be classified as “Category 2 services” — the providers of which will need to take proportionate steps to address relevant illegal content and activity, as well as protecting children. A smaller group of “high-risk, high-reach” services will be designated as “Category 1 services”, which will additionally be required to take steps regarding content that is legal but harmful to adults. Ofcom will be responsible for maintaining a register of Category 1 services, based on its assessment against a list of factors to be set out in legislation (such as size of user base) and thresholds relating to those factors.
All Category 1 services will be required to publish transparency reports containing details of their efforts to tackle posts relating to online harms on their services. The Secretary of State for DCMS will have the power to extend this reporting obligation to services beyond Category 1. The content of the reports will vary between services and will be determined by the regulator, which will consider each service’s audience and type when deciding what information is required in each provider’s report. The high-level categories of information that companies are expected to report on will likely include details regarding enforcement of the company’s terms and conditions and the measures and safeguards in place to protect fundamental rights, amongst others.
Interplay with Existing Regimes
The response confirms that the new regulatory framework will not generally alter the current law (particularly stemming from the e-Commerce Directive 2000/31/EC) regarding the liability of internet intermediaries for specific content hosted on their services, including the so-called “safe harbour” defence. Under that defence, an intermediary is not liable for illegal content it hosts (and is under no obligation to proactively monitor for it) until it becomes aware of the content, at which point it must remove it. The new UK regime disapplies the safe harbour defence, however, in relation to terrorism content and child sexual exploitation content, for which Ofcom will be empowered to require platforms to use monitoring tools to proactively identify and remove relevant content “where this is the only effective, proportionate and necessary action available”, subject to further conditions. As discussed above, companies will also need to assess how to comply with the new regime in a way that is consistent with their responsibilities under applicable privacy and communications legislation, including the protection of users’ rights online to freedom of expression and privacy.
The interplay between the new regime and the UK’s Audiovisual Media Services Regulations 2020 is important to note. The Audiovisual Media Services Regulations, implementing amendments to the EU’s Audiovisual Media Services Directive, introduced rules in the UK regarding the limitation of harmful content across video sharing platforms. These rules are likely to be superseded by the new Online Safety Bill regime.
The Online Safety Bill, which will set out the statutory framework implementing the government’s full response to its consultation, is expected to be introduced in 2021. Once published, the Bill will be far from final: the legislative process for enacting it into UK law provides a number of opportunities for debate and amendment.
In the meantime, organisations that anticipate being caught by the new regime should, if they have not already done so, begin taking preparatory steps, including the following:
- Ensuring compliance with the interim codes of practice on content relating to terrorism and child abuse and exploitation
- Preparing (or reviewing existing) policies, procedures, and systems regarding the designation and moderation of ‘acceptable’ content on (and the removal of ‘unacceptable’ content from) their platforms, and processes for enabling and addressing user complaints
Similarly, the European Commission recently published the draft EU Digital Services Act as part of the European Digital Strategy. Key proposals in the draft legislation include enhanced obligations on service providers to monitor and remove illegal content, and specific obligations on very large platforms to manage systemic risks. These developments at EU level, together with the Online Safety Bill in the UK, are indicative of a global trend towards increasing regulation of digital services, particularly those that host and facilitate user-generated content.