Last month, the UK Government published its initial response (the 'Initial Response') to 2019’s Online Harms White Paper (the 'White Paper'). The response is described as an indication of the Government’s direction of travel, rather than a detailed update on all policy proposals, whilst also clarifying areas of uncertainty and criticism. Ahead of the Government’s full response, which is expected later this spring, we recap the key provisions and issues of the White Paper since its inception last year.
In April 2019, the UK Government published the White Paper, setting out proposals for keeping UK internet users safe online and managing ‘online harms’, covering:
‘online content or activity that harms individual users, particularly children, or threatens our way of life in the UK, either by undermining national security, or by reducing trust and undermining our shared rights, responsibilities and opportunities to foster integration’
The White Paper proposes protecting internet users by introducing a wide-ranging regime of internet regulation in the UK, establishing a new statutory duty of care on companies to ‘take reasonable steps to keep their users safe and tackle illegal and harmful activity on their services’. This duty of care would be overseen by an independent regulator (likely to be Ofcom) that will set out how businesses can comply with the duty of care in codes of practice. The regulator would be equipped with certain enforcement powers and sanctions to ensure that relevant companies comply with applicable codes and, even where no specific code exists, companies would be expected to be ‘assessing and responding to the risk associated with emerging harms or technology’.
Who does the White Paper apply to?
The proposals apply to businesses that allow users to share or discover user-generated content or interact with each other online. In practice, this definition captures social media platforms, cloud hosting providers, file hosting sites, public discussion forums, retailers who allow users to review products online, messaging services and search engines.
What does this mean for companies?
- Companies within scope would be required to take action appropriate to the scale and severity of the harm in question. More stringent requirements would be imposed for harms that are clearly illegal, such as terrorist activity and hate crime
- The regulator would take a risk-based and proportionate approach and any action would be assessed according to the size and resources of the company and the age of those at risk of harm
- Each company within scope of the legislative framework would need to ensure that their terms and conditions comply with the duty of care and codes of practice and must be sufficiently clear and accessible to all audiences, including children
- Companies would be expected to have user-friendly complaints and appeals procedures in place
- The regulator would have power to require annual transparency reports from companies which must include evidence of effective enforcement of the company’s terms and conditions, the processes in place for reporting online harms, the number of reports received and any action taken by the company
- In addition, the regulator would have the power to impose fines, disrupt business activity, block services and impose liability on individual members of senior management for non-compliant organisations
What has the response been?
The Government received over 2,400 responses with concerns focusing on the impact on freedom of expression and the businesses in scope of the duty of care. The Government has considered such feedback and revised its approach, as set out in the Initial Response.
Freedom of expression: the consultation highlighted concerns that the White Paper would negatively affect freedom of expression. The Initial Response has tackled this concern by focusing on systems and processes that platforms have in place to address harmful content, rather than the removal of specific content. In a development from the White Paper, the Initial Response sets out how regulation will establish different expectations on companies for content or activity that is outright illegal and content or activity that is legal but still potentially harmful. In relation to legal content, companies will have the freedom to state explicitly, for example in terms and conditions, what content and behaviour is acceptable/unacceptable, with a higher level of protection required for children. Illegal content will need to be removed swiftly and companies will need to have effective mechanisms in place to prevent such content appearing in the first place. Companies will also be required to have effective and proportionate user redress mechanisms to allow users to report harmful content and challenge content takedown where necessary. The regulator will not, however, adjudicate on individual complaints, or require the removal of specific content.
Codes of practice: whereas the White Paper anticipated a separate code applicable to each harm (both legal and illegal), the Government has changed tack in the Initial Response by suggesting that codes of practice would be produced in relation to terrorist or child sexual exploitation content, and any other codes would focus on systems and processes, rather than specific harms.
Clarity for businesses: the Government will provide clear and practicable guidance to help business understand potential risks arising from different types of service, and the actions that businesses would need to take to comply with the duty of care as a result.
Businesses in scope: feedback on the White Paper included suggestions that companies should be allowed to self-assess whether their services came within scope. This has not been incorporated into the Initial Response, with the Government stating that the regulations will only apply to companies that facilitate the sharing of user-generated content, e.g. through comments, forums or video-sharing, and only where the service is provided directly. On this basis, the Government estimates that only five per cent of UK businesses would fall within the envisaged scope. The Government will introduce guidance for businesses to give ‘clarity on whether or not the services they provide or functionality on their website would fall into the scope’.
Identity of the regulator: the Government has suggested it will appoint Ofcom as the online harms regulator.
Proportionality: the Government is keen to ensure that the proposals do not place an undue burden on businesses and therefore suggests that companies take reasonable and proportionate steps to protect users. Such steps will vary according to the company’s associated risk, size and resources available to it, as well as the risks associated with the service provided.
Transparency: effective transparency reporting will help ensure that content removal is substantiated and freedom of expression is protected. The Government has addressed concerns around a ‘one size fits all’ approach to transparency by applying proportionality to reporting requirements – i.e. reporting requirements will vary in proportion with the type of service being provided and the risk factors involved. To maintain a proportionate and risk-based approach, the regulator will apply minimum thresholds in determining the level of detail that an in-scope business would need to provide in its transparency reporting, or whether it would need to produce reports at all.
What are the grey areas?
Neither the White Paper nor the Initial Response covers jurisdictional issues. For example, what is considered a ‘UK business’? Is this a business headquartered in the UK, with significant operations in the UK, or simply registered in the UK? How much business does a company need to undertake in the UK to be within scope? As with the GDPR, the White Paper suggests that the eventual UK Online Harms legislation may have extra-territorial effect, meaning that businesses outside the UK would also need to comply.
It is also unclear to what extent the roles of the new regulator, Ofcom, and the Information Commissioner’s Office (ICO) will overlap, particularly in relation to the protection of children online, given the ICO’s recently published Age Appropriate Design Code, a code of practice for online services aimed at children. It appears that the remits of Ofcom and the ICO will overlap significantly in several areas, which creates a grey area in terms of dual regulation and potentially dual fines. It is also yet to be confirmed how the new regulator will be funded.
In terms of enforcement powers, the Initial Response does not reveal whether Ofcom will have a similar penalties regime to the ICO (with fining powers of up to four per cent of annual global turnover or €20 million). Will Ofcom’s current penalties structure be extended to govern its new remit, or will a new penalty regime be introduced?
The Government proposes publishing a more detailed proposal in spring, and social media platforms and other providers of online content will hope that it addresses issues not covered in the Initial Response, such as jurisdiction and enforcement powers.
In the interim period until Online Harms legislation comes into force, the Government is also expected to produce voluntary codes which tackle other harms ‘where there is a risk to national security or to the safety of children’, such as online terrorism and child sexual exploitation.