Hot on the heels of the EU adopting the new platforms regulation and the new copyright directive, the Department for Digital, Culture, Media & Sport and the Home Office have jointly proposed a new statutory duty of care aimed at reducing “online harms”, backed up by a regulator. We explain the implications.

Who will be affected?

The proposal is for a new statutory duty of care to apply to any “companies that allow users to share or discover user-generated content, or interact with each other online”, regardless of whether they are based in the UK or not. The report focuses on the activities involving “Hosting, sharing and discovery of user-generated content (eg, a post on a public forum or the sharing of a video)” and “Facilitation of public and private online interaction between service users (eg, instant messaging or comments on posts)”.

While the proposals appear to be targeted at the large social media companies, they are not limited to such organisations. Subject to the outcome of the consultation and the following legislative process, the new duty of care could apply to private messaging apps as well as review platforms, e-commerce websites that permit reviews and websites that enable comments.

The report specifically states that “companies of all sizes will be in scope of the regulatory framework”, leading to concerns that start-ups without the financial clout of the large social media companies will struggle to comply. While all companies will be required to take reasonable and proportionate steps to “tackle harms on their services”, the government wants to “minimise excessive burdens” by taking into account the size and resources of the particular company. While this is helpful for companies that only fall into the scope of the new duty of care by virtue of some ancillary service they provide, it may also create uncertainty about what is or is not reasonable and proportionate.

What problem is being addressed?

The government is concerned about “online content or activity that harms individual users, particularly children, or threatens our way of life in the UK”, and believes that “illegal and unacceptable content and activity is widespread online”. It is particularly concerned about the effect of such content and activity on children and young people, including in terms of child sexual exploitation and abuse.

The White Paper lists various harms that are in scope, including clearly defined illegal activities (eg, child sexual exploitation, terrorism and hate crimes), as well as less clearly defined harms (eg, intimidation and disinformation). The White Paper also lists harms that are out of scope (eg, harms suffered by legal entities rather than individuals, breaches of data protection legislation and other problems addressed by other means).

What is the duty of care?

The government has said that it will legislate to establish a new duty of care on companies “to take reasonable steps to keep their users safe and tackle illegal and harmful activity on their services”. A few points arise out of this:

  • The duty is not limited to protecting users from illegal activity – it covers “harmful” activity as well. Some commentators have criticised the government for placing such an obligation on businesses.
  • More stringent requirements will be placed on companies to prevent illegal behaviour than harmful behaviour. While we await the development of the codes of practice, there is some uncertainty over how the companies will be expected to correctly distinguish illegal behaviour from harmful behaviour, eg harassment vs. unwanted attention, or even harmful behaviour from harmless behaviour, eg revenge porn vs the sharing of private images between consenting adults.
  • Illegal and harmful activity might relate to non-users, covering situations such as the sharing of private images and video (including revenge porn). Where there is no victim to report the activity, this suggests that some form of monitoring will be required.
  • The obligation is to take “reasonable steps”, not to prevent such harm altogether. In particular, the White Paper states that there will be no obligation to “undertake general monitoring of all communications”, consistent with Article 15 of the E-Commerce Directive 2000/31/EC, which prohibits member states from imposing a general monitoring obligation on service providers. However, the White Paper states: “The government believes that there is … a strong case for mandating specific monitoring that targets where there is a threat to national security or the physical safety of children, such as child sexual exploitation and abuse (“CSEA”) and terrorism.” What this means in practice, in particular for end-to-end encrypted messaging apps such as WhatsApp, is unclear. While the White Paper acknowledges the effect of Article 15 and says the proposals will comply with the E-Commerce Directive, there is uncertainty over whether such targeted monitoring obligations can be reconciled with EU law.
  • The proposals place great faith in the ability of technological solutions to help companies meet the new standards required of them. Companies are expected to invest in the development of such “safety technologies”. The White Paper states: “It is therefore vital to ensure that there is the technology in place to automatically detect and remove terrorist content within an hour of upload, secure the prevention of re-upload and prevent, where possible, new content being made available to users at all.” As is clear from the recent Christchurch terrorist attack, that technology is not available at present.

How will it be enforced?

Ofcom is expected to become the independent regulator, at least in the interim, with the possibility of a new regulator being created to police the new statutory duty. The proposals suggest that the regulator will have extensive powers, including powers to issue fines and compliance notices, to require information (including the ability to “request explanations about the way algorithms operate”, suggesting the regulator is expected to delve into companies’ technical detail) and to name and shame non-compliant businesses. The government is also considering allowing “super complaints” by designated bodies, a system already in place under s.11 of the Enterprise Act 2002 in relation to features of markets that may harm the interests of consumers.

Controversially, the government is consulting on more draconian powers, such as forcing services to close down, blocking websites and apps (in a similar manner to the blocking of websites used for downloading pirated content), and personal liability for “senior managers”.

The government has indicated that the regulator’s findings could assist individuals in bringing claims for negligence, breach of contract and breach of a statutory duty. This may open the door to “follow on” claims being brought against companies found to have fallen short of the new standards.

Feeling the pressure

With the GDPR in force for less than a year, the ink barely dry on the new Copyright Directive and the new Platforms Regulation, the ePrivacy Regulation still to come, and various investigations by data protection and competition authorities ongoing, many of the companies targeted by the White Paper will be feeling the pressure.

The question of how governments regulate platforms is a difficult one. The legal position has been to accept that platforms have no control over what is said and done using their services until they have been made aware of those activities. That idea is being tested at the moment, with governments seeking to limit that safe harbour and make operators of social media and messaging platforms responsible for the way their services are used.

Anyone wishing to contribute to the consultation can do so via the government’s consultation page. The consultation closes on 1 July 2019.