"Fake news" has dominated the headlines since the Trump presidency began both in the USA and across the globe. High profile individuals and politicians regularly defend allegations by complaining about inaccurate digital and hardcopy print.

However, individual members of the public are also increasingly suffering at the hands of social media. In response to the rise in online offending, the CPS published Guidelines on social media offences at the end of 2016.

Social media offences

There are a large number of offences that the CPS must tackle, and a wealth of legislation to rely on, including but not limited to the Offences Against the Person Act 1861, the Malicious Communications Act 1988, the Protection from Harassment Act 1997 and the Communications Act 2003. In practice, it is a criminal offence to publish material which contains (this is not an exhaustive list):

  • A credible threat;
  • An indecent, grossly offensive or false communication made with intent to cause distress or anxiety (including racial abuse);
  • A private sexual photograph or film (unless the person depicted has given consent);
  • An unsubstantiated assertion that causes serious harm to someone’s reputation; or
  • An assertion you know to be false with intent to make financial gain.

This covers a broad range of online material and these offences can now be committed through countless social media platforms, not just Facebook and Twitter. Given this, one would expect social media start-ups, social media giants, and journalists to be obliged to comply with certain standards.

Is this the case in practice?

According to EU Directive 2000/31/EC: i) social media companies should not be liable if they are unaware of illegal material on their platforms, or if they act expeditiously to remove or disable access to the information once it is discovered; and ii) Member States should not impose a general obligation on social media companies to monitor for illegal activity on their services.

In the UK, according to a parliamentary publication in early 2017, social media companies currently face almost no penalties for failing to remove illegal content.

Within the EU, change does appear to be on the horizon. In a Communication of 28 September 2017, the European Commission suggested that online platforms "should adopt effective proactive measures to detect and remove online content".

More recently, on 1 January 2018, Germany introduced punitive measures against social media companies that allow unlawful content on their digital platforms. What constitutes illegal online material is broadly similar in the UK and Germany, save for a few distinctions in the realm of Holocaust denial and related issues. The German Network Enforcement Act (NetzDG), however, now subjects social media companies to the following obligations:

  • They must offer users an easily recognisable and accessible procedure for reporting criminally punishable content;
  • They must take down or block access to obviously unlawful content within 24 hours of receiving a complaint about said content;
  • Less obviously criminal content must be taken down or blocked within seven days of receiving a complaint. Alternatively, the material can be referred to an institution which can make the final decision on whether the content is in fact unlawful;
  • They must inform complainants of decisions taken in response to their complaints and provide reasons for their decisions; and
  • They must submit sufficiently detailed, and publicly available, biannual reports on their handling of complaints regarding criminally punishable content.

Where criminal content is not deleted in full, on time or at all, a regulatory offence may have been committed. The law applies to any company operating an internet platform with more than two million registered users in Germany, and companies found in breach of the rules can be fined up to €50 million.

The NetzDG does not directly deter people from publishing illegal online material. Instead, it shifts culpability to the platform on which the material is disseminated, and forces those platforms, under threat of financial penalty, to self-censor.

Other states, including the UK, are likely to consider similar legislation before long. The Committee on Standards in Public Life has already recommended a legislative framework that would make social media companies liable for illegal content on their platforms.

On 24 January 2018, in her speech at the Davos World Economic Forum, the Prime Minister lambasted social media companies for not doing enough to tackle extremism and child abuse.

Whilst this will resonate favourably with the public, the issue of tackling “fake news” may be more controversial.

It has recently been reported that the Prime Minister is set to authorise a rapid response unit to stop "fake news" spreading online and to "reclaim a fact-based public debate". This government-led approach may appeal to some, given that the state would decide what constitutes removable content. However, concerns have already been raised about a self-serving intention behind the proposal: for example, the Prime Minister recently suggested the unit would deal with comments such as an allegedly false statement about her government made by a radio presenter. Hopefully this will not detract from the legitimate work that such a unit could do, work that may well be in the public interest.

What remains clear is that the proliferation of social media and online news will require increasing regulation and/or legislation. The UK legislation mentioned above was all enacted either before social media existed or before it was widely used globally. As user numbers grow and new platforms emerge, both the frequency and the variety of offending rise. Time will tell how effective the new German model is, and the UK will certainly be watching. We may well soon see a government "rapid response unit" for "fake news", shortly followed by a requirement for social media giants to self-censor their own platforms for illegal content.