Tweeting, trending, blogging, posting - year on year, social media networking and information sharing (sometimes, oversharing) become more prominent in modern life. As of August 2013, there were 1.4 billion Facebook users worldwide and an average of 190 million tweets posted each day. The Internet undoubtedly opens up an exciting virtual world to its users, a limitless forum enabling us to 'stay connected'. But, as ever, with its benefits come challenges; not only for users, but now also increasingly for media operators themselves. These may include social media operators, content service providers, or even news websites on which people are able to post comments ("Operators").
Recent case law has not only shown the increased willingness of both civil claimants and criminal prosecutors to pursue identifiable posters of defamatory comments, but also a rise in legal action being taken in the context of the anonymous poster. It is this, in particular, that presents a significant risk for Operators. This article will consider the risks Operators face, the legal protection (if any) which the Defamation Act 2013 proposes to offer them, and the practical steps Operators can take towards mitigating the risk of exposure to potential liability in this context.
The case of the anonymous poster
The case of the defamatory post or tweet is all too well-known. A recent high profile example is McAlpine v Bercow, where civil proceedings were brought against Sally Bercow for her defamatory tweet, "Why is Lord McAlpine trending? *Innocent face*", posted two days after Newsnight broadcast a report including an allegation that a "leading Conservative politician from the Thatcher years" had abused a young boy in the 1970s and 1980s. Ultimately, the High Court found that this tweet was "seriously defamatory" and Bercow was ordered to pay Lord McAlpine damages. But what if there had been no Sally Bercow? What happens if the claimant cannot identify the poster because they have posted anonymously or under a pseudonym? This is primarily where the danger lies for Operators.
In this context, claimants may seek to bring a claim against the Operator concerned, to obtain disclosure of the poster's personal details. An example of this was seen in 2011 when Ryan Giggs successfully applied to the High Court for an order requiring Twitter to disclose the identity of a user who revealed details about a gagging order over his alleged affair with a model. It is unclear what further action, if any, Giggs took against the user.
The risk posed by the anonymous poster
Today, the risk faced by Operators extends beyond the disclosure of personal information, as shown in the case of Tamiz v Google Inc. Mr Tamiz complained that anonymous comments posted on the "London Muslim" blog hosted on Blogger.com were defamatory. Upon receiving the complaint, Google Inc. forwarded it to the offending bloggers and the comments were taken down voluntarily. Nevertheless, Mr Tamiz brought a defamation claim against Google Inc. for the reputational damage incurred in the period between notification to the bloggers and when the comments were taken down. Although it was considered "highly improbable that a significant number of readers would have accessed the comments after the earliest point at which the defendant could have become liable and prior to their removal", the Court of Appeal found that:
"[if] the defendant had allowed defamatory material to remain on its platform after it had been notified of its presence and had had a reasonable time within which to act to remove it, it could be inferred to have associated itself with, or to have made itself responsible for, the continued presence of that material on the blog and thereby to have become a secondary publisher".
The finding that an Operator could be a secondary publisher of defamatory information - and thereby liable for defamation - is of concern for all Operators who are faced with the task of monitoring millions of posts per day. What is clear from the Tamiz v Google judgment is that these cases are very fact-specific.
This raises a difficult question for Operators: should they protect the confidentiality of the anonymous poster and risk defending an action as a secondary publisher of the information, or should they disclose the poster's personal information to the defamed user and risk defending a breach of privacy claim or a claim for breach of Article 10 ECHR (freedom of expression)? Similarly, if the Operator is to remove the post, at what point should they be expected to do so: should removal take place upon receipt of a single complaint? It is clear that the faster an Operator takes action, the better.
Social media is a global phenomenon and this characteristic only adds to the risks faced by Operators. Although this article only considers the risks faced under English law, Operators should be mindful to assess the risks facing them in all jurisdictions in which they operate.
The Defamation Act 2013 (the "Act")
The Act, which comes into force on 1 January 2014, has been heralded as a piece of legislation which swings the balance back in favour of freedom of expression, with a number of changes that will affect the online environment and provide greater protection to website operators and other online intermediaries.
Perhaps most notable for Operators, section 5(2) of the Act introduces a new defence for Operators where they can show that it was not they who posted the statement on the website. However, this defence is defeated if the claimant can show that: i) it was not possible for him to identify the person who posted the statement to an extent which enables the claimant to bring proceedings against the poster; ii) he gave the Operator a notice of complaint in relation to the statement; and iii) the Operator failed to respond to that notice in accordance with the provisions of the Defamation (Operators of Websites) Regulations 2013 (the "Regulations").
The Regulations, which will also come into force on 1 January 2014, require the Operator to act within 48 hours of receiving a complaint. This means that the Operator would need to notify the poster of the complaint or, if this is not possible, remove the statement within this 48-hour timeframe. This 48-hour turnaround time arguably places an onerous administrative burden on Operators. Perhaps a glaring omission from this new defence is the lack of any protection or defence for Operators in respect of posts by anonymous posters.
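For Operators building a complaints workflow, the 48-hour window can be tracked mechanically. The sketch below is purely illustrative and assumes a simple calendar-hours reading of the deadline; the Regulations' own method of computing time (for example, the treatment of non-business days) may differ, and the function names are hypothetical:

```python
from datetime import datetime, timedelta

# 48-hour response window per the Regulations, on a plain
# calendar-hours reading (an assumption for illustration only).
RESPONSE_WINDOW = timedelta(hours=48)

def response_deadline(received_at: datetime) -> datetime:
    """Latest time by which the Operator should have notified the
    poster of the complaint or removed the statement."""
    return received_at + RESPONSE_WINDOW

def is_overdue(received_at: datetime, now: datetime) -> bool:
    """True once the response window for a complaint has elapsed."""
    return now > response_deadline(received_at)
```

In practice an Operator's system would also need to log the exact time of receipt of each notice of complaint, since that timestamp starts the clock.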
Section 10(1) of the Act provides that the court does not have jurisdiction to hear an action for defamation brought against a person who was not the author, editor or publisher of the statement complained of unless the court is satisfied that it is not reasonably practicable for an action to be brought against the actual author, editor or publisher. As yet, it is unclear how the courts will construe the standard of 'not reasonably practicable' but a reading of Hansard would indicate that the section has been introduced as a means of protecting secondary publishers:
"[Section 10] is in accordance with our aim of ensuring that secondary publishers are not unfairly targeted … the purpose of clause 10 is to encourage claimants to pursue the author, editor or primary publisher of defamatory material where possible, to reduce the likelihood of secondary publishers being threatened by libel proceedings".
Although the introduction of the Act seeks to provide greater protection to Operators, sections 5 and 10 do not offer absolute protection and their application has not yet been tried and tested. In the meantime, Operators should take practical steps to reduce their exposure to liability.
Practical tips for Operators to mitigate exposure to liability
To mitigate the risk of liability as a secondary publisher, Operators should consider:
- implementing a complaints system to ensure that any action they take would satisfy the courts if a claim is subsequently brought against them;
- as part of internal policies, taking a view as to when a debate or the voicing of an opinion crosses the threshold into offensive content;
- adding a 'report abuse' link to their site to make it quick and easy for a user to complain of an offending post;
- introducing a set of criteria, for example a 'traffic light' system, against which a complaint may be assessed to determine its seriousness and the corresponding target reaction time, e.g. a 'red' flag would require action within 24 hours, an 'amber' flag would require action within 48 hours and a 'green' flag would require no action;
- including in the Terms & Conditions to which each user must subscribe (the "T&Cs"), a clear definition of an 'offending post' and an explanation that, if the Operator receives a legitimate complaint of an offending post and on the complainant's request, the post may be removed;
- including in the T&Cs clear 'acceptable use' policies, reserving the right to remove user-generated content and adding a 'disclaimer' to their site, stating that the Operator is not involved in the creation of the content posted;
- introducing an 'opt in' check box to accept the T&Cs;
- conducting risk-based assessments of key jurisdictions (e.g. key markets, known problem markets etc.) and tailoring their policies for use in those jurisdictions accordingly; and
- ensuring that internal policies are consistent with the Regulations once they are published.
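The 'traffic light' criteria suggested above can be sketched as a simple lookup from an assessed flag to a target reaction time. The flag names and timeframes follow the article's example; the mapping itself, and the sample severity descriptions in the comments, are illustrative assumptions rather than legal guidance:

```python
from datetime import timedelta
from typing import Optional

# Hypothetical triage table mirroring the article's example criteria.
SEVERITY_TARGETS = {
    "red": timedelta(hours=24),    # e.g. clearly defamatory allegation
    "amber": timedelta(hours=48),  # e.g. borderline or disputed content
    "green": None,                 # e.g. robust but lawful opinion: no action
}

def triage(flag: str) -> Optional[timedelta]:
    """Map a complaint's assessed flag to its target reaction time,
    or None where no action is required."""
    if flag not in SEVERITY_TARGETS:
        raise ValueError(f"unknown flag: {flag!r}")
    return SEVERITY_TARGETS[flag]
```

A table of this kind makes the Operator's internal policy auditable: the assessment criteria and the reaction times they trigger are recorded in one place and can be adjusted as case law develops.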
It is clear that Operators have a difficult balancing act to perform: providing a service which encourages free speech and fosters lively debate on the one hand, while protecting themselves against individuals who are offended or wronged by defamatory posts of social media users on the other. In this context, until the Regulations come into force and are put into use, it is difficult to judge, in practice, how sections 5 and 10 of the Act will protect Operators against claims from social media users. As a result, Operators should consider putting in place measures to protect themselves as far as possible against the risk of litigation, particularly in the face of anonymous posters.