It was not so long ago that we wrote that Twitter appeared to be grasping the nettle and taking its role as a responsible social media platform seriously. However, Twitter's abject failure to deal with the recent abuse levelled at Stan Collymore, an ex-Premiership footballer, suggests our assessment might have been ill-judged.
Collymore is no stranger to Twitter abuse, having been targeted on a number of occasions before. Where the identities of these trolls are ascertainable, the victim can seek to take the appropriate action relying on criminal and/or civil laws. In 2012, for instance, Newcastle University law student Josh Cryer was given a two-year community order for sending a series of racist tweets to Collymore.
In July 2013, MP Stella Creasy, amongst others, was fiercely critical of Twitter's dismissive response to her reporting the abuse she experienced, with the company suggesting that she "simply block" those who offend her.
Twitter eventually succumbed to calls for greater protection and decided to review its abusive behaviour policy and introduced the 'report abuse' button on all platforms in late 2013. But all was not as it seemed. Twitter recently rebuffed the Union of French Jewish Students (UEJF) who had asked the company to reveal the identities of users who had posted what the French courts regarded as anti-Semitic and racist statements.
Roughly six months have passed since #trollgate and Twitter has again found itself under attack for not doing enough to support victims. Collymore stated last week: "I accuse Twitter directly of not doing enough to combat racist/homophobic/sexist hate messages, all of which are illegal in the UK."
Whilst Collymore was effusive in his praise for certain police forces for their response to the abuse he received, not all online threats and harassment are investigated so thoroughly. Some police forces are far too under-resourced to task officers with investigating what can often be time-consuming matters. Where cases are investigated, Twitter's uncooperative stance has left officers, according to Collymore, "banging their heads against a brick wall, having to make requests to get reports and profiles processed."
Despite these problems, we would urge victims to consider reporting the abuse to the police - if only to get a crime reference number.
But where police action is either inadequate or even non-existent, what are the victims' other options?
- Twitter remedies:
For what it is worth, a victim can report the abuse and/or block the offender. We would like to hear from those who have reported abuse, to tell us whether the abuse then stopped. There is no data (of which we are aware) to support Twitter's apparent belief that this is an adequate remedy. A moment's thought tells you that blocking is fatuous: the victim continues to be abused but simply no longer sees the abuse in the abuser's Twitter feed.
- Norwich Pharmacal Order:
A Norwich Pharmacal Order ("NPO"), requiring disclosure by a third party who has unknowingly facilitated an actionable wrong, has been commonly used by claimants to obtain identification details of internet users from websites, ISPs and, in more recent years, social media platform providers. Once an anonymous abuser has been unmasked, a claim can then be made for an injunction restraining him/her from continuing the abusive behaviour coupled with a claim in damages.
In order to obtain an NPO, the following conditions must be satisfied:
- a wrong must have been carried out, or arguably carried out, by an ultimate wrong-doer;
- there must be the need for an order to enable action to be brought against the ultimate wrong-doer; and
- the person against whom the order is sought must (a) be mixed up in the wrong-doing so as to have facilitated it and (b) be able, or likely to be able, to provide the information necessary to enable the ultimate wrong-doer to be sued.
At first glance this looks to be an attractive option to force Twitter to disgorge the identity of the abuser. However, even if these conditions are met and the NPO is subsequently granted, the putative claimant will then have to face arguments that the UK court does not have jurisdiction to enforce the order against Twitter. Twitter is based in California and has argued on a number of occasions in the past that revealing the names of its users would violate the First Amendment. As Twitter lawyer Alexandra Neri noted last October, "our data is stored in the US, so we must obey the rule of law in that country". And of course any potential claimant has to consider the cost of litigating against Twitter.
Ms Neri may have spoken too soon. Tugendhat J in Judith Vidal-Hall and others v Google Inc [2014] EWHC 13 (QB) refused Google's application to have those claimants' claim for misuse of private information, breach of confidence and breaches of the Data Protection Act struck out. Google argued that the English courts had no jurisdiction to try these claims. Tugendhat J disagreed and the matter goes to trial in the High Court in London (although it is reported that Google are to appeal). So, if the victim can frame his complaint as a misuse of private information, a breach of confidence or a breach of the DPA originating from a US social media platform, then not only can the NPO be obtained from the English courts but disobedience of the order can be subject to the UK courts' sanctions. Will the courts come to treat a complaint in online harassment in the same way?
- Section 5 Notices:
Section 10 of the Defamation Act 2013 provides protection for a person or company against a defamation action provided that they are "not the author, editor or publisher of the statement complained of unless the court is satisfied that it is not reasonably practicable for an action to be brought against the author, editor or publisher." Twitter is not an author, editor or publisher; it is instead deemed to be an intermediary. Accordingly, it is able to rely on this provision.
Of course a defamatory statement is quite distinct from a claim in harassment or abuse generally. But they can often be interlinked, and where they are, victims may find help in section 5.
Where the identity of the troll is unknown, Twitter might be forced to rely on alternative defences to avoid liability. Under section 5 of the Defamation Act, a website operator will not be liable for defamatory comments posted on its website so long as it complies with certain requirements. These requirements have been studied in depth (see: The Section 5 Defamation Act Regulations: A complex red herring - Ashley Hurst; Defamation Act 2013: Section 5, it's decision time for website operators - Ashley Hurst; and Anonymous posters and the new Defamation Act: the draft regulations - Graham Smith). In short, to take advantage of the section 5 defence, a website operator will either have to provide details of the poster to the complainant or remove the offending post within a specified period of time.
We have yet to see how Twitter will respond to valid section 5 notices. Perhaps the threat of becoming liable for failing to comply with section 5 requirements will force it to act with greater urgency when notified of defamatory material. Indeed, the Vidal-Hall judgment - whilst concerning privacy issues - might make Twitter (and other US based tech companies) reconsider more generally the jurisdictional arguments upon which they have traditionally sought to rely to defeat claims brought by UK based litigants.
But even if the victim is successful, he or she still faces significant hurdles. There are other defences, such as section 1 of the Defamation Act 1996 and Regulation 19 of the E-Commerce Regulations 2002, which predate the Defamation Act 2013 and remain in force. Twitter could seek to rely on these provisions instead of section 5 of the new Act.
Further, the SPEECH Act 2010, signed into law by President Obama, makes foreign libel judgments unenforceable in the US if they conflict with American laws on free speech. A libellous retweet in this country, for instance, may well be protected by section 230 of the Communications Decency Act in the US.
Following Collymore's complaints, Twitter issued a statement on 22 January 2014 emphasising its commitment to the responsible use of technology: "Our Trust and Safety team works 24 hours a day to respond to reports and we are increasing the size of this team to make our response time even faster." Promising news or hackneyed rhetoric? We shall see.