Internet service providers have frequently been caught in the crossfire of defamation claims, with claimants increasingly targeting them to get comments removed quickly. This, combined with a low harm threshold and the ease of establishing jurisdiction in the UK, has, according to the government, had a “chilling effect” on freedom of speech. The new draft Defamation Bill aims to change all this.
For decades the United Kingdom has been known as the ‘libel capital of the world’. The relatively low harm threshold in the UK, combined with the ease of establishing jurisdiction, the multiple publication rule and substantial damages awards, have provided an attractive mix for claimants to issue or threaten proceedings in UK courts.
As the Internet has grown to dominate commerce, internet service providers (ISPs) have increasingly been caught up in defamation claims by claimants eager to have allegedly defamatory statements removed quickly. The result, according to the Ministry of Justice, which has conducted a review of defamation law over the past year, is a ‘chilling effect’ on freedom of speech. It is becoming increasingly common for ISPs, conscious of the risk of being held liable as a joint tortfeasor or as publisher/editor of the defamatory material, to remove allegedly defamatory statements without questioning whether such remarks are unlawful.
There has therefore been a general perception for many years that defamation law, based on well-developed case law and the oft-maligned Defamation Act 1996 (the Act), is ripe for reform. In March 2011, a draft Defamation Bill (the Draft Bill) was published by the government for consultation and on 9 May 2012, the Queen’s speech committed the government to a new Defamation Bill. The Draft Bill, amongst other things, explores alternative ways to protect ISPs, including giving the complainant a right of response, and taking the ISP out of the equation, as explored further below.
The background legislation and case law
The Act currently provides ISPs, who would otherwise be liable as ‘publishers’ or ‘editors’ of the defamatory material, with a defence provided that they took ‘reasonable care in relation to its publication’ and, crucially, ‘did not know, and had no reason to believe, that what he did caused or contributed to the publication of a defamatory statement’ (Section 1 of the Act). This is echoed in the Electronic Commerce (EC Directive) Regulations 2002 (SI 2002/2013) (the E-Commerce Regulations), which implemented the E-Commerce Directive (2000/31/EC) in the UK and provide that ISPs will not be liable where they act as mere conduits of defamatory information, or where they cache or host such information, provided that (in the case of caching and hosting) they remove it expeditiously on receipt of notice.
Godfrey v. Demon Internet Ltd [2001] QB 201 (Demon Internet) established that once an ISP has been put on notice and has knowledge of a defamatory statement, it becomes a ‘publisher’ of that statement at common law. As soon as it has knowledge of the defamatory statement, the ISP’s liability is engaged and it must take the statement down ‘expeditiously’ in order to escape liability. In other words, ignorance is bliss for ISPs.
Following Demon Internet, there were concerns that the law was harming freedom of speech – leaving ISPs, once put on notice, having to make quick decisions (‘expeditiously’ could mean as little as 24 hours) as to whether content on their website to which a third party objected was defamatory or not. This can have a serious impact on online review websites (many of which do not make vast profits or have deep pockets), whose very purpose is to provide their users with an open forum through which they can express their opinions.
The choice for ISPs now is to monitor or not to monitor. There are some websites, by virtue of their content (e.g. news pages with blogs on highly emotive topics such as the Breivik murders, sites for children, or blogs on websites for major corporations), where there is simply no real option other than to moderate. Such ISPs therefore lose the protection of the E-Commerce defences and have to walk the tightrope of liability, at times employing teams of dedicated staff to monitor and remove inappropriate content, or outsourcing the job to a third-party supplier. This makes moderating content a relatively high-risk, expensive and labour-intensive approach, and as a result, many sites choose not to moderate, but to rely on a complaints process and notice and takedown procedure.
Due to the costs, uncertainty and administrative burden of dealing with complaints, ISPs which do not monitor often remove content immediately upon receiving a complaint, without reviewing it and regardless of whether it is defamatory or not, which defeats the very purpose of such review sites and the reason for giving ISPs immunity (in certain conditions) in the first place. The risk for ISPs is that, if they do not act expeditiously each time defamatory, offensive or otherwise infringing material is posted on their websites, the damage might already have been done by the time the offending content is removed. This inevitably has a damaging impact on freedom of speech.
The ruling in Kaschke v Gray and Hilton [2010] EWHC 690 (QB) (Kaschke) made life even harder for ISPs. It suggests that the threshold for becoming ‘active’ is worryingly low. In Kaschke, it was held that the correction of spelling and grammar mistakes went beyond the mere storage of information; because the ISP actively engaged with the content, even in a minimal way, it lost the protection of the E-Commerce Directive. The threshold for becoming active in defamation proceedings seems much lower than in, say, trade mark cases, where, for instance, an ISP’s liability may be engaged for optimising the presentation of online offers for sale of infringing goods or promoting those goods (L’Oréal v eBay [2009] EWHC 1094 (Ch)), which suggests a more tangible, greater involvement than the mere correction of text. There is also an obligation on ISPs (under Article 6 of the E-Commerce Directive) to know their own clients, which presents obvious difficulties when the poster is anonymous.
Perhaps in response to these concerns, there appears to have been a conscious move towards granting ISPs greater protection over the last few years.
In Metropolitan International Schools v. Designtechnica Corp [2009] EWHC 1765 (QB) (Designtechnica), Mr Justice Eady ruled that Google, Inc. would not be liable for a defamatory ‘snippet’ appearing in its search results. As a search engine, it was essentially a passive facilitator, not a publisher, and because searches were performed automatically in response to a search enquiry, Google could not control the content appearing in its results. It was enough that Google had blocked access to the specific URLs identified by the claimant.
The recent case of Tamiz v. Google, Inc. [2012] EWHC 449 (QB) held that Google did not become the author or authoriser of a publication, and therefore a ‘publisher’, just because it had the technical ability to take down the defamatory posts. Google did not have ‘actual knowledge’ of the unlawful activity (Regulation 19 of the E-Commerce Regulations) and was therefore not obliged to take the protestations of the complainant at face value.
These decisions are welcome news for ISPs but they are very much fact specific and service specific (e.g. Google as a search engine), and still broadly in line with the principles established in Demon Internet.
The Draft Bill
The Draft Bill committee believes that online intermediaries should be afforded a greater level of protection in defamation proceedings than is currently available under Section 1 of the Act and Regulation 19 of the E-Commerce Regulations.
The ‘re-think’ on defamation does not represent a complete overhaul of the law; instead it tweaks the existing common law and legislation, attempting to redress the balance weighted in favour of (would-be) claimants by, amongst other things, raising the harm threshold a notch from substantial harm to ‘serious harm’, and eliminating the multiple publication rule, thus suppressing vexatious claims (and, the Committee hopes, ‘libel tourism’), whilst protecting those who should legitimately be able to bring a claim.
For ISPs, the recommendations centre around whether the author of the offending post can be identified or not:
- Where an author can be identified, the ISP would be obliged immediately to place the complaint alongside the author’s contribution – the rationale being that this ‘takes an element of sting out of the reputational challenge’. If the host does not publish the complaint, then the internet host, having made a publishing decision, would qualify as a ‘publisher’ for the purposes of the Act and risks being liable for the defamatory content. The Committee had suggested that ISPs should be allowed to keep allegedly defamatory comments online as long as the author of the comment is identified and the complaint is published next to the comment. This proposal was rejected by the government on 29 February 2012 due to the practical and technical difficulties of ISPs complying with this requirement. No alternative approach was proposed, however, and the original options discussed in the Draft Bill therefore seem to remain on the table.
- Where the author cannot be identified, ISPs would act as an ‘initial liaison’ point between the complainant and the author of the allegedly defamatory material, putting the two together by passing correspondence between the disputing parties in an attempt to resolve the dispute. This is where the proposed liability of ISPs would start and end. While the government said that the ‘initial liaison’ phase would have to be strictly time limited with appropriate safeguards in place, it considered this to be a more promising proposal. An alternative proposal put forward was that if a third party objects to online material posted by an anonymous source, then upon receiving a complaint from the objector, the ISP should take it down. If the host feels that there are ‘public interest’ reasons for not taking it down (what constitutes ‘public interest’ reasons is still not entirely clear) then the ultimate decision will be for a judge to make.
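The two tracks above can be illustrated with a minimal sketch. This is a purely hypothetical Python data model of how a host might route complaints under the Draft Bill's proposals; all class and function names are illustrative assumptions, and nothing here is prescribed by the Bill itself.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Complaint:
    complainant: str
    text: str

@dataclass
class Post:
    author: Optional[str]              # None when the poster is anonymous
    body: str
    complaint: Optional[Complaint] = None

def handle_complaint(post: Post, complaint: Complaint) -> str:
    """Route a complaint down one of the Draft Bill's two proposed tracks."""
    if post.author is not None:
        # Identified author: the host publishes the complaint alongside
        # the contribution, leaving the original content online.
        post.complaint = complaint
        return "complaint published alongside post"
    # Anonymous author: the host acts only as an 'initial liaison',
    # forwarding correspondence between the disputing parties.
    return "forwarded between parties; host acts as initial liaison"
```

The sketch shows why the government's rejection of the first track matters in practice: the identified-author branch requires the host's platform to support attaching a complaint to existing content, which is exactly the technical difficulty the government cited.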
Comparing ISP’s liability in copyright and defamation cases
The apparent willingness of the courts and legislature to minimise the role of ISPs in defamation proceedings can be contrasted with the court’s recent approach to ISPs in copyright cases.
In Twentieth Century Fox Film Corp and others v British Telecommunications plc [2011] EWHC 1981 (Ch) (Newzbin2), a claim brought by members of the Motion Picture Association, the High Court ordered that BT must block access to the pirate site ‘Newzbin2’ pursuant to section 97A of the Copyright, Designs and Patents Act 1988. Deeming that in the circumstances BT had ‘actual knowledge’ of the infringement on its network, Mr Justice Arnold held that site-blocking was proportionate, and that the claimants’ property rights outweighed BT’s Article 10 rights of freedom of expression. The Court’s position in Newzbin2 can be contrasted with the position in the Draft Bill, where the immunity of ISPs is seemingly being given much more importance than the reputations of claimants. The movement in defamation towards resolving disputes between the complainant and the author of a statement, with the ISP merely acting as a facilitator, seems a world away from copyright law, where orders are seemingly being granted against ISPs with ‘actual knowledge’ as a matter of routine.
The difference between the recent approaches to ISP liability under defamation and copyright law is probably driven by policy concerns as much as anything – on one hand, to drive out vexatious defamation litigants and libel tourists, and on the other to give copyright owners a helping hand when rogue websites attempt to skip out of the jurisdiction and escape liability whilst knowingly still infringing copyright. Further, it is undoubtedly harder for an ISP to determine whether a comment is defamatory, as there is an inherent degree of subjectivity involved. Intellectual property rights, by contrast, are often more clear cut (although this is not always the case, as the recent spate of AdWords cases suggests).
The Draft Bill paves the way for online intermediaries to have a clearer mechanism to absolve themselves of liability, although the proposals are still evolving. As the proposals are still relatively broad brush and uncertain, it seems premature for ISPs to make wholesale changes to their websites until we know exactly how the law will change. In the meantime, ISPs should consider taking the following precautions to minimise or avoid liability:
- Choose whether or not to moderate content. Regardless of your choice, have a complaints process which is simple, effective and easy to understand. Even moderated sites should consider operating a simple complaints process. The less time that offending content appears online, the fewer people will see it – and that could reduce liability or an award of damages
- If you get a complaint, respond quickly
- If you choose not to moderate, do not make changes to the text published by the author, even if they are merely cosmetic, and consider removing any automated system which filters, edits, orders, or in any way interferes with content
- Have an Acceptable Use Policy which is clearly visible on your website, and consider including a pop-up box reminding the user to act responsibly before they can make a comment (although careful judgement is required as to when to use these so as not to detract from the user experience). Consider also requiring users to give their name and email address before posting, as this may encourage greater accountability, although such data must be held and processed in accordance with all applicable data protection legislation
- If you choose to moderate, prepare clear moderation guidelines for moderators and educate them (or impose contractual obligations on suppliers providing moderating services) to act in accordance with these guidelines. The moderators should be aware of how to identify potentially defamatory and other infringing postings and the importance of removing the content as soon as they are notified
- Grant ‘editing rights’ to authorised, executive staff only – the list of authorised users should be maintained and reviewed regularly
- If technically feasible, keep an ‘audit trail’ to establish which user posted the relevant content
- In anticipation of the likely reforms, consider how technologically feasible it would be for your website to place a complaint alongside user generated content.
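The audit-trail and rapid-response points above can be combined in a short sketch. This is a hypothetical Python design, not a compliance tool: the class, method names and event labels are all illustrative assumptions, and the point is simply that a timestamped log lets a host later evidence both who posted content and how quickly it acted on a complaint.

```python
import datetime

class ModerationLog:
    """Hypothetical audit trail: records who posted, complained about,
    or removed content, and when, so a host can later show that it
    acted 'expeditiously' after notice."""

    def __init__(self):
        self.events = []

    def record(self, action, post_id, user):
        # action is an illustrative label, e.g. "posted", "complaint", "removed"
        self.events.append({
            "action": action,
            "post_id": post_id,
            "user": user,
            "at": datetime.datetime.now(datetime.timezone.utc),
        })

    def takedown_delay_hours(self, post_id):
        """Hours between receipt of a complaint and removal, or None if
        either event has not yet been logged for this post."""
        times = {e["action"]: e["at"] for e in self.events
                 if e["post_id"] == post_id}
        if "complaint" in times and "removed" in times:
            return (times["removed"] - times["complaint"]).total_seconds() / 3600
        return None
```

A host operating a notice and takedown procedure would record a "posted" event when content goes live, a "complaint" event on receipt of notice, and a "removed" event at takedown; the computed delay is exactly the figure a court would scrutinise when asking whether removal was expeditious.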