The introduction of the safe harbour regime in the E-commerce Directive in the early 2000s sought to provide online service providers (“OSPs”) with a measure of immunity from liability arising from certain actions, including those by users of their services. That immunity extends to ‘mere conduits’ such as Internet service providers, and to certain acts of caching and hosting by OSPs. Although the regime does not absolve OSPs of all potential liability, an OSP that acts passively, does not exercise control and does not know of any illegal activity or information is likely to fall within the safe harbour. Additionally, member states are not permitted to impose obligations on OSPs requiring them to monitor the information they transmit or store, or to seek out facts or circumstances indicating illegal activity.
In recent times there has been a discernible shift in the balance of responsibility under the safe harbour regime, with much greater emphasis now placed on the role of OSPs in addressing illegal activity online. There are several reasons for this shift.
The spate of recent terrorist attacks has caused national governments in particular to scrutinise much more closely the role OSPs might play in identifying the perpetrators of these acts and preventing further incidents.
In the wake of the terrorist attack on London Bridge in June 2017, Theresa May indicated that internet regulation was a key part of her strategy for tackling terrorism, saying “we cannot allow [extremism] the safe space it needs to breed. Yet that is precisely what the internet - and the big companies that provide internet-based services - provide.” More recently, Germany introduced a new law requiring social media platforms to remove ‘hate speech’ within prescribed periods of time or face fines of up to €50 million. And at a meeting earlier this year in Paris, President Macron and Theresa May were reported to be exploring the possibility of introducing legal liability and fines for OSPs that fail to remove inflammatory content.
There is some evidence that OSPs are already responding to this rhetoric. Facebook, Twitter, Microsoft and YouTube joined forces to create the Global Internet Forum to Counter Terrorism, promising to focus on technological solutions and partnerships with governments and other agencies. The forum’s inaugural conference was attended by the UK Home Secretary Amber Rudd, who has been a vociferous advocate for greater participation by online platforms in tackling online extremism. In May 2016 the same four companies, together with the EU Commission, unveiled a Code of Conduct to combat the spread of online hate speech in Europe. The Commission has also renewed its focus on OSPs as part of its Digital Single Market strategy, stating its intention to encourage EU-wide self-regulation by OSPs.
The ripples of these calls to arms are also being felt beyond the fight against terrorism, hate speech and the more malign of illegal activities. Recent guidance issued by the EU Commission recommends that OSPs take a more proactive role in policing illegal activities online generally, regardless of their nature. This ‘one size fits all’ approach puts OSPs in the invidious position of being arbiters of all manner of legal issues, from terrorism, fraud and counterfeiting to defamation and copyright infringement. There may be a case for a more nuanced approach, in which the measure of OSP responsibility is determined according to the particular illegal activities in question.
The Commission’s guidance marks a change in emphasis under the EU safe harbour from passive conduct towards proactive policing and enhanced notice and takedown procedures. It raises the question of what sort of sanctuary the EU safe harbour now offers OSPs. It is also difficult to see how the approach can be reconciled with the requirement not to impose a general monitoring obligation on OSPs, which has always been understood to form an integral part of the EU safe harbour.
Although the EU Commission generally claims to favour self-regulation by OSPs, it has also shown an inclination towards regulating some OSP behaviours.
As part of its Digital Single Market strategy, the Commission has proposed a directive on copyright, Article 13 of which requires OSPs that provide access to large amounts of content uploaded by their users to cooperate with right holders, to deploy measures such as content recognition technologies to automatically detect unauthorised content, and to provide related information to right holders. It is difficult to see how these requirements of Article 13 can be reconciled with the prohibition on imposing a general monitoring obligation on OSPs. Unsurprisingly, the provision has been the subject of much debate and some criticism, precisely because of the uncertainty that it introduces into the safe harbour regime.
The EU Court of Justice (“CoJ”) has also had a hand in disturbing the balance of responsibility under the EU safe harbour. The safe harbour provides the framework for when an OSP will not be liable and - broadly speaking - it does so by reference to legal concepts of ‘knowledge’ and ‘control’ which are typically associated with indirect liability. Clearly, it would have been a cumbersome task for the EU even to attempt to harmonise OSP liability generally. In the field of copyright law, however, the CoJ has recently embarked upon its own harmonisation agenda.
In GS Media (C-160/15), the CoJ decided that the OSP in that case had, by hosting on its website hyperlinks to unauthorised copies of copyright-protected works, made an unauthorised communication to the public of those works. The CoJ was prepared to attribute this liability to such an OSP on the basis that it had provided the hyperlinks in pursuit of financial gain, which gave rise to a rebuttable presumption of knowledge of the unauthorised status of the copyright-protected works. This presumption of knowledge effectively strips such an OSP of immunity under the EU safe harbour by making it a primary infringer of copyright.
Ostensibly, the EU Commission maintains that the safe harbour has not really changed. It is evident, however, that OSPs are now expected to take a much more proactive approach, requiring greater engagement with right holders, enforcement agencies and their users. The emphasis on OSPs taking more proactive steps means that they are more likely to obtain knowledge of, or control over, illegal content that they might be hosting and in respect of which they would otherwise be immune from liability as a passive intermediary (see Google France C-236/08 to C-238/08).
Whilst it remains open to OSPs to stay within the safe harbour by acting expeditiously to remove or disable access to illegal content upon obtaining actual knowledge or awareness of it, in many instances they will need to make a value judgment as to whether content is legal, such as whether an intellectual property right has been infringed or whether a listing is defamatory. The burden OSPs face in administering their new responsibilities looks set to increase, and their more proactive role is likely to require considerable investment. Earlier this year Facebook announced plans to add 3,000 people to its team responsible for screening potentially harmful posts. There is, perhaps, a sense that OSPs find themselves between the devil and the deep blue sea.