Long ago, back in the last century (in 1999, to be precise), I wrote and directed a play. I was in my final year at university and probably should have been concentrating on my law finals, but I did it anyway. Long after its trivial charms have faded from everyone else’s memory, and after the original student newspaper clipping has been buried in a box of archived papers through three or four subsequent house moves, I quite like the fact that if I type my name and the play’s title into a search engine, the review in our student newspaper is still accessible at a moment’s notice.

Update [30/5/14]: It only took a couple of weeks, but Google has now responded in precisely the pragmatic way recommended below, launching a streamlined online procedure for "forget me" requests, and pledging to strike an appropriate balance between the rights to free speech and the proper removal of outdated or irrelevant material: https://support.google.com/legal/contact/lr_eudpa?product=websearch. No doubt a number of other search providers will shortly follow suit.

What, though, if the information about me was something a little less innocuous, a little more embarrassing or distasteful? Is that, too, something that should linger around the search results connected to my name for the rest of my life like an unpleasant odour, forever resurfacing when I least wish to be reminded of it, tainting others’ impressions of me in perpetuity? Is there, in short, a right to be forgotten on the internet in certain circumstances, and if not, should there be?

This is the question that the Grand Chamber of the European Court of Justice has just been considering in Case C-131/12 Google Spain v AEPD and Mario Costeja Gonzalez. Mr Gonzalez had got into difficulties paying his social security debts, and ultimately his house had been repossessed and auctioned in order for those debts to be paid. This was in the late 1990s, yet many years later, search results for Mr Gonzalez’s name were still pointing to an online copy of an article about the story in a regional Spanish newspaper. Mr Gonzalez asked the Spanish Data Protection authority to have the article itself removed, which the authority declined to do. It did, however, contact Google Spain to request the removal of links pointing to the article.

Google Spain resisted. Consistent with Google’s traditional position on complaints about defamatory material appearing in its search results, it maintained that it exercised no control over the material which its algorithms produced in response to keyword searches, and so could not be regarded as a data controller in respect of that material.

For some time, it looked likely that this position would be reflected in the Court’s decision. One stage of decision-making at the ECJ which is absent in domestic litigation is the delivery of an Opinion by the Advocate General on the issues raised by the proceedings. In this case, the Advocate General’s Opinion indicated a likely conclusion that there was no right to be forgotten on the internet, and warned of a variety of dangers attendant on the establishment of such a right. Likewise, it seemed that the European Union legislature believed that there was no such pre-existing right, given that its inclusion has been a topic of considerable debate and negotiation in the preparation of the new European Data Protection Regulation (about which more on another occasion).

The decision of the ECJ, though, has been to uphold the Spanish Data Protection authority’s request, and to direct Google Spain to remove the offending links. In its decision, the Court indicated that there was no place in search results for links to material that was “inadequate”, “irrelevant or no longer relevant” or “excessive” in light of the passage of time, and required Google Spain to remove them.

This decision has been met already with considerable consternation on the part of internet service providers and other providers of links to historic online content. There has been considerable focus on the additional logistical challenges that such a ruling presents, the increased costs to the operators of these services, and a concern that it will lead to “private censorship” and therefore represents an assault on free speech.

But should Google and other search engine providers really be so concerned? There are several points that they, and everyone else, should be bearing in mind.

First, the ruling imposes no proactive duty on the operators of search engines or other online services. Far from being obliged to make their own assessment of relevance or adequacy in connection with the links they generate, they can simply sit back and wait to receive notification from the relevant Data Protection authority. Those authorities are themselves subject to an obligation to assess any request made to them for the removal of a link, and will not needlessly restrict the processing of data which has a genuine relevance or interest to the public, and which is not excessive in all the circumstances.

Second, the idea that this ruling is going to impose significant additional costs of compliance needs to be carefully examined. It is entirely possible that for some organisations that will be true, and those affected will need to consider carefully how best to provide for such compliance through policies and systems which ensure that they do not fall into breach, while still being able to deal with such requests in a cost-effective manner. But for search engines, and Google in particular, this is really no different from the procedures they already have in place to deal with links to content which infringes copyright, or with archive material pointing to deleted pages which had contained defamatory or otherwise unlawful content. It is hard to believe that a very significant additional burden will be imposed in complying with the modest number of requests which successfully make their way past the scrutiny of the various domestic Data Protection authorities.

Third, there is not really any credible argument that this can amount to a curtailment of freedom of expression. As the Gonzalez case itself demonstrates, the underlying web page on which the article was published was not required to be removed. As was the case in the days before the ubiquity of the internet, research by way of an examination of the archives of the regional newspaper in question would still yield the relevant information. Most civilised countries allow people, after a suitable period of rehabilitation, to move on from even the most serious offences – and this decision simply brings the internet into line with the “real” world in that regard.

Finally, I think that the search engines are missing an opportunity to put a positive spin on this decision. The fact is that the amount of data on the internet is increasing at a dramatic rate. At the date of writing this article (14 May 2014) there were at least 2.26 billion pages on the indexed internet (source: www.worldwidewebsize.com), and the Cisco VNI forecast for 2012-17 predicts that global IP traffic will rise from 43.6 exabytes per month in 2012 to 120.6 exabytes per month in 2017. The permanence of data online means more pages to index, and more potentially irrelevant or outdated results for users of search engines to wade through before finding what they are actually looking for. For organisations that pride themselves on their algorithms’ ability to weed out the irrelevant and archaic and to return the timely and pertinent, there seems to be little downside in being seen to work (in what will only ever be a very minor way) to make the results they return a little less “inadequate” or “irrelevant”.