Future of Data – Episode 1

During the 1990s, when the emerging tech giants were internet service providers, they managed, in key jurisdictions, to secure legislation which largely absolved them from responsibility for the content carried via their systems. As search engines became the window to the world for many of us, and social media platforms overtook television in terms of viewing time, it has become more difficult for Big Tech to claim to be ‘just’ technology companies.

The Right to be Forgotten

In 2014 the European Court of Justice (ECJ) ruled on a case brought by a Spanish national, Mario Costeja González, who was concerned that two newspaper pages published in 1998 were still available online. They concerned the repossession of a property he owned, in connection with his social security debts. The debts had long since been resolved, but the articles retained a prominent position in the results of searches for his name. The ECJ agreed that this remained a stain on his reputation and that Google had to withdraw the data from its indexes. The information could remain on the newspaper’s website, as it had been lawfully published. The remedy is a practical one for anyone in González’s position (though not for González himself, whose debts became famous through the litigation): without a centralised index pointing to such articles, they become “practically obscure” and therefore effectively harmless.

Accepting the decision, Google set about writing the rule-book for the right to be forgotten. This drew it into a number of expensive and hard-fought pieces of litigation in various EU member states, as other individuals’ claims worked their way through the courts. The erasure system is triggered by an online request, followed by a manual review.

Google revealed in 2018 that it had received requests to take down about 2.4 million URLs from its search results between 2014 and 2017, but had delisted only 43 percent of them (roughly one million URLs). Many of the refusals to delist would have been on the basis of public interest.

Although perhaps not perfect, the system represents a major step in (reluctant) self-regulation. The right to erasure is now enshrined as a data subject right in the General Data Protection Regulation (GDPR).

The fake news industry

If the legislative and technical response to the right to be forgotten appears costly and Byzantine, this effort may yet pale into insignificance alongside what we face with fake news. Unlike the circumstances leading to erasure claims, which are likely to find a steady state over the next few years, fake news threatens to grow exponentially. This is for a number of reasons:

• the use of bots makes the sheer quantity of false postings vastly scalable;

• as with much viral online material, there is often a profit motive, with a drive to achieve high levels of clickthrough, leading to advertising revenues; and

• we now have a warped culture where ‘free speech’ is believed by many to mean ‘I can say whatever I like’.

Add to this the spectre of malevolent state actors, as well as genuine curtailments on free speech, and you have a perfect storm.

How fake news can be alleged

Fake news (or online disinformation) means different things to different interest groups:

• to political candidates, and anyone wishing to uphold free and fair democratic elections, the inaccurate portrayal of candidates is fake news (although some might say this merely puts age-old soapbox disparagement on an industrial scale);

• to health authorities, and anyone wishing to eradicate serious diseases, untruthful portrayals of the effects of vaccination programmes are enormously damaging, whilst some wish to impose on others their views about government interference;

• to repressive regimes, any dissent or opinion aimed at those in authority, or concerning matters of state security (however widely drawn), may be objectionable, and those responsible for such messaging deserving of penalty; and

• to almost anyone, fake news is the omission of relevant context needed to provide a balanced picture, or the editorialisation of an event. Consider, for example, how social unrest and protests around the world are portrayed, depending on which way the wind is blowing and the publication’s leanings. Or how a tweeted video can go viral before being shown to be totally misleading because it omits a crucial perspective.

What is crucial therefore is:

• how fake news is defined;

• whether the response is legislative; or

• whether the response is to encourage self-regulation.

The Singapore example

On 3 June 2019 the Protection from Online Falsehoods and Manipulation Act 2019 was enacted in Singapore following two days’ debate in Parliament. The law is widely drawn and proscribes the communication of false statements of fact accessible to one or more end-users in Singapore through the internet, MMS or SMS. In addition to falsity, the sender must know or have reason to believe that the statement is likely to:

• be prejudicial to Singapore’s security;

• be prejudicial to public health, safety, tranquillity or finances;

• be prejudicial to Singapore’s friendly relations with other countries;

• influence the outcome of an election;

• incite feelings of enmity, hatred or ill-will between different groups; or

• diminish public confidence in any public authority.

Fines and sentences are significant (up to S$50,000 or five years’ imprisonment for individuals; corporations face fines of up to S$500,000). These maxima are doubled where the person has disseminated the false statements using a bot (to S$100,000 or ten years’ imprisonment, and S$1 million, respectively).

The Government is empowered to order an internet access service provider to disable access by end-users in Singapore to content determined to be infringing. An affected person may appeal to the High Court.

“Tackling online disinformation: a European approach”

In September 2018 the EU published its Code of Practice on Disinformation (the Code). Rather than produce an EU-wide Directive (to be transposed into the national laws of individual member states), the Commission has tried to bring the large technology companies onside to regulate fake news themselves, within the framework of the Code.

“Disinformation” is defined as “verifiably false or misleading information” which, cumulatively:

• “is created, presented and disseminated for economic gain or to intentionally deceive the public”; and

• “may cause public harm”, with public harm intended as “threats to democratic political and policy-making processes as well as public goods such as the protection of EU citizens’ health, the environment or security”.

In particular, “Disinformation” does not include misleading advertising (which is regulated elsewhere), reporting errors, satire and parody, or clearly identified partisan news and commentary.

The Code has been signed by Facebook, Google, Twitter, Microsoft, Mozilla and by members of the advertising industry. The Commission was keen to ensure the Code was activated in time for the European Parliament elections in May 2019, and monitored results over this period.

The Code consists of a number of different measures. Signatories need not take up all measures, and may withdraw from the Code at will, so long as they publicise their measures and/or withdrawal to their co-signatories and the Commission. Signatories also commit to writing an annual account of their work to counter Disinformation.

The measures prescribed by the Code include the following:

• scrutinising ad placements: policies and processes to disrupt advertising and monetisation incentives for misrepresentations;

• political advertising and issue-based advertising: should be clearly distinguishable from editorial or news content; there should be public disclosure of political advertising;

• integrity of services: adopt policies regarding the mis-statement of identity and the misuse of automated bots;

• empowering consumers: steps should be taken to dilute the visibility of Disinformation by improving “findability” of trustworthy content; users should be provided with easily accessible tools to report Disinformation; users should be able to understand why they have been targeted by a given political or issue-based advertisement; objective indicators of the trustworthiness of content sources should be provided.

On 14 June 2019 the Commission reported that its Action Plan against Disinformation had helped to ensure stronger preparedness and coordination in the fight against disinformation in the run-up to the European elections. It also claimed that further progress had been made on the transparency of issue-based advertising, and that scrutiny of ad placements had improved, limiting malicious clickbaiting practices.

Hong Kong’s position?

Hong Kong has a long tradition of enjoying wide press and personal freedoms of speech and expression. It is unlikely that its legislature will wish to be seen to be restricting those freedoms unnecessarily, particularly at present.

The Hong Kong Privacy Commissioner has shown interest in considering adoption of the Right to be Forgotten. If the EU’s approach to fake news, and the protocols developed by the technology giants in response to the EU Code, are seen to be successful, Hong Kong may encourage the adoption of such protocols across its territory in the future.

Looking ahead

In the meantime, however, fake news seems to be proliferating. Whilst measures permitting erasure of unwanted historical personal data are operational, the battle against fake news has barely begun, and will take significant political will. At present, fake news sits outside the data protection or cybersecurity portfolios. It desperately needs a home.

The issue is made more pressing by the rise of “deep fakes” – for example, manipulated video imagery that appears to show a political figure making statements they never uttered. This will require more than good law: it will require an ever-evolving, sophisticated approach to authenticating data, imagery and events.
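
By way of illustration only, one building block of such authentication is cryptographic signing of media at the point of capture, so that any subsequent manipulation becomes detectable. The sketch below is a minimal, hypothetical example in Python, assuming the widely used third-party cryptography library; the placeholder data and workflow are invented for illustration and do not describe any scheme discussed above.

```python
# Minimal sketch: detecting tampering in a media file via digital signatures.
# Assumes the third-party "cryptography" library (pip install cryptography).
# In a real scheme the capture device would hold the private key, and
# verifiers (platforms, fact-checkers) would hold the matching public key.
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# At capture time: the device signs the raw bytes of the image or video.
private_key = Ed25519PrivateKey.generate()
public_key = private_key.public_key()

original_media = b"...raw bytes of a captured video frame..."  # placeholder
signature = private_key.sign(original_media)

# Later: anyone holding the public key checks the file against the signature.
def is_authentic(media_bytes: bytes, sig: bytes) -> bool:
    """Return True only if media_bytes is byte-for-byte what was signed."""
    try:
        public_key.verify(sig, media_bytes)
        return True
    except InvalidSignature:
        return False

print(is_authentic(original_media, signature))               # True
print(is_authentic(b"doctored deep-fake bytes", signature))  # False
```

A workable provenance scheme would also need certified device keys, trusted timestamps and a record of legitimate edits; the sketch shows only the core primitive, which is making authenticity a verifiable property of the data itself rather than a judgment call.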