There is a lot going on with privacy around the world. As discussed in the chapters of this book, significant new laws are being adopted or taking effect, courts are handing down important decisions interpreting existing legal requirements, and citizens are contending with their own expectations about confounding new technologies and business models. It is not clear, however, that the public policy being developed in any country is a thoughtful response to the promises and perils of today’s digital economy, rather than a knee-jerk over-reaction to imagined harms and a handful of high-profile incidents.
The reason for this potentially counter-productive state of regulatory affairs is the failure of any polity to confront the meaning of privacy – or more precisely, the specific privacy harms that privacy regulation is meant to protect against. There ought to be a greater effort among policymakers to correlate what companies are required to do to protect privacy with actual, identified harms or abuses. Protecting against privacy chimeras is simply a waste.
Government action is plainly essential to address privacy violations that result in pecuniary injury, such as identity theft or financial fraud. Regulation is also appropriate to discipline abusive data practices that discriminate, defame or embarrass on the basis of race or other sensitive category, or to tame practices that egregiously insult human dignity. However, the value to society of the highly prescriptive (and formalistic) requirements or proposals of the EU’s General Data Protection Regulation (effective 25 May 2018), the new California Consumer Privacy Act (enacted 28 June 2018), or the “White Paper of the Committee of Experts on a Data Protection Framework for India” (published for comments that were due on 31 January 2018), is debatable.
There is nearly universal recognition that the use of personal information for a broad range of commercial applications of Big Data analytics, artificial intelligence, location-based or personalised offerings, etc, can be highly valuable, desirable and convenient for society. Therefore, why is it that privacy regulation is evolving around the world in a potentially innovation-risking or business-stifling manner?
The answer may be that, unlike typical government regulation (for food safety or to protect the environment, for example), privacy regulators typically make little effort to define, quantify and weigh the costs and benefits of the rules they impose. Regulators simply assume that if privacy is at stake, regulation is ipso facto justified. The reason for this, of course, is that privacy has been idealised as an inviolable “fundamental right.”
There is nothing controversial or even debatable about data privacy being a fundamental right. In 1974, in one of the seminal data protection laws enacted anywhere, the US Congress stated in the federal Privacy Act that “the right to privacy is a personal and fundamental right protected by the Constitution of the United States.” In December 2009, with the entry into force of the Lisbon Treaty, the EU’s Charter of Fundamental Rights guaranteed both privacy and data protection as two of the Charter’s 50 enumerated fundamental rights. Most recently, in 2017, the Supreme Court of India recognised in its landmark Puttaswamy decision that privacy was a fundamental right guaranteed under that country’s constitution.
There is, therefore, a nearly universal recognition that informational privacy rights are crucial to society, along with an awareness that the digital domain poses greater risks than the analogue realm in terms of both the comprehensive and pervasive quantity of data involved and, often, its qualitative sensitivity as well. It is no surprise, then, that in 2014 the US Chief Justice acknowledged for a unanimous Supreme Court in Riley that protecting “[p]rivacy comes at a cost.” It is a cost that civilised and democratic societies are generally prepared to pay.
But governments must – or at least ought to – be concerned by just how much cost is at stake, and to what end. Regulating without taking into account cost-benefit analysis is simply foolish. Moreover, acting on privacy without consideration of all relevant consequences is not required by any constitution, charter, law or moral code. Of course, the relevant “costs” to be considered are not just financial expenses in support of compliance, or lost profits due to privacy restrictions, but also the cost of imposing unnecessary expenditures to be borne by a country’s consumers or the lost opportunities for a country’s technological advancement.
In the US, the right to privacy is largely subject to a rule of reason, reflecting whether society would view any given alleged infringement as highly offensive to a reasonable person. In Europe, the fundamental rights of privacy and data protection are also concededly not absolute. Indeed, they are expressly subject to the principle of proportionality, and must be balanced against the other rights and freedoms specified in the EU’s Charter of Fundamental Rights (such as, for example, free speech, due process, property and business rights).
India’s 2018 experts’ report on data-protection legislation likewise acknowledged that while privacy was a fundamental right, the Supreme Court ruled it is not absolute. Indeed, the experts noted that regulation of informational privacy must be reconciled with other legitimate social objectives, including “encouraging innovation”:
The [Puttaswamy] Court recognised ‘informational privacy’ as an important aspect of the right to privacy that can be claimed against state and non-state actors. The right to informational privacy allows an individual to protect information about herself and prevent it from being disseminated. Further, the Court recognised that the right to privacy is not absolute and may be subject to reasonable restrictions…. It has expressly recognised “protecting national security, preventing and investigating crime, encouraging innovation and the spread of knowledge, and preventing the dissipation of social welfare benefits” as certain legitimate aims of the State. [pp. 15-16]
In sum, governments have an obligation to protect the interests of their citizens in data privacy, but they should make an effort to protect their citizens from real privacy threats, and not from illusory ones – or from harms that are merely assumed rather than demonstrated. As the US Supreme Court cautioned in its 2016 Spokeo decision, however, while “tangible injuries are perhaps easier to recognise” and intangible harms may be more difficult to analyse, intangible harms can nonetheless be just as real and concrete as pecuniary damage.
Policy makers must also identify and quantify what society will lose by diverting resources from more productive purposes to compliance-intensive practices that do not yield societal benefits commensurate with their cost. There is no genuine benefit from protecting individuals from risks they do not really worry about, and which may not actually be harmful at all. The tendency of some national privacy rules to treat all personal information as sensitive or easily susceptible to abuse – or to expand the definition of personal data unreasonably – is not costless to society.
While the traditional US model for regulating commercial privacy is dismissed in some precincts of the world, the fact is that by identifying which sensitive data require special protection and focusing regulation on abuses of such data – financial, medical, communications, children’s, students’, etc – the United States runs a lower risk of over-regulating privacy to the detriment of technological innovation, economic prosperity, and consumer choice and convenience. By resisting the temptation to adopt all-encompassing or omnibus data-protection legislation, countries are free to pursue more incremental and tailored regimes. Smarter approaches to regulation would target real privacy problems and avoid chasing imagined ones.
Moreover, the general (“catch-all”) privacy enforcer in the US, the Federal Trade Commission, is explicitly mandated by Congress to apply a cost-benefit balance before it may challenge a business practice that is deemed “unfair” to consumer privacy rights. The statutory “balancing” requirement limits enforcement to harms that are substantial, rather than minimal, that are unreasonably imposed on consumers, and which do not provide offsetting advantages. The FTC’s cost-benefit standard is codified as follows:
The Commission shall have no authority under this section … to declare unlawful an act or practice on the grounds that such act or practice is unfair unless the act or practice causes or is likely to cause substantial injury to consumers which is not reasonably avoidable by consumers themselves and not outweighed by countervailing benefits to consumers or to competition. [15 U.S. Code 45(n) (emphasis added)]
As Europe’s GDPR is fully implemented, complied with and enforced in the coming years, as the impacts of California’s new EU-like law are assessed, and as India chooses a path forward for its new privacy framework, the world will judge whether policy makers there and elsewhere around the globe have regulated well or over-regulated badly.
This article was first published on July 12, 2018 by Chambers and Partners Data Protection & Cyber Security 2018