Practical Wisdom, Trusted Advice.
www.lockelord.com
This edition is updated as of June 2015. To obtain a copy of this edition
by email or to be placed on the mailing list for future editions, please
email [email protected].
To learn more about our firm, or our Privacy and Data Protection
Practice, please visit lockelord.com.
Everyone’s Nightmare:
Privacy and Data Breach Risks
A Locke Lord Privacy White Paper
Locke Lord LLP Privacy & Cybersecurity Group
Co-Chairs
Bart Huffman, Partner 512-305-4746 Austin [email protected]
Laurie Kamaiko, Partner 212-912-2768 New York [email protected]
Members
Ted Augustinos, Partner 860-541-7710 Hartford [email protected]
Chris Bakes, Partner 916-930-2540 Sacramento [email protected]
Barry Bendes, Partner 212-912-2911 New York [email protected]
Michael Bennett, Partner 312-201-2679 Chicago [email protected]
Gregory Casamento, Partner 212-812-8325 New York [email protected]
Thomas Cunningham, Partner 312-443-1731 Chicago [email protected]
Ed Glynn, Partner 202-478-7069 Washington, DC [email protected]
Patrick Hatfield, Partner 512-305-4787 Austin [email protected]
Martin Jaszczuk, Partner 312-443-0610 Chicago [email protected]
Molly McGinnis Stine, Partner 312-443-0327 Chicago [email protected]
Alan Meneghetti, Partner +44 (0) 20 7861 9024 London [email protected]
Rory Radding, Partner 212-912-2858 New York [email protected]
Jennifer Rangel, Partner 512-305-4745 Austin [email protected]
Mark Schreiber, Partner 617-239-0585 Boston [email protected]
David Szabo, Partner 617-239-0414 Boston [email protected]
Tammy Woffenden, Partner 512-305-4776 Austin [email protected]
Robert Courtneidge, Global Head
of Cards and Payments
+44 (0) 20 7861 9019 London [email protected]
Dave Anderson, Counsel 310-860-8710 Los Angeles [email protected]
Brett Foster, Counsel 214-740-8414 Dallas [email protected]
John Kloecker, Of Counsel 312-443-0235 Chicago [email protected]
Glenn Pudelka, Counsel 617-239-0371 Boston [email protected]
Thomas Smedinghoff, Of Counsel 312-201-2021 Chicago [email protected]
Stephen Ucci, Counsel 401-276-6426 Providence [email protected]
Vita Zeltser, Senior Counsel 404-870-4666 Atlanta [email protected]
Natasha Ahmed, Associate +44 (0) 20 7861 9048 London [email protected]
Karen Booth, Associate 860-541-7714 Hartford [email protected]
Laura Ferguson, Associate 713-226-1590 Houston [email protected]
Aaron Igdalsky, Associate 860-541-7766 Hartford [email protected]
Sean Kilian, Associate 214-740-8560 Dallas [email protected]
Matthew Murphy, Associate 401-276-6497 Providence [email protected]
Chuck Salmon, Associate 512-305-4722 Austin [email protected]
Michaela Tabela, Associate 617-239-0734 Boston [email protected]
Nora Valenza-Frost, Associate 212-912-2763 New York [email protected]
August 2015
2015 EDITION
TABLE OF CONTENTS
Everyone’s Nightmare: Privacy and Data Breach Risks
Page
I. INTRODUCTION: THE EXPANDING SCOPE OF PRIVACY AND DATA
BREACH RISKS ................................................................................................................ 1
II. THE TYPES OF INFORMATION AND PRACTICES AT RISK .................................... 2
1. Personal Information in the U.S. ............................................................................. 2
a. The Expanding Definitions of Personal Information .................................. 5
b. What is Protected Health Information (PHI) ............................................... 7
2. Personal Information in the EU and UK ................................................................. 8
3. Breaches of Data Other Than Personal Information ............................................. 10
a. Secrets of All Sorts .................................................................................... 11
b. Cyber Spies and Hacktivism ..................................................................... 12
c. Cyber Attacks with Physical Effects or Business Disruption as Focus .... 16
4. The Scope of What Constitutes a “Data Breach”: Not Just Electronic –
Paper Too .............................................................................................................. 21
5. Privacy and Data Breach Concerns in Cloud Computing ..................................... 22
a. Considerations in the U. S. and Generally ................................................ 22
b. Recent Developments in the EU ............................................................... 25
6. Privacy and Data Breach Concerns in Social Media ............................................. 29
a. Social Media as Target and Source of Data Breaches............................... 29
b. Social Media as Source of Statutory and Regulatory Violations .............. 32
i. In the U.S. ...................................................................................... 32
ii. In the UK ....................................................................................... 37
7. Privacy Issues Arising Out of Behavioral Advertising and Online Tracking ....... 39
a. In the United States ................................................................................... 39
i. The FTC Recommendations .......................................................... 39
ii. Industry Self-Regulation ............................................................... 42
iii. Do Not Track Class Actions .......................................................... 43
iv. Do Not Track Legislation .............................................................. 45
b. EU Positions on Online Behavioral Advertising ....................................... 47
8. Mobile/Apps as a Growing Exposure ................................................................... 50
9. The Importance of Privacy Policies ...................................................................... 52
10. New Technologies Bring New Risks .................................................................... 52
III. THE U.S. REGULATORY AND STATUTORY LANDSCAPE: OBLIGATIONS
UNDER DATA PRIVACY AND SECURITY LAWS AND REGULATIONS ............. 53
1. State Data Privacy and Security Requirements ..................................................... 54
a. Restrictions on Collection of Personal Information .................................. 54
b. Protection of Social Security Numbers ..................................................... 56
c. Record Disposal Requirements ................................................................. 56
d. Data Breach Notification Requirements.................................................... 57
e. Data Security Requirements: Massachusetts Remains at the U.S.
Forefront .................................................................................................... 60
f. Privacy Policies and Protections: The California Example....................... 63
i. California’s Shine the Light Law .................................................. 64
ii. California’s Online Privacy Protection Act .................................. 64
iii. California’s Social Eraser Law ..................................................... 65
iv. Confidentiality of Medical Information Act ................................. 66
g. New Trend in State Regulation: Social Media .......................................... 66
2. Federal Requirements ............................................................................................ 67
a. FTC Regulation of Privacy and Data Protection ....................................... 68
b. Gramm-Leach-Bliley Act .......................................................................... 70
i. Regulation S-P and SEC Enforcement of Privacy, Data
Protection and Cybersecurity ........................................................ 71
c. Federal Trade Commission “Red Flags” Rule .......................................... 72
i. Affected “Financial Institutions” and “Creditors” ........................ 72
ii. Covered Accounts ......................................................................... 74
d. Federal Information Security Management Act - FISMA ....................... 74
e. Department of Homeland Security - SAFETY Act .................................. 75
f. The Health Insurance Portability and Accountability Act - HIPAA ......... 75
i. Overview of HIPAA and the HITECH Act .................................. 75
ii. HIPAA Privacy and Security Rules .............................................. 76
iii. Breach Notification Rules ............................................................. 77
iv. FTC Health Breach Notification Rule ........................................... 77
v. HIPAA Breach Notification Rule ................................................. 80
vi. HIPAA and HITECH Act Enforcement ........................................ 82
(1) Regulatory Enforcement ................................................... 82
(2) Private Enforcement Actions ............................................ 84
(3) State Laws and Preemption Issues .................................... 84
g. Additional Data Privacy Requirements for Educational Institutions -
FERPA ...................................................................................................... 84
h. Further Protection for Minors – COPPA ................................................... 88
i. Telecommunications ................................................................................. 89
j. Telephone Consumer Protection Act – TCPA .......................................... 91
3. Federal Agency Privacy Guidances ...................................................................... 95
a. SEC Guidances .......................................................................................... 95
i. SEC Guidance Regarding Public Company Obligations to
Disclose Cyber Security Risks and Incidents to Investors ............ 95
ii. OCIE Cybersecurity Initiative for Broker-Dealers and
Investment Advisors ...................................................................... 96
b. Department of Justice Incident Response Guidance ................................. 97
c. Food and Drug Administration Guidance regarding Medical Devices ..... 97
d. Critical Infrastructure – The NIST Cybersecurity Framework ................. 98
e. Additional Recent Federal Activity and Proposals ................................. 101
i. Federal Privacy, Data Security and Cyber Security Proposals ... 101
ii. White House Initiatives ............................................................... 102
iii. Congressional Activity on the Legislative Front ......................... 105
iv. Additional Federal Agency Privacy and Cybersecurity
Initiatives ..................................................................................... 106
(1) Federal Trade Commission ............................................. 106
(2) U.S. Department of Commerce ....................................... 107
(3) Securities and Exchange Commission ............................ 108
v. Additional Federal Developments ............................................... 109
(1) Office of the Cyber Czar ................................................. 109
(2) Government Accountability Office Reports ................... 109
4. PCI -The Payment Card Industry Standards for Protection of Payment Card
Information .......................................................................................................... 111
a. PCI-DSS .................................................................................................. 111
b. Incorporation of PCI-DSS into State Law ............................................... 114
i. Minnesota .................................................................................... 114
ii. Nevada ......................................................................................... 115
iii. Washington .................................................................................. 115
IV. THE REGULATORY AND STATUTORY LANDSCAPE OUTSIDE THE U.S. ....... 116
1. Introduction to the International Scope of Privacy and Data Protection ............. 116
2. The Dilemma of Whistleblower Hotlines ........................................................... 116
3. The European Union ........................................................................................... 117
a. EU Data Protection Directive .................................................................. 118
b. Cookies and other tracking technologies................................................. 122
c. Mobile Privacy ........................................................................................ 123
4. Selected Countries’ Data Protection Laws .......................................................... 124
a. United Kingdom ...................................................................................... 124
b. Germany .................................................................................................. 125
c. France ...................................................................................................... 126
d. Spain ........................................................................................................ 128
e. Sweden .................................................................................................... 129
f. Austria ..................................................................................................... 130
g. Canada ..................................................................................................... 130
h. China ....................................................................................................... 131
i. Hong Kong .............................................................................................. 134
j. India ......................................................................................................... 136
k. Mexico ..................................................................................................... 137
l. Turkey ..................................................................................................... 137
V. THE EXPOSURES PRESENTED BY DATA BREACHES ......................................... 139
1. The Breadth of the Problem ................................................................................ 139
a. The Big Picture: Number of Breaches and Associated Costs ................. 139
b. The Industries, Assets, and Types of Data Most Frequently
Compromised .......................................................................................... 142
c. Causes ...................................................................................................... 150
d. Breach Discovery and Response ............................................................. 152
2. The Importance of Timely and Proper Notification ............................................ 153
3. The Potential Costs and Damages of a Breach ................................................... 155
a. First-Party Costs ...................................................................... 156
b. Fines and Penalties .................................................................................. 156
c. Third-Party Claims .................................................................................. 157
i. Consumer Claims ........................................................................ 157
ii. Bank Claims ................................................................................ 158
iii. Other Third-Party Claims ............................................................ 162
VI. INSURANCE COMPANY EXPOSURES ..................................................................... 163
1. Exposure of Companies in the Insurance Industry as Entities Subject to Data
Breaches .............................................................................................................. 163
2. Potential Insurance Coverages for Data Breaches and Privacy Related
Claims .................................................................................................................. 166
a. Cyber Risk/Data Breach/Privacy/Network Security Policies ................. 167
b. Property Policies – First-Party ................................................................ 169
c. Fidelity / Commercial Crime Insurance .................................................. 170
d. CGL – Third-Party Claims ...................................................................... 171
i. Coverage A – Bodily Injury and Property Damage .................... 172
ii. Coverage B – Personal and Advertising Injury ........................... 174
iii. The “Damages” Hurdle ............................................................... 179
e. Professional Liability/E&O ..................................................................... 180
f. D&O ........................................................................................................ 181
g. Kidnap and Ransom/Cyber Extortion ..................................................... 184
VII. PRIVACY LITIGATION IN THE U.S.: CURRENT ISSUES ...................................... 184
1. Article III Standing .............................................................................................. 184
2. Cognizable Injuries ............................................................................................. 187
3. Breach-Related Lawsuits ..................................................................................... 192
4. Privacy Practices Lawsuits .................................................................................. 194
a. Point of Sale Data Collection Practices ................................................... 194
b. Call Recording Practices ......................................................................... 197
c. Data Collection Practices by Application Developers ............................ 198
d. Suits Alleging Violations of California’s “Shine the Light” Law .......... 199
e. Collection of Data Regarding Video Viewing Selections ....................... 200
f. TCPA ....................................................................................................... 201
g. Stored Communications Act ................................................................... 202
VIII. MITIGATION OF EXPOSURES ................................................................................... 203
1. Data Breach Exposures ....................................................................................... 203
a. Compliance with Applicable Data Security Requirements ..................... 203
b. Instituting Reasonable Security Procedures ............................................ 203
c. Limiting Access to Personal Information ................................................ 204
d. Training/Awareness ................................................................................ 204
2. Risks of Collecting/Using Personal Information Improperly ............................. 205
3. Contract and Vendor Management ..................................................................... 206
a. Vendor Contracts ..................................................................................... 206
b. Vendor Due Diligence ............................................................................. 207
Conclusion ................................................................................................................................... 207
This paper discusses the regulatory and statutory framework for privacy and data
breach risks as of June 1, 2015. Some of the significant regulatory and statutory
developments during the month of June are also noted. For example, on June 15,
2015, the EU Council of Ministers reached a General Approach to the draft EU Data
Protection Regulation, and on June 18, 2015, Canada passed Bill S-4, The Digital
Privacy Act, into law; references to these developments have been included. There
are also a few significant court decisions that were issued in July 2015, before this
edition was finalized, which are noted in the pertinent sections.
This white paper is for guidance only and is not intended to be a substitute for
specific legal advice. If you would like further information, please contact the
Locke Lord LLP attorney responsible for your matters or a Locke Lord LLP
attorney in our Privacy & Cybersecurity Group. Attorney Advertising. 2015
Locke Lord LLP.
2015 Edition
Everyone’s Nightmare: Privacy and Data Breach Risks
I. Introduction: The Expanding Scope of Privacy and Data Breach Risks
The rapid growth of the collection, usage, storage and transmission of information in electronic
form, and the expansion of the interconnectivity of processes and devices, have resulted in a
concomitant exposure of companies to risks, regulation and liabilities associated with these
developments.
In years past, the focus of regulatory, consumer and business concerns, and our paper, has been on
risks and regulations concerning information about individuals. Personal information remains a
major focus of regulatory scrutiny and litigation, and well-publicized stories of large data breaches
demonstrate that businesses remain subject both to their own data breaches and to breaches of other
entities that collect, maintain or disseminate information on their behalf. Studies of data breaches
confirm that they present a costly and significant exposure to companies in all lines of business.
In recent years, the regulatory and legislative focus has expanded to encompass the business
practices of collecting and using personal information and the transparency of such practices.
Regulatory and legislative developments in this area arise from concerns both about the privacy
rights of individuals, and the increase in exposure to businesses and individuals presented by such
practices due to the risk of unauthorized access and usage of the information collected.
There is also now recognition that the targeted theft of information is increasingly expanding
beyond information about individuals to include confidential information of all kinds, including
companies’ intellectual property and trade secrets.
There is a substantial body of law, regulations and agency guidances in the U.S. and globally
directed at data security and companies’ responses to breaches of personal information. Moreover,
there is an expanding body of regulations and statutes that govern companies’ business practices
that involve the collection and use of information about individuals, and the disclosures they are
required to make. The challenge of compliance with this growing body of law, and the fines and
penalties that can result from violations, is one of the new and expanding exposures that businesses
face.
Also essential to businesses’ success is the uninterrupted operation of their electronically
controlled systems. There is an increasing recognition of the importance of protecting operating
systems, particularly in industries involved in critical infrastructure. Recently, there has been a
marked expansion of regulations and government guidelines directed at increasing security of
networks and IT infrastructure generally. While the goal is to increase the awareness of companies
to cyber risks, the result can be an increase in liability as guidelines for cybersecurity become
expectations and industry practice.
This paper discusses the growing body of law and regulations governing cybersecurity and breach
response. Its focus is primarily on the U.S.; however, it also discusses developments in other
countries, including recent developments with regard to the EU Data Protection Regulation and
Canada’s passage of The Digital Privacy Act. The paper focuses on the requirements for data
security and breach response for security incidents involving personal information, the growth of
regulatory scrutiny of companies’ collection and usage of information about individuals, the types
of exposure and liabilities these present, and the lines of insurance potentially affected. This paper
also discusses cyber attacks involving categories of information other than personal information and
targeting business operations, as well as some of the privacy issues arising out of the increasing
use of social media and new technologies.
II. The Types of Information and Practices at Risk
1. Personal Information in the U.S.
Protecting individuals from identity theft1 has become a significant focus of U.S. state and federal
agencies, and of state and federal laws and regulations. In furtherance of that goal, regulatory and
legislative efforts have focused on the security of data concerning consumers, including Social
Security Numbers, drivers’ licenses and state identification card numbers, health and medical
information, financial data such as bank account and credit card information, and, more recently,
online login credentials.2
In the U.S., categories of information about individuals that can be used for identity theft and
fraudulent financial transactions are generally referred to as “Personal Information.”3 Laws and
regulations vary from state to state, and between state and federal law, as to exactly what
information comprises “Personal Information.” Generally, in state statutes setting forth breach
notification and data security requirements, “Personal Information” means a name (first initial and
last name often suffices), and some additional item of information that could be used to steal a
person’s identity or access his or her financial accounts (or, in some cases, healthcare information or
online account) without authorization. A definition of “Personal Information” combining the data
elements required by most states is as follows:
1 As defined in the Federal Fair and Accurate Credit Transactions Act of 2003 (“FACTA”), “identity theft” is a fraud
committed using the identifying information of another person. 15 U.S.C. 1681a(q)(3).
2 Cal. Civ. Code § 1798.82(h)(2), Fla. Stat. § 501.171(1)(g)(1)(b).
3 For purposes of this paper, we refer generally to protected information about an individual as “Personal Information” or
“PI.” There are differences in the terminology used in statutes and regulations of various jurisdictions, however, such as “personal
information” versus “private information” versus “personally identifiable information” or “PII.” We note that “personal information”
is the term used in the Massachusetts Data Security Regulations, while other statutes use terms such as “personal identifiable
information” or “private information.” New York Gen. Bus. Law § 899-aa, however, defines “personal information” as “information
concerning a natural person which, because of name, number, personal mark, or other identifier, can be used to identify such natural
person,” and defines “private information” as “personal information consisting of any information in combination with any one or
more of the following data elements, when either the personal information or the data elements is not encrypted, or is encrypted with
an encryption key that has also been acquired: (1) Social Security number; (2) driver’s license number or non-driver identification
card number; or (3) account number, credit or debit card number, in combination with any required security code, access code, or
password that would permit access to an individual’s financial account; ‘private information’ does not include publicly available
information which is lawfully made available to the general public from federal, state or local government records.” See also New
York State Technology Law §208, applicable to State entities as defined by the statute, and the New York City Administrative Code,
Title 10, §10-501, applicable to City agencies, which refers to “personal identifying information” that includes a person’s date of
birth, mother’s maiden name, and other information not included in New York Gen. Bus. Law § 899-aa. Breach notification
requirements are generally triggered by unauthorized access to or acquisition of “private information,” but acquisition of “personal
information” that is limited to a name or personal mark unaccompanied by other information such as a Social Security number,
driver’s license or credit/debit card number may not trigger notification requirements under data protection statutes and regulations.
Other states’ statutes refer to “personally identifiable information” (PII), see e.g., Vt. Stat. Ann. tit. 9, § 2430(5)(A).
An individual’s first initial or first name and last name, plus one or more of the following:
Social Security number;
Driver’s license or government issued identification card number;
Account number of any kind (such as a credit card or other financial account number);
A unique electronic identifier or routing code, and corresponding security codes or passwords that
would permit access to an individual’s financial or on-line account;4
Unique biometric data, such as a fingerprint, retina or iris image, or other unique physical
representation or digital representation of biometric data;
Medical or health insurance information;
User name or email address and a password or security question for an online account (in a small
minority of states); and
Passport number (in a small minority of states).
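For illustration only, the sketch below expresses the combined definition above as a simple check. It is a hypothetical simplification (not legal advice): the field names, the element list, and the function are introduced here for illustration, and actual state statutes vary in the data elements they cover and impose additional conditions (such as encryption safe harbors) that are not captured below.
```python
# Illustrative sketch only (not legal advice). Field names and the element list
# are hypothetical simplifications of the combined definition described above;
# actual state statutes vary and include additional conditions (e.g., encryption
# safe harbors) that this sketch does not capture.

SENSITIVE_ELEMENTS = {
    "ssn",                    # Social Security number
    "drivers_license",        # Driver's license or government-issued ID card number
    "account_number",         # Credit card or other financial account number
    "financial_credentials",  # Electronic identifier/routing code plus security code or password
    "biometric_data",         # Fingerprint, retina or iris image, or similar
    "medical_info",           # Medical or health insurance information
    "online_credentials",     # Username/email plus password or security question (minority of states)
    "passport_number",        # Passport number (minority of states)
}

def is_personal_information(record: dict) -> bool:
    """Return True if the record pairs a name with at least one sensitive element."""
    has_name = bool(record.get("first_name_or_initial")) and bool(record.get("last_name"))
    has_sensitive_element = any(record.get(field) for field in SENSITIVE_ELEMENTS)
    return has_name and has_sensitive_element

# Example: a name paired with a Social Security number would generally fall
# within the combined definition sketched above.
example_record = {"first_name_or_initial": "J", "last_name": "Doe", "ssn": "xxx-xx-xxxx"}
assert is_personal_information(example_record)
```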
As regulations and statutes directed at protecting Personal Information proliferate, however, the
scope of protected information is expanding. The federal Red Flags Rule, discussed below, requires
covered entities to develop programs to prevent identity theft, which is the fraud involving
“identifying information” – meaning any name or number that may be used, alone or in conjunction
with any other information, to identify a specific person, including any:
(1) Name, Social Security number, date of birth, official state or government issued
driver’s license or identification number, alien registration number, government
passport number, employer or taxpayer identification number;
(2) Unique biometric data, such as fingerprint, voice print, retina or iris image, or other
unique physical representation;
(3) Unique electronic identification number, address or routing code; or
(4) Telecommunications identifying information or access device.5
4 There seems to be a trend to expand the definition of “Personal Information” in state breach notification statutes to include
this factor, see, e.g., amendment to Nevada statute effective July 1, 2015, NRS 603A.040 as amended,
http://www.leg.state.nv.us/Session/78th2015/Bills/AB/AB179_EN.pdf.
5 16 C.F.R. § 603.2. The terms “telecommunications identifying information” and “access device” are defined in 18 U.S.C.
§ 1029(e). “Telecommunications identifying information” means the electronic serial number or any other number or signal
that identifies a specific telecommunications instrument or account, or a specific communication transmitted from a
telecommunications instrument. “Access device” means any card, plate, code, account number, electronic serial number, mobile
identification number, personal identification number, or other telecommunications service, equipment, or instrument identifier, or
other means of account access that can be used, alone or in conjunction with another access device, to obtain money, goods, services,
or any other thing of value, or that can be used to initiate a transfer of funds (other than a transfer originated solely by paper
instrument).
HIPAA, also discussed below, protects “individually identifiable health information,” which
includes all health information in oral, written, or electronic form that can be identified to a specific
individual. Any health information, including demographic information that relates to the past,
present, or future physical or mental health or condition of an individual, and with respect to which
there is a reasonable basis to believe the information can be used to identify the individual, is
protected information under HIPAA.6
A data breach involving unauthorized access of Personal Information triggering notification
obligations can result from an event as simple as a loss of a laptop that contains personal
information of customers or employees.7 In recent years, publicity has focused on large data
breaches that involve sophisticated attacks by wide-ranging criminal rings or politically motivated
hackers (“hacktivists”) on the databases of companies storing Personal Information of thousands or
even millions of individuals. Cyber criminals often target institutions that maintain Personal
Information of large numbers of individuals in an effort to achieve large returns from their efforts.
Hacktivists may have other motives, such as embarrassment to the company whose databases are
accessed. Publicized data breaches of payment processing companies and retailers in which the
credit and debit card information of millions of consumers was obtained by cyber criminals
demonstrate the scope of such attacks, and the resultant costs to the targeted company. Costs to
victimized companies include the direct costs of assessing and responding to the breach, as well as
exposure to third-party claims brought by consumers, employees, and others affected by the breach,
and the loss of business and damage to reputation from the publicity following a large breach.8
Not all data breaches involving Personal Information actually result in identity theft. As discussed
below, however, the mere occurrence of an event that falls under an applicable legal definition of a
“data breach” (or similar term) involving Personal Information can trigger time-sensitive and broad-ranging
notification requirements imposed on the entity that sustained the breach, at significant cost
to that entity. If the loss or theft of Personal Information does not actually result in identity theft,
the company sustaining the breach may be able to avoid or at least minimize common law claims
for damages from the individuals whose Personal Information was improperly accessed, but in
many cases it still must comply with applicable statutory and regulatory notice obligations triggered
by the breach.
6 45 C.F.R. § 160.103.
7 Three hundred twenty-nine organizations reported 86,455 laptops lost at an average cost of $6.4 million per company, and
an overall cost of over $1 billion. Ponemon, The Billion Dollar Lost Laptop Problem, Sept. 30, 2010.
8 The typical data breach of Personal Information involves either the inadvertent loss or the criminal theft of data containing
Personal Information. However, there is also a theory of data breach referred to as a “voluntary data breach” in which intentional
dissemination of information unintentionally results in unauthorized distribution of Personal Information. In late 2009, Netflix, Inc.
was sued on a claim of “voluntary privacy breach” arising from the video rental company’s purported dissemination to contest
participants of data sets containing the rental preferences and ratings of subscribers. Although Netflix encrypted the identities of its
subscribers in the data sets, the complaint alleges that researchers were able to crack the encryption and identify individual
subscribers. The complaint, filed in the United States District Court for the Northern District of California, pled violations of the
Video Privacy Protection Act, which prohibits the disclosure of information identifying a person as having requested or obtained a
specific video rental. The parties to the lawsuit reached a confidential settlement in March 2010. As discussed below, increasingly,
there are data breaches that involve theft of intellectual property and other confidential information as a result of commercial or
political espionage.
a. The Expanding Definitions of Personal Information
What constitutes Personal Information subject to legal protection is evolving, with courts
interpreting existing statutes more expansively and legislatures considering new statutes.
In the data breach context, online account login credentials (for all types of accounts, and not just
financial accounts) are now deemed “Personal Information” in some states – California9 was first to
include this type of information in its definition of “Personal Information,” with Florida10, and most
recently, Nevada11 following suit. Other states are likely to do so as well, as they did after
California enacted the first U.S. breach notification statute in 2003.12 Other expansions of PI have
been proposed that would include geolocation information, biometric information and consumer
online searching and purchasing history as PI whose unauthorized acquisition would trigger
notification obligations. 13
The increase in concern about protecting individuals’ information that can be used for identity theft
has also led to many companies reporting unauthorized access to information that may not itself be
protected Personal Information, but can be used to gain access to such Personal Information, such as
in the increasing number of incidents of hackers obtaining customer email addresses. For example,
in 2011, Epsilon Data Management LLC14 announced that the customer data of many of its more
than 2,500 corporate clients was exposed by an unauthorized entry into Epsilon’s email system.
The intruder apparently obtained email addresses and/or customer names. Although email
addresses are not generally considered to be Personal Information under most U.S. laws and
regulations that trigger notification requirements, Epsilon notified its clients, many of whom sent
notifications to their customers regarding the unauthorized entry to Epsilon’s database. Similarly,
in October 2014, JP Morgan disclosed that names, addresses, and email addresses associated with
76 million households and seven million small businesses were compromised.15 A major concern in
the Epsilon and JP Morgan breaches was that the hackers could use the email addresses in phishing
9 Effective January 1, 2014, SB 46 expanded the definition of “personal information” in California’s breach notification
statutes applicable to businesses (Cal. Civ. Code § 1798.82) and government agencies (Cal. Civ. Code § 1798.29) to include “user
name or email address, in combination with a password or security question and answer that would permit access to an online
account.”
10 Fla. Stat. § 501.171(1)(g)(1)(b).
11 A.B. 179, 78th Leg. Sess. (Nev. 2015), effective July 1, 2015.
12 Within a few years, most other states adopted breach notice requirements modeled in varying degrees on California’s.
13 As of June 2015, Illinois Bill SB 1833 was passed by both houses of the state legislature and is awaiting the Governor’s
signature. It would expand the scope of “personal information” to include geolocation information generated or derived from the
operation or use of an electronic communications device, and consumer marketing information defined as consumer’s online
browsing history, search history or purchasing history, and require notification to the attorney general (but not the individual if that is
the only PI breached). The Bill’s content and status are available at
http://www.ilga.gov/legislation/BillStatus.asp?DocNum=1833&GAID=13&DocTypeID=SB&SessionID=88&GA=99#actions.
14 Epsilon provides consulting, marketing data, technology and agency services to major retailers. Elinor Mills, Who is
Epsilon and Why Does It Have My Data?, Apr. 6, 2011, http://news.cnet.com/8301-27080_3-20051038-245.html.
15 Elizabeth Weise, JP Morgan reveals data breach affected 76 million households, USA Today, Oct. 3, 2014,
http://www.usatoday.com/story/tech/2014/10/02/jp-morgan-security-breach/16590689/.
attacks to send emails that seemed to come from trusted sources, leading unsuspecting customers to
reveal Personal Information that would then be used for identity theft.16
There is also an expanding definition of what constitutes Personal Information as a growing number
of statutes and court decisions are directed at protecting consumer privacy, rather than minimizing
identity theft.17
ZIP codes, for example, are now “personal information” under some states’ laws limiting
businesses’ rights to collect or record PI of its customers. In 2011, the California Supreme Court
held that businesses’ practice of recording customer ZIP code along with customer names violates a
California statute, the Song-Beverly Credit Card Act,18 which prohibits businesses from requesting
“personal identification information” during a credit card transaction that is recorded.19 The
California Supreme Court noted that the statute demonstrated legislative intent to prohibit retailers
from requesting and recording information about cardholders that is unnecessary to the credit card
transaction. The Court held that the word “address” in the statutory definition of personal
identification information should be construed to encompass not only a complete address, but also
the components of an address. A significant factor in the Court’s decision was the ability of
retailers to utilize a software program that could identify a customer’s full address from the name
and ZIP Code, and to use that address for their own marketing purposes or to sell it to others. In March 2013, the
Massachusetts Supreme Judicial Court similarly held that ZIP codes are “personal identifying
information” under a Massachusetts statute, and may not be collected and recorded as part of a
credit card transaction if not required by a credit card company or necessary for the transaction.20
Other states may follow in the footsteps of California and Massachusetts, although as discussed
below, courts in some of those jurisdictions have refused to accept such a broad interpretation of
what constitutes PI.21 There is a continually developing body of case law and statutes that can
impact the scope of what is considered Personal Information and the protections afforded it.
Further, changes to the Children’s Online Privacy Protection Act (“COPPA”) Rule include an
expanded definition of personal information with regard to information collected online from
children under 13, which includes persistent identifiers (with some exceptions), geolocation data,
16 “Phishing” is the practice of sending an email that is purportedly from a well-known organization to induce the recipient to
reveal information for use in identity theft. The recipient clicks on a link that appears to lead to a legitimate organization’s website,
but that silently redirects the user to a website that then requests and collects the user’s personal information for fraudulent purposes.
17 For example, legislation proposed in Illinois as of May 2015 would radically expand the definition of “Personal
Information” under the Illinois data breach notification statute to include “consumer marketing information,” defined as information
related to a consumer’s online browsing history, online search history, or purchasing history. S.B. 1833, 2014 Leg., 99th Sess. (Ill.
2015).
18 Ca. Civ. Code § 1747.08(b).
19 Pineda v. Williams Sonoma Stores, Inc., 51 Cal. 4th 524 (Ca. 2011). See Locke Lord LLP Client Advisory, California
Supreme Court’s ZIP Code Decision Exposes Retailers to New Litigation Hazard, Statutory Fines, Apr. 2011,
http://www.lockelord.com/client-advisory---california-supreme-courts-zip-code-decision-exposes-retailers-to-new-litigation-hazardstatutory-
fines-04-07-2011.
20 Tyler v. Michaels Stores, Inc., 464 Mass. 492 (Mar. 11, 2013). See Edwards Wildman LLP (now known as Locke Lord
LLP) Client Advisory, Massachusetts Supreme Judicial Court Expands Consumer Zip Code Privacy Protection in Tyler v. Michaels
Stores, Mar. 2013, http://www.lockelord.com/Edwards-Wildman-Client-Advisory---Massachusetts-Supreme-Judicial-Court-
Expands-Consumer-Zip-Code-Privacy-Protection-in-Tyler-v-Michael-Stores-03-12-2013.
21 Hancock v. Urban Outfitters, Inc., 32 F.Supp. 3d 26 (D.D.C. Mar. 14, 2014) (in which a court in the District of Columbia
decided not to follow in the footsteps of California and Massachusetts with regard to Zip Code litigation). See Section VII.4. a. on
Privacy Litigation below.
photos and audio of children.22 At the state level, there is also significant activity regarding
legislation aimed at expanding privacy protections of student information. For example,
California’s recently enacted Student Online Personal Information Protection Act,23 which will
become effective on January 1, 2016, and which only applies to online services targeted at K-12
students, adopts a very broad and virtually all-inclusive definition of the Personal Information that
is being protected.24
The U.S. is gradually shifting toward the broader definitions of Personal Information generally
followed in the EU and other countries, and continued expansion of protections to PI afforded by
statutes, agency regulations and court decisions can be expected. 25
b. What is Protected Health Information (PHI)
Data breaches often involve information about individuals that is not what is typically defined as
Personal Information under state statutes, but rather is health information. This
generally occurs when the data breach involves a healthcare or other entity that obtains health
information as part of its business. When that information falls within the scope of “Protected
Health Information” (“PHI”), it is subject to additional statutory and regulatory oversight and
breach response requirements.26
The Health Insurance Portability and Accountability Act of 1996 (“HIPAA”) defines PHI as
“individually identifiable health information” that is held or transmitted by a HIPAA-subject entity
(e.g., a physician, hospital, health insurer, or business associate) and relates to:
the individual’s past, present or future physical or mental health or condition;
the provision of health care to the individual; or
the past, present, or future payment for the provision of health care to the individual;
and that identifies the individual, or for which there is a reasonable basis to believe it can be used to
identify the individual.
22 16 CFR Part 312.
23 S.B. 1177, 2013-2014 Leg. Sess. (Ca. 2014).
24 “Covered information” means personally identifiable information or materials, in any media or format that meets any of the
following: (1) Is created or provided by a student, or the student’s parent or legal guardian, to an operator in the course of the
student’s, parent’s, or legal guardian’s use of the operator’s site, service, or application for K–12 school purposes. (2) Is created or
provided by an employee or agent of the K–12 school, school district, local education agency, or county office of education, to an
operator. (3) Is gathered by an operator through the operation of a site, service, or application described in subdivision (a) and is
descriptive of a student or otherwise identifies a student, including, but not limited to, information in the student’s educational record
or email, first and last name, home address, telephone number, email address, or other information that allows physical or online
contact, discipline records, test results, special education data, juvenile dependency records, grades, evaluations, criminal records,
medical records, health records, social security number, biometric information, disabilities, socioeconomic information, food
purchases, political affiliations, religious information, text messages, documents, student identifiers, search activity, photos, voice
recordings, or geolocation information. S.B. 1177(i), 2013-2014 Leg. Sess. (Ca. 2014).
25 See section below on the International Regulatory and Statutory Landscape.
26 The pertinent statutes are discussed below, in the sections discussing HIPAA, the HITECH Act, and the Health Breach
Notification Rule.
Name, birth date, address, and Social Security number are typical examples of “individually
identifiable health information” when paired with information relating to the health of that
individual, such as a diagnosis, treatment plan, or payment for medical services.
Certain statutory and regulatory exceptions apply to the definition of PHI including exceptions for
education records covered by the Family Educational Rights and Privacy Act27; employment
records held by a HIPAA-subject entity; records regarding a person who has been deceased for
more than 50 years; and certain records for a student who is eighteen years of age or older, or is
attending an institution of postsecondary education. Furthermore, information that has been “de-identified”
in accordance with HIPAA is not considered to be PHI.28
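As a purely illustrative companion to the sketch above, the following expresses the PHI definition in the same simplified form. Again, the field names and the function are hypothetical; the actual definition in 45 C.F.R. § 160.103, and the exceptions noted above, are not fully captured here.
```python
# Illustrative sketch only (not legal advice). A simplified expression of the
# HIPAA definition of PHI described above; field names are hypothetical, and the
# statutory and regulatory exceptions (education records, employment records,
# persons deceased more than 50 years, de-identified data) are not modeled here.

HEALTH_RELATED_CATEGORIES = {
    "health_condition",   # past, present or future physical or mental health or condition
    "health_care",        # provision of health care to the individual
    "payment_for_care",   # past, present, or future payment for the provision of health care
}

def is_phi(record: dict, held_by_hipaa_subject_entity: bool) -> bool:
    """True when identifiable health-related information is held by a HIPAA-subject entity."""
    relates_to_health = any(record.get(category) for category in HEALTH_RELATED_CATEGORIES)
    identifies_individual = bool(record.get("identifies_individual"))
    return held_by_hipaa_subject_entity and relates_to_health and identifies_individual

# Example: a name and birth date paired with a diagnosis, held by a hospital,
# would generally constitute PHI under this simplified model.
example = {"health_condition": "diagnosis on file", "identifies_individual": True}
assert is_phi(example, held_by_hipaa_subject_entity=True)
```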
2. Personal Information in the EU and UK
The EU and the countries within it take a much more expansive view of what constitutes Personal
Information (or “personal data” under the terms of the Data Protection Directive (95/46/EC) (the
“Directive”) and the applicable local enabling legislation): generally, any data that relates to an
individual who can be identified or is identifiable from the data, or from other information held with
the data, is PI. The Directive defines “personal data” as ‘any information relating to an identified
or identifiable natural person (“data subject”); an identifiable person being one who can be
identified, directly or indirectly, in particular by reference to an identification number or to one or
more factors specific to his physical, physiological, mental, economic, cultural or social identity’.
The definition is deliberately broad.
In July 2014, the Court of Justice of the European Union (“CJEU”) provided further clarification as
to what constitutes personal data under the Directive in its ruling in the joint cases of YS, M and S v
Minister voor Immigratie, Integratie en Asiel.29 The case concerned applications for residence
permits in the Netherlands. YS, who was denied residency, asked to see a copy of the immigration
officer’s report on his request. Immigration officers’ reports generally contain information about
applicants (including their name and other personal data) and the officers’ legal analysis of the
requests. The Dutch court referred certain questions to the CJEU, focusing on whether the legal
analysis constituted personal data of YS. The CJEU held that “although [the legal analysis] may
contain personal data, it does not in itself constitute such data within the meaning of Article 2(a) of
Directive 95/46,”30 on the basis that the legal analysis is simply an application of the law to certain
facts. The CJEU therefore found that YS did not have a right to access the legal analysis.
As it is a directive, the Directive is required to be implemented in each Member State of the EU by
local enabling legislation. This has resulted in each Member State having transposed the Directive
into law to a different extent and with local nuances and variations, which has given rise to a
fragmented approach to data protection across the EU. In order to address this lack of harmony
across the EU, a regulation has been proposed and is working its way through the approvals
27 See Section III.2, Additional Data Privacy Requirements for Educational Institutions – FERPA, below.
28 See 45 CFR 160.103.
29 Joined cases C-141/12 YS v Minister voor Immigratie, Integratie en Asiel and C-372/12 Minister voor Immigratie,
Integratie en Asiel v M and S, 12 December 2013, at
http://curia.europa.eu/juris/document/document.jsf?text=&docid=155114&pageIndex=0&doclang=EN&mode=lst&dir=&occ=first&
part=1&cid=83308.
30 YS, M and S v Minister voor Immigratie, Integratie en Asiel, supra, para. 39.
process. Regulations, in contrast to directives, have “direct effect” and do not require Member
States to implement them in order to come into effect; rather, they come into effect directly – and,
crucially for these purposes, in the same manner and to the same extent - in each Member State on
the date specified (either in the regulation itself or in the Official Journal of the European Union).
The text of the Draft General Data Protection Regulation (the “Proposed Regulation”) as originally
proposed by the European Commission sought to clarify the definitions of ‘data subject’ and
‘personal data’. The original proposed text defined personal data as ‘any information relating to a
data subject’ (which is essentially what the Directive says), but went on to provide that an
‘identifiable person’ (which is defined as in the Directive) is a natural person who can be identified
‘by means reasonably likely to be used by the controller or by any other natural or legal person’,
listing the potential identifiers and including two new identifiers (‘location data’ and ‘online
identifier’). On the one hand, this definition may be interpreted as giving greater protection to
personal data and data subjects, by clarifying that the protection to be given to personal data
pursuant to the Proposed Regulation will apply irrespective of whether the personal data is being
processed by the data controller or ‘by any other natural or legal person.’ On the other hand, if the
words ‘by means reasonably likely to be used’ are interpreted narrowly, then if methods considered
not to be ‘reasonably likely’ are used to identify an individual, this proposed definition may in fact
have the result of narrowing down the definition of ‘data subject’, thereby giving less protection to
individuals than is currently granted by the Directive.
In any event, the draft text of the Proposed Regulation as amended by the Council of the European
Union reverts to the original (and arguably broader) definitions of ‘personal data’ and ‘data subject’
as used in the Directive. The revised text also includes new defined terms to help clarify what
constitutes ‘personal data’ in light of modern technological advancements. For example, the revised
Proposed Regulation includes a definition for ‘pseudonymous data’, defined as ‘personal data that
cannot be attributed to a specific data subject without the use of additional information, as long as
such additional information is kept separately and subject to technical and organizational measures
to ensure non-attribution’. The level of protection given to pseudonymous data is less stringent than
the protection given to other types of personal data under the Proposed Regulation: a new Recital
58(a) states that pseudonymous data should not be presumed to ‘significantly affect the interests,
rights or freedoms of the data subject’. However, where ‘profiling’ (defined as ‘any form of
automated processing of personal data intended to evaluate certain personal aspects relating to a
natural person or to analyse or predict in particular that natural person’s performance at work,
economic situation, location, health, personal preferences, reliability or behaviour’) ‘permits the
controller to attribute pseudonymous data to a specific data subject, the processed data should no
longer be considered to be pseudonymous’.
In the UK, the primary and overarching definition of personal data is taken from the UK Data
Protection Act 1998 (the “DPA”), which implemented the Directive, and came into force on 1
March 2000. The DPA provides that personal data means data which relate to a living individual
who can be identified (a) from those data; or (b) from those data and other information in the (or
likely to come into the) possession of the data controller and includes any expression of opinion
about the individual and any indication of the intentions of the data controller or any other person in
respect of that individual.
That statutory definition has been supplemented by UK case law. English courts have held that
personal data must also have an element of “biographical significance” or “focus” on the
individual in question, and must be “information that affects [a person’s] privacy,
whether in his personal or family life, business or professional capacity.” 31 This narrowing of the
definition of personal data means that the incidental inclusion of a person’s
name, for example, in a report that is otherwise not focused upon that person may not necessarily
constitute personal data. More recent UK case law, however, has added a further gloss
to the definition by explaining that context should also be taken into consideration and that a
name will always be personal data where the context in which it appears is such that a particular
individual could be identified from it.
Following the narrow interpretation of ‘personal data’ given by the Court of Appeal in its 2003
decision in Durant v Financial Services Authority, the Article 29 Working Party (an organization
which is made up of representatives from the data protection authorities of each EU Member State)
published an Opinion on the definition of personal data. This was swiftly followed by the issuance
by the Information Commissioner’s Office (ICO) (the independent authority in the UK responsible
for overseeing the protection of personal data) of its 2007 technical guidance note regarding what
constitutes personal data under the DPA (the “2007 TGN”) and a further updated technical
guidance note issued in 2012. The Article 29 Working Party’s Opinion and the ICO’s TGN
acknowledge that the definition of ‘personal data’ given by the Court of Appeal in the Durant case
was narrower than the interpretation envisaged by the DPA, and that whilst the ‘biographical
significance’ test may be an indicator of personal data (particularly in borderline cases), it is
certainly not an essential element of personal data.
In 2013, the High Court confirmed the ICO's approach in a decision addressing the nature of
personal data in the context of the Freedom of Information Act 2000 (FOIA). The court held that
prior leading case law on the meaning of personal data was limited to a particular factual scenario
and that the ‘biographical significance’ test is therefore only one of a number of tests that may be
applied in determining whether information is personal data. The court found that the Article 29
Working Party's Opinion and the ICO’s TGN must also be considered when determining if
information constitutes personal data. In 2014, the Court of Appeal followed suit. In particular,
the court found that a First-tier Tribunal (determining whether the names of FSA employees were
personal data) had been wrong to rely solely on the approach taken in the Durant decision; instead,
the court specifically referred to the TGN.
The ICO has issued further guidance on the definition of personal data which, although not binding,
the courts are obliged to consider where applicable. (See Section IV for further information on EU
and UK Regulatory and Statutory Landscape, below).
3. Breaches of Data Other Than Personal Information
This paper focuses largely on data breaches involving Personal Information, but a data breach can
also involve other confidential information, the access to and dissemination of which may cause
substantial damages and give rise to legal liability. A data breach can be the result of deliberate
criminal activity or of accidental device loss. Regardless of motive, when the subject is
Personal Information, the breach often triggers required statutory responses, as discussed below.
31 Durant v Financial Services Authority [2003], EWCA (Civ) 1746.
Cyber attacks, especially those directed at networks, can also be conducted with the goal of
disrupting operations rather than accessing Personal Information, as in Denial of Service (DoS)
and Distributed Denial of Service (DDoS) attacks.32 There are also increasing reports of attacks
whose motive is to obtain confidential business information for commercial or political advantage,
in what has been called “cyber espionage.” These other types of cyber attacks and breaches are
discussed below, as the potential exposures they present are significant, and they are generating
increasing attention both from those seeking to effect such attacks and from those seeking to protect
against them.
a. Secrets of All Sorts
Data required to be kept confidential is not limited to Personal Information, and financial gain
through identity theft is not the only goal of hackers.
Confidential data includes trade secrets, intellectual property, proprietary information (e.g.,
techniques, plans, processes, financial data, and similar business secrets) and other confidential
information that owners and keepers of such information want to keep secret, and that others may
seek to obtain for their own benefit or to harm others.
Recent reports confirm that confidential business information is a major target of hackers, in
recognition that trade secrets, company information about upcoming projects and bids, and similar
“corporate intellectual capital” can be a source of financial gain and competitive advantage through
unauthorized use or sale to others, although there can be political as well as financial motives.33
Customer and consumer documents as well as other business records are at risk, and in one study
90% of respondents reported that they are certain or believe it very likely that their organization
experienced leakage or loss of sensitive or confidential documents during the prior year.34
Data breaches involving confidential data that is not within the applicable statutory definitions of
Personal Information do not generally trigger the protection and notification obligations of the large
body of state and federal laws directed at protecting against identity theft. They can, however,
result in business losses to the breached company, as well as in liability claims by third parties
against the targeted company if they cause damage to others, such as the company’s clients.
Proprietary intellectual property has become a prime target for hackers, both private and
purportedly foreign government sponsored. Proprietary information is considered twice as valuable
32 A DoS attack is generally one in which an attacker “floods” a targeted network with requests so that it cannot be accessed. A
DDoS attack is one in which the attacker uses multiple computers to launch the DoS attack. See, e.g., Understanding Denial-of-
Service Attacks, www.us-cert.gov/ncas/tips/ST04-015.
33 See, e.g., Hackett, Robert, Diplomacy is failing to protect the United States’ trade secrets, Fortune, May 11, 2015; PwC
and The Center for Responsible Enterprise And Trade, Economic Impact of Trade Secret Theft, February 2014; Randal C. Coleman,
Assistant Director, Counterintelligence Division, Federal Bureau of Investigation, Statement Before the Judiciary Committee,
Subcommittee on Crime and Terrorism, Washington, D.C., May 13, 2014, https://www.fbi.gov/news/testimony/combatingeconomic-
espionage-and-trade-secret-theft; McAfee and Science Applications International Corporation, Underground Economies:
Intellectual Capital and Sensitive Corporate Data Now the Latest Cybercrime Currency, Mar. 28, 2011. See also Sentencing
Memorandum of first foreign hacker for theft of trade secrets from American companies to be convicted in the U.S., United States of
America, v. David Pokora, Case 1:13-cr-000789-GMS, United States District Court, District of Delaware (filed April 20, 2015).
34 See, e.g., Ponemon Institute, LLC, 2012 Confidential Documents at Risk Study, July 2012.
as day-to-day financial and customer data.35 As a result, security experts and law enforcement
officials report that a thriving criminal market has evolved for converting stolen trade secrets into
cash.36 Cyber espionage reportedly cost U.S.-owned businesses about $14 billion in economic losses
in just a six-month period.37
Conventional security does not seem to be working in the face of sophisticated attacks. One recent
study in which sensors were installed behind other security layers to gauge their success showed
that attacks are getting through multiple layers of conventional defense tools in the vast majority of
deployments. The report found that 96% of the systems examined across all industry segments were
breached and 27% of those breaches involved advanced malware. 38
b. Cyber Spies and Hacktivism
In addition to commercially motivated criminal hackers, there are reports of cyber espionage risks
from sophisticated industrial spies and nations. At times, the attacks may be politically motivated
and committed by what have become known as “hacktivists” (activist hackers), rather than
economically motivated, but the goal is still generally the theft of information with resultant loss of
valuable assets to the company attacked.
While the politically motivated activities of hacktivists are a form of cyber spying, recently there
has been increasing focus on cyber spying on a wide range of industries from foreign sources
seeking economic gain, trade secrets, and potentially advantages for use in hostilities.39
Definitive proof of foreign government sponsorship of cyber spying tracked to foreign sources has
been elusive, although a February 2013 report known as the “Mandiant Report” tried to close that
gap and provide objective evidence tying cyber spying to the government of mainland China.
The Mandiant Report concluded that the cyber espionage unit under investigation is “likely
government-sponsored and one of the most persistent of China’s cyber threat actors,” and that it
receives direct government support.40 Whether the information provided in the Mandiant Report and
subsequent investigations will prove sufficient evidence in a court case to establish that a particular
attack or installation of spyware was government-sponsored remains to be fully tested. China’s
premier had issued statements disputing the Mandiant Report’s assertions that China’s military is
35 Byron Acohido, Social-media tools used to target corporate secrets, USA Today, Mar. 31, 2011,
http://usatoday30.usatoday.com/tech/news/2011-03-31-hacking-attacks-on-corporations.htm (citing both Forrester Research and
Simon Hunt, Chief Technology Officer of McAfee’s Endpoint Security Division).
36 Id.
37 Industrial Cyber Attacks, A Costly Game, Advisen, Apr. 20, 2012 (referring to an FBI report of losses between Oct. 2011
and Apr. 2012).
38 FireEye, Maginot Revisited: More Real-World Results from Real-World Tests, 2015.
39 Recent attempts by foreign nations or foreign state-sponsored actors to steal proprietary information from U.S. companies
for economic exploitation has gravely concerned U.S. military planners: “The immediate worry for military planners . . . is the
growing number of small scale attacks that occur daily on U.S. Companies.” Pentagon investing in cyber to stop growing attacks:
Pentagon hikes cyber spending, Advisen, June 27, 2013,
http://cyberfpn.advisen.com/fpnHomepagep.shtml?resource_id=2016334881094819870&[email protected]
m#top.
40 Mandiant, APT1: Exposing One of China’s Espionage Units, Feb. 2013,
http://intelreport.mandiant.com/Mandiant_APT1_Report.pdf. See David E. Sanger, David Barboza and Nicole Perlroth, Chinese
Army Unit Is Seen as Tied to Hacking Against U.S., The New York Times, Feb. 18, 2013,
http://www.nytimes.com/2013/02/19/technology/chinas-army-is-seen-as-tied-to-hacking-against-us.html?pagewanted=all.
behind many massive cyber attacks on U.S. entities.41 However, in May 2014, the U.S. Justice
Department considered the evidence of Chinese cyber spying sufficient to unseal its indictment
of five members of the Chinese People’s Liberation Army and charge them with hacking into the
networks of major U.S. companies such as Westinghouse Electric, the United States Steel
Corporation, U.S. subsidiaries of SolarWorld AG, Allegheny Technologies and Alcoa.42
Other high profile cyber attacks on U.S. companies include the late 2014 attack on Sony, which has
been attributed to the North Korean government in reprisal for a Sony-backed movie comedy called
“The Interview” in which the story line centered on a plot to assassinate North Korea’s leader. The
cyber attack reportedly included theft of business secrets, unreleased movies and scripts, and
employee personal records, as well as the public release of corporate emails that included
confidential information and at times embarrassing comments.43 It was a reminder of the potential
vulnerability of even relatively sophisticated corporations to targeted attacks and the broad scope of
reputational damage and financial costs that can result.
The U.S. has also been the target of charges of cyber spying, particularly in light of the National
Security Agency (NSA)44 conduct revealed by Edward Snowden, starting in early June 2013. The
Snowden revelations caused many countries (as well as many inside the U.S.) to point to the U.S. as
a major source of government-sponsored cyber spying. 45
Reporting about cyber spying has also revealed the methodologies used, ranging from the collection
of information about individuals from the apps they use on smartphones to phishing scams designed
to gain access.46
The scope of the problem has been identified and discussed in reports by private entities, such as the
Mandiant Report, and was also highlighted in government reports such as the October 2011 report
from the Office of the National Counterintelligence Executive (“ONCIX”) that found that U.S.
businesses are prime targets of foreign economic and industrial espionage, as other countries seek to
build up their domestic industries with stolen technology and intellectual property from more
advanced U.S. firms. The report specifically identified China and Russia as “aggressive and
capable collectors of sensitive U.S. economic information and technologies, particularly in cyber
41 See, e.g., Associated Press, China’s new premier rejects U.S. hacking claims, Yahoo! News, Mar. 17, 2013,
http://news.yahoo.com/chinas-premier-rejects-us-hacking-claims-100525298--finance.html.
42 See, Michael S. Schmidt and David E. Sanger, 5 in China Army Face U.S. Charges of Cyberattacks, The New York
Times, May 19, 2014; Pete Williams, U.S. Charges China with cyber-spying on American Firms, www.cnbc.com, 19 May 2014;
Edward Wong, U.S. Case Offers Glimpse Into China’s Hacker Army, The New York Times, May 22, 2014.
43 Kroft, Steve, The Attack on Sony, 60 Minutes, CBS News, April 12, 2015, http://www.cbsnews.com/news/north-koreancyberattack-
on-sony-60-minutes; Peterson, Andrea, The Sony Pictures hack, explained, The Washington Post, December 18, 2014,
https://www.washingtonpost.com/blogs/the-switch/wp/2014/12/18/the-sony-pictures-hack-.
44 The NSA is a U.S. intelligence agency that collects intelligence to detect and counter espionage and other threats and
activities directed by foreign powers or their intelligence services against the U.S. and its interests; its activities are governed by
Executive Order 12333, as amended by Executive Order 13470 in 2008.
45 See Ellen Nakashima, From obscurity to notoriety, Snowden took an unusual path, Washington Post, June 9, 2013;
Katherine Jacobsen and Elizabeth Barber, NSA Revelations: A timeline of what’s come out since Snowden leaks began, The Christian
Science Monitor, October 16, 2013; David E. Sanger and Nicole Perlroth, N.S.A. Breached Chinese Servers Seen as Security Threat,
The New York Times, March 22, 2014.
46 Spy Agencies Scour Mobile Phone Apps for Personal Data, Documents Say, The New York Times, January 27, 2014;
Danny Yadron, Alleged Chinese Hacking: Alcoa Breach Relied on Simple Phishing Scam, Wall Street Journal, May 19, 2014.
space.”47 The leading areas of theft were reported to be key components of the U.S. economy:
information technology, military technology, and clean-energy and medical technology.48 U.S.
defense officials report that more than 100 countries have tried to break into U.S. networks.49
Networks of at least 760 companies, research universities, Internet service providers and
government agencies were reportedly the target of China-based cyber spies in the last decade.50
Government agencies and contractors are also targets. Companies and agencies comprising the U.S.
Military Industrial Complex are targets of cyber attacks aimed at access to confidential information
other than Personal Information, and perhaps at business disruption.51 An early indication of this
was reports that the Defense Department detected 360 million attempts to penetrate its networks
in 2008, up from six million in 2006. In the spring of 2008, there was reportedly a breach of one of
the Pentagon’s Joint Strike Fighter weapons programs.52 Similar incidents reportedly resulted in
the breach of the Air Force’s air-traffic control system.53 One report of a U.S. Department of
Defense breach identifies a vulnerability faced by all companies: thumb drives.54 Recent statements
of government officials confirm that the attempted attacks continue: In March 2012, Defense
Secretary Panetta reportedly stated, “we are literally getting hundreds or thousands of attacks every
day that try to exploit information in various [U.S.] agencies . . . .”55
Private companies involved in development of products for the Defense Department are also
targets, with resultant costs including contractual penalties, business interruption and reputational
damage. This was demonstrated by the May 2011 cyber attack on Lockheed Martin, a major
defense contractor holding sensitive information (although the company reported its secrets
remained safe). This attack reportedly may be tied to an earlier hacking attack on the RSA security
division of EMC Corporation that reportedly may have compromised security products RSA supplied
to companies in the military industry and to other large corporations.56 The Defense Department
has admitted that 24,000 Pentagon files were stolen from a defense contractor around March 2011,
and the Pentagon acknowledged that the U.S. military had suffered a major cyber attack in 2008
47 ONCIX, Foreign Spies Stealing U.S. Economic Secrets in Cyberspace – Report to Congress on Foreign Economic
Collection and Industrial Espionage, 2009-2011, Oct. 2011,
http://www.ncix.gov/publications/reports/fecie_all/Foreign_Economic_Collection_2011.pdf.
48 Id.
49 Siobhan Gorman and Stephen Fidler, Cyber Attacks Test Pentagon, Allies and Foes, The Wall Street Journal, Sept. 25,
2010, http://online.wsj.com/news/articles/SB10001424052748703793804575511961264943300.
50 Michael Riley and John Walcott, China-Based Hacking of 760 Companies Shows Cyber Cold War, Bloomberg, Dec. 14,
2011, http://www.bloomberg.com/news/2011-12-13/china-based-hacking-of-760-companies-reflects-undeclared-global-cyberwar.
html.
51 See Mandiant Report, supra.
52 S. Gorman and Y. J. Dreazon, Obama Set to Create ‘Cyber Czar’ Position, The Wall Street Journal, May 29, 2009, page
A4.
53 S. Gorman, A. Cole, Y. Dreazen, Computer Spies Breach Fighter-Jet Project, The Wall Street Journal, Apr. 21, 2009, page
A1.
54 Deloitte, The Sixth Annual Global Security Survey at p. 32 (reporting media speculation that “a recent worm attack,
acknowledged by the U.S. Department of Defense (DoD) may have been linked to thumb drives after the DoD subsequently banned
them”).
55 Trent Nouveau, Cyber-attack spectre troubles Pentagon, TG Daily, Mar. 5, 2012.
56 Christopher Drew and John Markoff, Data Breach at Security Firm Linked to Attack on Lockheed, The New York Times,
May 27, 2011.
after malicious code was placed on a flash drive inserted into a U.S. military laptop, with the code
spread on both classified and unclassified systems.57
Think tanks have also been targeted. In December 2011, Stratfor, a security think tank, was
targeted by the hacking group Anonymous (sometimes referred to as “hacktivists”). Confidential
customer information was reportedly accessed, as well as individuals’ credit card numbers which
Anonymous reportedly used to make “donations” to charities.58 The attack demonstrates that
financial gain need not be the focus of a cyber attack for Personal Information to be involved, and
illustrates the challenges even sophisticated security entities face in securing their systems
against cyber attacks.
The energy industry also has been a frequent target, with cyber attacks reportedly conducted against
private and state-owned oil, energy and petrochemical companies, targeting confidential and
proprietary information such as project financing bids and exploration plans for oil and gas field
operations. For example, one series of such attacks has been dubbed “Night Dragon” and identified
as originating primarily in China.59 In May 2012, Iran claimed that cyber attacks had caused the loss
of data at its Oil Ministry and its main oil export terminal. The forensic examination which
followed revealed malware known as Flame, reportedly among the most sophisticated espionage
programs known to exist. It can activate computer microphones and cameras, log keyboard strokes, take
screenshots, and turn an infected computer into a beacon that can intercept and transmit Bluetooth
data.60 The Department of Homeland Security Industrial Control Systems Cyber Emergency
Response Team (“ICS-CERT”) published a notice in April 2012 concerning an ongoing series of
cyber intrusions directed at U.S. gas pipelines. It said that since December 2011, there have been
targeted spear-phishing61 exploits aimed at employees of natural gas pipeline companies. It is not
clear whether the intrusions were designed simply to map the gas systems, damage the pipelines, or
both.62
These attacks can have substantial financial impacts on their targets, including the loss to the
breached entity of its own information, business disruption, and potential contractual breaches and
resulting claims by third parties. Such cyber attacks on government facilities and critical
infrastructure industries raise, for all countries in which they occur, complex issues of national
57 Jason Ukman, Ellen Nakashima, 24,000 Pentagon files stolen in major cyber breach, official says, The Washington Post,
Jul. 14, 2011.
58 See, e.g., Sean Ludwig, 10 things you need to know about Anonymous’ Stratfor hack, Venture Beat, Dec. 28, 2011; Olivia
Katrandjian, Hacking Group ‘Anonymous’, ABC World News, Dec. 26, 2011.
59 See, e.g., McAfee, Global Energy Cyberattacks: “Night Dragon”, Feb. 10, 2011,
http://www.mcafee.com/us/resources/white-papers/wp-global-energy-cyberattacks-night-dragon.pdf.
60 Ellen Nakashima, Greg Miller and Julie Tate, U.S., Israel Develop Flame computer virus to slow Iranian nuclear efforts,
officials say, The Washington Post, Jun. 19, 2012, http://www.washingtonpost.com/world/national-security/us-israel-developedcomputer-
virus-to-slow-iranian-nuclear-efforts-officials-say/2012/06/19/gJQA6xBPoV_story.html.
61 “Spear-phishing” is an email fraud (phishing) attempt that targets a specific organization or person, seeking unauthorized
access to confidential data. Email messages, sent from what appears to be a trusted source, ask the recipients for information or to
click on links that ask them for information or install malware on their computers.
62 Darren Goode and Jennifer Martinez, Risk of cyberattacks clouds natural gas boom, PoliticoPro, May 8, 2012,
http://www.politico.com/news/stories/0512/76060.html; Michael Winter, Natural gas pipelines under cyber attack since December,
USA Today, May 7, 2012, http://content.usatoday.com/communities/ondeadline/post/2012/05/natural-gas-pipelines-under-cyberattack-
since-december/1.
security, public policy and the appropriate degree of cooperation between government and private
sectors.
Moreover, politically motivated attacks can trigger a more traditional data breach of Personal
Information. For example, hacktivist searches for information targeting company executives with
the goal of embarrassing them can also result in access to executive PI, or that of others in the
targeted company. These attacks can potentially trigger company obligations under breach
notification statutes or result in companies sending voluntary warnings to those who are perceived
as hacktivist targets. While breach notification statutes are not always triggered, the U.S.
Department of Justice and other law enforcement agencies have used other statutes, with varying
success, to try to hold hacktivists accountable when they can be identified. 63
c. Cyber Attacks with Physical Effects or Business Disruption as Focus
During the last few years, another type of cyber risk has become increasingly prominent: cyber
attacks that are directed not at illicit acquisition of information, but rather at causing significant
physical effects or business disruption, including destruction or disruption of computer control
systems, and the industrial systems and equipment on which industrial entities and public utilities
depend. At other times, attacks seeking information rather than disruption, either deliberately or
unintentionally, also cause disruption of the targeted entity’s operations, with resultant costs and
business consequences.
A major concern has long been the targeting of critical infrastructure such as utilities and
transportation by state-sponsored cyber attacks. Recently, however, financial institutions became
the focus of what appears to be hacktivism with resulting disruption of business operations in what
is generally referred to as “denial of service” (“DoS”) or “distributed denial of service” (“DDoS”)
attacks.64 Both U.S. and South Korean banks were targeted in March 2013, followed by an attack
on American Express’s website, which went offline for a couple of hours.65 Financial institutions
have been quick to adopt effective defenses against DoS and DDoS attacks. In late 2013, several
banks, including Regions Bank and JPMorgan Chase, successfully defended themselves against a
fourth round of cyber attacks by the Al Qassam Cyber Fighters.66
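By way of illustration only (the capacity figure and request volumes below are invented, and real denial-of-service dynamics are considerably more complex), the following toy sketch shows why the kind of request flood described in note 64 leaves little or no capacity for legitimate users:

    # Illustrative toy model only: a service with fixed capacity must share it
    # between legitimate traffic and attack traffic; once flooded, almost no
    # legitimate requests get through and the site appears to be down.
    CAPACITY_PER_SECOND = 100  # invented figure: requests the service can handle each second

    def legitimate_requests_served(legitimate: int, attack: int) -> int:
        """Estimate how many legitimate requests are served when attack traffic competes for capacity."""
        total = legitimate + attack
        if total <= CAPACITY_PER_SECOND:
            return legitimate
        # capacity is shared pro rata between legitimate and attack traffic
        return int(CAPACITY_PER_SECOND * legitimate / total)

    print(legitimate_requests_served(legitimate=80, attack=0))       # 80 served: normal operation
    print(legitimate_requests_served(legitimate=80, attack=100000))  # effectively 0 served: users see an outage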
63 See Allison Grande, Reuters Hack Attack Will Push Cos. To Firm Up Firewalls, Law 360, Mar. 15, 2013,
http://www.law360.com/articles/424273/reuters-hack-attack-will-push-cos-to-firm-up-firewalls (noting that the U.S. Department of
Justice filed an indictment in California federal court charging a former Reuters deputy social media editor with helping hacking group
Anonymous break into the Los Angeles Times’ website, utilizing the Computer Fraud and Abuse Act as well as the general conspiracy
statute, 18 U.S.C. Section 371). See stories of May 2014 indictments, Schmidt and Sanger, 5 in China Army Face U.S. Charges of
Cyberattacks, supra.
64 A denial of service attack is an attempt to make a machine or network unavailable to its intended users. In a large-scale
attack, the attacker often attempts to overwhelm a site with so many requests for attention that the site is unable to respond to
legitimate requests and becomes unresponsive. See, e.g., Arik Hesseldahl, Denial of Service Attacks Are Getting Bigger and Badder,
Apr. 17, 2013, http://allthingsd.com/20130417/denial-of-service-attacks-are-getting-bigger-and-badder/.
65 See Nicole Perlroth and David E. Sanger, Cyberattacks Seem Meant to Destroy, Not Just Disrupt, The New York Times,
Mar. 28, 2013, http://www.nytimes.com/2013/03/29/technology/corporate-cyberattackers-possibly-state-backed-now-seek-todestroy-
data.html?pagewanted=all; see also Sean Gallagher, “Funded hacktivism” or cyber-terrorists, AmEx attackers have big
bankroll, Mar. 30, 2013, http://arstechnica.com/security/2013/03/funded-hacktivism-or-cyber-terrorists-amex-attackers-have-bigbankroll/.
66 Banks’ Improved Security Defenses Disarm Cyber Attackers [Payments Source (Online)], Advisen, Aug. 5, 2013,
http://cyberfpn.advisen.com/?resource_id=2036113831069372669#top.
Reportedly, the number of attacks reported to a U.S. Department of Homeland Security cyber
security response team grew by 53% in 2012—the agency received notice of 198 attacks, several of
which successfully infiltrated defenses.67
The potential vulnerability of U.S. infrastructure has been a growing concern in recent years. As
the then U.S. Deputy Secretary of Defense put it on September 28, 2011:
In a development of extraordinary importance, cyber technologies now exist that
are capable of destroying critical networks, causing physical damage, or altering the
performance of key systems. In the twenty-first century, bits and bytes are as
threatening as bullets and bombs.68
In March of 2014, Leon Panetta, the former U.S. Secretary of Defense, further cautioned that a
possible “cyber Pearl Harbor” may loom on the horizon.69 According to Panetta, a cyber attack
which could “devastate our critical infrastructure and paralyze our nation” is “the most serious
threat [to the United States] in the 21st century.”70 Panetta characterized the ramifications of a
focused cyber attack on the nation’s infrastructure as being comparable in scope to the damage that
Hurricane Sandy inflicted on the East Coast in 2012.71 Emphasizing the necessity of public
awareness on this issue, Panetta stressed, “The American people need to understand that [] this is
not about hacking and identity theft, it has the potential for a major attack on the United States.”72
Perhaps the first major publicly reported cyber attack resulting in substantial operational disruption
came to light in 2010, when it was discovered that the Stuxnet worm had successfully disrupted the logic
control system for the centrifuges that Iran uses to enrich uranium, making about 1,000 of them unusable.73
According to reports, the Iranian control system was not connected to the Internet, so it is believed
that the Stuxnet worm was transmitted by a USB stick that an unknowing person plugged into an
otherwise secure computer. The malware Flame has commonalities with Stuxnet. Initially, Flame
was thought to be a tool for espionage only, but after study, researchers have concluded that it has
the capacity to completely delete files from computers, which means it can disable operating
systems and can be used not only for espionage, but also to attack utilities and other critical
infrastructure systems.74
67 David Goldman, Hacker hits on U.S. power and nuclear targets spiked in 2012, CNN Money, Jan. 9, 2013,
http://money.cnn.com/2013/01/09/technology/security/infrastructure-cyberattacks/index.html. See ICS-CERT Monitor reports
published quarterly by the Industrial Control Systems Cyber Emergency Response Team of the U.S. Department of Homeland
Security.
68 William J. Lynn III, The Pentagon’s Cyberstrategy, One Year Later, Foreign Affairs, Sept. 28, 2011,
http://www.foreignaffairs.com/articles/68305/william-j-lynn-iii/the-pentagons-cyberstrategy-one-year-later.
69 Patrick Thibodeau, Cyberattacks could paralyze U.S., former defense chief warns, Computerworld, Mar. 11, 2014,
http://www.computerworld.com/s/article/9246886/Cyberattacks_could_paralyze_U.S._former_defense_chief_warns.
70 Id.
71 Id.
72 Id.
73 Kim Zetter, How Digital Detectives Deciphered Stuxnet, the Most Menacing Malware in History, Wired, Jul. 11, 2011,
http://www.wired.com/2011/07/how-digital-detectives-deciphered-stuxnet/; Ian Bremmer and Parag Khanna, Cyberteeth Bared, The
New York Times, Dec. 22, 2010, http://www.nytimes.com/2010/12/23/opinion/23iht-edbremmer23.html?_r=0.
74 Jim Finkle, Flame can sabotage computers, attack Iran: expert, Chicago Tribune, Jun. 21, 2012,
http://articles.chicagotribune.com/2012-06-21/business/sns-rt-us-cyberwar-flamebre85k1u0-20120621_1_hungary-s-laboratorystuxnet-
and-flame-western-national-security-officials.
A high level of expertise was needed to develop Stuxnet and Flame. Unfortunately, once such code is
used and discovered, it does not take as high a level of expertise to replicate it. Replicas can be
modified to target other industrial control systems.75 One researcher has reported that he created his
own version of Stuxnet in less than three weeks of work, spending less than $10,000 to replicate his
target hardware environment.76
More recently, in late 2014 a German steel factory reportedly suffered massive damage after hackers
apparently gained access to production networks, allowing them to tamper with the controls of a blast
furnace. Access to the network was reportedly obtained by use of credentials obtained through
social engineering techniques. 77
Actual attacks, or at least intrusions, have been reported in the U.S., although so far with relatively
modest effect.78 However, tests conducted by the U.S. Department of Homeland Security demonstrate
that cyber terrorists have the capability of disrupting, or even destroying, utilities such as electrical
generation and transmission facilities, water treatment facilities, and facilities of the fossil fuel
industry.79 Such attacks may result from what the industry refers to as an Advanced Persistent
Threat – that is, a group, such as a foreign government, with both the capability and the intent of
targeting a specific entity with a cyber attack.80
There has been a great deal of concern about the effect of a cyber attack on the electrical grid.
While demonstrated incidents have been rare, they have raised concern about the resultant
economic effect. In a report dated May 2015, Lloyd’s and the University of Cambridge Center for
Risk Studies evaluated the potential economic effect of a cyber attack on the U.S. power grid, and in
particular its effect on the insurance industry.81 The scenario posited was of an
electricity blackout that plunges 15 U.S. states including New York City and Washington DC into
darkness and leaves 93 million people without power. Predictions include a rise in mortality rates
as health and safety systems fail; a decline in trade as ports shut down; disruption to water supplies
as pumps fail; chaos to transport networks; decrease in business productivity as workplaces close
and people are unable to get to work; a decrease in consumption after the initial panic buying due to
the failure of electronic methods of payment and shortage of serviceable ATMs to obtain cash; and
secondary effects of looting and social unrest. The total impact to the U.S. economy was estimated
at $243 billion, rising to more than $1 trillion in the most extreme version of the scenario, with
75 Kim Zetter, DHS Fears a Modified Stuxnet Could Attack U.S. Infrastructure, Wired, Jul. 26, 2011,
http://www.wired.com/threatlevel/2011/07/dhs-fears-stuxnet-attacks.
76 Mathew J. Schwartz, Next DIY Stuxnet Attack Should Worry Utilities, Information Week, Nov. 22, 2011,
http://www.informationweek.com/traffic-management/next-diy-stuxnet-attack-should-worry-utilities/d/d-id/1101494?.
77 See Hack attack causes ‘massive damage’ at steel works, December 22, 2014, BBC,
http://www.bbc.com/news/technology-305751044 (discussing the details of the incident as set forth in the annual report of the
German Federal Office for Information Security (BSI)); Essers, Loek, Cyberattack on German steel factory causes ‘massive
damage’, IT World, December 19, 2014, http://www.itworld.com/article/2861675.
78 See, e.g., DDoS Attacks Spread Beyond Banking: U. S. Utility Suffers Outage as Bank Strikes Continue, Bank InfoSecurity,
Mar. 12, 2013, http://www.bankinfosecurity.com/ddos-attacks-spread-beyond-banking-a-5596/op-1 (reporting on March 7, 2013
announcement by a DDoS protection provider that it had worked with an unidentified metropolitan utility company to mitigate an
attack in mid-February that took their online payment platforms offline for two days).
79 See, e.g., Phil Windley, Blowing up generators remotely, Sept. 28, 2007, http://www.zdnet.com/blog/btl/blowing-upgenerators-
remotely/6451.
80 See Under Cyberthreat: Defense Contractors, Bloomberg Businessweek, July 9, 2009.
81 Business Blackout, Emerging Risk Report – 2015, Center for Risk Studies, University of Cambridge, May 2015.
multiple lines of insurance impacted from property and liability to homeowners and specialty
lines.82
Known instances of attacks have been rare to date. In the spring of 2009, cyber spies reportedly
penetrated the nation’s electrical grid.83 This incident highlighted that utility companies are a
target, with resultant effect on those they service. The consequences of the East Coast blackout of
2003 demonstrated the potential effect and scope of business interruption and related losses that
can be incurred as a result of a real life utility failure. As the blackout demonstrated, businesses
dependent on refrigeration are especially vulnerable to large losses resulting from electrical failures
with resultant first-party and third-party claims.
Other instances of cyber attacks on critical infrastructure include the report in February 2011 that
Chinese hackers had infiltrated the computer systems of five multinational oil and gas companies, in
an attack dubbed “Night Dragon.” Security researchers stated that the purpose of the attack
appeared to be corporate espionage, as the focus appeared to be on oil and gas field production
systems as well as financial documents.84
In April 2012, the Department of Homeland Security reported that the U.S.’s water and energy
utilities face constant cyber-espionage and denial-of-service attacks against industrial-control
systems.85
In July 2012, the head of the U.S. National Security Agency stated that there had been a 17-fold
increase in attacks on American infrastructure between 2009 and 2011, initiated by criminal gangs,
hackers and other nations.86 More recently, cyber attacks against supervisory control and data
acquisition systems (SCADA operating systems) reportedly more than doubled from 2013 to 2014,
with the majority targeting Finland, the UK and the U.S., where SCADA systems are more
common.87
As noted by the Mandiant Report exposing China’s Espionage Units, major targets for such state-sponsored
espionage are victims whose compromised systems allow access to infrastructure.88
Similarly, there have been reports of state-sponsored attacks on U.S. energy companies, as well as
other U.S. companies, emanating from the Middle East.89
An Israeli cyber warfare expert from The Institute for National Security Studies has warned that
hackers have begun targeting electric and nuclear power plants and other critical operations around
82 Id.
83 S. Gorman, Electricity Grid in U.S. Penetrated by Spies, The Wall Street Journal, Apr. 8, 2009, page A1.
84 John Markoff, Hackers Breach Tech Systems of Oil Companies, The New York Times, Feb. 10, 2011.
85 Ellen Messmer, DHS: America’s power utilities under daily cyber attack, Network World, Apr. 4, 2012.
86 David E. Sanger and Eric Schmitt, Rise Is Seen in Cyberattacks Targeting U.S. Infrastructure, New York Times, Jul. 26,
2012.
87 2015 Dell Security Annual Threat Report; Vicinanzo, Cyber Attacks Against SCADA Systems Doubled In 2014, Says Dell
Threat Report, Homeland Security Today, April 15, 2015, http://hstoday.us/single-article/cyber-attacks-against-scada-systemsdoubled-
in 2014....
88 The Mandiant Report, supra. See section above on cyber spying.
89 David E. Sanger and Nicole Perlroth, Cyberattacks Against U.S. Corporations Are on the Rise, The New York Times, May
12, 2013.
the world, and reportedly predicts that the next 9/11 will occur because of a cyber incident
perpetrated by a terrorist organization. 90
Whether the aim is to steal secrets or to disrupt facilities, utilities are likely to remain a target for cyber
criminals. Targeting of critical infrastructure remains a serious concern, and was the basis for an
Executive Order issued by President Obama in February 2013, announcing that a system would be
established for dissemination of information in a voluntary information-sharing program between the
private and public sectors, as well as for the establishment of procedures to expand the Enhanced
Cybersecurity Services program to all critical infrastructure sectors.91 (See Section III.4.d. on NIST
Guidelines below).
Cyber attacks with physical effects can have substantial financial impact on their targets, including
property damage, business interruption and contractual breaches, as well as general third-party
claims should the disruption of the target’s operations in turn affect its customers and vendors.92
Moreover, the increase in cyber attacks and the growing evidence that many are likely state-sponsored,
and that cyber attack capabilities are increasingly part of the defense plans of many
countries, have led to debates in both government and private sectors as to when a cyber attack
becomes cyber warfare. An example of the debate is set forth in a report by independent legal
experts that recently declared that Stuxnet was an “act of force” under international law. However,
expert opinions differed as to whether Stuxnet constituted an “armed attack” that would justify the
use of counterforce in self-defense, and trigger the start of international hostilities under the Geneva
Convention’s laws of war. Simply put, the question of whether certain cyber attacks are acts of
warfare is one that will likely be addressed in the not-so-distant future.93
Other countries have also been focusing on this increasing risk. International cooperation in identifying
and preventing such attacks, and in identifying and stopping the attackers, is increasingly a focus of
international forums on cyber security.94
90 See discussions of comments of Dr. Gabi Siboni, director of the Cyber Security Program at Israel’s Institute for National
Security Studies in: Lappin, Yaakov, Hackers have ‘begun targeting nuclear power plants,’ cyber warfare expert warns, The
Jerusalem Post, April 17, 2015; Next 9/11 will be caused by hackers, not suicide bombers, cyber expert warns, The Times of Israel,
April 15, 2015, www.timesofisrael.com.
91 Executive Order, Improving Critical Infrastructure Cybersecurity, Feb. 12, 2013, http://www.whitehouse.gov/the-pressoffice/
2013/02/12/executive-order-improving-critical-infrastructure-cybersecurity.
92 Some of these types of incidents may generate claims under different types of insurance coverages than are typically
involved in breaches involving Personal Information, depending on the nature of the breach, the damages, the claim, and the type of
policy and its terms and exclusions. See section on “Potential Insurance Coverage for Data Breaches,” infra.
93 Kim Zetter, Legal Experts: Stuxnet Attack on Iran Was Illegal ‘Act of Force’, Wired, Mar. 25, 2013,
http://www.wired.com/threatlevel/2013/03/stuxnet-act-of-force/. See Scott Shane, Cyberwarfare Emerges From Shadows for Public
Discussion by U.S. Officials, The New York Times, Sept. 26, 2012, http://www.nytimes.com/2012/09/27/us/us-officials-opening-upon-
cyberwarfare.html?pagewanted=all; Elizabeth Bumiller and Thom Shanker, Panetta Warns of Dire Threat of Cyberattack on U.S.,
The New York Times, Oct. 11, 2012, http://www.nytimes.com/2012/10/12/world/panetta-warns-of-dire-threat-ofcyberattack.
html?pagewanted=all.
94 See, e.g., Security & Defence Agenda, International cooperation on cyber-security, May 10, 2012,
http://www.securitydefenceagenda.org/Contentnavigation/Activities/Activitiesoverview/tabid/1292/EventType/EventView/EventId/1
119/EventDateID/1125/Internationalcooperationoncybersecurity.aspx
4. The Scope of What Constitutes a “Data Breach”: Not Just Electronic – Paper
Too
Data breach is often thought of only as a cyber risk: a risk associated with electronic processes used
for conducting business through computer networks. Most of the attention in the past few years has
been on electronic data breaches, particularly on instances of cyber criminals gaining unauthorized
access to electronic data maintained by financial institutions, data processors and retailers, and on
reports of lost laptops containing confidential information. Often, stories focus on the increasing
technical sophistication of cyber criminals (including how thieves can use portable technology to
scan credit card information from a card still in the unsuspecting victim’s purse or wallet).95 Many
data breaches still happen the old-fashioned way, however, through the improper safeguarding or
disposal of paper records. Apparently, “dumpster diving” is still a common way for some to obtain
Personal Information and other confidential information for illicit use.
Moreover, many data protection laws and regulations directed at protecting Personal Information
are not limited to electronic data, but also require protection and proper disposal of paper records
containing Personal Information. Most U.S. data breach notification requirements, however, apply
to breaches involving data in electronic format, and do not extend to Personal Information contained
in paper documents. In contrast, breaches involving Protected Health Information in any format,
including paper, photographs or audio recordings, can trigger response obligations that are based on
the content of the information and not tied to format.
Data breaches regularly result from the improper disposal of paper records. This was demonstrated
several years ago when a newspaper reporter found a law firm’s old client files in a dumpster in
downtown New York City. The files pertained to personal injury lawsuits, and included names and
medical information of individuals as well as Social Security numbers and other personal details.
Reportedly, in preparation for an office move, the law firm had hired a disposal company, but that
vendor improperly dumped the records rather than shredding them. This incident and many others
involving improper disposal of paper records containing Personal Information demonstrate that the
improper disposal of paper files still presents a substantial exposure, and that holders of such
documents need to be attentive to their disposal. This includes ascertaining the security practices of
any entities to which a company delegates disposal of its records.
A more recent example of the exposure still presented by the dumping of paper records is the
$800,000 HIPAA settlement entered into between the Department of Health and Human
Services and a community health center in June 2014, for an incident in which 71 cardboard boxes of
medical records were left unattended by health center employees in the driveway of a physician’s
home near a heavily trafficked area.96
People still leave paper files on trains and wherever they stop off on the way home, including files
with Protected Health Information of individuals. If the person leaving the files worked for an
entity subject to the rules governing reporting of breaches of PHI, that loss of paper records can be a
95 Electronic Pickpocketing Target Credit Cards With Radio Chip, News On 6, Dec. 14, 2010,
http://www.newson6.com/Global/story.asp?S=13672878.
96 Parkview Health Systems settlement, http://www.hhs.gov/news/press/2014pres/06/2014063a.html.
data breach requiring mandatory reporting, as discussed below in Sections III.2.f and g, which address
HIPAA, the HITECH Act, and other federal statutes and regulations governing PHI and PI.
5. Privacy and Data Breach Concerns in Cloud Computing
a. Considerations in the U.S. and Generally
As technology develops, so do new exposures, and at times they can outpace even the newest
regulatory requirements. Recently, there has been increasing attention on “cloud computing” and
the challenges it presents to those providing and utilizing it, on assessing its risks as well as its
benefits, and on identifying and complying with applicable security standards and laws.
Cloud computing in its most general sense is the practice of sharing information and services on
remote servers, rather than on local ones. Often those remote servers are owned and operated by
others, who may rent space and usage to a number of other customers, so resources are shared.
The definition of what the “cloud” is may never be agreed upon. Many argue that the cloud is no
different from the Internet. Others, however, contend that the cloud represents one of the most
important changes in enterprise computing since the invention of the computer itself. Proponents of
this view note that there has been a radical change in the way service providers market their IT
capabilities to end users; it is now rare to see an IT service offering that doesn’t mention the cloud.
Regardless of the difference in views, most agree that the cloud presents an attractive opportunity
for enterprises to outsource their computer infrastructure and related IT to a third party. Servers,
storage, applications, and services can now be located in multiple jurisdictions, with further growth
in use anticipated.
Cloud computing has been defined as having the following essential characteristics:
On-demand self-service – users can self-provision;
Broad network access – capabilities accessible using a variety of devices, such as
phones, computers, and tablets;
Resource pooling – pooling of a provider’s computing resources allows for using a
multi-tenant model that can serve many customers;
Rapid elasticity – resources can be flexibly increased or reduced as needed to
meet current needs;
Measured service – metering capabilities allow for dynamic optimization of service
(e.g., storage, processing, bandwidth, and active user
accounts).97
97 Peter Mell and Timothy Grance, Special Publication 800-145 – The NIST Definition of Cloud Computing, NAT’L INST. OF
STANDARDS AND TECH., Sept. 2011, http://csrc.nist.gov/publications/nistpubs/800-145/SP800-145.pdf.
There are three general categories of cloud services:
Software as a Service (SaaS) – The user uses applications provided with or through the
provider’s cloud infrastructure. Online email and customer
relationship applications are examples.
Platform as a Service (PaaS) – The user can develop or acquire applications (using
programming languages, libraries, services, and tools offered or
supported by the provider) which run on the cloud infrastructure.
Infrastructure as a Service (IaaS) – The user may deploy arbitrary software (including
operating systems or applications) and virtual machines using the
provider’s cloud infrastructure.98 This can amount to a virtual
datacenter, configured by the user.
Enterprise users and providers will have varying control over privacy and security safeguards
depending on which model is deployed. Notably, the skills called for in virtual computing are not
necessarily the same as the corresponding skills for on-premises IT.
Cloud computing presents both security benefits and risks. On one hand, cloud computing service
providers are highly specialized and therefore may be able to employ advanced and robust security
techniques that would be cost-prohibitive for smaller companies to implement on their own. On the
other hand, an entity using such resources must relinquish control over some of its IT functions and
the data that is processed in the cloud.
Frequently, the customer utilizing a cloud provider has no control over, or knowledge of, the exact
location of the provided resources. However, for regulated data, the jurisdiction can make a
difference, so it may be necessary to choose a cloud service that will provide assurances as to the
jurisdictions in which data may be stored or processed (including for support services).
Privacy and data security considerations for a business or other enterprise’s engagement of a cloud
provider include:
Evaluating whether to prohibit or limit employees’ individual subscriptions to cloud
services not authorized by the company, as cloud services are readily available on the
Internet to individuals, and tying approved cloud implementations to specified usage cases,
as a cloud offering may be appropriate for certain types of data or business processes but not
others.
Instituting appropriate self-help measures to address risk, such as cyber insurance, data
segregation, and elevated access controls and logging.
Applying the same considerations and requirements to the cloud service provider as to other
IT service providers, such as (a) undertaking appropriate due diligence in connection with a
risk assessment before the engagement, (b) including appropriate contractual information
98 Id.
security controls and requirements, and (c) providing for an appropriate means of oversight
and/or validation of controls. 99
Reviewing a cloud provider’s website and other representations against its agreement, as a
provider can often present a very strong set of data security controls, but those controls have
less significance from a compliance and risk management perspective if the cloud provider
refuses to stand behind those controls in the agreement.
Taking into account that if any HIPAA Protected Health Information is to be stored or
processed in the cloud, the provider should enter into a Business Associate Agreement with
the enterprise.100
Also taking into account whether any payment cardholder data is to be stored or processed in the
cloud, as the engagement will then likely result in an allocation of responsibilities (some of
which will remain shared) for PCI DSS compliance.101 Under new Payment Card Industry
Data Security Standards, the enterprise remains responsible for the cardholder information,
but the cloud provider usually falls within the PCI DSS compliance scope, even if the
enterprise encrypts cardholder information before sending it to the cloud and manages the
encryption keys itself.102
Understanding that some state laws require encryption of personal data sent over the
Internet, including data sent to a cloud service.103 (An illustrative sketch of client-side
encryption before upload appears after this list.)
Reviewing the cloud contract, and negotiating to the extent possible, for terms that address
data risks, such as:
o whether the cloud provider will agree not to mine enterprise data or use it (even in
“aggregated” form) other than for purposes of providing the cloud service;
o security requirements applicable to subcontractors, and whether the cloud provider
will stand behind the acts and omissions of its subcontractors;
o the provision of audit reports or SOC 2 reports, and/or permission to conduct testing
within the cloud environment;
o notification for security incidents, and responsibility for notification and credit
monitoring costs in the event of a breach;
99 E.g., 16 C.F.R. § 314.4; Principles for Effective Cybersecurity: Insurance Regulatory Guidance, NAT’L ASS’N OF INS.
COMM’RS (PRINCIPLE 8), Apr. 2015,
http://www.naic.org/documents/committees_ex_cybersecurity_tf_final_principles_for_cybersecurity_guidance.pdf.
100 45 C.F.R. § 164.504(e).
101 Payment Card Indus. Data Sec. Standards (version 3.0), Requirements 2.6 and 12.9; p. 12 (“Use of Third-Party Service
Providers / Outsourcing”).
102 Information Supplement: PCI DSS Cloud Computing Guidelines, at p. 15, Payment Card Indus. Sec. Standards Council – Cloud
Special Interest Grp., Feb. 2015, https://www.pcisecuritystandards.org/pdfs/PCI_DSS_v2_Cloud_Guidelines.pdf.
103 E.g., 201 Code Mass. Regs. § 17.03; Nev. Rev. Stat. § 603A.215(2).
o whether the cloud provider will stand behind its intellectual property by
indemnifying for infringement claims;
o limits of liability and carve-outs;
o the cloud provider’s rights to suspend service;
o access to data during a post-termination period and transition services; and
o terms of the contract (e.g., technical specifications) which are subject to change over
time.
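As an illustration of the client-side encryption point noted in the list above, the following is a minimal, hypothetical sketch using the open-source Python ‘cryptography’ package; the sample record is invented, and an enterprise’s actual key management, algorithm choices and upload mechanics will differ:

    from cryptography.fernet import Fernet

    # Illustrative sketch only: the enterprise generates and retains the key (in
    # practice, within its own key management system), and only ciphertext is
    # transmitted to or stored with the cloud provider.
    key = Fernet.generate_key()
    cipher = Fernet(key)

    plaintext = b"name,card_number\nJane Example,4111111111111111"  # invented sample record
    ciphertext = cipher.encrypt(plaintext)  # this is what would be uploaded to the cloud service

    recovered = cipher.decrypt(ciphertext)  # performed by the enterprise after retrieval
    assert recovered == plaintext

As noted above, even with this approach the cloud provider will usually remain within PCI DSS scope where cardholder data is involved; client-side encryption reduces, but does not by itself eliminate, compliance obligations.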
Enterprises entering into a cloud provider agreement should also consider declining an agreement
with a cloud provider that insists on unreasonable terms that do not adequately address data risks,
for in the end, it is the enterprise’s risk and exposure at stake.
The implications for an enterprise’s own policies, such as business continuity, record retention, and
disaster recovery policies, should also be evaluated, in addition to those of its cloud
provider.
Although cloud services may offer significant benefits and efficiencies if used appropriately, cloud
services present unique issues and risks to be considered.
b. Recent Developments in the EU
Over the last few years Europe has increased its focus on cloud computing, and several
organisations, working groups and policies have been set up to encourage its expansion and
increased usage at the EU level, although there too there is increasing concern about data security.
For example, the European Cloud Computing Strategy was adopted by the European Commission
in September 2012. Aimed at “Unleashing the Potential of Cloud Computing in Europe”, the
strategy outlines actions intended to deliver a net gain of 2.5 million new European jobs, and an
annual boost of €160 billion to the EU’s GDP (around 1%), by 2020, all within the cloud arena.
Part of this strategy was the creation of the European Cloud Partnership (ECP)104, which brings
together industry and the public sector to work on common procurement requirements for cloud
computing in a transparent way. Cloud for Europe105 is another project, started in June 2013 and
expected to run until November 2016, which supports public sector cloud use as collaboration
between public authorities and industry and is co-funded by the European Commission under the
Framework Programme for Research and Innovation.106
Concerns over cloud data security in Brussels have, nevertheless, grown following the Snowden
affair in 2013. The EU’s response in respect of data stored in the cloud is that it wants to regulate
the sector even if that makes its use more complicated. Viviane Reding, the European
104 http://ec.europa.eu/digital-agenda/en/european-cloud-partnership.
105 http://www.cloudforeurope.eu/.
106 Id.
Commission’s justice commissioner, has even gone so far as to say that she wants to see “the
development of European clouds” certified to strict new European standards. EU legislators have
moved quickly to pull together regulations for cloud security, but businesses and consumer cloud
users are calling for more regulations to make changing cloud service providers easier, and the ECP
has called for a certification of cloud providers.
The 2015 Cloud Security Spotlight study by the Cloud Security Alliance (CSA)107 found that
security is the biggest perceived barrier to cloud adoption, with 9 out of 10 organisations surveyed
reporting that they are concerned about public cloud security. In order to address some of the
concerns around cloud computing, the International Information Systems Security Certification
Consortium ((ISC)²) and the CSA launched a new certification scheme in April 2015 targeted at
cloud security professionals. The new certification scheme, known as “Certified Cloud Security
Professional”, or “CCSP”, is designed as an international standard for professional-level knowledge
of the design, implementation and management of cloud environments. CCSP certification will act
as an indicator to employers and others that the CCSP-accredited individual is competent in cloud
security, and has the knowledge and skills to address security and business issues relating to cloud
computing.
Amendments have been proposed to the current law, such as requiring “all transfers of data” from a
cloud in the EU to a cloud maintained in the United States or elsewhere to “be accompanied with a
notification to the data subject of such transfer and its legal effects”, and there has even been talk of
barring such transfers unless certain conditions are met.108 In addition, lawmakers are also
proposing to impose guidelines for handling court orders from countries outside the EU.109
However, there is concern that these changes will isolate the EU and form a sort of “cyber-barrier”
which will restrict trade. Anna-Verena Naether, policy manager for DigitalEurope, has said, “We
have to make sure it doesn’t lead to a Fortress Europe approach.”110 Sophie in ’t Veld, a Dutch
MEP who sponsored one of the cloud computing amendments, expressed concern over the “market
dominance of a few American players” and, whilst Ms. in ’t Veld is against building a fence around
Europe, she would like to see very clear rules established and more competition coming out of the
Euro Zone.111
The current EU data protection regime is not well-suited to the widespread adoption of cloud
computing.112 The European Commission is, as noted above, working on a replacement of the Data
Protection Directive with the Proposed Regulation. In November 2013, the European Parliament
released a report on the changes proposed by the Commission,113 in which it uses the cloud arena
107 http://www.databreachtoday.co.uk/cloud-security-certification-launched-a-8162.
108 How to Regulate Cloud Computing, Mark O'Conor, Patrick van Eecke & Jessica Turner, The Guardian Online, 28 March
2013 at: http://www.theguardian.com/media-network/media-network-blog/2013/mar/28/regulation-cloud-computing-data-protection.
109 Id. at fn 100.
110 Id.
111 Id.
112 Roger Bickerstaff, Barry Jennings, and Tessa Finlayson, Cloud computing: an analysis of the key legal and commercial
considerations arising in relation to cloud computing and related agreements, PLC Practice Note, Maintained.
113 European Parliament Report “on the proposal for a regulation of the European Parliament and of the Council on the
protection of individuals with regard to the processing of personal data and on the free movement of such data (General Data
-27-
as justification for legislative change in more ways than one. Whilst the Proposed Regulation does
not specifically address cloud computing, there are a number of provisions which will have an
impact on the provision and use of cloud services, including in the following key areas:
• Global reach: The Proposed Regulation contains provisions which have the effect of
extending its reach to organisations based outside the EU. Article 43(a) has
been proposed by the European Parliament to address the issue of access requests by
public authorities or courts in third countries to personal data stored and processed in the EU. The
idea is that a transfer will only be granted by the data protection authority following verification that
the transfer complies with the Regulation; it is worth noting that this provision was drafted with
particular regard to the growth of cloud computing. The Proposed Regulation is intended to apply to
data controllers with no EU establishment where they undertake processing related to the offering
of goods or services to EU residents, or where they monitor individuals resident in the EU, irrespective
of whether the processing takes place within the EU.
• Data processors will also be held responsible: Under the existing Directive, data controllers
(i.e., those persons who determine the purposes for which and the manner in which any personal
data are, or are to be, processed) – but not data processors (i.e., those persons that process the
personal data on behalf of the data controller) – are responsible for the lawful collection and
processing of personal data under their control. The rationale for this is that even after a data
controller discloses personal data to a data processor, the data processor has not collected this
personal data itself and it is required to process the personal data in accordance with the instructions
given to it by the data controller. The Proposed Regulation imposes obligations on both the data
controller and the data processor (which would include cloud providers). For example, draft Article
23 requires that both data controllers and data processors implement appropriate and proportionate
technical and organizational measures and procedures to ensure that the processing meets the
requirements of the Proposed Regulation; and Article 26(4) provides that if a data processor
processes personal data other than in accordance with the instructions of the data controller, then the
data processor will assume the position of a joint controller in respect of that processing. These
proposed changes to EU legislation under the Proposed Regulation (should they come into effect
without further amendment) may have a significant impact on the way that cloud providers wishing
to operate in the EU manage their services, something which may well be reflected in the cost of
cloud computing in the future.
• Sanctions: Article 79 of the Proposed Regulation, in its current draft, allows national data
protection authorities to impose fines of up to €1m or 2% of the worldwide turnover of the
breaching entity for personal data breaches. This applies to ‘anyone who, intentionally or
negligently’ causes a personal data breach – in other words, the fines extend to data processors as
well as data controllers. Customers are likely to look to cloud service providers, as they will to
non-cloud service providers, to assist in managing the risks associated with increased financial exposure
under the Proposed Regulation.
Protection Regulation)”, 22 November 2013: http://www.europarl.europa.eu/sides/getDoc.do?pubRef=-
%2F%2FEP%2F%2FTEXT%2BREPORT%2BA7-2013-0402%2B0%2BDOC%2BXML%2BV0%2F%2FEN&language=EN.
-28-
On 12 March 2014, the European Parliament voted in plenary in favour of the European
Commission’s Proposed Regulation. In May 2015, the European Council of Ministers agreed to a
“partial general approach” on the majority of the chapters of the Proposed Regulation, and on
15 June 2015 approved their General Approach114 to the Proposed Regulation. With the Council
of Ministers having reached agreement on their version of the Proposed Regulation, the tripartite
negotiations between the Council of Ministers, the European Parliament and the European Commission
were able to commence on 24 June 2015. This represents the final stage of the European
negotiations, and indicates that the Proposed Regulation is on track to be put in place (that is,
published in the Official Journal of the European Union, or OJEU) by the end of 2015 and will then
come into force in European Union member states two years later.
The European Network and Information Security Agency (ENISA) published a research paper at
the beginning of 2015 stressing that cloud computing guidelines are needed for financial services
businesses.115 The paper, based on questionnaires and interviews carried out by ENISA with
representatives from the financial sector, highlights how guidelines are needed to help financial
institutions appreciate the regulatory requirements when using cloud services, which the
representatives reportedly view as “scattered across several different texts”. ENISA has since
published a Cloud Security Guide for SMEs in April 2015, highlighting 11 important risks, as well as
security opportunities, that SMEs should take into account when procuring cloud services, along with
explanations of security features of cloud services in the market.116
Cloud computing data breaches, including image leaks (for example, the widespread hacking of
iCloud celebrity nude photographs and stolen Snapchat images) and the Dropbox and Sony hacks, have
raised concerns about the risk of leaks and breaches in cloud storage platforms, particularly when
considering the type and volume of data that cloud platforms are able to hold: a single breach could
affect hundreds of thousands of individuals. According to a report from Netskope, over 15% of
European organizations now use more than 1,000 cloud apps, with Google Drive, Facebook and
Twitter being amongst the most popular.117 Netskope’s figures show that 9 out of 10 cloud apps in
use today score a ‘medium’ or below for enterprise-level security, and 13.6% of app users have had
their login details compromised. As to whether cloud providers will increase their security to
give adequate protection to personal data, for the moment it seems to be a case of “watch this
(cyber) space.”118
114 http://data.consilium.europa.eu/doc/document/ST-9565-2015-INIT/en/pdf
115 Network and Information Security in the Finance Sector, Regulatory landscape and industry priorities, Lionel Dupre,
European Union Agency for Network and Information Security, Published 15 January, 2015 at:
http://www.enisa.europa.eu/activities/Resilience-and-CIIP/nis-in-finance/network-and-information-security-in-the-finance-sector.
116 Cloud Security Guide for SMEs, Dr M.A.C. Dekker, Dimitra Liveri, European Union Agency for Network and Information
Security, Published 10 April, 2015, http://www.enisa.europa.eu/activities/Resilience-and-CIIP/cloud-computing/security-forsmes/
cloud-security-guide-for-smes.
117 http://www.cloudcomputing-news.net/news/2015/apr/15/google-drive-facebook-and-twitter-most-popular-business-cloudapps-
are-they-safe/.
118 For a guidance on cloud computing and data protection, and the risk analysis check list for businesses wishing to use cloud
computing see ARTICLE 29 DATA PROTECTION WORKING PARTY, Opinion 05/2012 on Cloud Computing, adopted July 1st
2012, http://ec.europa.eu/justice/data-protection/index_en.htm.
-29-
6. Privacy and Data Breach Concerns in Social Media
The growth of social media sites presents another set of privacy and data security challenges.119
“Social media” refers broadly to online applications that allow users to create and exchange
different types of content. In addition to social networking sites like Facebook, LinkedIn, Twitter
and Google+, the term encompasses video and photo sharing sites such as YouTube and Instagram,
and news aggregator sites such as Fark and Feedly.120
Facebook, one of the more well-known social media sites, has reported over 1.4 billion active
users.121 LinkedIn reports more than 364 million registered members, with 75% of its new members in Q1
2015 reportedly from outside the U.S., and numerous newer sites report users in the multimillions.122
The aggregation of so much personal information, and the myriad uses to which that
information is put by various applications, some of them created by third parties, has led to much
discussion about privacy settings on such sites.123 There have been a number of investigations both
in the U.S. and the EU concerning the collection and usage of personal information by such sites, as
well as private lawsuits.124 Even uses that may have altruistic purposes, such as scanning of
postings to thwart criminal activity, have raised privacy concerns.125
a. Social Media as Target and Source of Data Breaches
The amount of information about individuals maintained on social media sites has made them
targets for those who seek information about individuals, for identity theft or other purposes.
Moreover, the lax security with which users often maintain their access credentials, and the informality
with which information is transmitted, make social media susceptible to hacking both by those who
want to obtain information about individuals and those who use the sites for dissemination of false
information.
119 According to a recent report, 66% of online adults use social networking websites, as compared to 8% in 2005. Joanna
Brenner, Pew Internet: Social Networking (Full Detail), Pew Internet & American Life Project, May 31, 2012,
http://pewinternet.org/Commentary/2012/March/Pew-Internet-Social-Networking-full-detail.aspx.
120 Online dating websites such as Match.com and eHarmony are similar to social media websites in many ways, in that
they aggregate content and allow online communications between their members.
121 Facebook statistics as of March 2015, http://newsroom.fb.com/company-info/.
122 See Statista, Leading social networks worldwide as of March 2015, ranked by number of active users (in millions),
http://www.statista.com/statistics/272014/global-social-networks-ranked-by-number-of-users/.
123 According to one report, only one-third of Facebook users believed that Facebook’s use of their personal information is
“somewhat to very acceptable.” Donna Tam, Facebook Less Trusted than Amazon, Google, Survey Says, CNET News, Jul. 19, 2012,
http://news.cnet.com/8301-1023_3-57476288-93/facebook-less-trusted-than-amazon-google-survey-says/.
124 In 2015, Facebook was targeted with a number of regulatory investigations and private lawsuits, several focusing on its use
of facial recognition software, which allegedly involves the collection, storage and usage of users’ biometric identifiers and
information. See, e.g., Schechner, Sam, Facebook Privacy Controls Face Scrutiny in Europe, April 2, 2015, The Wall Street Journal;
Nimesh Patel v. Facebook, Inc., United States District Court, Northern District of Illinois, Eastern Division; Carlo Licatta v.
Facebook, Inc., 2015 CH 05427, In The Circuit Court of Cook County, Illinois, County Department, Chancery Division (filed April
1, 2015).
125 See, e.g., Joseph Menn, Social Networks Scan for Sexual Predators, with Uneven Results, Reuters, Jul. 12, 2012,
http://www.reuters.com/article/2012/07/12/us-usa-internet-predators-idUSBRE86B05G20120712; Facebook In Privacy Breach, The
Wall Street Journal, Oct. 18, 2010. The privacy concerns regarding Facebook are not just in the United States. See, e.g., David
Cohen, Facebook Privacy Battles Heat Up in Ireland, All Facebook – The Unofficial Facebook Blog, Jul. 31, 2012,
http://allfacebook.com/ireland-odpc-europe-versus-facebook_b95960; David Cohen, Facebook Privacy Policies Challenged by
Austrian Law Student, All Facebook – The Unofficial Facebook Blog, Oct. 26, 2011, http://allfacebook.com/facebook-privacypolicies_
b64632.
-30-
The dangers of hacking of social media sites and the damage such hacking can cause were
demonstrated on April 22, 2013, when the Associated Press’s Twitter feed was hacked and a tweet
of “Breaking: Two Explosions in the White House and Barack Obama is injured” appeared.126
Within minutes of the tweet, the Dow Jones average dropped more than 128 points in the
span of a few seconds, but after the report was found to be a hoax, the stock market recovered.127 A
group of hackers loyal to Syrian President Bashar Assad claimed responsibility for the hoax.128 A
report issued by a cyber intelligence firm described a campaign by Iranian hackers to use fake
personas on social networking sites to gain login credentials and other information from officials in
the U.S. and other countries; after obtaining “friend” status, the fake personas reportedly would
target their victims with spear-phishing emails that introduced malware with capabilities for data
exfiltration.129
Social media is also a target for more traditional-style breaches. Social media and online dating
websites have reportedly been the subject of several hacking attacks, with sensitive information
potentially compromised. Recently, an online dating service, AdultFriendFinder, discovered a
potential security breach that may have compromised members’ personal information, and
reportedly their sexual preferences.130 Snapchat Inc. and Dropbox Inc. were reportedly subject to
user-data thefts in 2014, although issues have been raised as to whether their systems were breached
or the information at issue was stolen from unrelated sites.131 In 2012, a Russian hacker claimed to
have downloaded over six million passwords from LinkedIn; although the passwords were
encrypted, hundreds of thousands of them have reportedly been “cracked” and many posted
online.132 About a week and a half after the breach was reported, a lawsuit seeking $5 million in
damages was filed by one of the site’s users, leading to extensive motion practice on class
certification and what constitutes legally cognizable injuries from a data breach.133 The same week
that the LinkedIn breach was reported, the online dating website eHarmony was reportedly
also the target of a hack in which 1.5 million passwords were stolen.134 More recently, in late 2013,
hackers reportedly stole usernames and passwords for nearly two million accounts at Facebook,
Google, Twitter, Yahoo and others, reportedly as a result of keylogging software maliciously
126 David Jackson, AP Twitter feed hacked; no attack at White House, USA Today, Apr. 23, 2013.
127 Jackson, supra.
128 Emily Alpert, Backers of Syrian president claim credit for AP Twitter hack, Los Angeles Times, Apr. 23, 2013.
129 Leon Spencer, Social Media Central to Iranian Espionage Campaign: Report, ZDNet, May 30, 2014,
http://www.zdnet.com/social-media-central-to-iranian-espionage-campaign-report-7000030028.
130 See DiPietro, Ben, Crisis of the Week: Kiss-and-Tell Fears After Adult Friend Finder Breach, June 1, 2015, The Wall
Street Journal; Grande, Allison, Online Dating Site Uncovers Member Data Breach, May 22, 2015,
http://www.law360.com/articles/659418/print?section+corporate.
131 Grande, Allison, ‘It Wasn’t Me’ Defense Holds Promise for Snapchat, Dropbox, October 17, 2014,
http://www.law360.com/privacy/articles/587730?utm_source=shared-articles&utm_mediaum....
132 Zach Whittaker, 6.46 Million LinkedIn Passwords Leaked Online, ZDNet, Jun. 6, 2012, http://www.zdnet.com/blog/btl/6-
46-million-linkedin-passwords-leaked-online/79290.
133 In Re LinkedIn User Privacy Litigation, U. S. District Court Northern District of California, San Jose Division, Case
No.:5:12-CV-03088-EJD.
134 Salvador Rodriguez, Like LinkedIn, eHarmony is Hacked; 1.5 Million Passwords Stolen, Los Angeles Times, Jun. 6, 2012,
http://articles.latimes.com/2012/jun/06/business/la-fi-tn-eharmony-hacked-linkedin-20120606.
-31-
installed on a number of computers around the world that sent the credentials to a server controlled by
hackers tracked to the Netherlands.135
In May 2014, the “Covert Redirect” security flaw affecting the OAuth and OpenID protocols was
discovered. In essence, the flaw allows cyber-attackers to present the user with what appears to be a
standard login popup; when the user logs in, the login information (or the resulting access token) can be
delivered to the hacker rather than to the intended website. Among the sites these fake log-ins have
reportedly targeted are Facebook, Google+, LinkedIn and Microsoft.136
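To make the mechanics somewhat more concrete, the short sketch below (in Python) illustrates, under stated assumptions, how a “Covert Redirect”-style authorization link could be assembled. The host names, endpoint paths and parameter names are hypothetical placeholders chosen for illustration only; they do not describe any particular provider’s or site’s actual implementation.

# A minimal, purely illustrative sketch of a "Covert Redirect"-style authorization URL.
# All domains, paths and parameter names below are hypothetical assumptions.
from urllib.parse import urlencode

# Assumed weakness: the client site exposes an open redirect that forwards
# visitors to whatever URL is supplied in its "next" parameter.
open_redirect = "https://client.example.com/redirect?" + urlencode(
    {"next": "https://attacker.example.net/collect"}
)

# The authorization request names the genuine provider and the genuine client,
# so the login prompt the user sees may be entirely legitimate; but because the
# redirect_uri routes through the client's open redirect, the resulting token
# or profile data can end up with the attacker instead of the intended website.
auth_url = "https://provider.example.com/oauth/authorize?" + urlencode(
    {
        "response_type": "token",
        "client_id": "legitimate-client-id",
        "redirect_uri": open_redirect,
    }
)

print(auth_url)

The commonly discussed mitigations are for providers to require exact matching of registered redirect URIs and for client sites to avoid exposing open redirects.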
Software developers that create content for social media websites have also become targets for data
thieves and lawsuits. In one illustrative example, a developer that creates online services and
applications for use with social networking sites reportedly suffered a data breach in which
(according to allegations contained in a complaint related to the breach) a hacker stole the email and
social networking login credentials – i.e., user names and passwords – of approximately 32 million
people. The users had been required to provide their login credentials as part of a sign-up process to
gain access to the developer’s applications. A class action suit followed against the developer,
which reportedly settled for minimal payment to the plaintiffs.137 The U.S. Federal Trade
Commission filed charges against the developer over the breach, and that proceeding was reportedly
settled as well.138
The concern about theft of passwords and other user credentials remains, as does the threat of
phishing attacks. Users often use the same passwords on multiple sites, so that a password
stolen from one site, even one for social use without financial or other protected information, can be
used by hackers to try to obtain access to information maintained by a user on other sites.
Social media sites have also become targets for those investigating individuals. Some people
examine the content that users make available on their social media website profiles. For example,
workers’ compensation claim investigators have reportedly examined the profiles of claimants to
determine whether they are engaging in physical activity that their claimed injuries should
prevent.139 Social media content is also reportedly being used as evidence in divorce cases –
according to one survey, more than one-third of divorce filings in 2012 contained the word
“Facebook” and used patterns of behavior that are recorded by Facebook posts, such as those that
135 Jose Pagliery, 2 million Facebook, Gmail and Twitter passwords stolen in massive hack, CNN Money, December 4, 2013,
http://money.cnn.com/2013/12/04/technology/security/passwords-stolen/.
136 See Social Media, Is Covert Redirect Flaw a Big Deal by Jeffrey Roman, http://www.databreachtoday.com/social-media-c-
289 and Social Media Latest to Feel Security Flaw Impact, by Jeff Green, http://www.pymnts.com/news/socialcommerce/
2014/social-media-latest-to-feel-security-flaw-impact
137 Claridge v. RockYou, Inc., No. C 09-6032 PJH (N.D. Ca.). According to reports, RockYou claimed it was financially
unable to pay any judgment. Tim Wilson, RockYou Lawsuit Settlement Leaves Question Marks on Breach Liability, Security Dark
Reading, Nov. 23, 2011, http://www.darkreading.com/insider-threat/167801100/security/privacy/232200192/rockyou-lawsuitsettlement-
leaves-question-marks-on-breach-liability.html.
138 John P. Mello Jr., RockYou Settles Pending Charges for $250K Over Data Breach, PC World, Mar. 7, 2012,
http://www.pcworld.com/article/252725/rockyou_settles_pending_charges_for_250k_over_data_breach.html.
139 Roberto Ceniceros, Comp cheats confess all on social network sites, businessinsurance.com, Sept. 6, 2009.
-32-
arguably relate to parenting skills, excessive partying or disparaging remarks about a spouse. They
are also used in custody and alimony battles.140
Others have identified ways to use publicly available information on social media websites to obtain
information about the site’s users. For example, researchers at Carnegie Mellon University reported
that they were able to successfully guess individuals’ Social Security numbers based on information
on such websites.141 The researchers also claim to have developed an application for iPhones that
can take a photograph of someone and, through the use of facial recognition software, display onscreen
that person’s name and vital statistics.142 Additionally, the researchers reportedly looked at
photographs of anonymous people (many of whom used pseudonyms) on a dating website and,
through facial recognition software and Facebook, were able to identify about 10% of the dating
site’s members.
b. Social Media as Source of Statutory and Regulatory Violations
i. In the U.S.
Social media can be used to obtain information about individuals for less nefarious reasons than
identity theft, but in contexts that can still have an effect on an individual, such as vetting
applicants for employment or tracking the activities of employees. This has raised regulators’ and
legislators’ concerns about the intrusion on individuals’ privacy and has generated new and proposed
laws regulating such uses, as well as regulatory scrutiny.
The increased use of social media in the workplace adds another layer of complexity to privacy
issues. In 2010, the U.S. Supreme Court decided that a public employee who uses an employer-supplied,
text messaging-enabled pager device does have a reasonable expectation of privacy with
regard to personal messages sent on the device. The Court ruled, however, that under a Fourth
Amendment analysis, the employer’s review of two months’ worth of the employees’ text messages
(in order to determine whether they were exceeding their allowable quotas for personal text
messages) was justified.143 Presumably, the Court’s holding would also apply to messages shared
on social media websites via employer-provided hardware.
In May 2012, the Acting General Counsel of the National Labor Relations Board (“NLRB”) issued
a report warning that many provisions routinely included in social media policies – such as blanket
restrictions on the publication of confidential information and rules requiring a professional tone in
online posts – may violate the National Labor Relations Act (“NLRA”) by inappropriately
140 Quentin Fottrell, Does Facebook Wreck Marriages?, Smart Money, May 21, 2012,
http://blogs.smartmoney.com/advice/2012/05/21/does-facebook-wreck-marriages/.
141 Facebook’s Privacy Issues Are Even Deeper Than We Knew, Forbes, Aug. 8, 2011,
http://www.forbes.com/sites/chunkamui/2011/08/08/facebooks-privacy-issues-are-even-deeper-than-we-knew/.
142 Face-matching with Facebook profiles: How it was done, C/NET News, Aug. 4, 2011, http://news.cnet.com/8301-
31921_3-20088456-281/face-matching-with-facebook-profiles-how-it-was-done/. See also Face Recognition Study – FAQ,
http://www.heinz.cmu.edu/~acquisti/face-recognition-study-FAQ/. Facebook’s own facial recognition technology has been the
subject of various inquiries, including a recent Congressional hearing. Kashmir Hill, Sen. Al Franken Grills Facebook and the FBI
Over Their Use of Facial Recognition Technology, Forbes Tech Blog, Jul. 18, 2012,
http://www.forbes.com/sites/kashmirhill/2012/07/18/sen-al-franken-grills-facebook-and-the-fbi-over-their-use-of-facial-recognitiontechnology/.
143 City of Ontario, California v. Quon, 130 S. Ct. 2619, 560 U.S. 746, 177 L. Ed. 2d 216 (Jun. 17, 2010).
-33-
restricting protected concerted activity rights.144 Although much of the report is dedicated to
unlawful policies, the NLRB found one policy – Wal-Mart’s – entirely lawful.145 The key to Wal-
Mart’s confidential information policy was that it included enough examples of prohibited
disclosures that employees could understand the policy did not cover protected concerted activity.
Its social media policy also provided plenty of examples of prohibited conduct, so that it was clear
protected concerted activity was not affected. Further, the NLRB noted that employers have a
legitimate interest in prohibiting intentionally discriminatory or harassing social media conduct
when it may contribute to a hostile work environment.
In September 2012, the NLRB issued its first decision on an employer’s social media policy,
holding that a general prohibition on what employees can say online violates Section 7 of the
NLRA.146 The NLRB rejected an administrative law judge’s (“ALJ”) approval of Costco Wholesale
Corp.’s social media policy, determining that Costco’s policy prohibiting employees from
electronically posting statements that “damage the Company . . . or damage any person’s
reputation” was impermissible under the NLRA as an unlawful restraint on protected concerted
activity rights.147
Subsequent cases show, at best, that there is a fine line between policies that prohibit protected
concerted activity and those that do not. At worst, they show that enforcement is inconsistent. For
example, in a 2014 case, an ALJ found lawful an employer’s policy that “urge[d] all employees not
to post information regarding the Company, their jobs, or other employees which could lead to
morale issues in the workplace or detrimentally affect the Company business.”148 However, in
another case, the NLRB found the following policy unlawful: “[I]t is important that employees
practice caution and discretion when posting content [on social media] that could affect [the
Employer’s] business operation or reputation.”149 In light of these inconsistencies, employers should
use caution and engage legal counsel when implementing or revising social media policies.
In another instance of the intersection between employment law and social media, in November
2010, the NLRB filed a lawsuit against an ambulance company, alleging that it violated federal
labor laws (specifically, an employee’s right to engage in protected concerted activities with other
employees pursuant to the NLRA150) when it fired an employee for posting unflattering comments
about her supervisor on a Facebook page.151 The parties settled in January 2011; the employer
agreed, among other things, to amend its social media policy.152
144 Lafe E. Solomon, Acting General Counsel of the NLRB, Report of the Acting General Counsel Concerning Social Media
Cases, Office of the General Counsel – Division of Operation-Management, May 30, 2012.
145 Id.
146 Costco Wholesale Corp. and United Food and Commercial Workers Union, Local 371, Case 34-CA-012421 (NLRB Sept.
7, 2012).
147 Id.
148 Landry’s Inc., No. 32-CA-118213 (N.L.R.B. A.L.J. June 26, 2014).
149 N.L.R.B Gen. Couns. Mem. GC 15-04 (March 18, 2015).
150 29 U.S.C. § 151 et seq.
151 American Medical Response of Connecticut, Inc. and International Brotherhood of Teamsters, Local 443, Case No. 34-
CA-12576 (NLRB Region 34).
152 Conn. ambulance co. settles Facebook firing case with Labor Board, International Business Times, Feb. 16, 2011
(http://www.ibtimes.com/conn-ambulance-co-settles-facebook-firing-case-labor-board-267649).
-34-
In a similar vein, employment background checks can include information from credit reports,
employment and salary history, criminal records, and social media.153 According to the FTC, the
same rules that apply to other types of information also apply to social media. For example, the
FTC investigated a company selling background information from social media to see if it was
complying with the Fair Credit Reporting Act (“FCRA”). Before the investigation was dropped, the
FTC wrote that “companies selling background reports must take reasonable steps to ensure the
maximum possible accuracy of what’s reported from social networks and that it relates to the
correct person,” as well as comply with other FCRA sections.154 Another data broker subsequently
agreed to pay a large fine following the FTC’s allegation that it failed to ensure the accuracy of the
social media data it marketed to employers.155
Some employers have reportedly been more direct about their review of social media websites and
have requested personal social media account login credentials during the job application process in
an attempt to gain information about the job applicant. Employers’ requests for login credentials
have been widely criticized and, beginning in 2012, states began prohibiting such requests. Now,
the practice is prohibited in some, but not all, states, with varying scope as to what conduct would
be permissible for legitimate business purposes such as investigating improper employee
downloading of an employer’s proprietary information.156
Potentially problematic uses of social media have been reported outside the employment context as
well. In one reported incident, a physician revealed sufficient information about a patient on a
social media site to constitute a breach of patient privacy.157 Judges and lawyers have been
153 According to reports, approximately 91% of employers use social media during their hiring process. The Facebook
Background Check: Using Social Media to Vet Candidates, Ohio State Bar Ass’n,
https://www.ohiobar.org/ForPublic/Resources/LawYouCanUse/Pages/The-Facebook-Background-Check-Using-Social-Media-to-
Vet-Candidates.aspx. See, e.g., Tyra M. Vaughn, Exclusive: Public safety agencies use social media to check applicants’
backgrounds, Daily Press, Sept. 1, 2012, http://articles.dailypress.com/2012-09-01/news/dp-nws-police-hires-facebook-backgoundchecks-
0823-20120901_1_social-media-wight-sheriff-mark-marshall-check-job. In 2014, the EEOC and the FTC jointly issued
guidance for employers and employees regarding employers’ background checks, including social media. See Background Checks:
What Employers Need to Know, available at https://www.ftc.gov/system/files/documents/plain-language/pdf-0142-backgroundchecks-
what-employers-need-know.pdf; Background Checks: Tips for Job Applicants and Employees, available at
http://www.consumer.ftc.gov/articles/pdf-0044-background-checks.pdf.
154 Lesley Fair, The Fair Credit Reporting Act & social media: What business should know, FTC Business Center Blog, Jun.
23, 2011 (http://business.ftc.gov/blog/2011/06/fair-credit-reporting-act-social-media-what-businesses-should-know); Letter from
Maneesha Mithal, Associate Director of the Division of Privacy and Identity Protection, Bureau of Consumer Protection, Federal
Trade Commission, to Renee Jackson, dated May 9, 2011, http://ftc.gov/os/closings/110509socialintelligenceletter.pdf.
155 Spokeo to Pay $800,000 to Settle FTC Charges Company Allegedly Marketed Information to Employers and Recruiters in
Violation of FCRA, ftc.gov, June 12, 2012, https://www.ftc.gov/news-events/press-releases/2012/06/spokeo-pay-800000-settle-ftccharges-
company-allegedly-marketed.
156 According to the National Conference of State Legislatures, 21 states have enacted such legislation intended to protect the
privacy of prospective and current employees (and in some states a student) including, as of June 2015, Arkansas, California,
Colorado, Connecticut, Delaware, Illinois, Louisiana, Maine, Maryland, Michigan, Montana, Nevada, New Hampshire, New Jersey,
New Mexico, Oklahoma, Oregon, Rhode Island, Tennessee, Utah, Vermont, Virginia, Washington and Wisconsin, www.ncsl.org.
See, e.g., California Chapter 618 of 2012; Illinois Public Act 097-0875(2012); Maryland Chapters 233 and 234 of 2012; Michigan
Public Act No. 478 (2012), New Jersey P.L. 2012, C.75, and 40 Okla. Stat. § 173.2. Similar legislation has been introduced or is
pending in a number of other states. Additionally, a federal version of a password protection statute, the Password Protection Act of
2015, was introduced in May of 2015, although previous versions of the bill did not make it out of committee. See H.R. 2277, 114th
Cong. (2015).
157 Chelsea Conaboy, For doctors, social media a tricky case, Boston Globe, Apr. 20, 2011,
http://www.boston.com/lifestyle/health/articles/2011/04/20/for_doctors_social_media_a_tricky_case/.
-35-
sanctioned for communications through social media,158 and an Israeli army mission was aborted in
2010 when a soldier revealed the mission on Facebook.159
The use of social media is likely to continue to expand. Banks and lenders are expected to
incorporate social media conversations into their analysis of credit risk.160 For example, some
online comments may be interpreted by lenders as an indicator that an applicant may be delinquent
on a future loan or a possible credit risk. Research is reportedly being conducted to identify
correlations between online (social media) comments and possible credit issues, which could lead to
a form of “social media underwriting” in the future.
In addition, schools are now attempting to use posts on Facebook accounts as evidence and as grounds
for punishing students. For instance, a lawsuit filed in March 2014 by the American Civil
Liberties Union of Minnesota alleged that a former student’s free speech and privacy rights were
violated when the student was unfairly punished for comments posted to her Facebook page. The
punishment included detention and suspension, and she was forced to turn over passwords to her
Facebook and email accounts. The case settled, with Pope County in west central Minnesota
paying $70,000 and the school district agreeing to changes in its policies regarding student privacy.161
These are just the tip of the iceberg for privacy issues arising from social media. Social media is
certain to present increasing challenges to privacy and data security. Concerns about the adequacy
of security of individuals’ information on social media sites have caught the attention of U.S.
regulators. In early May 2014, the U.S. Federal Trade Commission announced a settlement of
charges filed against Snapchat Inc., a popular mobile message application, which was charged with
deceiving its customers by promising that photo and video messages on its site would disappear
shortly after being sent, when in fact there were methods by which a recipient could use tools
outside of the application to save photo and video messages indefinitely.162 There has also been
regulatory scrutiny of Google and other large social media companies worldwide, with actions
taken against them by regulators not only in the U.S. but in countries around the world that are
concerned with the adequacy of privacy controls and the collection of information about
individuals.
158 See, e.g., Judge resigns amid probe about Facebook friend, Atlanta Journal-Constitution, Jan. 7, 2010.
159 Israeli military calls off raid after soldier posts details, cnn.com. Mar. 3, 2010, available at
http://www.cnn.com/2010/WORLD/meast/03/03/israel.raid.facebook/index.html.
160 Ken Lin, What Banks and Lenders Know About You from Social Media, Mashable Social Media, Oct. 7, 2011,
http://mashable.com/2011/10/07/social-media-privacy-banks/. One UK-based company marketed itself for its utilization of social
media profiles when underwriting online retailers. capExpand Use Social Media to Underwrite Online Businesses, Jan. 31, 2013,
http://www.bloomberg.com/article/2013-01-31/arp4NczdKsjs.html.
161 Susan Lunneborg, Facebook lawsuit settled: Minnewaska Area agrees to update student privacy policies to address
electronic media, Western Central Tribune, http://www.wctrib.com/content/facebook-lawsuit-settled-minnewaska-area-agreesupdate-
student-privacy-policies-address; Susan Lunnenberg, Western Minnesota student’s free speech suit over Facebook comments
settled for $70K, http://www.twincities.com/localnews/ci_25419690/western-minnesota-students-free-speech....
162 Snapchat Settles FTC Charges That Promises of Disappearing Messages Were False, May 8, 2014, www.ftc.gov/newsevents/
press-releases/2014/05/snapchat-settles-ftc-charges-that-promises-of-disappearing-messages-were-false. See Judy Greenwald,
FTC slaps Snapchat for overpromising users on privacy, security, Business Insurance, May 9, 2014,
http://www.businessinsurance.com/article/20140509/NEWS07/140509821?tags
-36-
In the U.S., FTC complaints and settlements with social media giants have led to agreements providing for 20
years of auditing, as well as eight-figure fines. For example, the FTC brought a complaint against
Facebook in 2011, focusing primarily on changes that the company allegedly made to its privacy
controls in 2009, which led to the automatic sharing of information and users’ pictures even if they
previously chose to not share that content.163 The FTC also contended that Facebook shared its
users’ personal information with third-party advertisers despite several public assurances from the
company that it did not. As part of a settlement reached with the FTC, Facebook agreed to submit
to government audits of its privacy practices every other year for the next 20 years and committed
to obtaining explicit approval from users before changing the types of content it makes public. 164
Facebook did not, however, admit any wrongdoing as part of the settlement. That settlement and
the privacy issues it addressed were the basis for a March 2014 complaint filed by privacy
watchdog groups with the FTC challenging the proposed sale of WhatsApp (a company that offers
an instant messaging service) to Facebook, based in part on the issue of whether Facebook’s
policies toward privacy are incompatible with WhatsApp’s “pro-privacy” stance.165 This has
resulted in a statement by the FTC of which social media sites that are buyers or sellers of other
sites should take note. By letter dated April 10, 2014, the Director of the FTC’s Bureau of Consumer Protection noted that both
companies collect data from consumers but make different promises and statements with regard to
consumers’ privacy, with WhatsApp’s promises exceeding the protections currently promised to
Facebook users, and that “we want to make clear that, regardless of acquisition, WhatsApp must
continue to honor these promises to consumers.”166
More recently, in December 2014, the FTC entered into a settlement with Snapchat, Inc., also
involving 20 years of compliance assessments and reporting with regard to privacy controls.167
In 2012, Google also reached a settlement with the FTC in which it agreed to pay $22.5 million
to settle charges that it secretly bypassed the privacy settings of millions of users of Apple’s Safari web
browser.168 The settlement reflects a penalty for a violation of a prior order, a consent
decree in which Google agreed in October 2011 not to misrepresent its privacy practices to
consumers.169 The FTC alleged that Google used cookies to monitor Safari users’ web browsing
despite advising the users that they would automatically be opted out of any such tracking. Google,
however, did not admit any wrongdoing, which led to an objection to the settlement by FTC
163 FTC finalizes Facebook privacy settlement, USA Today, Aug. 10, 2012, http://www.usatoday.com/tech/news/story/2012-
08-10/ftc-facebook-privacy/56934670/1.
164 The settlement was finalized once the period for public comment ended. Kristin Jones, FTC: Facebook Finalizes
Settlement of Privacy Charges, The Wall Street Journal, Aug. 10, 2012,
http://online.wsj.com/news/articles/SB10000872396390443404004577581150743022604
165 Seth Rosenblatt, Privacy groups ask FTC to block Facebook-WhatsApp deal, CNET, March 6, 2014,
http://www.cnet.com/news/privacy-groups-ask-ftc-to-block-facebook-whatsapp-deal/.
166 FTC Notifies Facebook WhatsApp of Privacy Obligations in Light of Proposed Acquisition, http://www.ftc.gov/newsevents/
press-releases/2014/04/ftc-notifies-facebook-whatsapp-privacy-obligations-light-proposed
167 In the Matter of Snapchat, Inc., Docket No. C-4501, United States of America, Federal Trade Commission (December 23,
2014 Decision and Order).
168 Google Will Pay $22.5 Million to Settle FTC Charges it Misrepresented Privacy Assurances to Users of Apple’s Safari
Internet Browser, FTC, Aug. 9, 2012, http://www.ftc.gov/news-events/press-releases/2012/08/google-will-pay-225-million-settle-ftccharges-
it-misrepresented.
169 Jennifer Valentino-Devries, Google to Pay $22.5 Million in FTC Settlement, The Wall Street Journal, Aug. 9, 2012,
http://online.wsj.com/article/SB10000872396390443404004577579232818727246.html.
-37-
Commissioner J. Thomas Rosch, as well as a motion filed by Consumer Watchdog in the federal court
in California responsible for approving or rejecting the proposed settlement.170 The court
approved the settlement despite the objection.171
Regulatory investigations and actions are likely to increase, as the challenge of balancing
innovation and privacy continues.
ii. In the UK
In the UK, employees’ use of social media raises legal issues as well, in contexts similar to those of
concern in the U.S.
Two issues in particular have arisen. The first is the use of LinkedIn – particularly, the extent to which employees are able to move to a
competitor, taking with them LinkedIn connections made during their employment. The second is the
extent to which employers are able to discipline, and even dismiss, employees for using social
media (for example, Facebook or Twitter) to make unflattering comments regarding their employer
and/or, in circumstances where there is some connection between their social media usage and their
work, posting offensive comments.
In the first context, the issue is to what extent an employee who has developed LinkedIn
connections with clients and contacts of his/her employer owns/is able to use those contacts once
they move to a competitor. In the UK case of Pennwell Publishing (UK) Limited –v- Ornstein,172
the UK High Court held that an address list contained in Outlook or some similar programme
which is part of the employer’s email system and backed up by the employer will belong to the
employer, a decision based on the Copyright and Rights in Databases Regulations 1997
(the “Database Regulations”).
The question of who owns LinkedIn contacts has not been determined by the UK Courts. Clause 2B
of the LinkedIn User Agreement, to which every LinkedIn account holder is a party, provides: “You
own the information you provide LinkedIn under this Agreement.” At some point, the UK Courts
will need to determine whether, in line with Pennwell, the collection of contacts on LinkedIn means
that, as a result of the Database Regulations, ownership is with the employer. The most relevant
case thus far is Hays Specialist Recruitment (Holdings) Ltd –v- Ions,173 in which Hays, which runs
specialist recruitment employment agencies, alleged that an employee who left to set up his own
competing company had uploaded client and candidate details from its confidential database to his
own LinkedIn account and was using them for his new company. Whilst the issue of ownership as such
was not determined, Hays successfully obtained an order for pre-action disclosure of all documents
evidencing the use made and business obtained by the employee and his company from business
contacts uploaded by him to LinkedIn whilst he was employed by Hays. The judge held in the
case that Hays had reasonable grounds for considering it might have a claim against him as regards
170 Consumer Watchdog raises issue with the settlement because it allows Google to deny any wrongdoing. Juan Carlos Perez,
Consumer Watchdog Challenges Google-FTC Privacy Settlement, PC World, Aug. 22, 2012,
http://www.pcworld.com/businesscenter/article/261282/consumer_watchdog_challenges_googleftc_privacy_settlement.html.
171 United States of America v. Google, Inc., Case No. CV12-04177 SI, U. S. District Court, Northern District of California.
172 Case References [2007] IRLR 700.
173 [2009] IRLR 904
-38-
the transfer of information concerning clients and applicants by uploading it to his LinkedIn network
whilst still employed by Hays.
A further connected issue is whether an employee, by updating their LinkedIn profile, will be
“soliciting” clients to whom they are connected, contrary to any non-solicitation covenant within their
contract with their former employer. The UK Courts have not yet considered this issue. Existing
case law suggests that, certainly, if in updating the profile the employee actively encourages
contacts to get in touch or direct business to him/her, this would amount to solicitation. If the
employee merely updates his/her current job without more, the position is less clear cut. It can be
argued this does not involve any request to do business with the contact. It can also be argued the
other way. Employers in the UK with a particular concern regarding this issue should consider
amending restrictive covenants to try to make the position clear cut.
The second context noted above raises interesting issues regarding the extent to which employees’
usage of social media, such as Facebook, can be regarded as entirely private and personal or, in
certain circumstances, as having a potential impact on the employer, thereby entitling an
employer to scrutinise it and, in appropriate circumstances, to take disciplinary action.
This is an area where the law is likely to continue to develop in the UK. In one key case, Smith –v-
Trafford Housing Trust174, the UK High Court held that an employer had not been entitled to
categorise an employee’s postings on Facebook regarding his views on gay marriage as
misconduct. This was so notwithstanding that the employee had identified himself as a manager
of the employer on Facebook. The High Court took the view, however, that it was clear from the
page that the employee was not using it for work-related purposes.
A more recent case, Game Retail –v- Laws175 considered an employee’s use of Twitter. The
employer, a games retailer with more than 300 stores, used Twitter and other social media for
marketing and communication purposes. The employee was employed as a risk and loss prevention
investigator and, in such role, was responsible for a number of the employer’s stores. He opened a
personal Twitter account and allowed some 65 of the employer’s stores to follow him on that Twitter
account. He made no attempt to restrict settings, which meant that his Twitter postings were public.
He made a number of offensive postings on his Twitter account, albeit these were not directed
against his employer but against a variety of targets, including caravan drivers, dentists and golfers,
as well as supporters of a particular football team, the police and disabled people. Following a
disciplinary process, the employer summarily dismissed the employee for gross misconduct. The
case then came to be considered by the UK Employment Appeal Tribunal (first level of appeal in
the UK Employment Tribunal system). Within its judgement the EAT indicated that there was “a
balance to be drawn between an employer’s desire to remove or reduce reputational risk from social
media communications by its employees and the employee’s right of freedom of expression.” On
the facts of the case, the EAT considered it relevant that there was extensive use
by the employee of Twitter for work purposes and that the employee had made no attempt to restrict
publication of his postings using settings on his account or by creating a work account to follow the
stores and a separate private account. When looking at whether or not the employer’s dismissal was
fair for UK unfair dismissal purposes, the EAT indicated the issue was whether or not the employer
174 [2013] IRLR 86
175 UKEAT/0188/14DA
-39-
was entitled to reach the conclusion that the postings might have caused offence and that it did not
matter that nothing derogatory was said regarding the employer itself. “The issue was whether the
material was, of its nature, offensive and might be going to the employer’s employees, contrary to
its harassment policy or to customers or potential customers.”
Thus, whilst indicating that “generally speaking employees must have a right to express
themselves, providing it does not infringe on their employment and/or is outside the work context,”
this case makes clear that it will be appropriate in the UK for an employer to take disciplinary
action, and may be appropriate to dismiss, where social media usage has a clear connection to work
and is clearly inappropriate.
Key points for UK employers coming out of this are to have clear policies on the use of social
media, spelling out what is and is not permitted, and to provide appropriate training to staff on this.
That would assist in ensuring that employee behavior is compliant and, where it is not, that
appropriate disciplinary action can be taken.
Additional issues in the UK arise out of employers’ use of social media when carrying out pre-employment
vetting as part of recruitment processes, and out of the use of social media by employees to
bully or harass work colleagues, contrary to their employers’ policies.
7. Privacy Issues Arising Out of Behavioral Advertising and Online Tracking
Targeted advertising has become ubiquitous. Digital advertising is a $100 billion industry.176
Significant privacy concerns have recently been raised by regulators and in a rash of class actions
arising from targeted advertising and tracking of consumer behavior by companies that market
online and via mobile devices.
a. In the United States
i. The FTC Recommendations
The FTC defines Online Behavioral Advertising (“OBA”) as a process of “tracking consumers’
activities online to target advertising.”177 It often, but not always, includes a review of the searches
consumers have conducted, the Web pages visited, the purchases made, and the content viewed, in
order to deliver advertising tailored to an individual consumer’s interests.
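For readers unfamiliar with the underlying mechanics, the simplified sketch below (in Python) illustrates the general idea of cookie-based tracking across sites that embed the same advertising network. The cookie name, domains and “interest” logic are hypothetical assumptions used only for illustration and do not reflect any particular company’s practices or the FTC’s description of any specific system.

# A minimal, hypothetical sketch of cookie-based behavioral tracking.
# Domain names, the cookie name and the profile store are illustrative only.
from http.cookies import SimpleCookie
from uuid import uuid4

profiles = {}  # tracker_id -> list of pages on which ads were shown

def handle_ad_request(cookie_header, referring_page):
    """Called each time a page embeds an ad from the (hypothetical) ad network."""
    cookie = SimpleCookie(cookie_header or "")
    tracker_id = cookie["track_id"].value if "track_id" in cookie else str(uuid4())

    # Record the page against the tracker ID, gradually building up a browsing
    # history (an "interest profile") for that browser.
    profiles.setdefault(tracker_id, []).append(referring_page)

    # Choose an ad category based on the accumulated history (crude example).
    interests = profiles[tracker_id]
    ad = "travel-ad" if any("flights" in page for page in interests) else "generic-ad"

    # Re-issuing the identifier lets the same browser be recognised on any other
    # site that embeds the same ad network.
    set_cookie_header = "track_id=" + tracker_id + "; Path=/"
    return ad, set_cookie_header

ad, set_cookie = handle_ad_request(None, "https://news.example.com/cheap-flights")
print(ad, set_cookie)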
The FTC has taken a strong interest in privacy issues presented by OBA. In a Final Report released
in March 2012, the FTC set forth its Final Framework, which affirmed that the guidance will apply
to OBA data that is reasonably linkable to a specific consumer, computer or device, including data
not yet linked which may reasonably become so.178 Among other factors, the FTC referenced two
categories of comments that influenced its decision to maintain this definition in the final March
176 Ken Doctor, The newsonomics of GAFA’s global reach, Nieman Journalism Lab, Mar. 21, 2013,
http://www.niemanlab.org/2013/03/the-newsonomics-of-gafas-global-reach/.
177 FTC Staff, FTC Staff Report: Self-Regulatory Principles for Online Behavioral Advertising at p. 2, Feb. 2009,
http://www.ftc.gov/news-events/press-releases/2009/02/ftc-staff-revises-online-behavioral-advertising-principles.
178 Protecting Consumer Privacy in an Era of Rapid Change: Recommendations for Businesses and Policymakers, at pp. 18 -
20, March 26, 2012, http://ftc.gov/os/2012/03/120326privacyreportpdf.
-40-
2012 report: (a) “. . . commenters pointed to studies demonstrating consumers’ objections to being
tracked, regardless of whether the tracker explicitly learns a consumer name, and the potential for
harm, such as discriminatory pricing based on online browsing history, even without the use of PII”;
and (b) “. . . commenters noted, the ability to re-identify ‘anonymous’ data supports the proposed
framework’s application to data that can be reasonably linked to a consumer or device.”179 The
final March 2012 Framework responded to businesses’ concerns that the “reasonably linkable”
definition may be overbroad in practice, by stating that:
… a company’s data would not be reasonably linkable to a particular consumer or device to
the extent that the company implements three significant protections for that data. First, the
company must take reasonable measures to ensure that the data is de-identified…. Second,
a company must publicly commit to maintain and use the data in a de-identified fashion, and
not to attempt to re-identify the data…. Third, if a company makes such de-identified data
available to other companies – whether service providers or other third parties – it should
contractually prohibit such entities from attempting to re-identify the data.180
In March 2013, the FTC released updated guidance for mobile and other online advertisers that
focused on explaining how to make their disclosures to consumers of their practices clear and
conspicuous in order to avoid charges of deceptive practices (updating guidance initially released in 2000).
This FTC Guidance is entitled .com Disclosures: How to Make Effective Disclosures in Digital
Advertising.181
In a continued effort to address issues raised by rapidly changing technologies allowing for
progressively greater collection and use of consumer information, in November 2013, the FTC
hosted a workshop titled “The Internet of Things – Privacy & Security in a Connected World,”182
which focused on the collection and use of information across multiple devices (like a consumer’s
smartphone and tablet, as well as Internet activity from their home computer). Although the workshop
did not signal any actual new regulations or guidance from the FTC, panellist remarks throughout
the course of the workshop provide some insight into where regulation may be headed in the future.
In particular, the panel’s remarks indicate that regulatory measures may include heavy emphasis on
appropriate use of information based on the context in which it was collected, possible efforts to
provide for some level of standardization in privacy disclosures, and potential evaluations of the
viability of traditional notice and choice regimes in contexts where it is not realistic for consumers
to evaluate provided disclosures (such as in connection with interactions via smartphones).
As part of its continued efforts to “help ensure that consumers enjoy the benefits of technological
progress without being placed at risk of deception and unfair practices,” on March 23, 2015, the FTC
announced the formation of the Bureau of Consumer Protection’s Office of Technology Research
179 Id. Also noting that commenters to the December 2010 draft report “pointed to incidents, identified in the preliminary staff
report, in which individuals were re-identified from publicly released data sets that did not contain PII.” Id. at p. 18.
180 Id. at p. 21.
181 The Guidance is available through the FTC website, www.ftc.gov.
182 Information about and the video of this workshop is available at https://www.ftc.gov/news-events/eventscalendar/
2013/11/internet-things-privacy-and-security-connected-world.
and Investigation (“OTRI”), to continue the work performed by the FTC’s Mobile Technology
Unit.183
The FTC has repeatedly testified before Congress both on the Agency's own efforts and in favour of proposed legislation. On May 15, 2014, the FTC testified before Congress on the agency's ongoing efforts to protect consumers from emerging threats related to online advertising and the associated tracking of consumers' online activities across websites, outlining steps the agency is taking through enforcement actions as well as consumer education.184
The FTC has continued to testify before Congress on privacy issues, including providing feedback
on proposed legislation.185
The FTC has continued to take an aggressive stance in addressing privacy-related issues.186 In the context of its concern about privacy issues associated with OBA, it has initiated enforcement proceedings and announced enforcement consent orders against companies for delivering OBA without consumer consent, generally alleging "deceptive" acts in violation of the FTC Act and imposing ongoing reporting requirements of as long as 20 years.187 Enforcement actions continue. On August 9, 2012, the FTC and Google entered into a consent decree resulting in a $22.5 million fine – the largest ever awarded by the FTC – for Google's alleged use of cookies to circumvent users' privacy settings in Apple's Safari browser. This allegedly caused users who had elected not to receive targeted ads to be served with Google's targeted ads anyway.188 Other examples include the
March 2013 FTC announcement that it settled charges against Epic Marketplace, an online ad network that allegedly used "history sniffing" to gather data from millions of consumers across sites they visited and ads they reviewed, including collection of data about sites outside its network relating to personal health conditions and finances.189 More recently, on April 23, 2015, the FTC announced a settlement with Nomi Technologies which, according to the FTC Complaint, uses mobile device tracking technology to provide analytics services to brick-and-mortar retailers through its "Listen" service, and allegedly had been collecting information from consumers' mobile devices to provide the Listen service since January 2013, without providing an opt-out mechanism at clients' retail locations, contrary to alleged representations. While the reported settlement did not include a monetary fine, its directives as to representations on options for consumer control over
183 FTC. BCP’s Office of Technology Research and Investigation: The next generation in consumer protection, March 23,
2015, https://www.ftc.gov/news-events/blogs/business-blog/2015/03/bcps-office-technolgy-research.
184 See FTC May 15, 2014 press release and the Prepared Statement of The Federal Trade Commission on Emerging Threats
in the Online Advertising Industry before the Committee on Homeland Security and Governmental Affairs, Permanent Subcommittee
on Investigations, United States Senate, May 15, 2014, http://www.ftc.gov.
185 See, e.g., FTC Testified on Proposed Data Security Legislation Before House Energy and Commerce Committee’s
Commerce, Manufacturing and Trade Subcommittee, March 18, 2015, https://www.ftc.gov/news-events/press-releases/2015/03/ftctestifies-
proposed-data-security.
186 See Section III. 2.a., FTC Regulation of Privacy and Data Protection, below.
187 In the Matter of Chitika, Inc., the FTC pursued Chitika for having an “opt-out” for behavioral advertising that expired after
10 days, alleging that this was a “deceptive” practice because the opt-out was not meaningful. Chitika now has a 20-year reporting
requirement to the FTC. In August 2011, the FTC pursued its first mobile app complaint, resulting in a consent decree against a
mobile advertiser that served targeted ads to children under the age of 13 in violation of COPPA. United States of America, Plaintiff
v. W3 Innovations, LLC, also d/b/a Broken Thumbs Apps, http://www.ftc.gov/opa/2011/08/w3mobileapps.shtm. On November 8,
2011, the FTC entered into a consent order against a digital third-party advertiser, Scanscout, for its alleged use of flash cookies to
target advertising.
188 http://www.ftc.gov/os/caselist/c4336/120809googlecmptexhibits.pdf.
189 Epic Marketplace, Inc. No. C-4389 (FTC March 2013), see http://www.ftc.gov/enforcement/cases-proceedings/112-
3182/epic-marketplace-inc.
collection, use, disclosure, or sharing of information collected from or about consumers or their computers or devices included making compliance information available to the FTC for 5 years, delivering a copy of the order to all subsidiaries and personnel for 10 years, and continuation of the consent order for 20 years.190
ii. Industry Self-Regulation
Faced with increasing regulatory oversight and enforcement actions, the online advertising industry
has increased its self-regulation of OBA. 191
The Digital Advertising Alliance (DAA) issued its Self-Regulatory Principles for Online Behavioral Advertising in 2009, followed by implementation guidelines for a Self-Regulatory Program in 2010.192 It has continued to issue various guidelines on implementation of
its Principles, such as its July 2013 Application of Self-Regulatory Principles to the Mobile
Environment. 193 Recently, the DAA announced that enforcement of its Principles in the mobile
environment will begin on September 1, 2015. 194
Other organizations such as the World Wide Web Consortium Tracking Protection Working Group
have also been working on issuing self-regulatory guidelines.195 The Direct Marketing Association
(“DMA”) has also issued OBA guidelines underscoring several Self-Regulatory Principles
(Education, Transparency, Consumer Control, Data Security, Material Changes, Sensitive Data, and
Accountability) set forth by various other advertising organizations and the Better Business Bureau
in response to FTC calls for greater transparency, knowledge, and choice by consumers.196 The
Council of Better Business Bureaus administers the Online Interest-Based Advertising
Accountability Program, which includes initiation of formal enforcement of the Self-Regulatory Principles for Online Behavioral Advertising, and has endorsed the self-regulatory principles drafted by the DAA. Companies agree to voluntarily modify their
practices to comply with the Principles.197 The BBB provides an avenue for consumers to report
consumer-unfriendly online advertising practices through the Better Business Bureau website,198
and allows consumers to generally describe such non-compliance as well as categorize it into one of
several types: no icon, poor disclosure, no opt out, not easy to use, or opt out ignored. Through the Accountability Program,199 the Better Business Bureau issued a Compliance Warning
190 FTC, Retail Tracking Firm Settles FTC Charges it Misled Consumers About Opt Out Choices, April 23, 2015, https://www.ftc.gov/news-events/press-releases/2015/04/retail-tracking-firm-settled-ftc-charges.
191 See Christopher Mickus, Technology: FTC and self-regulatory frameworks regarding online behavioral advertising, Inside
Counsel, October 18, 2013, http://www.insidecounsel.com/2013/10/18/technology-ftc-and-self-regulatory-frameworks...
192 See www.iab.org website for details, e.g., http://www.iab.net/public_policy/behavioral-advertisingprinciples.
193 http://www.aboutads.info/DAA_Mobile_Guidance.pdf.
194 Digital Advertising Alliance Announces Mobile Privacy Enforcement to Begin September (2015), May 7, 2015,
http://www.digitaladvertisingalliance.org/content.aspx?page=media&id=Media7.
195 See http://www.w3.org/2011/tracking-protection/.
196 http://www.dmaresponsibility.org/privacy/oba.shtml.
197 See www.bbb.org website.
198 http://www.bbb.org/online-behavioral-advertising/report-form/.
199 http://www.bbb.org/council/for-businesses/advertising-review-services/online-interest-based-advertising-accountabilityprogram/.
in 2013. Since then, it has issued several enforcement actions for non-compliance with the
Principles, in an effort to demonstrate that self-regulation is working. 200
iii. Do Not Track Class Actions
Consumers are claiming that tracking their activities online or on their mobile devices violates their right to privacy, generally alleging a variety of state and federal statutory and common law claims. The class action bar has filed more than 150 putative class action lawsuits alleging violations of the Electronic Communications Privacy Act ("ECPA"), the Computer Fraud and Abuse Act ("CFAA"), and state laws.
The ECPA prohibits access to and tracking of user behavior without consent. Initially, the primary defenses to the ECPA claims were motions to dismiss claiming lack of Article III standing (no injury in fact) and consent. However, starting around June 2012, courts (particularly those in California) held in several cases that plaintiffs had sufficiently alleged harm to avoid dismissal of their complaints, and the U.S. Supreme Court at that time decided not to address whether statutory damages could constitute injury in fact, thus raising issues as to the continued viability of the harm defense to future privacy class actions, at least in those jurisdictions.201 Thus, the focus of the defense is generally on the sufficiency of disclosures and consent provisions on the website and contained in a Privacy Policy and Terms of Use, with some courts showing a willingness to infer
200 Accountability Program Continues Compliance Sweep: Websites Take Responsibility for Informing Consumers about Data
Collection, May 14, 2015, http://www.asrcreviews.org/2015/05/accountability-program-continues-compliance-sweep-websites-takeresponsibility-
for-informing-consumers-about-data-collection/.
201 See Section VII on privacy-based litigation below. As noted there, the U.S. Supreme Court on April 27, 2015 agreed to
hear an appeal of a decision from the Ninth Circuit holding that statutory violations can confer standing, in Robins v. Spokeo, Inc.,
135 S. Ct. 323 (April 27, 2015).
The Northern District of California decision in the In re iPhone/iPad Application Consumer Privacy Litigation, N.D. Cal. Case No.
5:11-md-02250, on June 12, 2012, had held that Article III harm had been alleged where the plaintiffs identified the specific
applications that tracked their behavior and other harm besides just invasion of privacy. This was a reversal of the Northern District’s
original position finding no Article III harm as it related to the original complaint. In the June 12, 2012 Order, the Northern District
stated: “Plaintiffs have articulated additional theories of harm beyond their theoretical allegations that personal information has
independent economic value. In particular, Plaintiffs have alleged actual injury, including: diminished and consumed iDevice
resources, such as storage, battery life, and bandwidth,… increased, unexpected, and unreasonable risk to the security of sensitive
personal information …; and detrimental reliance on Apple’s representations regarding the privacy protection afforded to users of
iDevice app. Additionally, Plaintiffs have addressed the deficiencies identified in the Court’s September 20 Order.” (June 12, 2012
Order, Dkt 69, at 10.)
At that time, on the heels of the Northern District’s decision, the Supreme Court decided not to address the harm threshold
issue. As background, in late 2011, the U.S. Supreme Court took up the issue of standing in the case of First American Financial
Corp. v. Edwards. The Ninth Circuit Court of Appeals had ruled that statutory damages alone are enough to confer Article III
standing on a plaintiff under the Real Estate Settlement Procedures Act (RESPA). The high court was expected to address its prior
decisions that had held that Article III’s “case and controversy” provision required that a plaintiff allege, and eventually prove, that
he or she suffered an “actual injury” as a result of a defendant’s conduct in order to have standing to sue in federal court. However,
after briefing and argument were complete, rather than upholding or overturning the lower court's decision, on June 28, 2012, the
Supreme Court simply dismissed the appeal as “improvidently granted.” The Supreme Court’s decision may be found at
http://www2.bloomberglaw.com/public/document/First_American_Financial_Corp._v_Edwards_No_10708_2012_BL_160940_U.
At least one court has held, after the First American Financial Corp. decision, that plaintiffs had standing to assert claims for privacy violations relating to tracking, and that the defendant's motion to dismiss based upon lack of harm failed. See, e.g., In re Hulu
Privacy Litigation (Case No. 3:11-cv-03761, Dkt 68, Jul. 28, 2012 Order).
consent if a consumer has reviewed a privacy policy that discloses tracking. Other defences include
focusing on the requirements of ECPA for disclosure of contents of a communication. 202
The CFAA makes it unlawful to track user browsing behavior where doing so causes at least $5,000 in economic loss, and thus the focus of the defense is generally on the sufficiency of allegations and evidence of economic loss.203
The plaintiffs’ bar has also filed lawsuits relying on other privacy statutes, such as the Video
Privacy Protection Act (“VPPA”), to pursue claims. For example, In re Hulu Privacy Litigation
considers whether a provider of online streaming digital video content qualifies as a “video tape
service provider” under the VPPA.204 The Hulu court found that Hulu was a “video tape service
provider” as defined by the VPPA, even as it pertained to free content on its website that users
(plaintiffs) could stream without affirmatively registering for or subscribing to the Hulu service.
The court referred to the legislative history of the VPPA as showing Congress’s intent to “ensure
that VPPA’s protections would retain their force even as technologies evolve.”205 The court was
also persuaded that Congress intended the VPPA to provide broad protection for consumers’
privacy. This decision cleared the way for the plaintiffs to allege statutory damages of $2,500 per
violation for millions of page views, assuming the other hurdles identified in the decisions are
overcome. Later decisions in the case demonstrated that the outcome can turn on the specific facts
of the case rather than a successful dispute of the principles espoused. 206 The extensive
202 In early 2014, the Ninth Circuit (which hears appeals from federal trial level courts sitting in California) addressed the
viability of claims of violation of the ECPA, among other statutory violations in Zynga Privacy Litigation, No. 11-18044, 2014 WL
1814029 (Ninth Cir., May 8, 2014) (in which social network and social gaming users brought class actions against a social
networking company [Facebook] and social gaming company [Zynga] alleging violations of the Wiretap Act and Stored
Communications Act provisions of the ECPA by disclosing user information to third parties; the court concluded that the plaintiffs
failed to state a claim for violation of ECPA because they did not allege disclosure of the “contents” of a communication, which it
found to be a necessary element of an ECPA claim). See also section below on privacy related litigation which identifies several
decisions addressing the issues of standing and what courts have found to be sufficient allegations of harm.
203 A recent decision discussing such claims under CFAA as well as California state law (and citing to decisions in other
jurisdictions) is In re Google Android Consumer Privacy Litigation, No. 11-MD-02264, 2014 WL 988889 (N.D. Cal., March 10,
2014). In that decision, Google moved to dismiss claims based on allegations that apps improperly collected personal data from their
Android mobile phones and shared this data with Google. The motion to dismiss was based on the argument that plaintiffs lacked
standing under Article III of the U.S. Constitution. Plaintiffs in turn argued they had sufficient "injury in fact" based on allegations, among others, of the increased rate at which their batteries discharged as a result of defendants' conduct. The Court dismissed the
CFAA claims, noting that CFAA defines “loss” as “any reasonable cost to any victim…” and concluding that the allegations were
insufficient to establish damage or loss. However, the Court did allow to proceed a claim under the state Unfair Competition Law, even though that statute also requires a plaintiff to have lost money or property to have standing to sue, on the grounds that allegations of diminished battery life were sufficient under the state statute. Other causes of action also survived the motion to dismiss, at least at this lower court stage of the litigation. See also In re iPhone Litigation, No. 11-MD-02250, 2013 WL 6212591
(N.D. Cal. Nov. 25, 2013), dismissing such state law claims on summary judgment on grounds that while a fact issue existed as to
whether consumers suffered “injury-in-fact” that was economic in nature, the consumers lacked standing based on their failure to
allege specific facts showing (or at least demonstrating a genuine issue of material fact) that they read and relied upon manufacturer’s
alleged misrepresentations as to its practices and suffered economic injury as a result of that reliance, i.e., causation as well as actual
reliance.
204 In re Hulu, No. 3:11-cv-03764-LB (N.D. Cal. Aug. 10, 2012); see also decision on summary judgment motions 2014 WL
1724344 (N.D. Cal. April 28, 2014) (granting Hulu summary judgment on the comScore disclosures which were demonstrated to be
anonymous disclosures when sent by Hulu, but denying as to the Facebook disclosures on the ground that there were material issues
of fact about whether the disclosure of the video name was tied to an identified user such that it was a prohibited disclosure under the
VPPA, and because the record was not developed sufficiently to determine as a matter of law whether Hulu knowingly disclosed
information or whether Hulu users consented to the disclosures).
205 Id., August 10, 2012 decision, at 9.
206 In a March 31, 2015 decision, the District Court granted Hulu’s motion for summary judgment based on the specific facts
of the case, thus leaving the door open to a finding of violation of VPPA in other cases with different facts. In Re: Hulu Privacy
proceedings in this case demonstrate the risks, costs and uncertainties of such litigation, and the
advisability of investing in compliance rather than lawsuits.
iv. Do Not Track Legislation
Calls for a national “Do Not Track” law have continued to be unsuccessful, although the Obama
Administration has supported one and numerous bills have been proposed.
The Obama Administration has called for a universal privacy bill since March 2011, and expressly
supported the FTC’s “Do Not Track” proposals. Legislators initially responded with numerous
proposals, with an effort initiated by Senator Rockefeller in 2013, but so far none have been
enacted.207
California also proposed a "Do Not Track" bill that contained a private right of action and statutory penalties.208 Ultimately, in September 2013, California's Governor signed a new privacy law that went into effect January 1, 2014, and requires businesses that operate a commercial website or online service and collect "personally identifiable information" (as defined by the law) to disclose how they respond to Web browser "do not track" signals or other mechanisms that provide consumers with the ability to exercise choice over the collection of their PII; the disclosure may be made through a clear and conspicuous hyperlink in the operator's privacy policy that links to a description of any protocol the operator follows that offers consumers the choice to opt out of internet tracking.209 The law's requirements focus on disclosure, rather than on requiring the honouring of do not track requests, but of course a failure to follow one's own disclosures can be the basis for
Litigation, supra. In the Hulu case, the court found dispositive that there was "no evidence that Hulu knew that Facebook might combine a Facebook user's identity (contained in the c_user cookie) with the watch-page address to yield 'personally identifiable information' under VPPA." Id., March 31, 2015 Decision at page 10. This was distinguished from the case that gave rise to the enactment of the VPPA, which was that of a video store giving a newspaper reporter a list of the videos that then-Circuit Judge Robert Bork had rented, so that the connection between a specific user and the material that he requested or obtained was obvious. Id. at 8.
207 See, e.g., H.R. Bill Nos. 611, 653 and 654, which recommend prohibiting tracking without consumer consent (introduced by
Representatives Bobby Rush and Jackie Speier, respectively, in February 2011. HR 611 was referred to the House Committee on
Energy and Commerce in February 2011; HR 653 was referred to the House Committee on Financial Services, and HR 654 was
referred to the House Committee on Energy and Commerce.) On May 13, 2011, Representatives Markey (D) and Barton (R) cosponsored
HR 1895 “Do Not Track Kids Act of 2011” (proposing amendments to the Children’s Online Privacy Protection Act to
prohibit mobile tracking of children under the age of 13). Also, Senators John Kerry and John McCain introduced legislation on the
Senate side. See Commercial Privacy Bill of Rights Act (introduced Mar. 2011) at http://thomas.loc.gov/cgibin/
query/C?c112:./temp/~c1129yzKOm. Senator Rockefeller introduced the Do-Not-Track Online Act of 2011 as SB 913 (which
would create a “universal legal obligation” for companies to honor users’ opt-out requests on the Internet and mobile devices). This
bill was referred to the Senate Committee on Commerce, Science and Transportation. In 2013, Senator Rockefeller tried again,
introducing in the Senate S.418, the Do-Not-Track Online Act of 2013 (which would require the FTC to promulgate regulations that
establish standards for implementation of a mechanism by which an individual can indicate if he or she prefers to have personal
information collected by providers of online services, including providers of mobile applications, and prohibit collection of personal information on individuals who have expressed a preference not to have such information collected, and which would also allow for
collection and usage of such information notwithstanding the expressed preference if necessary to provide a service requested by the
individual so long as identifying particulars are removed or information deleted on provision of the service, or the individual receives
clear, conspicuous, and accurate notice and consents to such use and collection).
208 See, SB 761, introduced by state Senator Alan Lowenthal on February 18, 2011. It was amended four times, with the last
amendment on May 10, 2011, and included a requirement that the state attorney general issue regulations that would require Web
companies to notify state residents about online data collection and allow them to opt out, and contained a private right of action and
a statutory penalty of $1,000 per violation.
209 Cal. Bus. & Prof. Code §§ 22575-22578.
misrepresentation claims. In May 2014, the California Attorney General issued recommendations
on how to comply with this and other California privacy laws.210
The FTC revised its rule promulgated under the Children’s Online Privacy Protection Act
(“COPPA”)211 to expand the definition of “personal information” to include certain OBA
information, such as persistent identifiers, effective July 1, 2013.
Although federal legislative efforts have yet to bear fruit, there is no shortage of attention to related
issues at the highest levels of the federal government. The documents released by the
Administration and Senate—though not strictly law—still provide guidance on appropriate practice
and what may be addressed in future legislation. On February 23, 2012, the Obama Administration
issued a comprehensive framework for consumer privacy protection entitled “Consumer Data
Privacy In A Networked World: A Framework for Protecting Privacy and Promoting Innovation In
the Global Digital Economy" (the "President's Privacy Framework").212 The document outlines a
vision for consumer privacy and provides guidance, particularly in the areas of behavioral
advertising and mobile media. It also includes a definition of “personal data” that includes
information that is used to deliver targeted marketing (e.g., mobile unique identifiers). The
President’s Privacy Framework consists of: (1) a Consumer Privacy Bill of Rights; (2) a multistakeholder
process to develop enforceable codes of conduct; (3) enhanced enforcement by the FTC
and safe harbours for companies that adopt codes of conduct; and (4) a commitment to increase
interoperability with the privacy frameworks of international partners. On February 24, 2014, the
second anniversary of the issuance of the framework, 40 organizations signed a letter to the
President urging that he work with Congress to pass legislation applying this "Consumer Bill of
Rights” to commercial sectors not subject to existing Federal data privacy laws.213
In May 2014, the Administration released a “Big Data Privacy Report.”214 Although this report does
not address particular legislative or regulatory efforts to address OBA-related issues, it does
implicitly include some call to action through a critique of current practices that are meant to
provide for greater consumer control. In particular, the Big Data Privacy Report notes that current
practices are often simply ignored by consumers and, if utilized, may limit consumers’ ability to use
useful services.
Also in May 2014, the Senate Committee on Homeland Security and Governmental Affairs
Permanent Subcommittee on Investigations released a staff report titled “Online Advertising and
Hidden Hazards to Consumer Security and Data Privacy.”215 The staff report calls attention to a
concern that OBA practices and mechanisms may give rise to information security or malware
210 Making Privacy Practices Public – Recommendations on Developing a Meaningful Privacy Policy, May 2014, available at
http://oag.ca.gov.
211 Children’s Online Privacy Protection Rule 16 C.F.R. § 312, http://www.ftc.gov/os/2011/09/110915coppa.pdf. Also, on
November 8, 2011, the FTC issued its new guidance regarding consumers and cookies. See http://onguardonline.gov/articles/0042-
cookies-leaving-trail-web.
212 See Dominique R. Shelton, Takeaways From Obama’s New Consumer Privacy Framework,” Daily Journal, Mar. 2, 2012,
available at http://www.edwardswildman.com/files/upload/Edwards%20Wildman%20(DJ%203%202%2012)%20e-p.pdf.
213 See http://epic.org/privacy/white_house_consumer_privacy_bill_of_rights.
214 Available at https://www.whitehouse.gov/sites/default/files/docs/big_data_privacy_report_5.1.14_final_print.pdf.
215 Available at https://otalliance.org/system/files/files/resource/documents/report_-
_online_advertising_hidden_hazards_to_consumer_security_date_privacy_may_15_20141.pdf.
concerns. The staff report also echoes previous FTC concerns that traditional notice-and-choice
paradigms may not be sufficient to protect consumer privacy in light of changing technologies.
In February 2015, the Administration released another report, this one titled “Big Data and
Differential Pricing.”216 The Big Data and Differential Pricing report assesses issues that may arise
where companies adjust pricing based on information collected about consumers, and the potential
economic and discriminatory impact of those issues. In discussing appropriate handling of these
issues, the report calls current self-regulatory regimes into question and suggests that greater efforts
to provide for enhanced consumer control may be appropriate in the online marketing context,
stating: “Thus, in their recent reports on the activities of data brokers and information resellers, both
the Federal Trade Commission and the Government Accountability Office have suggested a need to
rethink existing frameworks for regulating consumer privacy and the acquisition and use of big data
in the marketing context.”
b. EU Positions on Online Behavioral Advertising
Effective May 25, 2011, countries in the EU were required to implement regulations requiring companies to obtain explicit consent before collecting OBA information. On December 13, 2011, the UK's Information Commissioner's Office (the ICO) advised that opt-in consent will be necessary to collect OBA information.217 The UK announced that it would begin enforcement actions to ensure compliance starting May 25, 2012.218 In May 2012, the ICO published revised guidance on the rules on use of cookies and similar technologies. The 2012 guidance is identical in almost all respects to the revised guidance published in December 2011, with the exception of the ICO's advice on the use of implied consent. The guidance now states that the provider can rely on implied consent, but only on the understanding that the consent is specific and informed and that there is some action on the part of the user from which consent can be inferred.219
Further, on February 4, 2013, a new set of OBA rules came into effect, aimed at securing transparency and control for web users and enforced by the UK's Advertising Standards Authority (the "ASA"). These rules supplement existing opt-in and transparency rules for cookies under the
Privacy and Electronic Communications (EC Directive) (Amendment) Regulations 2011 (“Privacy
Regulations”) enforced by the ICO. There is some overlap between the ASA's guidance and the
cookie consent opt-in legal requirements under the Privacy Regulations.
216 Available at https://www.whitehouse.gov/sites/default/files/whitehouse_files/docs/Big_Data_Report_Nonembargo_v2.pdf.
217 Must Try Harder on Cookie Compliance Says ICO, Information Commissioner’s Office News Release, Dec. 13, 2011,
http://www.ico.gov.uk/news/latest_news/2011/must-try-harder-on-cookie-compliance-says-ico-13122011.aspx .
218 See Edwards Wildman Palmer LLP Client Advisory, 25 May Deadline for UK Website Providers, Apr. 2012,
http://www.lockelord.com/edwards-wildman-client-advisory---cookie-transparency-25-may-deadline-for-uk-website-providers-04-
18-201.
219 The ICO 2012 guidance is available at https://ico.org.uk/media/for-organisations/documents/1545/cookies_guidance.pdf.
The ASA rules require the following:220
• Notifying consumers - third parties delivering ads to web users using OBA must give a
"clear and comprehensive" notice to web users about the collection and use of web viewing
behaviour data. This notice must be given on the third party's own website and either in or around
the advertisement delivered by OBA.
• Consumer choice - the notice must also inform users of how to opt out of OBA and must
include a link to a relevant mechanism that allows them to opt-out.
• Explicit consent if information about all websites visited is captured - third parties that use technology to collect and use information about all or substantially all websites visited by web users
on a particular computer must obtain explicit consent. This rule is aimed at "deep packet inspection"
OBA, typically conducted at an ISP level.
• No targeting of under 12s - third parties delivering OBA must also not create "interest
segments" specifically designed for the purpose of targeting children aged 12 or under.
Privacy regulators in EU Member States have shown a keen interest in reviewing compliance with EU cookie and tracking legislation, as exemplified by the recent release of the "cookies sweep" report by the Article 29 Working Party, the on-going investigations into Facebook's tracking policies, and the landmark Court of Appeal decision in Vidal-Hall v Google on damages for misuse of personal data,221 discussed below.
In July 2014, several EU Data Protection Authorities announced that they would be conducting an "EU cookies sweep" to determine whether companies' use of cookies on their websites complies with Article 5(3) of the e-Privacy Directive (2002/58/EC),222 as amended by Directive 2009/136/EC.223
Article 5(3) of the e-Privacy Directive states that the storing of information or the gaining of access
to information already stored in the terminal equipment of a subscriber or user is only permitted on
condition that the user has given their prior consent, having been provided with clear and
comprehensive information about the purposes of the processing. The cookies sweep was conducted
in September 2014 and audited 478 sites in media, e-commerce and the public sector, across 8
Member States. The Article 29 Working Party released a report analysing the results in February
2015, in which it found that 43% of the audited sites did not provide sufficient information
220 The ASA Codes are available at http://www.cap.org.uk/Advertising-Codes/Non-broadcast-HTML/Appendix-3-Online-
Behavioural-Advertising.aspx.
221 [2015] EWCA Civ 311, Case No: A2/2014/0403
222 Directive 2002/58/EC of the European Parliament and of the Council of 12 July 2002 concerning the processing of personal
data and the protection of privacy in the electronic communications sector (Directive on privacy and electronic communications),
accessible at http://eur-lex.europa.eu/LexUriServ/LexUriServ.do?uri=CELEX:32002L0058:en:HTML.
223 Directive 2009/136/EC of the European Parliament and of the Council of 25 November 2009 amending Directive
2002/22/EC on universal service and users’ rights relating to electronic communications networks and services, Directive
2002/58/EC concerning the processing of personal data and the protection of privacy in the electronic communications sector and
Regulation (EC) No 2006/2004 on cooperation between national authorities responsible for the enforcement of consumer protection
laws, accessible at http://eur-lex.europa.eu/LexUriServ/LexUriServ.do?uri=OJ:L:2009:337:0011:0036:en:PDF.
regarding the types or purposes of cookies used, and over half of the sites did not request consent
from users (as required by the legislation) but merely informed them that cookies were in use.224
The Centre of Interdisciplinary Law and ICT at the University of Leuven in Belgium recently
conducted an investigation (commissioned by the Belgian Privacy Commission) into Facebook’s
policies and terms and conditions. The report concludes that several of Facebook’s policies fall foul
of EU privacy legislation, including in particular Article 5(3) of the e-Privacy Directive.225 When a
Facebook user visits a non-Facebook website which contains a Facebook social plugin (i.e. a widget
added to a website, such as the Facebook ‘Like’ button which allows users to share pages from a
website on a Facebook profile page), Facebook receives several cookies. Facebook’s cookie policy
explicitly states that they “still use cookies if you don’t have an account or have logged out of your
account.” Contrary to the provisions of the e-Privacy Directive, users (and non-users) of Facebook
are automatically opted-in to allow Facebook to track their Internet use in this manner: the
Facebook Help Centre provides guidance as to how to opt out of this tracking (rather than opt in).
Facebook’s privacy policies and tracking practices are currently being investigated by a collective
of privacy regulators in France, Spain, Italy, the Netherlands, Belgium and Germany. If the
investigations proceed, this may well lead to the imposition of a fine on Facebook together with an
order to change its privacy and tracking policies.
The Court of Appeal226 has also recently upheld a landmark decision of the High Court of England
and Wales in Vidal-Hall v Google227 in which the High Court introduced a new tort into English
civil law (the tort of misuse of private information) and held that claimants may be awarded
damages for distress where no financial loss has been suffered but personal data has been misused.
The case concerned the use of cookies by Google to track the claimants’ Internet usage via Safari.
Google used cookies without obtaining the claimants’ prior consent in order to collate the
claimants’ browser generated information, which Google then offered to its advertiser customers
through the Google ‘doubleclick’ advertising service. The advertisers then used the browser
generated information to place targeted advertisements on the claimants’ computer and device
screens. The claimants claimed that the placing of the advertisements revealed private information
about them and their browsing history, which was or could have been seen by third parties, and that
the collection and use of their browser generated information was a misuse of private information
and breach of confidence damaging their personal dignity, autonomy and integrity, which caused
them anxiety and distress. The High Court ruled that the English claimants were entitled to bring
their claims against Google Inc. (a U.S. based company) in the English courts. Google appealed this
decision to the Court of Appeal, which upheld the High Court’s decision.
Of particular interest is the Court’s finding in respect of section 13(2) of the UK Data Protection
Act 1998 (the “Data Protection Act”). The purpose of the Data Protection Act is to implement EU
224 Article 29 Data Protection Working Party ‘Cookie Sweep Combined Analysis – Report’, Adopted on 3 February 2015,
accessible at http://ec.europa.eu/justice/data-protection/article-29/documentation/opinion-recommendation/files/2015/wp229_en.pdf.
225 From Social Media Service to Advertising Network: A critical analysis of Facebook’s Revised Policies and Terms,
accessible at http://www.law.kuleuven.be/icri/en/news/item/facebooks-revised-policies-and-terms-v1-2.pdf.
226 Google Inc. v Vidal-Hall et al, [2015] EWCA Civ 311.
227 Vidal-Hall et al v Google Inc. [2014] EWHC 13 (QB).
Directive 95/46/EC (the “Data Protection Directive”).228 Article 23 of the Data Protection Directive
requires Member States to “provide that any person who has suffered damage as a result of an
unlawful processing operation or of any act incompatible with the national provisions adopted
pursuant to this Directive is entitled to receive compensation from the data controller [the entity
which determines the manner in which and the purposes for which collected personal data is used]
for the damage suffered” (emphasis added). Section 13(2) of the Data Protection Act (which seeks
to implement Article 23 of the Data Protection Directive) provides that individuals who suffer
distress are entitled to compensation where they also suffer damage as a result of the data
controller’s misuse of their personal data. The Court of Appeal found that the Data Protection
Directive “does not distinguish between pecuniary and non-pecuniary damage,” since “what the
Directive purports to protect is privacy rather than economic rights.” The Court of Appeal further
found that section 13(2) of the Data Protection Act does make this distinction, and that "if
interpreted literally, section 13(2) has not effectively transposed article 23 of the Directive into our
domestic law.”229 The Court of Appeal ultimately found section 13(2) of the Data Protection Act to
be incompatible with Article 23 of the Data Protection Directive, and allowed the claimants to
continue with their claim against Google. The High Court and Court of Appeal decisions in Vidal-
Hall relate to preliminary issues, and the final outcome of the matter remains a long way off.
In the meantime, the prospect of large fines and of a change in the law (to require explicit prior consent to the use of tracking cookies) will likely encourage Internet browser companies to reconsider their practices.
The draft text of the proposed EU General Data Protection Regulation suggests that privacy is
paramount and that explicit consent may indeed become a legal requirement in the not-too-distant
future: Article 4(8) of the draft Regulation currently defines “the data subject’s consent” as “any
freely given specific, informed and explicit indication of his or her wishes by which the data
subject, either by a statement or by a clear affirmative action, signifies agreement to personal data
relating to them being processed." 230
8. Mobile/Apps as a Growing Exposure
The importance of digital advertising revenue to businesses has led to an increase in both private litigation and regulatory scrutiny not only of Online Behavioral Advertising and tracking as discussed above, but also of the usage of mobile apps in particular and the challenges they present in providing transparency and adequate disclosures to consumers who increasingly use apps on the small screens of smart phones and other mobile devices. Regulators in both the U.S. and EU have
recently issued guidelines for mobile app developers, and indicated they will scrutinize how
developers address privacy concerns.231
228 Directive 95/46/EC of the European Parliament and of the Council of 24 October 1995 on the protection of individuals with
regard to the processing of personal data and on the free movement of such data, accessible at http://eurlex.
europa.eu/LexUriServ/LexUriServ.do?uri=CELEX:31995L0046:en:HTML.
229 Google Inc. v Vidal-Hall et al, [2015], supra.
230 Article 4(8) of the proposed General Data Protection Regulation, accessible at
http://www.europarl.europa.eu/sides/getDoc.do?pubRef=-//EP//TEXT+TA+P7-TA-2014-0212+0+DOC+XML+V0//EN.
231 See Regulators Are Expressing Heightened Interest in Mobile Apps and Privacy Enforcement: Is Your Company
Prepared?, Edwards Wildman, April 2013, http://www.edwardswildman.com/Regulators-are-Expressing-Heightened-Interest-in-
Mobile-Apps-and-Privacy-Enforcement--Is-Your-Company-Prepared-4-02-2013/
In the U.S., the heightened scrutiny accorded mobile apps has been led by the FTC on the federal level and by California on the state level, both of which have issued guidance and instituted enforcement actions against mobile app operators.
As discussed above, in March 2013, the FTC updated an earlier guidance on disclosures to take into account developments in technology, releasing updated guidance for mobile and other online advertisers directed at explaining how to make their disclosures to consumers of their practices clear and conspicuous to avoid charges of deceptive practices, .com Disclosures: How to
Make Effective Disclosures in Digital Advertising.232
The FTC is not the only federal agency scrutinizing mobile app developers. Agencies overseeing
specific industries are also entering the fray. The U.S. Food & Drug Administration (“FDA”), for
example, is scrutinizing mobile medical apps. The FDA has apparently been hesitant in the past to take actions that would chill innovation in tools for monitoring medical conditions remotely, many of which are available for smartphones and tablets. However, in May 2013, it took its first publicly announced enforcement action against a mobile app developer, issuing a letter to an India-based developer that had been the subject of a complaint because it was selling its app on Apple Inc.'s App Store to screen for diabetes and urinary tract infections without first seeking the FDA's blessing as required for medical devices.233 On September 25, 2013, the FDA issued its
final Guidance for Industry and Food and Drug Administration Staff on Mobile Medical
Applications, containing "nonbinding recommendations" with the stated goal, set forth in the Introduction, to "inform manufacturers, distributors, and other entities about how the FDA intends to apply its regulatory authorities to select software applications intended for use on mobile platforms…"234 While the regulatory issue that triggered this was not privacy, the final Guidance
does refer readers to the FDA’s Guidance for Industry - Cybersecurity for Networked Medical
Devices Containing Off-the-Shelf (OTS) Software. 235
The U.S. National Institute of Standards and Technology (“NIST”) is also involved in developing
guidelines for organizations to address security issues in the use of mobile devices, including
providing recommendations for implementing centralized management technologies. In July 2012
it issued draft Guidelines for Managing and Securing Mobile Devices in the Enterprise.236 In June
2013, these were superseded by NIST Special Publication 800-124, Revision 1, Guidelines for
Managing the Security of Mobile Devices in the Enterprise.237
In January 2013, the California Attorney General issued a report entitled Privacy on the Go:
Recommendations for the Mobile Ecosystem, making it clear that mobile privacy will be an
enforcement priority for its newly created Privacy Enforcement and Protection Unit. The report
232 The Guidance is available through the FTC website, www.ftc.gov.
233 http://www.fda.gov/MedicalDevices/ResourcesforYou/Industry/ucm353513.htm; see Jeff Overley, FDA Shows Deft Touch
With 1st Mobile App Enforcement, Law360, May 22, 2013, http://www.law360.com/articles/443997/print?section=lifesciences.
234 See the Guidance, available on
http://www.fda.gov/downloads/MedicalDevices/DeviceRegulationandGuidance/GuidanceDocuments/UCM263366.pdf
235 http://www.fda.gov/MedicalDevices/DeviceRegulationandGuidance/GuidanceDocuments/ucm077812.htm).
236 Available at http://csrc.nist.gov/publications/drafts/800-124r1/draft_sp800-124-rev1.pdf.
237 http://www.nist.gov/customcf/get_pdf.cfm?pub_id=890048
outlines best practices for mobile consumer privacy, including summary disclosures of key privacy
practices that are more accessible on small screens, minimizing use of device identifiers, and
limiting data collection and use to what is necessary to effect the functionality a consumer has
elected to receive.238
In the EU, the Article 29 Data Protection Working Party, an independent European advisory body
on data protection and privacy, adopted its Opinion 02/2013 on apps on smart devices on 27 February 2013.239 Its stated goal was "to clarify the legal framework applicable to the processing of personal data in the development, distribution and usage of apps on smart devices, with a focus on the consent requirement, the principles of purpose limitation and data minimisation, the need to take adequate security measures, the obligation to correctly inform end users, their rights, reasonable retention periods and, specifically, fair processing of data collected from and about children."240
9. The Importance of Privacy Policies
Increasingly, regulatory agencies investigating a company, often after such company has disclosed
that it sustained a breach or the agency has otherwise learned of the breach, will request and review
the privacy policies of that company, scrutinize its compliance with regulatory requirements and
review the accuracy of its privacy statements when evaluating whether to issue a fine, and, if so, the
amount. Similarly, lawsuits by consumers and other third parties affected by a data breach often
focus on representations as to data security made by the breached company in its privacy policies and on its website. Thus, a compliant privacy policy is a critical factor in mitigating exposures
arising from data breaches.241
Moreover, legislative and regulatory requirements at both state and federal levels increasingly focus
on the content of privacy policies, and whether they adequately disclose to consumers the
company’s practices in the collection and use of information about individuals. This is
demonstrated by California, with its development of statutory requirements for disclosure of the
collection and usage of PI.242 As with many statutory requirements, failure to follow them has
provided fodder for class action attorneys to try to certify classes and obtain statutory penalties that
are often calculated on a per-violation basis.
The FTC also regularly announces investigations and consent orders in connection with companies
making "unfair and deceptive" statements in their privacy policies, which often result in large fines
and years of government oversight.243
10. New Technologies Bring New Risks
As corporations and consumers embrace new technology, cyber criminals adapt their tactics to take
advantage of new opportunities for data theft. Sales of smartphones and tablet computers have now
238 A copy of the report is available at http://oag.ca.gov/sites/all/files/pdfs/privacy/privacy_on_the_go.
239 Available on http://ec.europa.eu/justice/data-protection/index_en.htm.
240 Guidance at p. 2.
241 See Section VII. below on Privacy Litigation in the U.S.: Current Issues.
242 See Section III. 1. F. below on The California Example.
243 See Section III. 2. below on Federal Requirements.
eclipsed sales of PCs, and cyber criminals are beginning to shift more of their attention to trying to
exploit security holes in the ubiquitous mobile devices. At the same time, more and more organizations are embracing a bring your own device ("BYOD") to work culture (or at least allowing the use of personal devices for work). In fact, bringing your own device was just a first step toward a culture of bring your own anything and everything ("BYOx"), from usage of personal clouds (BYOC, including personal usage of cloud service or storage providers, rather than one with which a business has entered into a contractual relationship or which it has otherwise sanctioned) to devices, applications and wearables of all kinds. Increasingly, everyday devices are linked to the internet for monitoring
and control, ranging far beyond computers to include a wide range of home appliances and
monitoring mechanisms, medical devices, and vehicles – in what has become popularly referred to as the Internet of Things. It is now estimated that more things are connecting to the Internet than people — over 12.5 billion devices in 2010,244 and it is forecast that 4.9 billion connected things will be in use in 2015, up 30 percent from 2014, rising to 25 billion by 2020.245 Interconnectedness, however, while providing positive advantages including efficiency and connectivity between distant operations and operators, also brings with it vulnerabilities to unauthorized access and related network and data security concerns, as well as increased privacy concerns.246
III. The U.S. Regulatory and Statutory Landscape: Obligations Under Data Privacy and
Security Laws and Regulations
The regulatory and statutory landscape related to data privacy and security has changed
significantly in the past decade in response to increasing concerns about information privacy,
identity theft, and fraud. State, federal, industry, and international requirements impose new and
evolving obligations on companies to protect the Personal Information they collect, store, maintain,
transfer or use, whether such information relates to customers, employees or others, as well as
notification requirements in the event of a data breach. There has been increasing scrutiny of the
usage of Personal Information, and increasing enforcement of disclosure obligations as to
companies’ collection and use of information about individuals. Non-compliance with applicable
requirements exposes companies to the risk of government investigations, fines and penalties, as
well as the risk of litigation by individuals and classes of individuals alleging non-compliance with
privacy and data security requirements, including following a data breach. Although laws and
regulations concerning data privacy and security often do not create a private right of action, the
failure to comply with such requirements is often asserted in third-party lawsuits as evidence of
inadequate security, particularly when the company’s privacy notice represents that it is in
compliance with applicable legal and regulatory requirements.
244 Cisco Visualization, The Internet of Things, http://share.cisco.com/internet-of-things.html
245 Gartner Says 4.9 Billion Connected "Things" Will Be in Use in 2015, November 11, 2014,
www.gartner.com/newsroom/id/2905717 .
246 See, e.g., FTC Staff Report, Internet of Things: Privacy & Security in a Connected World, January 2015,
https://www.ftc.gov/2015/01/ftc-report-internet-things.
1. State Data Privacy and Security Requirements
In an effort to protect individuals’ privacy and to reduce the risk of identity theft, most states have
enacted laws and many state regulatory bodies have promulgated regulations imposing obligations
on entities that obtain and/or maintain Personal Information, although those obligations vary. These
laws and regulations are intended to protect Personal Information and, in the event of a breach,
often require notification to government agencies, credit reporting agencies, and/or individuals
whose Personal Information has been or may have been subject to unauthorized access or
acquisition.
a. Restrictions on Collection of Personal Information
251 See, e.g., Ga. Code Ann. § 40-5-120(5); Haw. Rev. Stat. § 487J-6.
252 See, e.g., Haw. Rev. Stat. § 487J-6.
b. Protection of Social Security Numbers
Social Security numbers have become a prime target of data thieves. Unlike debit and credit card
numbers, Social Security numbers are difficult to change and can be used to obtain additional
documentation for identity theft purposes potentially far more profitable for the thief and damaging
to the victim than a discrete number of fraudulent transactions. Hackers are reportedly focusing on
the Social Security numbers of children, which are generally not yet in use by the holders to obtain
credit and thus are not associated with a tarnished credit history.259 Breaches involving Social
Security numbers are also of concern to law enforcement agencies charged with state and national
security, due to their potential use as identification for nefarious purposes including evading law
enforcement and national security authorities, and gaining entry to the U.S. under assumed
identities.
Many states impose specific requirements governing the handling of Social Security numbers. For
example, a Connecticut law requires any person or entity that collects Social Security numbers to
create a protection policy specifically related to Social Security numbers.260 The company policy,
which must be published or publicly displayed (such as on the company’s website), must protect
confidentiality, prohibit unlawful disclosure, and limit access to Social Security numbers.
New York, in another example, enacted legislation limiting the disclosure, transmission and printing
of Social Security numbers.261 The law limits the collection of Social Security numbers and
restricts the ability of private entities to require an individual to provide his or her Social Security
number or any number derived from that number (with certain exceptions, such as “requests for
purposes of employment,” for confirming the individual’s age to allow him or her access to a
marketing program restricted to individuals of a certain age, or by a banking institution). While the
statute is fairly detailed, there are still unanswered questions as to the scope of its application.
Other states have also enacted laws and regulations to protect the Social Security numbers and other
Personal Information of their residents.262
c. Record Disposal Requirements
Many states also regulate the disposal of records containing Personal Information. For example,
under Massachusetts and New York law, records containing Personal Information must be redacted,
burned, pulverized, shredded or destroyed in some other way that will render the data unreadable.
In Massachusetts, if third parties are contracted to dispose of such records, they must implement
259 Identity Theft Poses Extra Troubles for Children, New York Times, Apr. 17, 2015; see also, Federal Trade Commission,
Child Identity Theft, available at http://www.consumer.ftc.gov/articles/0040-child-identity-theft.
260 Conn. Gen. Stat. § 42-471.
261 N.Y. Gen. Bus. Law § 399-ddd.
262 See, e.g., Cal. Civ. Code § 1798.85; Mich. Comp. Laws § 445.84; Or. Rev. Stat. § 646A-620; Tex. Bus. & Com. § 501.
policies and procedures that prohibit unauthorized access to or use of Personal Information during
collection, transport and disposal. Both states impose fines for noncompliance.263
Companies that dispose of records containing Personal Information also need to consider whether
they are subject to disposal requirements imposed by federal law. The Fair and Accurate Credit
Transactions Act of 2003, for example, requires businesses and individuals that use consumer
reports, such as lenders, insurance companies, employers, landlords, car dealers, and debt collectors,
to properly dispose of those consumer reports.264
d. Data Breach Notification Requirements
In the event of a data breach involving unauthorized access to Personal Information (and meeting
certain other statutory or regulatory criteria), state laws and regulations in most U.S. jurisdictions
mandate notice of the breach to affected individuals, and some states also require reporting to
regulatory agencies and state Attorneys General, as well as credit reporting agencies.265 Vast
numbers of individuals may be involved in a single breach, and large breaches frequently affect
residents of multiple jurisdictions.
Fifty-one U.S. jurisdictions, including 47 states, the District of Columbia, Guam, Puerto Rico and
the U.S. Virgin Islands, have enacted data breach notification laws.266 These laws specify the steps
that a company must take in response to a breach that affects the residents of that state.267 Although
the data breach notification laws of each of the 51 jurisdictions are similar, they are not identical,
and contain significant variations as to, for example, how they define a “breach,” what type of data
constitutes “Personal Information” that is within the scope of the statute, the types of events
triggering notice obligations, the timing and content of notices, and whether notice must be sent
even when there is a very low likelihood of harm resulting from the breach. Upon a breach or
potential breach of data security, the affected company must carefully review the requirements of
each applicable jurisdiction to determine its obligations in that particular jurisdiction.268 As further
discussed below, the various laws purport to apply based on the residence of the individual whose
data was compromised, and are not limited by the company’s place(s) of business.269 Often, even the most “local” businesses find that they collect data from residents of multiple jurisdictions.
263 Mass. Gen. Law ch. 93I § 2; N.Y. Gen. Bus. Law § 399-h.
264 15 U.S.C. § 1681w(a)(1); see also 69 Fed. Reg. 68690-01 (Nov. 24, 2004), codified at 16 C.F.R. § 682.
265 Generally, if the breach occurred while the data at issue was in the possession of a third party that does not “own” or hold
license to the data (for example, a vendor of the company that is deemed to “own” the data), the state data breach reporting statutes
task the data owner with the consumer and regulatory reporting requirements, while the third party that suffered the breach is tasked
with reporting the breach to the data owner. See e.g., Cal. Civ. Code § 1798.82(b).
266 As of May 14, 2015, the states that do not yet have such notification laws are Alabama, New Mexico and South Dakota, although legislative efforts are currently under way in Alabama to pass such a law. S.B. 106, 2015 Leg. (Ala. 2015). Legislative efforts to pass a data breach notification bill in New Mexico recently failed. H.R. 224, 2014 Leg. (N.M. 2014).
267 However, the Texas breach notification statute requires companies that conduct business in Texas to notify residents of
other states that do not require notice. See Tex. Bus. & Com. Code Ann. § 521.053.
268 A list of jurisdictions and links to their data breach notification laws is available at
http://www.ncsl.org/research/telecommunications-and-information-technology/security-breach-notification-laws.aspx.
269 See Tex. Bus. & Com. Code Ann. § 521.053.
In addition to such state data breach notification requirements, companies in certain industries, such
as banking,270 credit unions,271 insurance,272 telecommunications,273 and health care,274 are also
subject to industry-specific breach notification requirements, while still other industry regulators
have issued guidance relating to breach response.275 The definitions of a reportable incident under
these requirements often differ from the general data breach notification requirements, requiring
additional levels of analysis and response in the event of an incident. U.S. federal and state
governmental entities are subject to separate data breach notification requirements for breaches of
data in their possession or databases.
In the event of a data breach, an initial and major task is to identify which jurisdictions’
requirements apply. Entities often find themselves subject to the different, sometimes conflicting,
requirements of multiple jurisdictions. A single data breach incident may have only one location at
which the entity’s data security was breached. Nevertheless, the individuals affected by the breach
may reside in many different jurisdictions that impose data breach notification requirements, some
of which may not be limited to companies doing business in the jurisdiction. For example, if a
laptop stolen from an office in Florida contains the Personal Information of residents of Maine,
Massachusetts, New Hampshire and Vermont, then the data breach laws of all those states, as well
as Florida, may be triggered. In the event of a breach of a database or loss of computerized records
containing information of individuals residing in different locations, the notification requirements of
all U.S. states and other jurisdictions with such requirements are potentially triggered.
Typically, the applicability of notice requirements of a given jurisdiction depends on several factors,
including:
whether the type of information that has been lost, stolen or misplaced falls within the
jurisdiction’s definition of “Personal Information”;
whether there has been a “breach of the security of the system” (or similar defined term)
under the jurisdiction’s definitions and requirements;
270 See, e.g., Interagency Guidance on Response Programs for Unauthorized Access to Customer Information and Customer
Notices, issued by the Office of the Comptroller of the Currency, Treasury; Board of Governors of the Federal Reserve System;
Federal Deposit Insurance Corporation; and Office of Thrift Supervision, Treasury, interpreting section 501(b) of the Gramm-Leach-
Bliley Act and Interagency Guidelines Establishing Information Security Standards.
271 Guidelines for Safeguarding Member Information promulgated by the National Credit Union Administration (12 C.F.R.
748 and Appendices).
272 See, e.g., Connecticut Insurance Department Bulletin IC-25, Aug. 18, 2010.
273 47 C.F.R. § 64.2011.
274 See federal Health Insurance Portability and Accountability Act of 1996, as amended (“HIPAA”) (42 U.S.C. § 201 et seq.),
the Health Information Technology for Economic and Clinical Health (“HITECH”) Act, as amended, and implementing regulations,
requiring notice to affected individuals, the U.S. Department of Health and Human Services (“HHS”), and in some cases, the media,
in the event of a breach of protected health information (“PHI”); see also Federal Trade Commission (the “FTC”) Health Breach
Notification Rule, 16 C.F.R. Part 318, requiring notification to affected individuals, the FTC and in some cases, the media, by
vendors of personal health records (“PHR”) and PHR-related entities in the event of a data breach involving PHR. Such
requirements are discussed further in Section III(2)(f) below. In addition, certain states impose specific notification requirements
upon health care providers in addition to the general state breach notification requirements (see Cal. Health Safety Code § 1280.15,
requiring agency and individual notice within five days of discovery).
275 See, e.g., U.S. Department of Education Data Breach Response Checklist, September 2012, providing guidance for
educational agencies and institutions, available at http://ptac.ed.gov/document/checklist-data-breach-response-sept-2012.
whether the data that was breached was in the type of medium (for example, paper or
electronic) that is covered by the jurisdiction’s statute;
whether the data (if in electronic form) was encrypted or otherwise rendered unreadable;
and
whether the incident meets the jurisdiction’s threshold of harm or likelihood of harm, if
any.
Not all jurisdictions have the same definitions or triggers. For example, some jurisdictions define a
“breach” that requires notification to include unauthorized “access” to Personal Information, while
others may require notification in the event of unauthorized “acquisition” or “misuse” of Personal
Information. Further, certain jurisdictions only require notice where a specific harm threshold has
been met, while others do not have any threshold for harm or likelihood of harm.
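Purely by way of illustration, the factor-by-factor analysis described above can be thought of as a per-jurisdiction checklist. The following is a minimal sketch of that analysis in Python; the rule parameters and incident fields are hypothetical placeholders, and no such shortcut substitutes for reviewing each jurisdiction’s actual statute and regulations.

# Illustrative sketch only; the "rule" and "incident" fields are hypothetical
# simplifications of the factors discussed above, not a restatement of any statute.
def notice_potentially_required(rule, incident):
    # 1. Does the affected data fall within this jurisdiction's definition of Personal Information?
    if not (incident["data_elements"] & rule["personal_information_elements"]):
        return False
    # 2. Does the event meet the jurisdiction's trigger (e.g., "access" vs. "acquisition" or "misuse")?
    if incident["event_type"] not in rule["trigger_events"]:
        return False
    # 3. Is the medium of the breached data (electronic, paper) covered by the statute?
    if incident["medium"] not in rule["covered_media"]:
        return False
    # 4. Many statutes excuse notice for encrypted data, unless the key was also compromised.
    if rule["encryption_safe_harbor"] and incident["encrypted"] and not incident["key_compromised"]:
        return False
    # 5. Some jurisdictions require notice only where a harm (or risk-of-harm) threshold is met.
    if rule["harm_threshold"] and not incident["likelihood_of_harm"]:
        return False
    return True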
The data breach notification statutes and applicable industry regulations (for example, Department
of Insurance regulations and bulletins) of each relevant jurisdiction must be analyzed to determine
whether:
Residents of the jurisdiction must be notified;
Notices to affected individuals must contain specific content, as further discussed below;
Notices must be sent within a specific timeframe, and how the triggering event for the
timeframe is defined by the applicable state statute;
State Attorneys General or other state agencies (for example, Departments of Insurance, if
applicable) must be notified and, if so, whether those notices must include specific content
or must be made on specific state-issued forms, whether there is a specific timing
requirement to those notices, and whether notices to state agencies must be made before or
after notification to affected individuals; and
Credit reporting agencies, such as Experian, TransUnion and Equifax, must be notified.
Certain states require that notices to affected individuals include specific content, such as:
A general description of the breach;
The type of Personal Information exposed;
Contact information for the major credit reporting agencies;
The company’s contact information; and
Advice to remain vigilant by reviewing account statements and credit reports.
In contrast, Massachusetts prohibits the disclosure of the nature of the breach in the consumer
breach notice letter. 276
Time is of the essence with regard to such notifications, which may be required as early as five days
following discovery of a breach.277 Many breach notification statutes do not specify a fixed number
of days by which notice is required, but instead require notice “as soon as practicable and without
unreasonable delay” (or similar language). Affected individuals frequently identify timeliness of
notification as a significant factor in their assessment of a breached entity’s response to a data
security incident. Governmental agencies may impose fines for delays, and certain states, such as
Florida, outline specific penalties up to $500,000 where notice is not provided to affected
individuals within 45 days, as required by Florida law.278
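For illustration only, the Florida penalty schedule summarized in the accompanying footnote can be worked through arithmetically as follows. This is a minimal sketch that simply restates the footnote’s summary; it is not an interpretation of Fla. Stat. § 501.171 itself.

import math

def florida_late_notice_penalty(days_undisclosed):
    """Approximate civil penalty, per the footnote's summary, for notice delayed past the deadline."""
    if days_undisclosed <= 0:
        return 0
    if days_undisclosed <= 30:
        return 1_000 * days_undisclosed               # $1,000 per day for up to 30 days
    if days_undisclosed <= 180:
        extra_periods = math.ceil((days_undisclosed - 30) / 30)
        return 30_000 + 50_000 * extra_periods        # $50,000 per further 30-day period or portion thereof
    return 500_000                                    # statutory maximum

# For example: 45 days undisclosed -> $30,000 + $50,000 = $80,000; beyond 180 days -> $500,000 cap.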
In a lesson on the importance of providing notice expediently, and on what to do when information becomes available piecemeal, the California Attorney General filed (and reportedly settled) a lawsuit against Kaiser Foundation Health Plan, Inc. in early 2014, alleging that Kaiser failed to issue breach notification to affected individuals “in the most expedient time possible and without unreasonable delay” in violation of Cal. Civ. Code § 1798.82 (which does not specify a fixed number of days by which notice is required). Kaiser had identified a portion of the affected group in December 2011, continued to investigate and identify additional individuals through February 2012, and ultimately issued notices in March 2012. According to the complaint, Kaiser had sufficient information to identify and notify at least some individuals affected by the breach between December 2011 and February 2012.
e. Data Security Requirements: Massachusetts Remains at the U.S.
Forefront
Massachusetts, through its Office of Consumer Affairs and Business Regulation (“OCABR”), has
promulgated one of the most comprehensive U.S. regulatory schemes for data privacy and security.
The regulation, which went into effect March 1, 2010, set a new U.S. state standard for data
protection.
The Massachusetts data security regulation (201 C.M.R. 17.00, the “Massachusetts Regulation”)
applies to any individual or company, regardless of type, size or location, that owns or licenses
Personal Information of Massachusetts residents. Under the Massachusetts Regulation, Personal
Information includes the name of a Massachusetts resident together with his or her Social Security
number, or driver’s license, financial account, or credit card number (with or without PIN). Any
entity, including insurance companies, producer entities and service providers, that uses or stores
the Personal Information of Massachusetts residents, whether of employees, customers, insureds, or
others, is subject to the Massachusetts Regulation.279
276 Mass. Gen. Laws ch. 93H, § 3.
277 See Cal. Health Safety Code § 1280.15; Connecticut Insurance Department Bulletin IC-25, Aug. 18, 2010.
278 See Fla. Stat.§ 501.171(9)(b)(1), requiring notice to affected individuals no later than 45 days following determination of a
breach, and imposing a fine of $1,000 for each day the breach goes undisclosed for up to 30 days and, thereafter, $50,000 for each
30-day period or portion thereof for up to 180 days, with a maximum fine of $500,000.
279 The extraterritorial authority of the OCABR and the Massachusetts Attorney General to enforce the Massachusetts
Regulation against companies located outside Massachusetts borders is yet to be fully tested.
The Massachusetts Regulation establishes the most rigorous state data security requirements in the
U.S. to date. To comply, companies that own or license the Personal Information of Massachusetts
residents are required to adopt a comprehensive written information security program (referred to as
a “WISP”) that satisfies specific requirements, including the following:
Identify and evaluate internal and external risks;
Regularly monitor employee access to Personal Information;
Prevent terminated employees from accessing documents, devices and other records
that contain Personal Information;
Take reasonable steps to select and retain third-party service providers that are
capable of compliance with the Massachusetts Regulation;
Review security measures annually, and update the WISP when there is a material
change in business operations;
Develop and maintain a procedure for actions to take in response to any breach of
security;
Train employees about and discipline employees for violation of the policy; and
Designate one or more employees to maintain, supervise and implement the WISP.
The WISP must also address the establishment and maintenance of a detailed computer security
program as to Personal Information of Massachusetts residents, including, to the extent technically
feasible:
Encryption of all records and files containing Personal Information that are stored on laptops and other portable devices or that will travel across public networks or wirelessly (see the illustrative sketch following this list);
User-authentication protocols and access-control measures, including control over
user identifiers, passwords and access;
A system for monitoring unauthorized use; and
Up-to-date firewalls, anti-virus definitions and anti-malware programs.
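The encryption items above are technology-neutral (as the OCABR guidance discussed below makes clear), so no particular tool is mandated. Purely as a sketch of the concept, and assuming the widely used third-party Python “cryptography” package and hypothetical file names, a file of Personal Information might be encrypted before it is copied to a portable device or sent across a public network along these lines.

from cryptography.fernet import Fernet   # third-party package, assumed for illustration only

def encrypt_file(plaintext_path, ciphertext_path, key):
    """Encrypt a file with symmetric, authenticated encryption before storage or transmission."""
    cipher = Fernet(key)
    with open(plaintext_path, "rb") as src:
        token = cipher.encrypt(src.read())
    with open(ciphertext_path, "wb") as dst:
        dst.write(token)

key = Fernet.generate_key()               # the key itself must be stored and managed securely
encrypt_file("employee_records.csv", "employee_records.csv.enc", key)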
In an effort to ease the burden imposed on small businesses, the Massachusetts Regulation makes
clear that its requirements are risk-based in both implementation and enforcement, stressing that
there is no one-size-fits-all WISP. The Massachusetts Attorney General will judge compliance on a
case-by-case basis, taking into account the following factors: (i) the size, scope and type of
business handling the information; (ii) the amount of resources available to the business; (iii) the
amount of data stored; and (iv) the need for security and confidentiality of both consumer and
employee information.
This risk-based approach brings the Massachusetts Regulation in line with both the enabling
legislation and applicable federal law, including two rules promulgated by the FTC: (i) the Red
Flags Rule that requires creditors and financial institutions to have a written Identity Theft
Prevention Program to detect warning signs of identity theft and fraud; and (ii) the Gramm-Leach-
Bliley Safeguards Rule (16 C.F.R. Part 314), which requires financial institutions to have a security
plan to protect personal consumer information (both discussed below).
The Massachusetts Regulation also requires that companies oversee their third-party vendors by:
(i) Taking reasonable steps to select and retain third-party service providers that are
capable of maintaining appropriate security measures to protect such Personal
Information consistent with these regulations and any applicable federal regulations;
and
(ii) Requiring by contract that such third-party service providers implement and maintain
such appropriate security measures for Personal Information.
Contracts with third-party service providers entered into prior to March 1, 2010 are required to have
been amended by March 1, 2012 to satisfy the Massachusetts Regulation.
In Frequently Asked Questions (“FAQs”) published in November 2009,280 the OCABR made the
encryption requirement imposed by the Massachusetts Regulation flexible. Consistent with the
risk-based approach of the Massachusetts Regulation, the encryption requirement is technology-neutral in that it does not require specific encryption technology.
The FAQs clarify other important issues as well, including the following:
A bank or credit card account is a “financial account” which, when accompanied by the
name of a Massachusetts resident, is subject to the Massachusetts Regulation.
An account that is not clearly a financial account is considered a financial account if
unauthorized access could result in an increase of financial burden or a misappropriation of
monies, credit or other assets.
An insurance policy number is a financial account number if it (i) grants access to a person’s
finances, or (ii) could result in an increase of financial burden, or a misappropriation of
monies, credit or other assets.
Compliance with HIPAA does not eliminate a company’s obligation to comply with the
Massachusetts Regulation if the company owns or licenses Personal Information of a
Massachusetts resident.
Companies, especially small businesses that are subject to the Massachusetts Regulation, have
voiced concerns about the burden and cost of compliance. The OCABR, however, has taken the
position that the importance of protecting residents’ Personal Information outweighs the financial
280 The FAQs and other guidance related to the Massachusetts Regulation are available at:
http://www.mass.gov/ocabr/docs/idtheft/201cmr17faqs.pdf.
burden on even small businesses that may need to retain outside consultants to help them institute
the required procedures.
The Massachusetts Attorney General’s Office has signaled that it will be taking a hardline approach
to enforcement of its consumer protection and privacy and data security requirements. In May
2012, the Massachusetts Attorney General reported resolving a suit it had filed against a hospital
that reportedly shipped several boxes of unencrypted back-up tapes containing individuals’ names, Social Security numbers, financial account numbers and health information to a service provider; the tapes were then reported missing. The suit alleged violation of both the Massachusetts Consumer
Protection Act and HIPAA. A consent judgment, announced in May 2012, included a $750,000
payment, including a $250,000 civil penalty, a $225,000 payment for an educational fund to be used
by the Attorney General to promote education concerning the protection of Personal Information
and Protected Health Information, and a credit of $275,000 to the hospital to reflect security
measures it took subsequent to the breach. The Attorney General reported that the hospital also
agreed to take a variety of steps, including a review and audit of security measures. This case also
demonstrates the importance placed by the Massachusetts Attorney General on both data security
procedures in place prior to the breach and responsiveness in addressing issues resulting from a
breach.281
Significantly smaller fines have been issued in situations where there was no evidence of unauthorized access to Personal Information, but the information was left unencrypted in violation of the Massachusetts Regulation.
The Massachusetts Attorney General has also indicated that it will scrutinize an entity’s response to
a breach and will consider whether the breached entity complied with the Payment Card Industry
Data Security Standards if there has been a breach of credit card numbers. In March 2011, it
announced that the owner of a group of popular restaurants in Massachusetts agreed to pay a
$110,000 fine in connection with a data breach that allegedly affected over 125,000 credit and debit
card holders. The Attorney General’s focus was on the reported fact that a forensic investigator was
not engaged until three weeks after the restaurant was informed by credit card processors of a
potential breach, and that the restaurant continued to accept credit and debit cards for several weeks
after it allegedly knew or had reason to know that its security had been breached. The complaint
also alleged that the restaurant had failed to comply with Payment Card Industry Data Security
Standards and that it did not have other necessary data security precautions in place to protect its
customer data.282
f. Privacy Policies and Protections: The California Example
In the U.S., California has often led the way in privacy statutory requirements, and continues to do
so in the area of required privacy policies, additional disclosures for companies that collect and
281 South Shore Hospital to Pay $750,000 to Settle Data Breach Allegations, Press Release of Attorney General Martha
Coakley, May 24, 2012, http://www.mass.gov/ago/news-and-updates/press-releases/2012/2012-05-24-south-shore-hospital-databreach-
settlement.html.
282 See Edwards Angell Palmer & Dodge (now known as Locke Lord LLP) Client Advisory, Massachusetts Attorney General
Breaking New Ground in Data Security Enforcement? Apr. 2011, http://media.lockelord.com/files/upload/2011-CA-MA-AGDataSecurity.
pdf.
share consumer information with third parties for marketing purposes, the protection of medical
records, and allowing minors to “erase” posts from social media sites.
i. California’s Shine the Light Law
California’s Shine the Light Act283 requires certain businesses to disclose their collection and usage
of consumer information, and provide consumers with the ability to opt out. It requires certain
businesses and non-profits with 20 or more employees that have an established business relationship
with a consumer to either:
Adopt a Privacy Policy of not disclosing certain information of its customers (defined
as personal information, but which under this Act is a term that is far broader in scope
than its typical use, and includes certain demographic information) to third parties for
that third party’s marketing purposes without the advance consent of its customers, or
give its customers the option of “opting out” of such disclosures, and must publicly
disclose the policy or option (for example, in its website Privacy Policy), or
Annually, upon request, identify the categories of personal information disclosed
regarding its users during the previous year, and the names and addresses of any third
parties to whom such information was disclosed, together with information sufficient
to identify the nature of the third parties’ businesses.
The Act applies to both online and offline collection and disclosures and there are specific
requirements for online and brick-and-mortar notices. There are nuances as to the businesses the
Act applies to, which are exempt, what reports consumers are entitled to receive, and how various
terms such as “third party” are applied. Failure to comply with the intricacies of the Act led to the
filing of multiple class action lawsuits against online retailers and publishers in 2013.
ii. California’s Online Privacy Protection Act
California also has specific statutory requirements for privacy policies of entities that collect PI of
California residents through the Internet. The California Online Privacy Protection Act (“OPPA”)
requires “an … online service that collects personally identifiable information through the Internet
about individual consumers residing in California who use or visit its commercial website or online
service … [to] conspicuously post [a] … privacy policy….”284
OPPA imposes specific requirements as to how notice of the privacy policy must be given, including the form of the notice and the link on the site or application home page. In March 2012, the California Attorney
General pointed to studies indicating that only 19% of the top 340 mobile applications post privacy
policies and only 5% of all mobile apps do so. She gave notice that OPPA’s requirements applied
to mobile apps and that failure to comply could lead to actions under California’s Unfair
Competition Law285 (which also permits class actions by consumers). She also brought an action against Delta Air Lines for failure to have a privacy policy posted on its mobile apps in compliance with OPPA. In May 2013, Delta succeeded in obtaining a dismissal of that case on
283 CA Civil Code § 1798.83.
284 CA Bus & Prof. Code Sec. 22575.
285 Codified at Cal. Bus. Prof. Code § 17200.
federal preemption grounds, based on federal laws prohibiting state regulation of airlines. Those
grounds would not apply to other industries, and the California Attorney General has appealed the
decision.286
Effective January 1, 2014, an amendment to OPPA requires website and online service operators to make certain disclosures regarding online tracking and targeted advertising. Prior
to the amendment, OPPA required a website and online service operator to disclose in its privacy
policy: (1) categories of personal information gathered; (2) parties with whom such information is
shared; (3) if the operator maintains a process for consumers to review and change such
information; (4) a description of the process by which the operator notifies users of changes to its
privacy policy; and (5) the effective date of the policy. After the amendment, in addition to the
foregoing, OPPA requires the operator to: (1) disclose how the operator responds to “Do Not
Track” signals or other mechanisms giving consumers the ability to exercise choice over the
collection of personal information over time and across third-party websites or online services, if
the operator engages in the collection of such information; and (2) disclose whether other parties
may collect such information over time and across different Web sites when a consumer uses the
operator’s site or service.
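As technical background only: a browser’s “Do Not Track” preference typically reaches an operator as an HTTP request header (DNT: 1). A minimal sketch of detecting that signal in a Python WSGI application follows; how an operator chooses to respond to the signal, and how it describes that response in its privacy policy, are the questions the amended OPPA addresses.

def honors_do_not_track(environ):
    """Return True if the incoming request carries a "DNT: 1" header (exposed in WSGI as HTTP_DNT)."""
    return environ.get("HTTP_DNT") == "1"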
iii. California’s Social Eraser Law
California passed a new law with respect to Privacy Rights for California Minors in the Digital
World that went into effect January 1, 2015.287 The law amended the California Business and Professions Code by adding Sections 22580 through 22582.
The law prohibits websites from advertising certain items to minors if the “marketing or advertising
is specifically directed to that minor based on information specific to that minor.” Among the
prohibited items are alcoholic beverages, firearms, ammunition, spray paint, tobacco and cigarettes,
fireworks, tattoos, drug paraphernalia, and obscene material.
In addition to the foregoing advertising restrictions, the law also implements what has been
described as a “Social Eraser.”
This provision requires operators of websites directed to minors or with actual knowledge that
minors are using the website (1) to permit registered users who are minors to remove, or request
removal of, content posted by the user (but not third parties); (2) provide notice that the information
may be removed; (3) provide clear instructions as to how to remove or request removal; and (4)
provide notice that such removal mechanisms do not ensure complete or comprehensive removal.
The operator however does not have to erase or remove content if: (1) federal or state law requires
its retention; (2) it was posted by a third party; (3) it is anonymous data; (4) the minor does not
follow the instructions provided by the website regarding how to remove or request removal; or (5)
the minor received compensation for the content.
286 The People of the State of California v. Delta Airlines, Inc., Case No. 12-526741, Superior Court for the State of California,
City and County of San Francisco. The decision is on appeal.
287 California Senate Bill 568 available at http://leginfo.legislature.ca.gov/faces/billNavClient.xhtml?bill_id=201320140SB568
Lastly, the operator is deemed to be in compliance if (1) it renders the information no longer visible to third parties (even if the information remains on its servers); or (2) the content remains visible, even after the operator has made it invisible, because a third party has copied or reposted it.
This law will likely be another potential source for class actions and regulatory enforcement
proceedings.
iv. Confidentiality of Medical Information Act
California Civil Code Section 56 et seq. codifies California’s “Confidentiality of Medical
Information Act” (“CMIA”). Under the CMIA, medical information is defined to mean “any
individually identifiable information, in electronic or physical form, in possession of or derived
from a provider of health care, health care service plan, pharmaceutical company, or contractor
regarding a patient’s medical history, mental or physical condition, or treatment. “Individually
identifiable” means that the medical information includes or contains any element of personal
identifying information sufficient to allow identification of the individual, such as the patient’s
name, address, electronic mail address, telephone number, or social security number, or other
information that, alone or in combination with other publicly available information, reveals the
individual’s identity.”288 The CMIA generally puts limits on the disclosure of patients’ medical
information by health plans, medical providers, pharmaceutical companies as well as other
businesses organized for the purpose of maintaining medical information.
CMIA also requires covered entities to protect the integrity of electronic medical information and to
automatically preserve records of deletions or changes. It also restricts the use of the information in
connection with certain types of marketing activity.
Unlike HIPAA, CMIA provides for a private right of action as well as statutory damages of $1,000 per violation without the need to prove actual damages.289 Recent court decisions, however, have
potentially made it more difficult for plaintiffs to secure these damages.
A California appellate court has held that health providers cannot be held liable for a negligent
disclosure of medical information if the plaintiff fails to establish that the information was actually
viewed by an unauthorized person.290 In a similar vein, the California Court of Appeal, Fourth District held that the mere disclosure of personal information without disclosure of actual
“medical information” (i.e., medical history, treatment, etc.) was not sufficient to entitle a plaintiff
to damages under CMIA.291
g. New Trend in State Regulation: Social Media
Social media is an increasing source of concern to regulators, both as a source of information about
individuals that can be culled by employers and other businesses investigating individuals, and as a
target for hackers.
288 CA Civil Code Section 56.05(j).
289 CA Civil Code Section 56.36(b)(1).
290 Sutter Health v. The Superior Court, No. C072591, 2014 WL 3589699 (Cal. Ct. App. July 21, 2014).
291 Eisenhower Medical Center v. The Superior Court, No. E058387, 2014 WL 2115216 (California Court of Appeal, Fourth
District, May 21, 2014).
Out of concern that applicants and employees will be required to provide access to their social
media accounts, several states have recently enacted legislation regulating access by employers
and/or educational institutions to individuals’ social media accounts, with similar legislation
pending in many other states.292 For example, effective January 1, 2013, California law restricts
companies from requesting or requiring that current or potential employees provide their social
media account login credentials, access personal social media in the presence of the employer, or
divulge any personal social media.293 California law also imposes similar restrictions upon public
and private colleges and universities located in the state with regard to social media of current or
potential students,294 and requires that private colleges and universities post their social media
privacy policies on the college or university’s website.295 Such restrictions are subject to limited
exceptions, such as where social media is reasonably believed to be relevant to an investigation of
allegations of employee misconduct, so long as the social media is used solely for purposes of that
investigation or related proceedings.296 (See discussion above entitled Social Media as Source of
Statutory and Regulatory Violations for a discussion of pertinent case law, and federal as well as
state scrutiny of employer and university requirements that applicants, employees and/or students
provide access to their social media accounts; see also examples of data breaches involving social
media.)
2. Federal Requirements
In addition to state laws and regulations, entities may also be subject to federal rules and regulations
mandating privacy and protection of Personal Information, and requiring that certain steps be taken
in the event of a data breach. The FTC currently asserts broad authority to regulate unfair or
deceptive acts or practices relating to privacy and data protection.297 Public companies may also
need to disclose cyber risks and incidents as part of their mandated disclosure of material
information to potential investors. A number of federal acts and regulations, such as the Fair Credit
Reporting Act (“FCRA”),298 also require protection of consumer information, depending on the
nature of the entity involved, the type of information disclosed, and the circumstances.
292 As discussed above in the Section on Social Media, according to NCSL, the National Conference of State Legislatures, as
of May 2015, states that have enacted such legislation intended to protect the privacy of prospective and current employees (and in
some states a student) include: Arkansas, California, Colorado, Delaware, Illinois, Louisiana, Maryland, Michigan, Nevada, New
Hampshire, New Jersey, New Mexico, Oklahoma, Oregon, Rhode Island, Tennessee, Utah, Virginia, Washington and Wisconsin, as
well as Guam. See http://www.ncsl.org/research/telecommunications-and-information-technology/state-laws-prohibiting-access-tosocial-
media-usernames-and-passwords.aspx.
293 Cal. Lab. Code § 980.
294 Cal. Educ. Code § 99120.
295 Cal. Educ. Code § 99122.
296 See, e.g., Cal. Lab. Code § 980(c).
297 See Edward F. Glynn, Jr., Edwards Wildman Client Advisory- Court Finds FTC Has Section 5 Unfairness Authority to
Bring Enforcement Action Against Hotel Chain Victimized By Cyber Intrusion, April 9, 2014,
http://www.edwardswildman.com/court-finds-ftc-has-section-5-unfairness-authority-04-09-2014/, now located at
www.lockelord.com.
298 FCRA (15 U.S.C. §1681, et seq.) regulates “Credit Reporting Agencies” and imposes certain restrictions and notice
requirements on the production and use of consumer reports. The FTC found in January 2013 that an app developer offering criminal
record searches for the cost of downloading a 99 cent app was a Credit Reporting Agency subject to FCRA and charged the
developer with three violations of that law. In the Matter of Filiquarian Publishing, LLC, No. 112 3195, Federal Trade Commission,
Agreement Containing Consent Order (Jan. 10, 2013).
In addition, sector-specific federal laws relating to privacy and data protection extend to specific
industries. For example, financial institutions are subject to specific federal requirements and, for
these purposes, the term “financial institutions” is defined very broadly. With respect to the
healthcare industry, certain health information is also subject to federal protections under HIPAA
and the HITECH Act. Educational institutions, government agencies, and telecommunications
entities are similarly subject to sector-specific federal privacy and data protection laws, as identified
below. Moreover, federal agencies such as the Securities and Exchange Commission (“SEC”),
Department of Justice (“DOJ”) and the Food and Drug Administration (“FDA”) have issued
guidances relating to privacy, data protection, cyber risk and incident response, as also referenced in
this section.
a. FTC Regulation of Privacy and Data Protection
Section 5 of the Federal Trade Commission Act, which applies to almost all companies engaged in
interstate commerce in the United States, prohibits unfair or deceptive acts or practices in or
affecting commerce. The FTC has brought numerous privacy and data security enforcement actions
against companies pursuant to such authority for (i) failure to provide appropriate data security to
reasonably protect customer information, which the FTC has interpreted to constitute an “unfair act
or practice;” and/or (ii) non-compliance with the companies’ privacy policies or representations
regarding security, which the FTC has interpreted to constitute a “deceptive act or practice.”
The FTC has brought such enforcement actions against, e.g., software vendors (Microsoft299 and
Guidance Software300), consumer electronics companies (Genica and Computer Geeks),301 mobile
app developers (Delta Air Lines),302 clothing retailers (Guess?303 and Life is good304), music retailers (Tower Records),305 animal supply retailers (Petco),306 general merchandise retail stores (BJ’s Wholesale,307 TJX Companies,308 and Sears309), shoe stores (DSW),310 entertainment establishments
(Dave & Busters311), social media sites (Twitter312 and Facebook313), and hotels (Wyndham).314
299 FTC v. Microsoft (Consent Decree, Aug. 7, 2002), available at www.ftc.gov/os/caselist/0123240/0123240.shtm
300 In the Matter of Guidance Software (Agreement Containing Consent Order, FTC File No. 062 3057, November 16, 2006),
available at www.ftc.gov/opa/2006/11/guidance.htm
301 In the Matter of Genica Corporation, and Compgeeks.com, FTC File No. 082-3113 (Agreement Containing Consent Order,
February 5, 2009), available at http://www.ftc.gov/enforcement/cases-proceedings/082-3113/gencia-corporation-compgeekscomalso-
dba-computer-geeks
302 See, “California Attorney General Sues Delta Air Lines for Failing to Have a Mobile App Privacy Policy,” at http://bit.ly/W11J4T
303 In the matter of Guess?, Inc. (Agreement containing Consent Order, FTC File No. 022 3260, June 18, 2003), available at
www.ftc.gov/os/2003/06/guessagree.htm
304 In the Matter of Life is good, Inc. (Agreement Containing Consent Order, FTC File No. 072 3046, January 17, 2008),
available at www.ftc.gov/os/caselist/0723046
305 In the Matter of MTS, Inc., d/b/a Tower records/Books/Video (Agreement containing Consent Order, FTC File No. 032-
3209, Apr. 21, 2004), available at www.ftc.gov/os/caselist/0323209/040421agree0323209.pdf
306 In the Matter of Petco Animal Supplies, Inc. (Agreement containing Consent Order, FTC File No. 042 3153, Nov. 7, 2004),
available at http://www.ftc.gov/enforcement/cases-proceedings/032-3221/petco-animal-supplies-inc-th-matter
307 In the Matter of BJ’s Wholesale Club, Inc. (Agreement containing Consent Order, FTC File No. 042 3160, June 16, 2005),
available at www.ftc.gov/opa/2005/06/bjswholesale.htm
308 In The Matter of The TJX Companies, Inc., FTC File No. 072-3055 (Agreement Containing Consent Order, March 27,
2008), available at www.ftc.gov/os/caselist/0723055
Two cases winding their way through the courts challenge the FTC’s authority to regulate privacy
and data protection pursuant to Section 5 of the FTC Act. The first arises from a complaint that the
FTC filed against Wyndham Worldwide Corporation in 2012, in which the FTC charged that
Wyndham violated the FTC Act’s prohibition on unfair and deceptive practices by failing to secure
customer information according to Wyndham’s privacy policy.315 Wyndham argued that the FTC
lacks the authority to regulate data security, and that the FTC failed to satisfy fair notice principles because
it had not issued any regulations concerning data security before bringing its unfairness claim. In
April 2014, a federal court sitting in New Jersey rejected Wyndham’s arguments when it denied
Wyndham’s motion to dismiss the FTC complaint and permitted the FTC’s case against Wyndham
to move forward.316 Wyndham moved for interlocutory review of the decision, which the New
Jersey federal judge granted in June 2014, after determining that businesses and consumers
nationwide would benefit from appellate review of the issue.317 As of May 2015, the issue is under
review by the 3rd Circuit.
The second case, involving LabMD, began as an enforcement action by the FTC on similar
grounds, based in large part upon information from security firm Tiversa that its routine scanning
activities found a LabMD patient file leaked outside the company, prompting the FTC’s
investigation of LabMD. By 2014, LabMD challenged the FTC’s authority in several
administrative and court proceedings. While challenges to the FTC’s authority to bring enforcement
actions based on issues of adequacy of a company’s privacy and data security procedures have so
far been largely unsuccessful, LabMD did obtain a victory over the FTC in a May 2014 decision by
an administrative law judge ordering the FTC to provide deposition testimony as to what data
security standards, if any, the FTC has published and intends to rely upon at trial to demonstrate that
LabMD’s data security practices were not reasonable or appropriate and in violation of Section 5 of
the FTC Act. 318 The case has undergone a number of twists and turns including procedural issues
309 In the Matter of Sears Holdings Management Corporation, FTC File No. 082 3099 (Agreement Containing Consent Order,
September 9, 2009), available at http://www.ftc.gov/os/caselist/0823099/index.shtm
310 In the Matter of DSW Inc., (Agreement containing Consent Order, FTC File No. 052 3096, Dec. 1, 2005), available at
www.ftc.gov/opa/2005/12/dsw.htm
311 In the Matter of Dave & Buster's, Inc., FTC File No. 082 3153 (Agreement Containing Consent Order, March 25, 2010),
available at http://www.ftc.gov/os/caselist/0823153/index.shtm
312 In the Matter of Twitter, Inc., FTC File No. 092 3093 (Agreement Containing Consent Order, June 24, 2010; Decision and
Order, March 11, 2011), available at http://www.ftc.gov/enforcement/cases-proceedings/092-3093/twitter-inc-corporation
313 In the Matter of Facebook, Inc., File No 092 3184 (Agreement Containing Consent Order, November 29, 2011), available
at http://ftc.gov/os/caselist/0923184/index.shtm
314 FTC v. Wyndham Hotels (lawsuit filed June 26, 2012 and pending), http://www.ftc.gov/opa/2012/06/wyndham.shtm
315 FTC Files Complaint Against Wyndham Hotels For Failure to Protect Consumers' Personal Information, Federal Trade
Commission, Press Release, June 26, 2012, http://www.ftc.gov/news-events/press-releases/2012/06/ftc-files-complaint-againstwyndham-
hotels-failure-protect
316 Federal Trade Commission v. Wyndham Worldwide Corp., 13-cv-01887, U.S. District Court, District of New Jersey
(Newark). See Edwards Wildman Palmer LLP Client Advisory, Court Finds FTC Has Section 5 Unfairness Authority To Bring
Enforcement Action Against Hotel Chain Victimized By Cyber Intrusion, April 2014, http://www.edwardswildman.com/Court-Finds-
FTC-Has-Section-5-Unfairness-Authority-04-09-2014/
317 Id. See Allison Grande, 3rd Circ. To Tackle FTC Data Security Power In Wyndham Row, Law 360, June 24, 2014,
http://www.law360.com/articles/551152/3rd-circ-to-tackle-ftc-data-security-power-in-wyndham-row.
318 In the Matter of LabMD, Inc., FTC Matter/File No. 102-3099, FTC Docket No. 9357, see May 1, 2014 order. See
LabMD, Inc. v. Federal Trade Commission, 1:12-cv-3005-WSD, United States District Court for the Northern District of Georgia,
Atlanta Division; LabMD Inc. v. Federal Trade Commission, Case No. 3-15267, United States Court of Appeals for the Eleventh
Circuit. The FTC suit against LabMD was scheduled to go to trial in May 2014. See also LabMD challenges FTC data security
as to whether there must first be exhaustion of FTC administrative procedures before substantive
issues can be addressed by the courts, and a Congressional investigation into Tiversa. As of May 2015, the matter remains pending.319
b. Gramm-Leach-Bliley Act
The Gramm-Leach-Bliley Act (“GLBA”) was enacted in 1999 to reform the financial services
industry and address concerns relating to consumer financial privacy. Title V of GLBA establishes
a minimum federal standard of privacy for consumer non-public personal information and applies to
financial institutions, including companies that were not traditionally considered to be financial
institutions, such as insurance companies.320 Prominent among the privacy requirements of the
GLBA and the regulations promulgated thereunder are requirements that financial institutions (i)
develop and adopt privacy and information security policies and practices, and (ii) send annual
privacy notices to customers.
GLBA required the state and federal governmental agencies that regulate financial institutions to
promulgate regulations to effectuate GLBA.321 Thus, a number of agencies issued privacy and data
security regulations pursuant to GLBA, applicable to financial institutions subject to their
jurisdiction, including banks,322 registered investment advisors and broker dealers,323 credit
unions,324 insurance companies,325 and others.326
The Dodd-Frank Wall Street Reform and Consumer Protection Act (the “Dodd-Frank Act”)
transferred rulemaking authority over privacy provisions of GLBA from the following regulatory
agencies to the newly created Consumer Financial Protection Bureau (the “CFPB”) effective July
2011: the FTC, the Board of Governors of the Federal Reserve System, Federal Deposit Insurance
Corporation, National Credit Union Administration, Office of the Comptroller of Currency, and
action in new lawsuit, Grant Gross, PC World, Mar. 21, 2014, http://www.pcworld.com/article/2110840/labmd-challenges-ftc-datasecurity-
action-in-new-lawsuit.html; Allison Grande, 11th Circ. Blow Prompts LabMD to Drop FTC Fight for Now, Law360,
February 25, 2014, http://www.law360.com/articles/513110/print?section=appellate; Allison Grande, FTC Told to Reveal Data
Security Expectations In LabMD Suit, Law360, May 2, 2014, http://www.law360.com/articles/534075/print?section=health; Allison
Grande, LabMD Ruling Puts FTC in Driver’s Seat on Data Security, Law360, May 13, 2014,
http://www.law360.com/articles/537543/print?section=corporate.
319 Id. See Allison Grande, LabMD Loses Bid To Exclude FTC Docs In Data Security Row, Law 360, April 17, 2015,
http://www.law360.com/articles/644656/labmd-loses-bid-to-exclude-ftc-docs-in-data-security-row; IAPP Daily Dashboard, Former Investigator: Tiversa Falsified Findings in LabMD Case, May 8, 2015, https://privacyassociation.org/news/a/former-investigatortriversa-falsified-findings…
320 See http://www.ftc.gov/privacy/privacyinitiatives/glbact.html on the applicability of Title V of GLBA to insurance
companies.
321 15 U.S.C. § 6801-6809.
322 E.g., “Interagency Guidance on Response Programs for Unauthorized Access to Customer Information and Customer
Notice” dated March 30, 2005 issued jointly by the five member agencies of the Federal Financial Institutions Examination Council –
the Board of Governors of the Federal Reserve System, Federal Deposit Insurance Corporation, National Credit Union
Administration, Office of the Comptroller of Currency, and Office of Thrift Supervision.
323 SEC Regulation S-P, 17 C.F.R. Part 248.
324 National Credit Union Administration regulations, 12 C.F.R. Part 748.
325 Most state insurance departments have promulgated regulations implementing GLBA with respect to their licensees that are
subject to GLBA, in most cases based upon model regulations issued by the National Association of Insurance Commissioners.
326 E.g., FTC Privacy Rule (16 C.F.R. Part 313) and Safeguards Rule (16 C.F.R. Part 314).
Office of Thrift Supervision.327 The SEC, the Commodity Futures Trading Commission, and state insurance departments continue to administer and enforce GLBA’s privacy requirements with respect to the financial institutions subject to their jurisdiction, and the FTC retains limited jurisdiction with respect to GLBA.
In light of the transfer of GLBA privacy rulemaking authority to the CFPB, the CFPB published an
interim final rule in December 2011 establishing a new Regulation P (Privacy of Consumer
Financial Information), combining content of existing regulations previously promulgated by the
FTC and banking regulators, and including technical and conforming changes to reflect the transfer
of authority to CFPB and certain other changes made by the Dodd-Frank Act.328
On October 20, 2014, the CFPB issued a final rule amending Regulation P to allow financial
institutions that do not engage in certain types of information-sharing activities to stop mailing an
annual privacy notice to consumers if they post the annual notices on their websites and meet
certain other conditions.329
i. Regulation S-P and SEC Enforcement of Privacy, Data Protection
and Cybersecurity
Regulation S-P, promulgated by the SEC pursuant to the GLBA, implements the privacy and data
protection requirements of the GLBA with respect to financial institutions subject to SEC
jurisdiction, including registered investment advisers and broker-dealers.330 Subject to limited
exceptions, Regulation S-P requires such entities to issue privacy notices to consumers regarding
their privacy policies and practices and include the categories of information collected and
disclosed; to whom information might be disclosed; an explanation of the consumer’s right to opt
out of certain disclosures; and policies and practices for protecting the confidentiality, security, and
integrity of nonpublic personal information. Regulation S-P also requires registered investment
advisers and broker-dealers regulated by the SEC to adopt written policies and procedures that
address administrative, technical and physical safeguards for the protection of customer records and
information, and imposes requirements for secure disposal of consumer reports, as defined by the
Fair Credit Reporting Act. Related SEC Regulations S-AM and S-ID impose limitations on affiliate
marketing, and impose duties regarding the detection, prevention and mitigation of identity theft
pursuant to the Red Flags Rule.
In April 2011, the SEC announced that it had, for the first time, assessed financial penalties against
individuals charged solely with violations of Regulation S-P.331 According to the SEC, the penalties were assessed following an investigation that found that, while a broker-dealer was winding down
its business operations in 2010, its former president and former national sales manager violated
customer privacy rules by improperly transferring customer records to another firm. The SEC also
found that the former chief compliance officer failed to ensure that the firm’s policies and
procedures were reasonably designed to safeguard confidential customer information.
327 Pub. L. No. 111-203, section 1061(a)(1).
328 12 C.F.R. Part 1016.
329 CFPB statement and final rule available at http://www.consumerfinance.gov/newsroom/cfpb-finalizes-rule-to-promote-more-effective-privacy-disclosures/
330 17 C.F.R. Part 248.
331 The SEC press release is available at: http://www.sec.gov/news/press/2011/2011-86.htm.
c. Federal Trade Commission “Red Flags” Rule
The FTC and other federal agencies that regulate financial institutions, including the Federal
Reserve Board, National Credit Union Administration, Office of the Comptroller of the Currency, and the
Securities and Exchange Commission, have issued regulations to implement sections 114 and 315
of the Fair and Accurate Credit Transactions Act of 2003 (“FACTA”).332
FACTA is federal legislation directed at protecting consumers against identity theft as well as
enhancing the accuracy of consumer report information. Among other things, it prohibits businesses from printing more than the last five digits of a credit card number on electronically printed receipts, and it allows consumers to obtain a free credit report every 12 months from each of the nationwide credit reporting agencies.
The regulations, which are commonly referred to as the Red Flags Rule (the “Rule”),333 require
covered entities to develop and implement a written Identity Theft Prevention Program to detect the
warning signs – the “red flags” – of identity theft in order to prevent and mitigate identity theft.
The Rule applies to “financial institutions” and “creditors” that maintain “covered accounts,” as
those terms are defined by the Rule. The FTC’s enforcement of the Rule was effective December
31, 2010 with regard to all covered entities.
On April 10, 2013, the SEC and Commodity Futures Trading Commission (“CFTC”) jointly
adopted rules and guidelines to transfer responsibility for promulgating and enforcing the Red Flags
Rule from the FTC to the SEC and the CFTC with respect to the entities they regulate. This includes SEC-registered investment advisers, broker-dealers, and mutual funds, and CFTC-regulated futures commission merchants, commodity trading advisors, and commodity pool operators.334 This transfer
of jurisdiction became effective in November 2013. (See Section III.2.b. above on Gramm-Leach-
Bliley Act).
i. Affected “Financial Institutions” and “Creditors”
The Rule applies to “financial institutions” and “creditors” that maintain “covered accounts,” as
those terms are defined by the Rule. “Financial institution” is defined as “a State or National bank,
a State or Federal savings and loan association, a mutual savings bank, a State or Federal credit
union, or any other person that, directly or indirectly, holds a transaction account . . . belonging to a
consumer.”335
As initially enacted, the Rule’s definition of the term “creditor” was very broad, causing concern
that the Rule would extend to entities other than traditional financial institutions that engage in
regular forbearance in the collection of debts or bills or permit multiple or extended payments. On
December 18, 2010, President Obama signed the Red Flag Program Clarification Act of 2010 into
law, amending the Fair Credit Reporting Act’s definition of the term “creditor” to narrow the scope
of the Rule. The revised definition of “creditor” specifically excludes those who advance funds on
behalf of a person for expenses incidental to a service provided by the creditor to that person. As a
332 Pub. Law 108-159, codified at 15 U.S.C. § 1681 et seq.
333 16 C.F.R. § 681.
334 See Edwards Wildman Client Advisory – Identity Theft Red Flag Rules Adopted by SEC and CFTC, Apr. 2013,
http://www.edwardswildman.com/newsstand/detail.aspx?news=3737.
335 15 U.S.C. § 1681a(t).
result, many professionals who had challenged the scope of the Rule, including lawyers,
accountants and healthcare professionals, are not subject to its requirements.336
The Rule now defines “creditor” as used in the Rule as follows:
“(A) means a creditor, as defined in section 702 of the Equal Credit Opportunity Act337
(15 U.S.C. 1691a), that regularly and in the ordinary course of business
(i) obtains or uses consumer reports, directly or indirectly, in connection with a credit
transaction;
(ii) furnishes information to consumer reporting agencies, as described in section
623, in connection with a credit transaction; or
(iii) advances funds to or on behalf of a person, based on an obligation of the person
to repay the funds or repayable from specific property pledged by or on behalf of the
person;
(B) does not include a creditor described in subparagraph (A)(iii) that advances funds on
behalf of a person for expenses incidental to a service provided by the creditor to that
person; and
(C) includes any other type of creditor, as defined in that section 702, as the agency
described in paragraph (1) having authority over that creditor may determine appropriate by
rule promulgated by that agency, based on a determination that such creditor offers or
maintains accounts that are subject to a reasonably foreseeable risk of identity theft.”338
The December 18, 2010 amendment limited the definition of a creditor to cover only creditors who
regularly, and in the ordinary course of business, carry out the following functions:
• Obtain or use consumer reports in connection with a credit transaction;
• Furnish information to consumer reporting agencies in connection with a credit transaction; or
• Advance funds to – or on behalf of – someone, except for funds for expenses incidental to a service provided by the creditor to that person.339
336 The FTC amended its regulations to update its definition of “creditor” to match that in the Red Flag Program Clarification
Act of 2010. 16 C.F.R. Part 681. The FTC’s amended rules went into effect on February 11, 2013. 77 Fed. Reg. 72712-15.
337 “Any person who regularly extends, renews, or continues credit; any person who regularly arranges for the extension,
renewal, or continuation of credit; or any assignee of an original creditor who participates in the decision to extend, renew, or
continue credit.” 15 U.S.C. § 1691a.
338 15 U.S.C. § 1681m(e).
339 See also http://www.ftc.gov/bcp/edu/microsites/redflagsrule/index.shtml.
ii. Covered Accounts
Significantly, the definition of “covered accounts” under the Red Flags Rule is also broad. It has
two parts:
(i) An account that a financial institution or creditor offers or maintains, primarily for
personal, family or household purposes, that involves or is designed to permit
multiple payments or transactions, such as a credit card account, mortgage loan,
automobile loan, margin account, cell phone account, utility account, checking
account, or savings account; and
(ii) Any other account that the financial institution or creditor offers or maintains for
which there is a reasonably foreseeable risk to customers or to the safety and
soundness of the financial institution or creditor from identity theft, including
financial, operational, compliance, reputation, or litigation risks.340
The second part of this definition extends the Rule’s scope to any other account for which there is a reasonably foreseeable risk of identity theft.
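For an organization assessing whether the Rule reaches it, the inquiry described above reduces to two questions: is the entity a “financial institution” or “creditor,” and does it maintain “covered accounts”? The following is a minimal, illustrative Python sketch of that screening logic; the attribute and function names are hypothetical simplifications of the regulatory definitions and are not a substitute for legal analysis of the Rule’s defined terms.

from dataclasses import dataclass

@dataclass
class Entity:
    # Hypothetical, simplified attributes standing in for the Rule's defined terms.
    is_financial_institution: bool               # holds consumer "transaction accounts"
    uses_consumer_reports_for_credit: bool       # creditor prong (i)
    furnishes_info_to_cras_for_credit: bool      # creditor prong (ii)
    advances_funds_to_be_repaid: bool            # creditor prong (iii)
    advances_only_incidental_expenses: bool      # subparagraph (B) carve-out
    offers_consumer_multipayment_accounts: bool  # covered account, part (i)
    other_accounts_with_id_theft_risk: bool      # covered account, part (ii)

def is_creditor(e: Entity) -> bool:
    """Post-2010 'creditor' test, ignoring agency-specific additions under subparagraph (C)."""
    advances_funds = e.advances_funds_to_be_repaid and not e.advances_only_incidental_expenses
    return (e.uses_consumer_reports_for_credit
            or e.furnishes_info_to_cras_for_credit
            or advances_funds)

def maintains_covered_accounts(e: Entity) -> bool:
    return e.offers_consumer_multipayment_accounts or e.other_accounts_with_id_theft_risk

def red_flags_program_required(e: Entity) -> bool:
    # A written Identity Theft Prevention Program is required only if the entity is a
    # financial institution or a creditor AND it maintains covered accounts.
    return (e.is_financial_institution or is_creditor(e)) and maintains_covered_accounts(e)

Even where such a screen suggests the Rule does not apply, the covered-account determination turns on a “reasonably foreseeable risk” of identity theft and should be revisited as an entity’s accounts and practices change.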
The Rule is designed to be risk-based and to take into account the burden that the Red Flags Rule
could impose upon an entity that has only a small risk of identity theft. The FTC makes clear that
higher-risk entities should have a more comprehensive Identity Theft Prevention Program, and low-risk
entities are permitted to have a less complex program, but all entities covered by the Rule are
required to establish a program.
In recognition of the burden that compliance with the Red Flags Rule may impose on certain
entities, the FTC released a “Do-It-Yourself” Red Flag program for entities that are at low risk for
identity theft.341
d. Federal Information Security Management Act - FISMA
“FISMA” refers to the federal information security act directed at federal agencies, initially enacted
in 2002 and updated in 2014.
The Federal Information Security Management Act of 2002 (“FISMA”)342 is a United States federal
law enacted as Title III of the E-Government Act of 2002, and focuses on the importance of
information security to the economic and national security interests of the U.S. FISMA requires
each federal agency to develop, document, and implement an agency-wide program to provide
information security for the information and information systems that support the operations and
assets of the agency, including those provided or managed by another agency, contractor, or other
source.343
340 16 C.F.R. § 681.2(b)(3).
341 Available at http://www.ftc.gov/bcp/edu/microsites/redflagsrule/get-started.shtm.
342 44 U.S.C. § 3541, et seq.
343 http://csrc.nist.gov/groups/SMA/fisma/overview.html.
In December 2014, President Obama signed the Federal Information Security Modernization Act of
2014 (“FISMA 2014”) into law. 344 FISMA 2014 updates and modernizes FISMA, assigning the
Department of Homeland Security (“DHS”) an administrative role and modifying reporting
requirements, among other changes. 345
Pursuant to FISMA, the Office of Management and Budget submits an annual report to Congress on federal agencies’ implementation of the statute, providing an update on information security initiatives, a review of the year’s information security incidents, and the federal government’s progress in meeting key information security measures.346
e. Department of Homeland Security - SAFETY Act
The Support Anti-Terrorism by Fostering Effective Technologies Act (“SAFETY Act”) of 2002, a
federal law enacted as part of the Homeland Security Act of 2002, Public Law 107-296, provides
certain legal liability protections for sellers of qualified anti-terrorism technologies (“QATTs”) in
the event of a terrorist attack.347 Enacted to encourage the use of anti-terrorism technologies
following the 9/11 attacks, the SAFETY Act protects manufacturers and sellers of a broad range of
QATTs, including products, services and software, or combinations thereof, to whom a Designation
or Certification has been issued, as those terms are defined.348 According to its SAFETY Act Fact
Sheet, the DHS has approved over 748 applications for SAFETY Act protections.349
f. The Health Insurance Portability and Accountability Act - HIPAA
i. Overview of HIPAA and the HITECH Act
Following passage of the Health Insurance Portability and Accountability Act of 1996 (“HIPAA”),
the U.S. Department of Health and Human Services (“HHS”) issued Standards for Privacy of
Individually Identifiable Health Information (the “Privacy Rule”), Security Standards (the “Security
Rule”), and the HIPAA Enforcement Rule.350 The intent of these regulations was and is to protect
the privacy of individually identifiable health information that is maintained or transmitted in any
form, whether electronic or not, and that relates to: (1) a past, present, or future physical or mental
health condition; (2) provision of health care; or (3) past, present, or future payment for the
provision of health care to an individual.351 With some limited exceptions, this information is
generally categorized as “protected health information” or “PHI.”352
344 44 U.S.C. § 3551 et seq.
345 See http://www.dhs.gov/federal-information-security-management-act-fisma.
346 For the report submitted in February 2015 for Fiscal Year 2014 (October 1, 2013 through September 30, 2014), see
https://www.whitehouse.gov/sites/default/files/omb/assets/egov_docs/final_fy14_fisma_report_02_27_2015.pdf.
347 6 U.S.C. § 441 et seq.
348 See https://www.safetyact.gov/pages/homepages/Home.do, and the SAFETY Act FAQs at
https://www.safetyact.gov/jsp/faq/samsFAQSearch.do?action=SearchFAQForPublic.
349 Id.
350 42 U.S.C. § 201 et seq. (HIPAA), 45 C.F.R. Part 160 and Subparts A and E of Part 164 (Privacy Rule); 45 C.F.R. Parts 160
and 164 subparts A and C (the “Security Rule”); 45 C.F.R. Part 160, Subparts C, D, and E (the “Enforcement Rule”).
351 45 C.F.R. § 160.103.
352 Id.
On February 17, 2009, the Health Information Technology for Economic and Clinical Health Act, enacted under Title XIII of the American Recovery and Reinvestment Act of 2009, Public Law 111-5 (the "HITECH Act"), was signed into law and contained numerous provisions affecting the privacy and
security of PHI. The final rule implementing most amendments mandated by the HITECH Act was
issued on January 25, 2013 (the “Omnibus Final Rule”).353 In addition to changes to the Privacy
and Security Rules, the Omnibus Final Rule updated the penalty structure and enforcement scheme
of HIPAA’s Enforcement Rule and finalized breach notification requirements established by the
HITECH Act (the “HIPAA Breach Notification Rule”).354 The changes adopted through the
Omnibus Final Rule are now in effect.
HIPAA and its implementing regulations apply to health plans, healthcare clearinghouses, and
healthcare providers who engage in electronic data interchange using one or more of the “standard
transactions,” as defined by HIPAA regulations governing electronic data interchange (collectively
referred to as “covered entities”).355 Pursuant to the HITECH Act and Final Omnibus Rule,
“business associates” who perform functions or activities on behalf of covered entities and create,
maintain, receive or transmit PHI in relation to such functions or activities are now directly
regulated by the Security Rule and parts of the Privacy Rule.356 This relationship is also governed
by contractual obligations, typically outlined in the parties’ business associate agreement, that seek to ensure the privacy and security of PHI created, maintained, received or transmitted on behalf of the covered entity. To ensure that PHI handled by downstream subcontractors is likewise protected,
business associates’ subcontractors who create, maintain, receive or transmit PHI in relation to
functions or activities performed on behalf of a business associate are also regulated under HIPAA
as business associates.357
ii. HIPAA Privacy and Security Rules
The Privacy Rule governs the use and disclosure of an individual’s PHI by covered entities and
their business associates and sets standards for an individual’s right to understand and control some
aspects of how his or her PHI is used and disclosed.358 Unless certain exceptions apply, the Privacy
Rule requires a covered entity to obtain an individual’s authorization before using or disclosing that
individual’s PHI.359 Notably, however, a covered entity may generally use or disclose PHI for its
own treatment, payment, or health care operations without authorization.360 A covered entity may
also disclose PHI for treatment activities of another health care provider.361 The Privacy Rule also
requires a covered entity to mitigate, to the extent practicable, any harmful effect that is caused by
an impermissible use or disclosure of PHI.362 From an administrative standpoint, covered entities
are required to designate a privacy official to oversee the entity’s implementation of HIPAA privacy
353 78 Fed. Reg. 5566 (Jan. 25, 2013).
354 45 C.F.R. Part 164 subpart D (the “Breach Notification Rule”).
355 45 C.F.R. § 160.103.
356 Id.
357 Id.
358 See 45 C.F.R. §§ 164.520-164.528.
359 Id. §§ 164.508, 164.512.
360 Id. § 164.506.
361 Id.
362 Id. § 164.530(f).
policies and procedures, train all members of its workforce on such policies and procedures, have a
complaint reporting process, and sanction workforce members who fail to comply with the entity’s
HIPAA privacy policies and procedures.363
The Security Rule requires covered entities and business associates to adopt specified standards for
protecting electronically stored and transmitted PHI, including administrative safeguards (written
procedures and protocols, along with business associate agreements),364 physical safeguards
(limitations on physical access to hardware, media, and software containing PHI),365 and technical
safeguards (protective controls for information systems and networks).366 These security standards
are written to be flexible and scalable to covered entities’ and business associates’ size, complexity,
capabilities, technical infrastructure, hardware, and software security capabilities.367 Nevertheless,
it is important for covered entities and business associates to adopt practices that meet all “required”
standards set forth in the Security Rule and, unless otherwise justified, the “addressable” standards
as well. One important aspect of the Security Rule is the requirement for a HIPAA security risk
assessment, which requires a covered entity or business associate to conduct accurate and thorough
assessments of potential risks and vulnerabilities to the confidentiality, integrity, and availability of
electronic PHI held by the organization.368 Failure to conduct a comprehensive risk assessment has
been a common deficiency cited by OCR in recent enforcement actions. To assist covered entities
and business associates with conducting their security risk assessment, in March 2014, HHS
released an online Risk Assessment Tool.369
iii. Breach Notification Rules
The HITECH Act directed the FTC and HHS to issue regulations with respect to breaches involving
unsecured health information. The FTC’s final Health Breach Notification Rule became effective
September 24, 2009, but compliance was not required until February 22, 2010 (the “FTC Rule”).370
HHS issued an interim final rule on the Breach Notification for Unsecured Protected Health
Information for covered entities and business associates, effective September 23, 2009 (the “HIPAA
Breach Notification Rule”). The Omnibus Final Rule updated and finalized the HIPAA Breach
Notification Rule as of March 26, 2013. The breach notification requirements promulgated by the
FTC and HHS are discussed below.
iv. FTC Health Breach Notification Rule
The FTC Rule requires vendors of personal health records (“PHR”) and related entities to notify
affected individuals, the FTC and, potentially, the media of a breach of security of unsecured “PHR
identifiable health information.” PHR identifiable health information is defined as individually
identifiable health information that is provided by or on behalf of an individual and either identifies
363 Id. § 164.530.
364 Id. § 164.308.
365 Id. § 164.310.
366 Id. § 164.312.
367 Id. § 164.306(b).
368 Id. § 164.308(a)(1)(ii).
369 This tool is available at http://www.healthit.gov/providers-professionals/security-risk-assessment.
370 16 C.F.R. Part 318. Published in the Federal Register, available at http://www.ftc.gov/healthbreach/.
the individual or may be used to identify the individual.371 Based on how the FTC defines vendors
of PHR and PHR related entities, businesses offering online services that allow consumers to store
and organize medical information, web-based applications that help consumers manage medications, and websites offering online personalized health checklists are subject to the FTC notification
requirements.372
In addition to PHR vendors and related entities, the FTC Rule regulates certain third party service
providers that provide services to a vendor of PHR or to a PHR related entity and access, maintain,
retain, modify, record, store, destroy, or otherwise hold, use, or disclose unsecured PHR identifiable
health information as a result of such services.373 If a vendor of PHR hires a business to provide
billing, debt collection, or data storage services related to health information, that business is a
third-party service provider and covered by the FTC Rule.
To assess whether a breach has occurred under the FTC Rule, PHR vendors, related PHR entities
and their third party service providers are to evaluate the following factors, which are illustrated in the sketch following the list:
Whether the potential breach involved “unsecured” PHR - The information is
“unsecured” if it is not protected through the use of a technology or methodology
recommended in HHS guidance that renders the information unusable, unreadable,
or indecipherable to unauthorized individuals.374
Whether there has been an unauthorized access or acquisition of the unsecured
PHR - According to the FTC Rule, when there is unauthorized access to data,
unauthorized acquisition is presumed unless there is reliable evidence showing that
there has not been, or could not reasonably have been, unauthorized acquisition of
such information.375
Whether the individual authorized the access. 376
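The factors above lend themselves to a short decision sketch. The following illustrative Python function reflects the Rule’s presumption, described above, that unauthorized access implies unauthorized acquisition unless reliable evidence shows otherwise; the parameter names are hypothetical shorthand, and each factor in practice requires a fact-specific assessment.

def ftc_rule_breach_occurred(
    phr_info_is_unsecured: bool,             # not rendered unusable/unreadable per HHS guidance
    unauthorized_access: bool,               # someone accessed the information without authorization
    reliable_evidence_no_acquisition: bool,  # e.g., forensic evidence the data was never acquired
    individual_authorized_access: bool,      # the individual authorized the access in question
) -> bool:
    """Illustrative application of the FTC Rule's breach factors described above."""
    if not phr_info_is_unsecured:
        return False  # secured information (per HHS guidance) falls outside the notification duty
    if individual_authorized_access:
        return False  # access authorized by the individual is not a breach
    if not unauthorized_access:
        return False
    # Unauthorized acquisition is presumed from unauthorized access,
    # rebuttable only by reliable evidence to the contrary.
    return not reliable_evidence_no_acquisition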
Upon discovering a breach of security of unsecured PHR identifiable health information, vendors of
PHR and PHR related entities are responsible for the following notifications, which may be delayed
for law enforcement purposes (these triggers are summarized in the sketch following the list):377
Notice to the Individual: Notice must be provided to an affected individual without
unreasonable delay and in no case later than 60 calendar days after the discovery of
the breach.378
371 Id. § 318.2(e).
372 Id. §§ 318.2(j); 318.2(f).
373 Id. § 318.2(h).
374 16 C.F.R. § 318.2(e) and (i).
375 Id. § 318.2(a).
376 Id.
377 Id. §§ 318.3, 318.5.
378 Id. §§ 318.4(a), 318.4(c), 318.5(a).
Method - Notice may be made by: (1) first-class mail to the individual’s last known
address; (2) email if the individual did not choose to receive first-class mail; or (3)
substitute notice, if the contact information for 10 or more individuals is insufficient
or outdated, by conspicuous posting on the home page of the entity’s website for a
period of 90 days or in major print or broadcast media, including in the areas where
the affected individuals likely reside.379 The notice must include a toll-free phone
number, which must remain active for at least 90 days.380 If notification requires
urgency because of possible imminent misuse of the information, notification may
also be provided by telephone or other means.381
Content - The notice must contain (1) a brief description of what happened,
including the date of the breach and the date of discovery of the breach, if known; (2)
a description of the types of unsecured PHR identifiable health information involved
in the breach; (3) steps that individuals should take to protect themselves from
potential harm resulting from the breach; (4) a brief description of what the entity is
doing to investigate the breach, mitigate harm, and protect against future breaches;
and (5) contact information for individuals to ask questions or obtain additional
information, including a toll-free number, email address, website, or postal
address.382
Notice to the FTC: If the breach involves the unsecured PHR identifiable health
information of 500 or more individuals, notice to the FTC must be provided no later
than ten business days after the date of discovery.383 If the breach involves fewer
than 500 individuals, the entity may instead maintain a log of the breach and must
submit it annually to the FTC no later than 60 calendar days following the end of the
calendar year.384 The FTC has issued a standard form to make it easier for
companies to report a breach to the FTC.385
Notice to the Media: If 500 or more residents of a state or jurisdiction are, or are
reasonably believed to be, affected by the breach, the entity must provide notice to
prominent media outlets in the state or jurisdiction.386
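The notification obligations above turn on a handful of thresholds. The sketch below is an illustrative Python summary of those triggers as described in this section; the data class and its field names are hypothetical, and the content, method, and substitute-notice requirements discussed above are omitted.

from dataclasses import dataclass
from typing import Optional

@dataclass
class FtcRuleNotifications:
    notify_individuals_within_days: int            # calendar days from discovery of the breach
    notify_ftc_within_business_days: Optional[int] # None means an annual log/report suffices
    maintain_annual_breach_log: bool
    notify_prominent_media: bool

def ftc_rule_notification_plan(total_affected: int,
                               max_affected_in_any_state: int) -> FtcRuleNotifications:
    """Summarizes the notification triggers described above for the FTC Rule."""
    large_breach = total_affected >= 500
    return FtcRuleNotifications(
        # Individuals: without unreasonable delay, and in no case later than 60 calendar days.
        notify_individuals_within_days=60,
        # FTC: within 10 business days if 500 or more individuals are involved;
        # otherwise, log the breach and report annually within 60 days of year end.
        notify_ftc_within_business_days=10 if large_breach else None,
        maintain_annual_breach_log=not large_breach,
        # Media: if 500 or more residents of a single state or jurisdiction are,
        # or are reasonably believed to be, affected.
        notify_prominent_media=max_affected_in_any_state >= 500,
    )

For example, under these assumptions a breach affecting 1,200 individuals spread across several states, with no single state reaching 500 residents, would call for individual notice within 60 days and FTC notice within ten business days, but no media notice.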
Third party service providers must notify the vendor of PHR or PHR related entity that is ultimately
responsible for these notifications under the FTC Rule.387 Third party service providers are not
379 Id. § 318.5(a).
380 Id.
381 Id.
382 Id.
383 Id. § 318.5(c).
384 Id.
385 The form is available at http://www.ftc.gov/healthbreach/.
386 16 C.F.R. § 318.5(b).
387 Id. § 318.3(b).
directly responsible under the regulation for making the notifications to individuals, the FTC, and
media outlets.388
The FTC will treat each violation of the FTC Rule as an unfair or deceptive act or practice that may
result in a civil penalty of up to $16,000 per violation.389 To date, only a small number of entities
have filed breach notices with the FTC, and it does not appear that the FTC has yet assessed civil
penalties against an entity for failing to report breaches in accordance with the FTC Rule.
v. HIPAA Breach Notification Rule
The HIPAA Breach Notification Rule outlines the requirements for covered entities and business
associates to follow when a breach of unsecured PHI occurs.390 Although the mechanics of the
notification process required by FTC and HHS are nearly identical, the HIPAA Breach Notification
Rule provides significantly more detail for addressing whether a breach of PHI actually occurred.
When investigating a potential breach of unsecured PHI, conducting and documenting a thorough
assessment of the incident and confirming that the incident falls within the definition of a breach is
critical.
A breach under the HIPAA Breach Notification Rule is defined as the acquisition, access, use or disclosure of unsecured PHI that is impermissible under the Privacy Rule and that compromises the security or privacy of the PHI.391 A covered entity or business associate is to conduct the following four-prong inquiry, illustrated in the sketch following the list, to determine if a breach has occurred:
Does the potential “breach” involve unsecured PHI - PHI is unsecured if it is not
rendered unusable, unreadable, or indecipherable to unauthorized individuals through the
use of a technology or methodology specified in guidance published by HHS.392
Has there been an impermissible acquisition, access, use or disclosure - A covered entity
or business associate must determine whether the alleged impermissible acquisition, access,
use or disclosure violates the HIPAA Privacy Rule.
Is the probability low that the PHI was compromised - An impermissible acquisition,
access, use or disclosure of PHI is presumed to be a breach unless the covered entity or
business associate demonstrates that there is a low probability that the PHI has been
compromised based on a risk assessment of at least the following factors: (1) the nature and
extent of the PHI involved, including the types of identifiers and the likelihood of reidentification;
(2) the unauthorized person who used the PHI or to whom the disclosure was
388 Id.
389 Id. §318.7.
390 45 C.F.R. Part 164 subpart D.
391 45 C.F.R. § 164.402.
392 Id. § 164.402; See Dept. of Health and Hum. Serv., Guidance Specifying the Technologies and Methodologies that Render
Protected Health Information Unusable, Unreadable, or Indecipherable to Unauthorized Individuals for the Purposes of the Breach
Notification Requirements under the HITECH Act (April 17, 2009), available at
http://www.hhs.gov/ocr/privacy/hipaa/administrative/breachnotificationrule/brguidance.html.
made; (3) whether the PHI was actually acquired or viewed; and (4) the extent to which the
risk to the PHI has been mitigated.393
Does an exception apply - There are three exceptions to the definition of “breach.” Two of
these exceptions generally capture benign incidents of unintentional acquisition, access, use
or disclosure of PHI by or to a workforce member or person acting under the authority of a
covered entity or business associate. To meet these exceptions, the PHI cannot be further
used or disclosed in a manner not permitted by the Privacy Rule. The third exception applies
if the covered entity or business associate has a good faith belief that the unauthorized
person to whom the impermissible disclosure was made would not have been able to retain
the information.394
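Because the third prong turns on a documented risk assessment, organizations sometimes reduce the inquiry to a checklist completed and retained for each incident. The following is a minimal, illustrative Python sketch of the four-prong analysis described above; the field names are hypothetical shorthand, and the “low probability of compromise” conclusion is in practice a documented, qualitative judgment based on the four factors, not a simple flag.

from dataclasses import dataclass

@dataclass
class IncidentAssessment:
    phi_is_unsecured: bool               # prong 1: PHI not rendered unusable per HHS guidance
    violates_privacy_rule: bool          # prong 2: impermissible acquisition, access, use or disclosure
    low_probability_of_compromise: bool  # prong 3: conclusion of the documented four-factor risk assessment
    exception_applies: bool              # prong 4: one of the three regulatory exceptions applies

def is_reportable_hipaa_breach(a: IncidentAssessment) -> bool:
    """Illustrative four-prong inquiry under the HIPAA Breach Notification Rule.

    An impermissible use or disclosure of unsecured PHI is presumed to be a breach
    unless the documented risk assessment demonstrates a low probability that the
    PHI has been compromised, or an exception applies.
    """
    if not a.phi_is_unsecured:
        return False  # secured PHI (e.g., encrypted per HHS guidance) is outside the Rule
    if not a.violates_privacy_rule:
        return False  # a permissible use or disclosure is not a breach
    if a.exception_applies:
        return False  # benign, unintentional incidents within the three exceptions
    return not a.low_probability_of_compromise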
Upon discovering a breach of unsecured PHI, covered entities must notify affected individuals,
HHS and, if more than 500 residents of a state or jurisdiction are affected, the media.395 Business
associates who discover a breach of unsecured PHI must notify the covered entity of such breach.396
These notifications are generally required within 60 days of discovery of the breach, although
breaches involving fewer than 500 individuals can be logged by the covered entity and reported to
HHS annually.
The HIPAA Breach Notification Rule is nearly identical to the FTC Rule in terms of timeliness of
notification, method of notification, and notice to the media. Some important differences include
the following:
Instead of notifying the FTC, the HIPAA Breach Notification Rule requires covered entities to notify the Secretary
of HHS. All notifications must be submitted to the Secretary through the OCR’s web
portal.397
The HIPAA Breach Notification Rule expressly requires notices to be in plain language
(although it can be presumed that the FTC would expect notices to be provided in plain
language as well).398
If the breach affects 500 or more individuals, notice
must be made to HHS contemporaneously with the notification to affected individuals.399
HIPAA violations, including those related to the HIPAA Breach Notification Rule, may result in significant civil money penalties, with a maximum penalty of $1.5 million per year for violations of the same HIPAA provision.400
393 Id. § 164.402(2).
394 Id. § 164.402(1).
395 45 C.F.R. §§ 164.404, 164.406, 164.408.
396 Id. § 164.410.
397 This website is available at http://www.hhs.gov/ocr/privacy/hipaa/administrative/breachnotificationrule/brinstruction.html.
398 45 C.F.R. §§ 164.404(c)(2).
399 Id. § 164.408(b).
400 Id. § 160.404.
vi. HIPAA and HITECH Act Enforcement
(1) Regulatory Enforcement
Since passage of the HITECH Act, new developments relating to both the compliance and
enforcement environment surrounding HIPAA continue to emerge. Fines for violations of the
HIPAA Privacy, Security and Breach Notification Rules have significantly increased due to the tiered penalty structure adopted under the Omnibus Final Rule, which generally ranges from $100 per violation to $1.5 million.401 Although OCR’s more recent enforcement efforts appear to have somewhat plateaued, reportedly because of OCR leadership changes and limited auditing resources, OCR continues to investigate complaints and reports of breaches and to issue penalties for HIPAA violations. Enforcement is anticipated to broaden through 2015 with implementation of the second phase of OCR’s audit program, which is expected to focus on both covered entities and business associates. With the growing prevalence of large health information breaches,402 focus on HIPAA enforcement activities is expected to grow.
Small breaches as well as large ones are subject to OCR scrutiny. The year 2013 began with HHS
announcing its first HIPAA breach settlement involving fewer than 500 patients, in which a hospice
agreed to pay $50,000 to settle potential violations of the Security Rule in connection with a breach
of unsecured electronic PHI arising from theft of an unencrypted laptop.403 In April 2015, OCR announced a $125,000 settlement with a small, single-location pharmacy in Denver, Colorado, whose deficient HIPAA compliance program resulted in the disposal of unsecured documents containing the PHI of 1,610 patients. This settlement emphasizes OCR’s expectations
that covered entities, regardless of size, develop appropriate policies and procedures and training
programs that address requirements of the HIPAA Privacy and Security Rules.404
OCR has repeatedly expressed concern about improper disposal of PHI and risks to electronic data
on mobile devices, laptops and other hardware that is susceptible to theft, loss or improper disposal.
In June 2014, OCR announced an $800,000 settlement against a small hospital system in Indiana for
leaving 71 boxes containing PHI in a retired doctor’s driveway. OCR’s press release announcing
this settlement emphasized the importance “that HIPAA covered entities and their business
associates protect patient information during its transfer and disposal.”405 OCR communicated
similar sentiments in August 2013 upon announcing a $1.2 million settlement with a health plan for
returning multiple photocopiers to leasing agents without erasing the data contained on the copier
hard drives.406 OCR has also routinely emphasized its expectation that laptops and mobile devices
401 Id.; See Edwards Wildman Client Advisory, HIPAA Enforcement Rule Sets Standards for Penalties: Will Your HIPAA
Compliance Program Stand Up?, Mar. 14, 2013, http://healthcare.edwardswildman.com/blog.aspx?entry=4655.
402 See, e.g., reports of breach announced by Anthem, Inc. in early 2015, potentially affecting as many as 80 million
individuals. https://www.anthemfacts.com; http://www.naic.org/documents/anthem_data_breach.htm.
403 U.S. Department of Health & Human Services, HHS announces first HIPAA breach settlement involving less than 500
patients, Jan. 2, 2013, www.hhs.gov/news/press/2013pres/01/20130102a.html.
404 U.S. Dept. of Health & Hum. Servs, HIPAA Settlement Highlights the Continuing Importance of Secure Disposal of Paper
Medical Records, April 28, 2015, http://www.hhs.gov/ocr/privacy/hipaa/enforcement/examples/cornell/cornell-press-release.html.
405 U.S. Dept. of Health & Hum. Servs, $800,000 HIPAA Settlement in Medical Records Dumping Case, June 23, 2014,
http://www.hhs.gov/news/press/2014pres/06/20140623a.html.
406 U.S. Dept. of Health & Hum. Servs, HHS Settles with Health Plan in Photocopier Breach Case, August 14, 2013,
http://www.hhs.gov/news/press/2013pres/08/20130814a.html.
containing PHI be encrypted. Through an almost $2 million settlement announced in April 2014,
OCR further underscored its concerns regarding risks to the security of patient information posed
by unencrypted laptop computers and other mobile devices.407
Over the past few years, OCR has also expressed concern that many covered entities, especially
providers, have not completed a comprehensive risk assessment as required by the Security Rule
and warned that such violations may result in daily fines that could amount to hundreds of
thousands or even millions of dollars in civil penalties. OCR followed through with these warnings
in May 2014 when it announced a settlement with two large medical institutions involving
payments of $4.8 million, arising from an investigation following the institutions’ submission of a
joint data breach report in 2013 regarding the disclosure of PHI of 6,800 individuals.408
Demonstrating the focus of OCR investigations on pre-breach security practices as well as post-breach response, the Resolution Agreements focus largely on the medical institutions’ alleged failures to assess and monitor IT equipment, applications and data systems utilizing PHI, including data systems linked to hospital patient databases, and include corrective action plans. Similarly, in
December 2014, OCR announced a $150,000 settlement tied to a five-facility nonprofit
organization’s breach of unsecured electronic PHI affecting 2,743 individuals due to malware
compromising the security of its IT resources. In response to this breach, OCR emphasized the
need to review systems for unpatched vulnerabilities and unsupported software that can leave
patient information susceptible to malware and other risks.409
In addition to OCR enforcement efforts, the HITECH Act also permits a state attorney general to
pursue an action against an entity that is subject to HIPAA when the attorney general “has reason to
believe that an interest of one or more of the residents of [a] state has been or is threatened or
adversely affected by any person who violates a [privacy or security provision under HIPAA].”410
Such lawsuits may implicate both state and federal law violations. For example, in January 2012, in
the first state HIPAA enforcement action against a business associate, the Minnesota Attorney
General filed a civil lawsuit411 against Accretive Health, Inc., a provider of debt collection and other
services for hospitals. The lawsuit, which alleged multiple HIPAA violations as well as
inappropriately aggressive debt collection practices, was later settled when Accretive agreed to pay
$2.5 million to the State of Minnesota to establish a restitution fund to compensate affected patients.
Accretive was also required to stop doing business in Minnesota for two years, which was projected to cost the company approximately $25 million in annual revenues. As state attorneys general
become more comfortable with their ability to enforce HIPAA, covered entities and business
associates may see more state enforcement activities in the coming years.
407 U.S. Dept. of Health & Hum. Servs, Stolen Laptops Lead to Important HIPAA Settlements, April 22, 2014,
http://www.hhs.gov/ocr/privacy/hipaa/enforcement/examples/stolenlaptops-agreements.html.
408 New York and Presbyterian Hospital agreed to pay OCR $3,300,000 to settle potential violations of HIPAA Privacy and
Security Rules and to adopt a corrective action plan to evidence their remediation of OCR’s findings, and Columbia University
agreed to pay a $1,500,000 monetary settlement and adopt a corrective action plan to address deficiencies in its HIPAA compliance program.
http://www.hhs.gov/ocr/privacy/hipaa/enforcement/examples/jointbreach-agreement.html; See Data Breach Results in $4.8 Million
HIPAA Settlements, May 7, 2014, Advisen, http://www.hhs.gov/news/press/2014pres/05/20140507b.html.
409 U.S. Dept. of Health & Hum. Servs, HIPAA Settlement Underscores the Vulnerability of Unpatched and Unsupported
Software, Dec. 2014, http://www.hhs.gov/ocr/privacy/hipaa/enforcement/examples/acmhs/index.html.
410 42 U.S.C. § 1320d-5(d).
411 Minnesota v. Accretive Health, Inc. (No. 12-145), D. Minn., Jan. 19, 2012.
(2) Private Enforcement Actions
Issues surrounding compliance with the Privacy and Security Rules may also become a component
of third-party lawsuits. Although there is not a private cause of action under HIPAA, lack of
compliance with such regulatory safeguards may give rise to state law claims of negligence relating to the security procedures of companies that sustain a breach, defamation in a case involving disclosure of sensitive health information, or breach of a provider’s fiduciary duty for failure to protect a patient’s health information.412
(3) State Laws and Preemption Issues
Businesses that handle an individual’s health information may be subject to privacy protections
under state laws as well as under HIPAA. As held by the Eleventh Circuit, HIPAA preempts
contrary state laws that impede the purpose and objective of HIPAA in keeping an individual’s PHI
strictly confidential.413 However, state laws that create additional or more stringent privacy
protections for individuals are not pre-empted. Thus, state laws that protect health information,
including laws governing the disclosure of the results of HIV tests, genetic tests, or other sensitive
information, and/or require additional notification in the event of a breach also must be considered
when assessing requirements relating to health information and responding to potential breach
incidents.
g. Additional Data Privacy Requirements for Educational Institutions -
FERPA
In the United States, any school or institution that provides educational services or instruction and
receives funds under any program administered by the U.S. Department of Education (DOE) is
subject to the privacy and other requirements of the Family Educational Rights and Privacy Act
(“FERPA”).414 Subject to certain limited exceptions, FERPA gives students (or, for students who are both under 18 and not yet attending a post-secondary school, their parents) the right to inspect the student’s “education records” and to challenge whether the “education records” are accurate or violate the student’s privacy rights. FERPA also prohibits schools from disclosing those records, or any “personally identifiable information” about the student contained in those records, without the consent of the student (or, as noted above, for students who are under 18 and not yet attending post-secondary school, the student’s parent).
FERPA broadly defines “education records” to include “any information recorded in any way” that
(1) is “directly related” to a student “who is or has been in attendance” at an educational institution,
412 See Amburgy v. Express Scripts, Inc. et al., Civil Docket #4:09-cv-00705-FRB, filed in May 2009 in the U.S. District Court,
Eastern District of Missouri. This lawsuit was commenced as a class action against an entity that provided pharmacy services and
drug formulary management services to member groups including managed care organizations, insurance carriers and employer and
union-sponsored health plans. It received an extortion demand by persons who had gained access to its customers’ confidential
Personal Information. The plaintiffs based their complaint on, among other things, the company’s alleged failure to comply with
HIPAA in a purported breach of assurances of compliance in its Privacy Notice.
413 Opis Management Resources v. Secretary Florida Agency For Health Care Administration, Docket No. 4:11-cv-00400-RS-CAS,
Apr. 9, 2013 (in which the Florida Agency for Health Care Administration issued citations to nursing facilities for violating Florida
law when they refused to release a deceased’s medical records to a spouse and certain others on the grounds that they were not
“personal representatives” under the relevant provisions of HIPAA).
414 20 U.S.C. §1232g; 34 C.F.R. Part 99.
and (2) is “maintained by” the institution or a person acting on its behalf.415 However, the statute
specifically exempts the following categories of records from the broad definition of “education
record”:
• “Records that are kept in the sole possession of the maker, are used only as a
personal memory aid, and are not accessible or revealed to any other person except a
temporary substitute for the maker of the record.”
• Records created and maintained by a campus “law enforcement unit” solely for law
enforcement purposes. A “law enforcement unit” is broadly defined to include not only
campus police with arrest or other law enforcement powers, but also “non-commissioned
security guards” or any other individual or component of an educational institution that
is authorized or designated to refer law enforcement matters to appropriate local, state or
federal authorities or to “maintain the physical security and safety of the agency or
institution.”
• Records relating to a student solely in his or her capacity as an employee of the
institution provided the employment was not “as a result of” the person’s status as a
student.
• Medical (including mental health) records that are made, maintained, and used solely
in connection with the treatment of a student and are disclosed only to individuals
providing the treatment.
• Records that only contain information about an individual after he or she no longer is
a student at the institution.416
FERPA also broadly defines the term “personally identifiable information” for purposes of the
statute. It “includes, but is not limited to: (a) the student's name; (b) the name of the student's parent
or other family members; (c) the address of the student or student's family; (d) a personal identifier,
such as the student's Social Security number, student number, or biometric record; (e) other indirect
identifiers, such as the student’s date of birth, place of birth, and mother's maiden name; (f) other
information that, alone or in combination, is linked or linkable to a specific student so that it would
allow a reasonable person in the school community, who does not have personal knowledge of the
relevant circumstances, to identify the student with reasonable certainty; or (g) information
requested by a person who the educational agency or institution reasonably believes knows the
identity of the student to whom the education record relates.”417
As noted above, FERPA generally prohibits a school or educational agency from disclosing
education records or personally identifiable information contained within those records without the
consent of the student or, where applicable, the student’s parent, subject to certain exceptions. The
415 20 U.S.C. § 1232g(a)(4)(A); 34 C.F.R. Subpart D; 34 C.F.R. § 99.3.
416 20 U.S.C. § 1232g(a)(4)(B); 34 C.F.R. § 99.3.
417 34 C.F.R. § 99.3.
consent must be in writing and must specify “the records to be released, the reasons for such
release, and to whom.”418
A significant exception to the general prohibition against disclosure without consent concerns
“directory information,” which refers to personally identifiable information the disclosure of which
generally would not be considered harmful or an invasion of privacy. A school or educational agency can establish its own definition of “directory information,” which can include such
information as student names, street or email addresses, telephone listings, dates of attendance,
courses of study, honors received, height and weight of athletic team members, and the like.419 A
school may disclose directory information without consent if it has given “public notice” to students
(or parents, where applicable) of the types of information it deems to be “directory information” and
has given students (or parents) the opportunity to inform the school in advance that the student (or
parent) does not want the school to disclose any or all of the student’s directory information without
consent.420 Prior to the 2011 amendments to FERPA, schools and educational agencies were
permitted only an all-or-nothing approach to directory information, whereby anything classified as
directory information was publicly available for any purpose. With the 2011 amendments, FERPA
permits, but does not require, schools and educational agencies to limit the use or disclosure of
directory information and give parents or students the option to opt out of some, but not other, uses.
In addition to “directory information,” FERPA establishes a number of other significant exceptions
to the general rule that education records and personally identifiable information contained therein
may not be disclosed without consent. They include, among others:
• Disclosure to parents: A school may disclose any information from a student’s
education records to the student’s parent if the student is a dependent of the parent under
Section 152 of the Internal Revenue Code.421 Regardless of whether the student is a
dependent for tax purposes, the school may disclose to a student’s parent a determination
that the student has committed a disciplinary violation with respect to the use or possession
of alcohol or a controlled substance, provided the student is under the age of 21 at the time
of the disclosure and the disclosure is not prohibited by state law.422
• Disclosure to other officials of the school: FERPA permits disclosure of
information without consent to other “school officials” who the institution has determined to
have a “legitimate educational interest” in receiving the information.423 The school, as part
of its annual FERPA notification to students, is required to designate who constitutes a
school “official” and what constitutes a “legitimate educational interest” for these
purposes.424 School “officials” can be broadly defined to include essentially any person
working at or on behalf of the school, including outside contractors and vendors. A
“legitimate educational interest” can be broadly defined to include any circumstance in
418 20 U.S.C. § 1232g(b)(2)(A); 34 C.F.R. § 99.30(b).
419 34 C.F.R. § 99.3.
420 20 U.S.C. § 1232g(a)(5), (b)(1); 34 C.F.R. § 99.37.
421 20 U.S.C. § 1232g(b)(1)(H); 34 C.F.R. § 99.31(a)(8).
422 20 U.S.C. § 1232g(i); 34 C.F.R § 99.31(a)(15).
423 20 U.S.C. § 1232g(b)(1)(A); 34 C.F.R. §§ 99.31(a)(10), 99.36(a)(1).
424 34 C.F.R. § 99.7(a)(3)(iii).
which the school “official” needs the information in order to do his or her job on the
school’s behalf.
• Disclosure to other schools: FERPA permits disclosure of information to officials
of another school “at which the student seeks or intends to enroll.” The school making the
disclosure must notify the student of the disclosure (unless it was initiated by the student)
and upon request must provide the student with a copy of the information disclosed.425
Where a college or university has taken disciplinary action against a student for conduct that
posed a significant risk to the safety or well-being of that student, other students or members
of the school community, the information may be disclosed to teachers and officials in
another school “who have a legitimate educational interest in the behavior of the student.”426
• Disclosure in response to a court order or valid subpoena: FERPA permits
disclosure of information in response to a judicial order or lawfully issued subpoena. Before
complying with the order or subpoena, however, the school must make a “reasonable effort”
to notify the student – giving the student an opportunity to seek a protective order – unless
the subpoena is a federal grand jury subpoena or a law enforcement subpoena and the court or issuing
agency has ordered that the existence or contents of the subpoena not be disclosed. 427
• Disclosure in connection with a health or safety emergency: FERPA permits
disclosure of information in a “health or safety emergency” if “knowledge of the
information is necessary to protect the health or safety of the student or other individuals.”428
• Disciplinary violations involving crimes of violence and non-forcible sex
offenses: A college or university may disclose to an alleged victim of any crime of violence
or a non-forcible sex offense the “final results” of any disciplinary proceeding against the
alleged perpetrator, “regardless of whether the institution concluded a violation was
committed.” If the school determines that a violation was committed, it may disclose the
“final results” of the disciplinary proceeding to anyone. The “final results” may include
“only the name of the student, the violation committed, and any sanction imposed,” and may
not include “the name of any other student, such as a victim or witness” without that
student’s consent.429
The 2011 amendments to FERPA also expand another exception to the privacy protections of
FERPA, by increasing the opportunities for disclosure under the “audit or evaluation” and “studies”
exceptions. These allow covered entities to share student records with third parties for the purposes
of audits, evaluations, or longitudinal studies of their education programs, when certain privacy
protections, including written agreements with those third parties, are in place.
There is no private right of action under FERPA.430
425 20 U.S.C. § 1232g(b)(1)(B); 34 C.F.R. §§ 99.31(a)(2), 99.34(b).
426 20 U.S.C. § 1232g(h); 34 C.F.R. § 99.36(b)(3).
427 20 U.S.C. § 1232g(b)(1)(J); 34 C.F.R. § 99.31(a)(9).
428 20 U.S.C. § 1232g(b)(1)(I); 34 C.F.R. §§ 99.31(a)(10), 99.36.
429 20 U.S.C. § 1232g(b)(6); 34 C.F.R. §§ 99.31(a)(14), 99.39.
430 Gonzaga University et al. v. Doe, 536 U.S. 273, 122 S. Ct. 2268 (2002).
Aside from FERPA, several states have passed laws that govern the collection of information not
covered by FERPA, such as social media account information. See Section on Social Media, above.
Moreover, if an educational institution is also an arm of municipal or state government, its records
may be subject to privacy laws governing state agency records.
h. Further Protection for Minors – COPPA
Additional statutory protection is afforded children under 13 by the Children’s Online Privacy
Protection Act of 1998 (“COPPA”). 431 COPPA and its related rules regulate the online and mobile
collection and release of personal information from children under 13. The FTC has authority to
issue regulations and enforce COPPA, and has done so vigorously. For instance, in May 2011, the
FTC announced an agreement settling claims that Playdom, Inc., a leading publisher of social
games and virtual worlds, violated the COPPA Rule and Section 5 of the FTC Act in connection
with the operation of a number of online virtual world games; the settlement included a $3 million
fine. Other fines, while not as large, have been substantial. 432
COPPA also includes a self-regulatory provision that allows industries or other entities to apply for
approval of a “safe harbor” program, under which participating companies agree to be subject to the
compliance review and disciplinary procedures of the program in lieu of FTC enforcement. As of
January 2015, the FTC had approved two such “safe harbors.”433
The FTC in recent years has made several efforts to revise COPPA to address mobile and new
technology, while also taking into account the issues identified by the various stakeholders. Thus,
in September 2011, the FTC proposed revisions to COPPA. After receiving over 350 public
comments, on August 1, 2012, the Commission published a Supplemental Notice of Proposed
Rulemaking changing several aspects of its proposed revisions in a continued effort to balance the
interest of protecting children with the practicalities and challenges of operating within an online or
mobile environment, while also acknowledging the importance and benefits of the Internet, and
invited further comments.434
Finally, in December 2012, after a two-year process, the FTC introduced a new rule that went into
effect July 1, 2013. In this new rule, the FTC changed course on many of its original proposals and
adopted many industry suggestions that recognize that COPPA is aimed at protecting children from
inappropriate contact without parental knowledge, and not aimed at preventing advertising to
children. The new rule retained “email plus,” which allows operators to obtain parental consent to
collection of children’s personal information for certain internal purposes (but not third party
commercialization or marketing) by means of an email from a parent along with a reasonable form
of follow-up confirmation. The definition of “personal information” was expanded for purposes of
431 15 U.S.C. §§6501-6506. See related rules at 16 CFR Part 312.
432 See, e.g., Path Social Networking App Settles FTC Charges it Deceived Consumers and Improperly Collected Personal
Information from Users’ Mobile Address Books – Company also Will Pay $800,000 for Allegedly Collecting Kids’ Personal
Information without their Parents’ Consent, FTC Release, Feb. 1, 2013, http://www.ftc.gov/opa/2013/02/path.shtm; see also Yelp, TinyCo
Settle FTC Charges Their Apps Improperly Collected Children’s Personal Information, FTC Release, September 17, 2014,
https://www.ftc.gov/news-events/press-releases/2014/09/yelp-tinyco-settle-ftc-charges-their-apps-improperly-collected.
433 http://www.ftc.gov/content/safe-harbor-program; see Happy 2nd anniversary to the “new” COPPA Rule, released January
14, 2015, https://www.ftc.gov/news-events/blogs/business-blog/2015/01/happy-2nd-anniversary-new-coppa-rule.
434 See Edwards Wildman Palmer LLP Client Advisory, Not Kidding About Protecting Kids’ Data – FTC Puts Forth More
Changes to Proposed Children’s Privacy Rules, Aug. 2012, http://www.edwardswildman.com/newsstand/detail.aspx?news=3013.
COPPA and its requirement for verifiable parental consent for collection of such information, and
includes persistent identifiers (with some exceptions), geolocation information, photographs and
videos of children. The new rule also includes provisions affecting mixed-use or family-oriented
sites as well as general audience sites (such as social media plug-ins), and numerous other
provisions. As stated by the FTC in its announcement of the new rule:
"It requires that operators of websites or online services that are either directed to children under 13 or have actual knowledge that they are collecting personal information from children under 13 give notice to parents and get their verifiable consent before collecting, using or disclosing such information, and keep secure the information they collect from children. It also prohibits them from conditioning children’s participation in activities on the collection of more personal information than is reasonably necessary for them to participate. The Rule contains a "safe harbor" provision that allows industry groups or others to seek FTC approval of self-regulatory guidelines."435
Though “email plus” was retained in some capacity, the FTC detailed various other methods of
obtaining verifiable parental consent and the situations for which each method would be applicable.
In addition to these, the new rule opened the door for applications for new methods of obtaining
verifiable parental consent. The first such new method, “knowledge-based authentication,” was
approved in December 2013 on an application by Imperium, Inc.436 In January 2015, the FTC
denied a proposed verifiable consent method “consisting of a multi-step method requiring the entry
of a code sent by text message to a mobile device” stating that the mechanism did not comply with
“COPPA’s requirements regarding the type of parental information that can be collected as a means
to verify a parent’s identity.”437
The FTC has continued to revise its Guide for Complying with COPPA, including updating its
Frequently Asked Questions in March 2015, which the FTC states are intended to supplement the
compliance materials available on the FTC Website.438
i. Telecommunications
Entities regulated by the Federal Communications Commission (“FCC”) may be subject to several
privacy provisions contained in the Communications Act, including a prohibition on disclosing the
contents or even existence of the communications they carry.439
435 FTC Strengthens Kids’ Privacy, Gives Parents Greater Control Over Their Information by Amending Children’s Online
Privacy Protection Rule, released Dec. 19, 2012, www.ftc.gov/opa/2012/12/coppa.shtm; see also Alan Friel, New COPPA Rule a Middle Ground – Broadens and Clarifies Children’s Privacy Obligations of Online and Mobile Companies, But Backtracks on Many
Previously Proposed Changes That Would Have Disrupted The Advertising and Publishing Industries, Digilaw, Dec. 19, 2012,
http://digilaw.edwardswildman.com/blog.aspx?entry=4485.
436 See Edwards Wildman Palmer LLP Client Advisory, New Ways to Get Parental
Consent to Collect Data From Children Emerging Under COPPA, February 2014, http://www.edwardswildman.com/Edwards-
Wildman-Client-Advisory-New-Ways-to-Get-Parental-Consent-to-Collect-Data-From-Children-Emerging-Under-COPPA-02-13-
2014/
437 See FTC Concludes Review of AgeCheq’s Second Proposed COPPA Verifiable Parental Consent Method, FTC Release,
January 29, 2015, https://www.ftc.gov/news-events/press-releases/2015/01/ftc-concludes-review-agecheqs-second-proposed-coppaverifiable.
438 “Complying with COPPA: Frequently Asked Questions – A Guide for Business and Parents and Small Entity Compliance
Guide (revised March 2015),” available at www.business.ftc.gov.
439 47 U.S.C. § 605.
The most prominent of the Communications Act’s privacy rules are those concerning Customer Proprietary Network Information (“CPNI”), which require that providers of telephone service – including Voice over Internet Protocol (VoIP) providers that connect to the public switched telephone network – limit the use and disclosure of, and access to, information such as phone numbers dialed, length of calls, services purchased by a customer, and charges incurred, to the provision of telephone service and certain related services.440 Unlike some other privacy laws, CPNI does not include “Subscriber List Information” – the names, telephone numbers, and addresses of subscribers that the telephone carrier publishes in a directory,441 an exception that allows for the publication of telephone directories.
In 2013, the FCC extended the CPNI rules to cover information collected by a mobile device that
meets the definition of CPNI, when the mobile carrier directs that collection and has access to the
information collected, including data regarding customers’ use of the network and data collected
through and about preinstalled apps.442
The Communications Act also includes customer notice and data protection requirements for
cable443 and satellite444 providers. Under these rules, cable and satellite providers must give annual
notice to their subscribers of the personally identifiable information collected; how that
information will be used, disclosed, and maintained; and how a subscriber may access the
information held. The law also limits the possible uses and disclosures that can be made without
customer consent, and requires that the data be destroyed if it is no longer necessary for the purpose
for which it was collected.
The FCC has increasingly focused on privacy and data protection, including through significant
enforcement actions.445
Other laws and regulations affecting telecommunications companies include the cybersecurity framework developed by the National Institute of Standards & Technology (NIST), which identifies the communications sector as critical infrastructure (see discussion of the NIST Framework on Critical Infrastructure below); and EU data breach notification regulations, which require telecommunications companies to provide notice to regulators and subscribers in the event of a data breach (see Section IV below). Various states in the U.S. may also have data security, privacy and breach notification laws that affect communications companies (see Section III above on State Data Privacy and Security Requirements), in addition to other federal laws that may apply to communications companies.
440 47 U.S.C. § 222.
441 Id.
442 In the Matter of Telecommunications Carriers’ Use of Customer Proprietary Network Information and Other Customer
Information, CC Docket No. 96-115, Declaratory Ruling, FCC 13-89 (June 27, 2013).
443 47 U.S.C § 551.
444 47 U.S.C. § 338(i).
445 See, e.g., April 2015 settlement of $25 million with AT&T Services, Inc., available at https://www.fcc.gov/document/attpay-
25m-settle-investigation-three-data-breaches
j. Telephone Consumer Protection Act – TCPA
The Telephone Consumer Protection Act (“TCPA”) presents a major privacy-related risk for
companies in a wide array of industries that use faxes, text messages, artificial or pre-recorded
voice messages, and automated dialing technologies to reach customers.446 While the TCPA is not
directed at data security, it is privacy related in that it was enacted in response to consumer
complaints about the intrusion into consumer privacy of unsolicited telemarketing. The TCPA
provides a private cause of action to recipients of certain unauthorized telephone calls and faxes and
affords damages of $500 for each violation.447 Courts in their discretion may also award up to
treble damages if plaintiffs show defendants violated the TCPA “willfully” or “knowingly.”448
Because these statutory damages can become substantial (even staggering) when aggregated, an
active and sophisticated plaintiffs’ bar has filed thousands of class action lawsuits seeking hundreds
of millions of dollars in damages for alleged TCPA violations. In addition to claims under the
TCPA, these plaintiffs also frequently assert claims based on state consumer protection statutes and
common law claims for conversion, which can increase a defendant’s exposure. A discussion of
TCPA litigation trends is contained in Section VII(f) below; this subsection provides a brief
overview of unlawful practices and penalties under the TCPA.
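To illustrate how quickly the arithmetic of aggregation compounds, the short calculation below uses purely hypothetical figures; the call volume and the assumption of willfulness are illustrative only and are not drawn from any actual case discussed in this paper.

# Purely illustrative TCPA exposure calculation using hypothetical figures.
STATUTORY_DAMAGES_PER_VIOLATION = 500   # dollars, 47 U.S.C. § 227(b)(3)
TREBLE_MULTIPLIER = 3                   # maximum if a violation is "willful" or "knowing"

alleged_unlawful_calls = 100_000        # hypothetical class-wide call volume

base_exposure = alleged_unlawful_calls * STATUTORY_DAMAGES_PER_VIOLATION
maximum_exposure = base_exposure * TREBLE_MULTIPLIER

print(f"Base statutory exposure:  ${base_exposure:,}")     # $50,000,000
print(f"Maximum trebled exposure: ${maximum_exposure:,}")  # $150,000,000

Even a modest calling or faxing campaign can thus translate into eight- or nine-figure claimed damages, which helps explain both the volume of TCPA class actions and the intensity of the class certification disputes discussed below.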
Subject to various exceptions, the TCPA outlaws five practices.
First, the Act makes it unlawful to use an automatic telephone dialing system (“ATDS”) or an
artificial or prerecorded voice message (sometimes called “robocalls”), without the prior express
consent of the called party, to call any emergency telephone line, hospital patient, pager, cellular
telephone, or other service for which the receiver is charged for the call, with certain exemptions.449
The TCPA authorizes the FCC to exempt from this provision calls to a number assigned to a
wireless service that are not charged to a consumer, subject to conditions the Commission may
prescribe to protect consumer privacy rights.450 Courts have reached conflicting decisions as to
whether only the current subscriber of the phone may provide the requisite “consent”,451 and
446 See generally 47 U.S.C. § 227.
447 Id., § 227(b)(3).
448 Id.
449 Id., § 227(b)(1)(A).
450 Id., § 227(b)(2)(C). As of March 2014, 24 petitions seeking clarification concerning how to interpret the TCPA were
pending before the FCC. Addressing these petitions, FCC Commissioner Michael O’Rielly wrote in a blog post that it was “time to
provide clarity” for companies that rely on the TCPA and stated: “[i]t is very troubling that legitimate companies feel they have to
ask the government for its blessing every time they need to make a business decision in order to avoid litigation,” and “[t]hat is why
the FCC needs to address this inventory of petitions as soon as possible.” Commissioner Michael O’Rielly, TCPA: It is Time to
Provide Clarity, FCC Blog (Mar. 25, 2014), www.fcc.gov/blog/tcpa-it-time-provide-clarity. See In the Matter of Cargo Airline Ass’n
Petition for Expedited Declaratory Ruling; Rules and Regulations Implementing the Tel. Consumer Prot. Act of 1991, 2014 FCC
LEXIS 1072 (Mar. 27, 2014) (summarizing conditions specified by the FCC for this exemption).
451 See Osorio v. State Farm Bk., No. 11-cv-61880, 2014 U.S. App. LEXIS 5709, *14-18 (11th Cir. Mar. 28, 2014); Soppet v.
Enhanced Recovery Co., LLC, 679 F.3d 637, 639-40 (7th Cir. 2012). Some courts have evaluated whether the subscriber gave
consent using principles established in common law. Osorio, 2014 U.S. App. LEXIS 5709 at *18-25. Other courts, drawing from
Fourth Amendment principles, have held that consent can be provided by a person with “common authority” over the cellular
telephone. Gutierrez v. Barclays Group, No. 10-cv-1012, 2011 U.S. Dist. LEXIS 12546, *6-9 (S.D. Cal. Feb. 9, 2011) dismissed on
other grounds, 2012 U.S. Dist. LEXIS 190049 (S.D. Cal Mar. 12, 2012). Most recently, the FCC seemed to question these holdings,
stating it found “inapposite” comments “that there is well-developed body of law addressing intermediary consent, including in the
context of the Fourth Amendment where consent to a police search may be obtained from a third party who possesses either actual or
apparent authority.” Cargo Airline Ass’n, 2014 FCC LEXIS 1073, *18 (March 27, 2014). The FCC has issued an order providing
numerous petitions on this topic are pending with the FCC. Under the TCPA, an ATDS is
“equipment which has the capacity (A) to store or produce telephone numbers to be called, using a
random or sequential number generator; and (B) to dial such numbers.”452 Courts have reached
different conclusions regarding what type of equipment satisfies the ATDS definition.453 Courts
have treated text messages the same as recorded and autodialed calls to cell phones,454 although at
least one FCC Commissioner expressed “hesitation on the applicability of the TCPA to text
messages,” noting that the TCPA was enacted in 1991 – before the first text message was ever
sent.455 Some courts have held that consumers who “opt out” of text messages may be sent a single
text message confirming receipt of the “unsubscribe” request.456 In addition, effective October 16,
2013, “prior express written consent” is required for telemarketing calls to cell phones.457
Second, the TCPA forbids using artificial or prerecorded voice messages to call residential
telephone lines without prior express consent,458 again subject to certain exemptions.459 Effective
October 16, 2013, all telemarketing robocalls are prohibited unless the consumer has given express
written consent.460 In addition, all such calls must include an interactive opt-out mechanism at the
that “autodialed . . . calls to wireless numbers that are provided by the called party to a creditor in connection with an existing debt
are permissible as calls made with the ‘prior express consent’ of the called party.” In re Rules & Regulations Implementing the Tel.
Consumer Prot. Act of 1991, 23 F.C.C. Rcd. 559, 559 (2007). The FCC has also issued an order clarifying that “neither the TCPA
nor [its] implementing rules and orders require any specific method by which a caller must obtain such prior consent for nontelemarketing
calls to wireless phones, and [concludes] that the TCPA does not prohibit a caller from obtaining consent through an
intermediary.” In the Matter of GroupMe, Inc/Skype Communications S.A.R.L. petition for Expedited Declaratory Ruling; Rules &
Regulations Implementing the Tel. Consumer Prot. Act of 1991, 2014 FCC LEXIS 1073, at *17-18 (Mar. 27, 2014).
452 47 U.S.C. § 227(a)(1); see also Satterfield v. Simon & Schuster, Inc., 569 F.3d 946, 951 (9th Cir. 2009) (“system need not actually
store, produce, or call randomly or sequentially generated telephone numbers, it need only have the capacity to do so”).
453 Gragg v. Orange Cab Co., Inc., No. 12-cv-0576, 2014 U.S. Dist. LEXIS 29052, at *3 (W.D. Wash. Feb. 28, 2014); see
also Dominguez v. Yahoo!, Inc., No. 13-cv-1887, 2014 U.S. Dist. LEXIS 36542, at *18 (E.D. Pa. Mar. 20, 2014) (system was not an
ATDS where plaintiff did not offer evidence it had capacity to randomly or sequentially generate telephone numbers, as opposed to
simply storing telephone numbers); but see Hunt v. 21st Mort. Corp., 2014 U.S. Dist. LEXIS 13469, *13-17 (N.D. Ala. Feb. 4, 2014)
(question of fact whether defendant’s system was ATDS where defendant allegedly destroyed system when it knew of plaintiff’s
claim making it impossible to determine, as a matter of law, whether enabling software was installed or could easily have been
installed).
454 See, e.g., Dominguez, 2014 U.S. Dist. LEXIS 36542 at *n. 310 (“[f]ederal courts have made clear that the TCPA applies to
text messages as well as voice calls”), citing Gager v. Dell Fin. Servs., LLC, 727 F.3d 265, 268 (3rd Cir. 2013) (citing In the Matter
of Rules & Regulations Implementing the Tel. Consumer Prot. Act of 1991, 27 F.C.C. Rcd. 15391 (2012) and Satterfield v. Simon &
Schuster, Inc., 569 F.3d 946, 954 (9th Cir. 2009)). On the other hand, courts have held § 227 does not apply to e-mails. Prukala v.
Elle, No. 14-cv-92, 2014 U.S. Dist. LEXIS 41887 (M.D. Pa. Mar. 28, 2014), citing Aronson v. Bright-Teeth Now, LLC, 824 A.2d
320, 323 (Pa. Super. Ct. 2003) (“that Plaintiff received the alleged e-mails on the same device that she uses as a telephone does not
bring [them] under the reach of the TCPA”); see also In re Rules & Regulations Implementing the Tel. Consumer Prot. Act of 1991, 18 F.C.C. Rcd. 14014, 14133 (2003) (§ 227(b)(1)(C) prohibition does “not extend to facsimile messages sent as email over the internet”).
455 Cargo Airline Ass’n, 2014 FCC LEXIS at *26-27 (Cmr. O’Rielly, concurring).
456 Ibey v. Taco Bell Corp., 2012 U.S. Dist. LEXIS 91030 (S.D. Cal. June 18, 2012); Ryabyschuck v. Citibank (South Dakota)
N.A., 2012 U.S. Dist. LEXIS 156176 (S.D. Cal. Oct. 30, 2012).
457 47 C.F.R. § 64.1200(a)(2).
458 47 U.S.C. § 227(b)(1)(B).
459 See 47 C.F.R. § 64.1200(a)(2)(iv) and § 64.1200(f)(5).
460 Id., § 64.1200(a)(2) & (a)(3); § 64.1200(b)(2) & (b)(3). The Second Circuit has held that the provision of a telephone
number by itself may not be consent to phone calls for TCPA purposes. Nigro v. Mercantile Adjustment Bureau, LLC, 769 F.3d 804
(2nd Cir. 2014); compare Jones v. Stellar Recovery, Inc., C.A. No. 1:14-cv-21056-KMM (S.D. Fl. Feb. 20, 2015) (granting summary
judgment to TCPA defendant where consumer provided cell phone number to the original creditor on a prior account).
beginning of the message, and when a consumer chooses to opt out, the number must be added to
the caller’s do-not-call list and the call must be immediately disconnected.461
Third, the TCPA prohibits sending “unsolicited advertisements” to fax machines,462 subject to
certain defenses. An “advertisement” is “any material advertising the commercial availability or
quality of any property, goods, or services.”463 The TCPA provides a safe harbor for such
transmissions where three elements are met: (1) the sender and recipient have an established
business relationship; (2) the recipient voluntarily shared its fax number within the context of the
established business relationship or the recipient voluntarily made its fax number available for
public distribution (e.g., by submitting the fax number to a website or directory); and (3) the fax
contained an opt-out notice as required by the statute and applicable FCC regulations.464 Where a
fax advertisement is solicited or sent with the recipient’s prior permission or consent, it must
nonetheless contain an opt-out notice with the specific language mandated by the FCC.465 On
October 30, 2014, the FCC issued an order466 clarifying the requirement that opt-out notices be
provided on all fax advertisements. In the same order, the FCC granted retroactive waivers of the
opt-out requirement to certain fax advertisement senders to provide “temporary relief from any past
obligation to provide the opt-out notice to such recipients required by [the Commission’s] rules.”467
Courts have reached different results as to whether this provision of the TCPA applies only to
intended recipients of a facsimile, as opposed to any recipient such as an owner or lessee of a fax
machine who may not have been the intended recipient.468
461 Id., § 64.1200(a)(7) & (b)(3). Debt collection calls to a landline are not considered telemarketing calls. Meadows v.
Franklin Collection Serv., 2011 U.S. App. LEXIS 2779, *11-12 (11th Cir. 2011) (debt collector “did not violate the TCPA because . .
. [it] had an established business relationship with the intended recipient of its prerecorded calls”).
462 47 U.S.C. § 227(b)(1)(C).
463 Id. § 227(a)(4). Courts have found that even faxes offering services which are ostensibly free may have a qualifying
commercial element if the sender intended to induce the recipient to take advantage of the commercial availability or quality of goods
and services offered by the sender. In re Rules and Regulations Implementing the Tel. Consumer Protection Act of 1991 and the Junk Fax Prevention Act of 2005, 21 F.C.C. Rcd. 3787, 3814 (Apr. 2006); G.M. Sign, Inc. v. MFC.com, Inc., No. 08-cv-7106, 2009 WL 1137751, at *2 (N.D. Ill. Apr. 24, 2009). An advertisement is “unsolicited” if it “is transmitted to any person without that person’s prior express invitation or permission.” 47 U.S.C. § 227(b)(1)(D).
464 Id. § 227(b)(1)(C) & (b)(2)(D); 47 C.F.R. § 64.1200(a)(4). Courts have confronted questions relating to the extent to
which the Hobbs Act (28 U.S.C. § 2342) prohibits a party from challenging the validity of the FCC rules in the context of a private TCPA action. The Hobbs Act provides that federal appellate courts have exclusive jurisdiction to review and determine the validity of FCC orders. Some Circuits have broadly held the Hobbs Act “generally precludes our court from holding the contested regulation invalid outside the statutory procedure mandated by Congress.” Nack v. Walburg, 715 F.3d 680, 686 (8th Cir. 2013). At least one Circuit Court of Appeals seems to disagree with this expansive reading of the Hobbs Act, suggesting the Hobbs Act does not bar
challenges to FCC rules as unconstitutional or ultra vires. Leyse v. Clear Channel Broadcasting, Inc., 2013 U.S. App. LEXIS 22770,
*832-38 (6th Cir. Nov. 5, 2013) (cert. denied, 135 S. Ct. 57, Oct. 6, 2014).
465 47 C.F.R. § 64.1200(a)(4)(iv).
466 CG Docket No. 05-33, Order FCC 14-164, October 30, 2014. The retroactive waivers granted by the FCC in its October 30, 2014 order apply to the opt-out notice requirement for faxes sent with consent, but not to faxes based on an established business relationship. (Order FCC 14-164, ¶ 2, n.2.)
467 Id., ¶ 1.
468 Leyse v. Bank of America, N.A., No. 11-7128, 2014 WL 4426325 (D. N.J. Sept. 8, 2014) (“[A]n unintended and incidental
recipient of a properly-directed communication to someone else” does not have standing under the TCPA); J2 Global Commc’ns,
Inc. v. Protus IP Solutions, No. CV 06-00566 DDP, 2010 WL 9446806 (C.D. Cal. Oct. 1, 2010) (“[A] facsimile machine does not
have standing under the TCPA; rather, “the recipient” has standing . . . being the person to whom the unlawful phone call or
unsolicited fax advertisement is directed.”); compare Chapman v. Wagener Equities, Inc., 2014 U.S. App. LEXIS 5962 (7th Cir.
Mar. 19, 2014) (“whether or not the user of the fax machine is an owner, he may be annoyed, distracted, or otherwise inconvenienced
if his use of the machine is interrupted by unsolicited faxes”), criticizing Compressor Eng. Corp. v. Mfgs. Fin’l Corp., 292 F.R.D.
Fourth, the TCPA bans using automatic telephone dialing systems to engage two or more of a
business’s telephone lines simultaneously.469
Fifth, the TCPA was amended in 2010 to make it unlawful to “knowingly transmit misleading or
inaccurate caller identification information with the intent to defraud, cause harm, or wrongfully
obtain anything of value,” except for law enforcement purposes or pursuant to court order.470
The TCPA also provides one of the statutory bases for the “National Do Not Call Registry.”471
Under FCC regulations, “[n]o person or entity shall initiate any telephone solicitation to . . . a
residential telephone subscriber who has registered his or her telephone number on the national do-not-
call registry”.472 The regulations, however, contain an exemption for calls to persons with
whom the seller has “an established business relationship,” among other exemptions defined in the
regulations, unless the recipient has previously made a specific do-not-call request to that caller.473
Once the recipient makes a do-not-call request, then the caller must honor it within a reasonable
time, not exceeding thirty days, from the date such request was made.474 A different private right of
action provision governs “do not call” violations. A person who has received “more than one
telephone call within any 12-month period by or on behalf of the same entity in violation of the
regulations prescribed” by the FCC may bring suit for actual damages or “up to $500 in damages
for each such violation.”475
Statutory damages under the TCPA can become extensive when aggregated, and plaintiffs
frequently pursue such TCPA claims through class actions. Although many courts have denied
certification due to, e.g., a lack of commonality, predominance or superiority under Rule 23 or state
law counterparts,476 litigation and settlement classes have been certified,477 and some class actions
have settled for millions of dollars. In many of these cases, plaintiffs have settled with defendants
433, 448 (E.D. Mich. 2013) (finding ownership requirement because Congress’s concern was with the cost of the paper and ink incurred by the owner of the fax machine and the fax machine’s owner’s loss of the use of the machine).
469 47 U.S.C. § 227(b)(1)(D).
470 Id., § 227(e).
471 Id., § 227(c).
472 47 C.F.R. § 64.1200(c)(2).
473 Id., § 64.1200(f)(5)(i), (f)(14)(ii).
474 Id., § 64.1200(d)(3).
475 47 U.S.C. § 227(c)(5).
476 See, e.g., Wolfkiel v. Intersections Ins. Servs. Inc., No. 13C 7133, 2014 U.S. Dist. LEXIS 28276 (N.D. Ill. Mar. 5, 2014)
(striking class allegations where court would have to conduct class-member-specific inquiries to determine whether each class
member revoked consent to defendants’ telemarketing calls); see also Local Baking Prods. v. Kosher Bagel Munch, Inc., 23 A.3d
469, 474-77 (Sup. Ct. N.J. 2011) (surveying TCPA cases and finding “lack of uniformity as to approach and result” on question of
certification; concluding “class action suit is not a superior means of adjudicating a TCPA suit” because “Congress has presented an
aggrieved party with an incentive to act in his or her own interest without the necessity of class action relief”); see also Bank v.
Independence Energy Group, 736 F.3d 660, 661 (2nd Cir. 2013) (even though New York statute prohibits class action claims for
statutory damages, Rule 23 – not state law – governs when TCPA suit is filed in federal court).
477 Hawk Valley, Inc. v. Taylor, No. 13-cv-1807, 2014 U.S. Dist. LEXIS 45700, at *42-52 (E.D. Pa. Mar. 31, 2014) (surveying
cases where plaintiffs pursued TCPA unsolicited-fax advertisement classes; concluding that individualized issues did not
predominate where no evidence suggested anyone sought or received express permission from the fax recipients and only small
percentage had done business with defendant); Ira Holtzman, C.P.A., & Associates Ltd. v. Turza, 728 F.3d 682, 684 (7th Cir. 2013)
(“[c]lass certification is normal in litigation under § 227 because the main questions . . . are common”); Gene and Gene LLC v.
BioPay LLC, 541 F.3d 318, 328 (5th Cir. 2008) (violations of § 221(b)(1)(C) “are not per se unsuitable for class resolution” but
depend on factual circumstances of each case).
for millions of dollars on the condition that plaintiffs will only seek satisfaction of the judgment
from the defendants’ insurance policies even if a court determined the insurers did not owe
defendants coverage.478 In turn, defendants have assigned their claims against and rights to
payments from their insurers to the class.479 (See Section on Privacy Related Litigation, Section
VII(f) below).480
Numerous lawsuits have also been filed seeking coverage for underlying TCPA violations. (See
Section on Potential Insurance Coverages, below).
3. Federal Agency Privacy Guidances
As cybersecurity has gained increasing attention in recent years, and enactment of national cybersecurity legislation has been repeatedly delayed, various federal agencies have issued cybersecurity “Guidances” for entities subject to their oversight. The legal effect of such Guidances has yet to be tested in the courts, but at least some may become a de facto standard of care, and entities that suffer a data breach and are found not to have at least attempted to follow the Guidances of their oversight agencies may face difficulties in the resulting regulatory investigations and lawsuits. Some of these Guidances are discussed below.
a. SEC Guidances
i. SEC Guidance Regarding Public Company Obligations to
Disclose Cyber Security Risks and Incidents to Investors
Public companies need to assess their exposure to cyber risks and the procedures they take and costs
they incur in preventing cyber incidents as part of their overall assessment of matters that can have a
material effect on their company’s operations or financial condition.
In October 2011, the Division of Corporation Finance of the Securities and Exchange Commission
(the “SEC”) issued guidance that identifies cyber risks and incidents as potential material
information to be disclosed under existing securities law disclosure requirements and accounting
standards (the “Disclosure Guidance”).481 While the Disclosure Guidance states that it represents
the views of the Division of Corporation Finance and is “not a rule, regulation or statement of the
Securities and Exchange Commission,” public companies can now expect the SEC to review their
filings to determine whether cyber risks and incidents are adequately disclosed.
Federal regulations and guidance issued by other agencies in recent years have largely focused on
identifying data security risks that would affect consumers. This Disclosure Guidance, however, is
directed at protecting investors and encouraging companies to assess their risks of cyber incidents
478 See, e.g., Standard Mut. Ins. Co. v. Lay, 989 N.E.2d 591, 594-95 (Ill. 2013).
479 Id.
480 Id. at *6-7, citing FCC Ruling, 28 FCC Rcd 6574 at ¶ 46 (consumers may acquire evidence of relationship between
telemarketer and seller through discovery if they are not independently privy to such information).
481 Securities and Exchange Commission, Division of Corporation Finance, CF Disclosure Guidance: Topic No. 2,
Cybersecurity, Oct. 13, 2011, available at http://www.sec.gov/divisions/corpfin/guidance/cfguidance-topic2.htm. See also Edwards
Wildman Palmer LLP Client Advisory, Public Companies May Need to Disclose their Exposure to Material Cyber Risks According
to New Guidance Issued by SEC Division of Corporation Finance,
http://www.edwardswildman.com/newsstand/detail.aspx?news=2634.
and review the adequacy of their disclosures as to those risks and their impact on a company’s
operations, liquidity and financial condition. A broad range of factors are identified in the
Disclosure Guidance for consideration, including prior cyber incidents, business operations and
outsourced functions that have material cyber risks and potential costs and consequences, and
relevant insurance coverage purchased by the company to address its exposures. Public companies
now have a blueprint for assessing their cyber risk exposures, and for determining their reporting
obligations as to material exposures, along with the context for evaluating such disclosures.
The Disclosure Guidance was promulgated following a May 11, 2011 letter to the SEC from five
members of the Senate, including John D. Rockefeller IV, Chairman of the U.S. Senate Committee
on Commerce, Science, and Transportation. That letter expressed concern that “a substantial
number of companies do not report their information security risk to investors,” and that “once a
material network breach has occurred, leaders of publicly traded companies may not fully
understand their affirmative obligation to disclose information . . . .” As a result, the Senators
requested that the SEC “publish interpretative guidance clarifying existing disclosure requirements
pertaining to information security risk . . . .”482
The Disclosure Guidance was drafted to assist companies preparing disclosures required under U.S.
federal securities laws (such as registration statements under the Securities Act of 1933 and periodic
reports under the Securities Exchange Act of 1934) to assess whether they have a cyber risk
exposure that should be disclosed.
Companies are increasingly reporting cyber attacks and risks in their SEC filings, but even those
with breaches reportedly often include statements that there were no material financial losses. 483
ii. OCIE Cybersecurity Initiative for Broker-Dealers and
Investment Advisors
Following a Cybersecurity Roundtable held by the SEC in late March 2014, the SEC’s Office of
Compliance Inspections and Examinations (“OCIE”) announced that it will be conducting
examinations of more than 50 registered broker-dealers and investment advisors. In a
Cybersecurity Initiative Risk Alert issued by OCIE in connection with the announcement, OCIE
stated that its investigations will be designed to assess cybersecurity preparedness in the securities
industry and to obtain information about the industry’s recent experiences with certain types of
cyber threats. OCIE included in the Risk Alert a sample request for information and documents.484
In February 2015, OCIE issued a report of its examination of 57 registered broker-dealers and 49
482 Senator Rockefeller sent a similar letter on April 9, 2013 asking the SEC to elevate the cybersecurity guidance to the
Commission level, rather than the staff level (as noted, the current guidance was issued by the Division of Corporation Finance),
available at http://www.commerce.senate.gov/public/?a=Files.Serve&File_id=49ac989b-bd16-4bbd-8d64-8c15ba0e4e51.
483 See, Chris Strohm, Eric Engleman, Dave Michaels, Cyberattacks Abound Yet Companies Tell SEC Losses Are Few,
Bloomberg, Apr. 3, 2013, http:// www.bloomberg.com/news/print/2013-04-04/; Eamon Javers, Cyberattacks: Why Companies Keep
Quiet, CNBC Washington Reporter, Feb. 25, 2013, www.cnbc.com/id/100491610.
484 The Cybersecurity Initiative Risk Alert is available here:
http://www.sec.gov/ocie/announcement/Cybersecurity+Risk+Alert++%2526+Appendix+-+4.15.14.pdf
registered investment advisors, and how they dealt with the legal, regulatory and compliance issues
associated with the Cybersecurity Initiative.485
Based in part upon the OCIE’s findings, the SEC’s Division of Investment Management issued a
Guidance Update for registered investment companies (“funds”) and registered investment advisers
(“advisers”) in April 2015, entitled Cybersecurity Guidance (the “Guidance Update”). 486 The
Guidance Update identifies a number of specific measures that funds and advisers “may wish to
consider in addressing cybersecurity risk”. 487
b. Department of Justice Incident Response Guidance
In April 2015, the Cybersecurity Unit of the U.S. Department of Justice (the “DOJ”) issued incident
response guidance entitled Best Practices for Victim Response and Reporting of Cyber Incidents in
order to assist organizations in preparing to respond to a cyber incident.488 Based upon lessons
learned by federal prosecutors while handling cyber investigations and prosecutions, the DOJ
guidance also incorporates input from private sector companies that have managed cyber incidents.
The DOJ guidance focuses on the importance of advance planning, providing detailed guidance regarding the establishment and execution of an incident response plan, and includes a user-friendly Cyber Incident Preparedness Checklist.
c. Food and Drug Administration Guidance regarding Medical Devices
The Food and Drug Administration (“FDA”) has taken steps to address the concern that, as medical
devices are increasingly “connected” to the Internet, to hospital networks, and to other medical devices,
they are also increasingly vulnerable to security breaches that could impact the safety and
effectiveness of the device.489 In October 2014, the FDA issued a final guidance document, Content
of Premarket Submissions for Management of Cybersecurity in Medical Devices,490 which contains
recommendations to medical device manufacturers relating to cybersecurity management and
information that should be included in pre-market submission, and which supplements the FDA’s
2005 guidance, Cybersecurity for Networked Medical Devices Containing Off-the-Shelf (OTS)
Software.491 In addition, the FDA’s June 2013 Safety Communication, Cybersecurity for Medical
Devices and Hospital Networks,492 recommends that medical device manufacturers and health care
facilities take steps to assure that appropriate safeguards are in place to reduce the risk of device
failure due to a cyber attack.
485 OCIE, Cybersecurity Examination Sweep Summary, available at https://www.sec.gov/about/offices/ocie/cybersecurity-examination-sweep-summary.pdf.
486 Available at http://www.sec.gov/investment/im-guidance-2015-02.pdf
487 Id.
488 Available at
http://www.justice.gov/sites/default/files/opa/speeches/attachments/2015/04/29/criminal_division_guidance_on_best_practices_for_v
ictim_response_and_reporting_cyber_incidents.pdf.
489 FDA updates regarding its cybersecurity guidance, workshops and related developments is available here:
http://www.fda.gov/MedicalDevices/ProductsandMedicalProcedures/ConnectedHealth/ucm373213.htm
490 Available at
http://www.fda.gov/downloads/MedicalDevices/DeviceRegulationandGuidance/GuidanceDocuments/UCM356190.pdf
491 Available at http://www.fda.gov/MedicalDevices/DeviceRegulationandGuidance/GuidanceDocuments/ucm077812.htm
492 Available at http://www.fda.gov/MedicalDevices/Safety/AlertsandNotices/ucm356423.htm
d. Critical Infrastructure – The NIST Cybersecurity Framework
By Executive Order in 2013, President Obama directed the National Institute of Standards and
Technology (NIST) to work with the private sector to develop a voluntary Framework – based on
existing standards, guidelines, and practices – for reducing cyber risks to the nation’s critical
infrastructure. The resulting Cybersecurity Framework493 was released in February 2014. It has
been widely utilized by both private and public sectors in their evaluation and development of
cybersecurity practices and standards, and in the issuance of cybersecurity guidances by other
government agencies on both federal and state levels.
The Cybersecurity Framework was created through collaboration between industry and
government,494 and “provides a consensus description of what's needed for a comprehensive
cybersecurity program.” It references several generally accepted domestic and international
security standards, and collates such practices into a framework of activities that arguably
establishes a set of requirements for the development of “reasonable” security practices. It is
generally agreed by the participants to constitute best practice for cybersecurity,495 and carries the
weight of being a government-issued framework that was the result of a year-long collaboration
between industry and government to develop a voluntary “how to” guide for organizations to
enhance their cybersecurity.496
Technically, the Cybersecurity Framework was written only for businesses in the 16 critical
infrastructure sectors,497 but it is neither industry-specific, nor country-specific. Consistent with
existing law, the Framework adopts a risk-based approach to managing cybersecurity risk. As such,
it appears to fit quite well with the approach of existing legal requirements for cybersecurity
obligations. It provides generic approaches and activities to address cybersecurity for all
businesses.
Created through collaboration between government and the private sector, the Framework uses a
common and simplified language to address and manage cybersecurity risk. It provides a common
language for understanding, managing, and expressing cybersecurity risk, and thus provides a nontechnical
tool for aligning policy, business and technological approaches to managing risk.
The Cybersecurity Framework outlines a standardized approach – a process – for companies to
identify, describe, address, and communicate their cybersecurity measures and risks. In doing so,
493 http://www.nist.gov/cyberframework/upload/cybersecurity-framework-021214.pdf
494 The “framework is the culmination of a year-long effort that brought together thousands of individuals and organizations
from industry, academia and government.” Press release “NIST Releases Cybersecurity Framework Version 1.0,” February 12, 2014,
available at http://www.nist.gov/itl/csd/launch-cybersecurity-framework-021214.cfm.
495 “Over the past year, individuals and organizations throughout the country and across the globe have provided their thoughts
on the kinds of standards, best practices, and guidelines that would meaningfully improve critical infrastructure cybersecurity. The
Department of Commerce's National Institute of Standards and Technology (NIST) consolidated that input into the voluntary
Cybersecurity Framework that we are releasing today.” White House Press Release, Launch of the Cybersecurity Framework,
February 12, 2014, available at http://www.whitehouse.gov/the-press-office/2014/02/12/launch-cybersecurity-framework.
496 http://www.nist.gov/cyberframework
497 According to Presidential Policy Directive 21 (PPD-21), the 16 critical infrastructure sectors are: chemical, commercial
facilities, communications, critical manufacturing, dams, defense industrial base, emergency services, energy, financial services, food
and agriculture, government facilities, healthcare and public health, information technology, nuclear reactors, materials and waste,
transportation, and water and waste water systems.
the Framework provides organization and structure to the multiple existing approaches to
cybersecurity by assembling references to standards, guidelines, and practices that are working
effectively in industry today. Most of those standards are internationally recognized. Thus, the
Framework provides guidance to an organization on how to manage its cybersecurity risk.
The Framework allows organizations—regardless of size, degree of cyber risk or cybersecurity
sophistication—to apply the principles and best practices of risk management to improve the
security and resilience of critical infrastructure.498
At present, the Cybersecurity Framework has no legal standing. It is neither a law nor a regulation,
and thus does not impose on any business a legal duty to provide data security or constitute a
legally-binding standard to follow. However, it may well become the legal standard for defining
reasonable security in the near future. The key part of the Framework is referred to as the Core.
The Framework Core sets out a process that a business can follow to determine how to address its
own unique cybersecurity needs. It is an approach similar in concept to the WISP, is consistent
with the process-oriented risk-based approach of the WISP, and essentially incorporates all of the
elements of the WISP concept. Thus, it may well become the standard of care going forward.
The Framework Core sets forth, at a very high level, activities that are likely to come to be viewed as basic requirements (i.e., best practices) for the data security processes businesses should be following. The level of detail starts at the very general (Functions), progresses to more detail (Categories within Functions), and then ultimately to the lowest of the three levels of detail (Subcategories within Categories). The five Functions and the corresponding categories can be summarized as follows (an illustrative sketch of this hierarchy appears after the list):
Identify Function. This function involves developing the organizational understanding to
manage cybersecurity risk to systems, assets, data, and capabilities. It is fundamental to all
data security activities, and includes the following categories:
Asset Management Category: Identification of all assets to be protected (physical
devices, software, data flows, etc.);
Business Environment Category: Identification of business environment, including the
organization’s role in the supply chain and critical infrastructure;
Governance Category: Identification of governance policies, procedures and processes
to manage and monitor the organizational, regulatory, legal, risk, environmental, and
operational requirements;
Risk Assessment Category: Risk assessment – i.e., identification of the threats,
vulnerabilities, and impact thereof on the organization;
Risk Management Strategy: Identification of risk management strategy – i.e., the
organization’s priorities, constraints, risk tolerances, and assumptions.
Protect Function. Once the assets to be protected and the risks they face have been
identified, the next step is to put in place the processes, procedures, and security measures to
498 See press release “NIST Releases Cybersecurity Framework Version 1.0,” February 12, 2014, available at
http://www.nist.gov/itl/csd/launch-cybersecurity-framework-021214.cfm.
provide such protection – i.e., to implement appropriate safeguards. This includes the
following categories:
Access Control Category: Access control processes and procedures should limit access
to processes, devices, and data to authorized users;
Awareness and Training Category: Appropriate education and training should be
provided for employees and business partners regarding security-related duties and
responsibilities;
Data Security Category: Security measures, processes, and procedures should be
implemented to protect data at rest, data in transit, data integrity and to protect against
data leaks;
Information Protection Processes and Procedures Category: Security measures should be
implemented to manage the protection of information systems and assets;
Maintenance Category: Address maintenance and repairs of control systems and
information system components consistent with policies and processes;
Protective Technology Category: Manage technical security solutions to ensure the
security and resilience of systems and assets (e.g., audit logs, removable media, and
communications & control networks).
Detect Function. Processes, procedures, and policies should be in place to detect the
occurrence of cybersecurity events. These include the following categories:
Anomalies and Events Category: The ability to detect anomalous activities in a timely
manner and understand the potential impact of events;
Security Continuous Monitoring Category: Continuous security monitoring of
information systems and assets to identify cybersecurity events and verify the
effectiveness of protective measures;
Detection Processes Category: Detection processes and procedures should be maintained and tested to ensure timely and adequate awareness of anomalous events.
Respond Function. Processes and procedures should be in place to properly and promptly
respond to detected cybersecurity events. These include the following:
Response Planning Category: Implement response processes and procedures designed to
ensure timely response to detected cybersecurity events;
Communications Category: Coordinate response activities with internal and external
stakeholders, including law enforcement agencies;
Analysis Category: Ensure adequate analysis (including forensics) is conducted to
ensure adequate response and support recovery activities;
Mitigation Category: Perform activities to prevent expansion of an event, mitigate its
effects, and eradicate the incident; and
Improvement Category: Ensure that organizational response activities are improved to
incorporate lessons learned from current and previous detection/response activities.
Recover Function. Processes and procedures should be in place to recover from security
incidents, and to restore any capabilities or services that were impaired. These include the
following:
Recovery Planning Category: Ensure execution of recovery processes and procedures to
ensure timely restoration of systems affected by cybersecurity events;
Improvements Category: Recovery planning and processes should be improved by
incorporating lessons learned;
Communications Category: Restoration activities should be coordinated with internal
and external parties.
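The three-level hierarchy described above lends itself to a simple structured representation. The sketch below is purely illustrative and is not an official NIST artifact; the Function and Category names are paraphrased from the summary above, and Subcategories are omitted.

# Illustrative sketch of the Framework Core hierarchy (Functions -> Categories).
# Category names are paraphrased from the summary above; Subcategories are omitted.
framework_core = {
    "Identify": ["Asset Management", "Business Environment", "Governance",
                 "Risk Assessment", "Risk Management Strategy"],
    "Protect": ["Access Control", "Awareness and Training", "Data Security",
                "Information Protection Processes and Procedures",
                "Maintenance", "Protective Technology"],
    "Detect": ["Anomalies and Events", "Security Continuous Monitoring",
               "Detection Processes"],
    "Respond": ["Response Planning", "Communications", "Analysis",
                "Mitigation", "Improvement"],
    "Recover": ["Recovery Planning", "Improvements", "Communications"],
}

# Print a simple outline that an organization could annotate with its existing controls.
for function, categories in framework_core.items():
    print(function)
    for category in categories:
        print(f"  - {category}")

An organization assessing its own program might extend such an outline by listing, under each Category, the controls it already has in place, as a starting point for identifying gaps between its current practices and the Framework.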
As the Framework is intended to be a living document, NIST has, since its issuance, continued its collaboration with entities in the public and private sectors and has held workshops to discuss feedback and experience from users and the updating of the Framework. Its site contains updates, FAQs, and other information to assist in the implementation of the Framework. 499 NIST issued a Request for Information (RFI) on August 26, 2014, held workshops, and, on December 5, 2014, issued Update on the Cybersecurity Framework to summarize the RFI responses and workshop feedback and to describe how NIST will support use of the Framework going forward. 500
e. Additional Recent Federal Activity and Proposals
In light of a series of major data breaches and other cyber attacks against large institutions,
businesses, and entities that are part of critical infrastructure, there has been increasing recognition
on the federal level of the growing risk of cyber attacks from both domestic and external sources,
and the resultant exposures and disruptions to business, government operations and individuals’
interests. Thus, over the last several years, the White House and federal agencies have issued policy
frameworks and initiatives, the President has issued Executive Orders and legislative proposals, and
members of Congress have proposed numerous bills, in an effort to address privacy, data security
and cyber security issues and risks and institute federal standards.
i. Federal Privacy, Data Security and Cyber Security Proposals
Numerous federal bills have been proposed with the goals of increasing consumer privacy and data
security, combating breaches and theft from company and government computer networks, and
imposing national breach notification requirements. While Washington policymakers and Congress
had earlier seemed poised to enact legislation in this area, none of the recent proposals have gained
499 See http://www.nist.gov/cyberframework/.
500 Update on the Cybersecurity Framework, 5 December 2014, is available at
http://www.nist.gov/cyberframework/upload/nist-cybersecurity-framework-update-120514.pdf. See also presentation at NARUC
Winter Committee Meeting, Committee & Staff Committee on Critical Infrastructure, Framework for Improving Critical
Infrastructure Cybersecurity, February 15, 2015, at
http://www.nist.gov/cyberframework/upload/cybersecurity_framework_naruc_winter_2015_meeting_2015-02-15.pdf, and
presentation by NIST April 8, 2015, http://www.nist.gov/mwginternal/
de5fs23hu73ds/progress?id=NmYg3xLNL5AHZSmgjmjLltwvKsHQbkICt_dSJ4JKXvc.
sufficient momentum or bipartisan support, and there has been continuing debate over the balance between the national security needs and the individual privacy concerns that many of the proposals seek to address.
The goals of the currently pending bills and the current White House legislative proposal vary, but
most would impose information security program requirements upon certain types of entities,
particularly those in the industrial and public sectors, and many would replace state data breach
notification requirements with federal requirements. Summaries of certain more significant bills
currently under consideration, as well as some of the White House Executive Orders and legislative
proposal, are provided below.
In an indication of the increasing attention that data and cyber security risks are generating from federal policymakers, a number of bills on the subject have been introduced in Congress in every session since 2011, with more expected until legislation, in some form, is passed. As noted
below, while progress on a national uniform data breach notification standard remains elusive,
Congress has enacted legislation on cybersecurity issues, and the White House has been active in
this area during the past several years.
ii. White House Initiatives
Cybersecurity: In May 2011, the White House unveiled a comprehensive legislative proposal501 for
increased cyber security measures and standardization of breach notification obligations. The
Administration’s proposal includes provisions for: (i) creating a national notification standard;
(ii) synchronization of penalties for computer crime with other types of crime, including mandatory
minimum penalties for cyber intrusions into critical infrastructure and enabling the Department of
Homeland Security to help and collaborate with private sector entities in responding to a cyber
intrusion; (iii) voluntary sharing of information on new cyber threats, but with privacy oversight to
ensure that such actions do not adversely affect civil liberties or individual privacy; and (iv)
formalizing the Department of Homeland Security’s role in managing cyber security and the
Federal Information Security Management Act.
This was followed on February 12, 2013, by President Obama issuing an Executive Order titled
“Improving Critical Infrastructure Cybersecurity,”502 which formally acknowledged that “[t]he
cyber threat to critical infrastructure continues to grow and represents one of the most serious
national security challenges we must confront.”503 The President directed federal agencies to
develop their own voluntary cybersecurity standards for critical parts of the private sector. The
Order also requires federal agencies to produce unclassified reports of threats to U.S. companies
and to share them in a timely manner. Additionally, the Order instructs federal agencies to ensure
that privacy and civil liberties protections are incorporated into their activities. The Executive
Order ultimately led to the development of the Cybersecurity Framework by the National Institute
of Standards and Technology (see Section on Critical Infrastructure - The NIST Cybersecurity
501 See fact sheet issued by the White House, available at http://www.whitehouse.gov/sites/default/files/fact_sheetadministration_
cybersecurity_legislative_proposal.pdf.
502 Available at http://www.whitehouse.gov/the-press-office/2013/02/12/executive-order-improving-critical-infrastructurecybersecurity.
503 Id.
Framework, above) and President Obama renewing his call for Congressional action to enact
cybersecurity legislation.
In January 2015, the Obama administration issued another legislative proposal that would provide a
federal breach notification standard, and would also encourage the private sector to share
information about cyber threats with the Department of Homeland Security by providing limited
liability protection to companies that share such information. The administration proposal also
encourages the formation of Information Sharing and Analysis Organizations (ISAOs) within the private
sector to share information concerning cyber threats to our critical infrastructure.
On February 13, 2015, the White House issued Executive Order 13691, which builds on the
foundation established by Executive Order 13636 of February 12, 2013 (Improving Critical
Infrastructure Cybersecurity), and Presidential Policy Directive-21 (PPD-21) of February 12, 2013
(Critical Infrastructure Security and Resilience) to further encourage the formation of ISAOs, to
create an ISAO Standards Organization, and to make changes to the Critical Infrastructure
Protection Program and the National Industrial Security Program.504 (See section discussing NIST
Framework above).
Surveillance Reform: In light of the revelations by Edward Snowden of the data monitoring and
collection practices of the NSA, in March of 2014, President Obama announced a proposal to end
the federal government’s bulk telephone metadata collection program. President Obama proposed a
new program in which, among other reforms, (1) the government would not collect telephone records in
bulk, but rather the records would remain at the telephone companies; (2) the government would
obtain such phone records only pursuant to individual orders from the Foreign Intelligence
Surveillance Act (“FISA”) Court that approve the use of specific phone numbers for searching; and
(3) telephone companies would be compelled to provide technical assistance to ensure that the
records provided to the government can be searched in a usable format.
Other Proposals: On January 23, 2014 the White House launched a 90 day review of “big data”
and privacy that culminated in a set of policy recommendations on May 1, 2014.505 The goal of the
review was to analyze the ways in which big data would “affect the way we live and work; the
relationship between government and citizens; and how public and private sectors can spur
innovation and maximize the opportunities and free flow of this information while minimizing the
risks to privacy.”506 The White House asked for comments from the public and has held public
workshops around the country on questions including:
What are the public policy implications of the collection, storage, analysis, and use of
big data?
What types of uses of big data could measurably improve outcomes or productivity with
further government action, funding, or research?
504 Available at https://www.whitehouse.gov/the-press-office/2015/02/13/executive-order-promoting-private-sectorcybersecurity-
information-sharing.
505 Big Data and the Future of Privacy, John Podesta, Jan. 23, 2014, http://www.whitehouse.gov/blog/2014/01/23/big-dataand-
future-privacy.
506 Id.
What technological trends or key technologies will affect the collection, storage, analysis
and use of big data?
How should the policy frameworks or regulations for handling big data differ between
the government and the private sector?
What issues are raised by the use of big data across jurisdictions, such as the adequacy of
current international laws, regulations, or norms?507
The report published by the White House’s working group on big data at the end of this review
highlighted six recommendations:
Consumer Privacy Bill of Rights. The Department of Commerce should solicit public comment on a Consumer Privacy Bill of Rights, based on Fair Information Practice Principles, with the ultimate goal of drafting proposed legislation.
National Data Breach Notification Law. Congress should pass a law that establishes a national standard for data breach notification.
Privacy Act of 1974. The Office of Management and Budget should apply the
protections of the Privacy Act of 1974, which protects personal information held by the
federal government, to non-U.S. persons where practicable.
Education Data. The federal government should consider modernizing the Family
Educational Rights and Privacy Act and Children’s Online Privacy Protection Act to
ensure data collected in schools is not misused, but also encourage innovation in
educational technologies and methods.
Expand Technical Expertise to Stop Discrimination. The several federal agencies that
protect consumers and civil rights, including the Consumer Financial Protection Bureau
and the Equal Employment Opportunity Commission, should expand their technical
expertise so that they may identify how big data analytics might have discriminatory
impacts and develop plans for investigating and resolving such discrimination cases.
Amend the Electronic Communications Privacy Act (“ECPA”). ECPA provides
different levels of protection when the government seeks to access electronic
communications held by third parties (such as an email provider) depending on how long the email has been stored – requiring probable cause and a search warrant for electronic communications that have been held for less than 180 days, but only a subpoena or similar court order for communications held for more than 180 days. The working group
recommends Congress amend the law to remove this distinction and “ensure the
standard of protection for online, digital content is consistent with that afforded in the
physical world.”
507 Government ‘‘Big Data’’; Request for Information, Office of Science and Technology Policy, 79 Fed. Reg. 12251, 12251-
52 (Mar. 4, 2014).
In February 2015, the White House released an interim progress report, Big Data: Seizing
Opportunities, Preserving Values,508 detailing both what had been done to date and what it
believed remained to be done going forward.
iii. Congressional Activity on the Legislative Front
After years of inability to enact cybersecurity legislation, Congress took action at the end of the
113th Congress (2013-2015) and in the 114th Congress on new statutes that were signed into law by the President.
A. USA FREEDOM Act of 2015. On June 2, 2015, President Obama signed into law the USA
FREEDOM Act,509 which outlaws bulk collection of phone records by the federal
government and does not require phone companies to maintain phone records longer than
they would in the normal course of business. Critics argue that the statute does not go far
enough, as it was amended to allow for collection of phone records (and the metadata of persons
within two degrees of separation from the suspect) in certain cases. Also, the statute does not
include a special advocate to represent the privacy interests of the subject of the
investigation in proceedings under the Foreign Intelligence Surveillance Act, at which the subject
would not be present.
B. National Cybersecurity Protection Act of 2014. On December 18, 2014, President Obama
signed into law the National Cybersecurity Protection Act of 2014.510 The National
Cybersecurity Protection Act of 2014 codified the National Cybersecurity and
Communications Integration Center of the Department of Homeland Security as a “federal
civilian interface” through which information on cyber threats can be shared between the
private sector and government.
C. Cybersecurity Enhancement Act of 2014. The Cybersecurity Enhancement Act of 2014,511
also signed into law on December 18, 2014, is focused on the development of voluntary
cybersecurity standards for critical infrastructure through NIST.
D. Federal Information Security Modernization Act of 2014. The Federal Information
Security Modernization Act of 2014,512 also signed into law on December 18, 2014, updated
the Federal Information Security Management Act of 2002 by assigning the Department of
Homeland Security responsibility for the information security of federal agencies, including
collecting data related to agency information security and deploying tools to diagnose and
mitigate cyber threats and vulnerabilities.
508 https://www.whitehouse.gov/sites/default/files/docs/20150204_Big_Data_Seizing_Opportunities_Preserving_Values_Memo.pdf.
509 H.R. 2048, 114th Cong. (2015).
510 S. 2519, 113th Cong. (2014).
511 S. 1353, 113th Cong. (2014).
512 S. 2521, 113th Cong. (2014).
E. DHS513 Cybersecurity Workforce Recruitment and Retention Act of 2014, Homeland
Security Cybersecurity Workforce Assessment Act and the Cybersecurity Workforce
Assessment Act. Also signed into law on December 18, 2014, these statutes authorize
cybersecurity positions within DHS and other federal agencies.514
F. Consolidated and Further Continuing Appropriations Act. This statute,515 signed by
President Obama on December 16, 2014, requires an assessment of cyber-espionage or
sabotage risks related to high- or moderate-impact information systems to be acquired from
any country posing a cyber threat, specifically including China.
iv. Additional Federal Agency Privacy and Cybersecurity Initiatives
In addition to the NIST Cybersecurity Framework and federal agency Guidances discussed above,
the Federal Trade Commission and the Department of Commerce have both unveiled privacy
frameworks outlining policy recommendations, which are expected to be influential in shaping
forthcoming legislation. The Securities and Exchange Commission (SEC) is in the process of
evaluating what role it can play in mitigating cybersecurity risks; its recent activities are discussed
below (and in the Sections referenced above).
(1) Federal Trade Commission
In March 2012, the FTC issued a report, Protecting Consumer Privacy in an Era of Rapid Change:
Recommendations for Businesses and Policymakers,516 which sets forth a framework of best
practices for how companies should protect consumers’ privacy, and is intended to inform
policymakers as they develop solutions, policies and potential laws governing privacy. The report
is also intended to guide and motivate the business community as it develops more robust and
effective best practices and self-regulatory guidelines.
The proposed framework would apply broadly to online and offline commercial entities that collect,
maintain, share, or otherwise use consumer data that can be reasonably linked to a specific
consumer, computer or device. A preliminary version of the report recommended that the
framework apply to all such entities, but the final report concludes that the framework should
not apply to companies that collect only non-sensitive data from fewer than 5,000 consumers
each year and do not transfer that data, a change born of recognition of the potential burden on small
businesses.
Among the guidelines outlined in the proposed framework are: (i) building privacy protection into
everyday business operations and at every stage in product development; (ii) providing choices to
consumers about their data practices in a simpler, more streamlined way and providing a “Do Not
Track” option; (iii) making data practices more transparent to consumers; (iv) providing consumers
with reasonable access to the data that companies maintain about them; and (v) undertaking a broad
effort to educate consumers about commercial data practices and the choices available to them.
513 DHS stands for the U.S. Department of Homeland Security.
514 S. 1691 and S. 2952, 113th Cong. (2014).
515 H.R. 83, 113th Cong. (2014).
516 Available at http://ftc.gov/os/2012/03/120326privacyreport.pdf.
The FTC also recommends that Congress consider general privacy legislation, data security and
breach notification legislation, and data broker legislation. It also urges individual companies and
self-regulatory bodies to accelerate the adoption of the principles contained in the framework, and
recommends that data brokers who compile consumer data for marketing purposes explore
the creation of a centralized website where consumers could get information about their practices
and options for controlling the use of the data.
Over the first half of 2014, the FTC hosted a “Seminar Series on Emerging Consumer Privacy
Issues” that will ultimately result in staff reports on the topics discussed.517 The first of these
seminars tackled mobile device tracking518 and the second addressed alternative scoring products
used to determine consumers’ access to products and offers.519 The final seminar, held in May 2014,
addressed “Consumer Generated and Controlled Health Data.”520
The FTC has been very active in enforcing privacy and data security. (See Section III.2 on FTC
Regulation of Privacy and Data Protection, above; see also Section III.2.c. on Federal Trade
Commission “Red Flags” Rule, above).
(2) U.S. Department of Commerce
The U.S. Department of Commerce Internet Policy Task Force issued a green paper entitled
Commercial Data Privacy and Innovation in the Internet Economy: A Dynamic Policy Framework
(the “2010 Green Paper”) in December 2010.521 The 2010 Green Paper detailed initial policy
recommendations aimed at promoting consumer privacy online while ensuring that the Internet
remains a platform that spurs innovation, job creation, and economic growth. Key
recommendations set forth in the 2010 Green Paper include: (i) consider establishing fair
information practice principles comparable to a “Privacy Bill of Rights” for online consumers; (ii)
consider developing enforceable privacy codes of conduct in specific sectors with stakeholders; (iii)
create a privacy policy office in the Department of Commerce; (iv) encourage global
interoperability to spur innovation and trade; (v) consider how to harmonize disparate security
breach notification rules; and (vi) review the Electronic Communications Privacy Act for the cloud
computing environment.
In June 2011, the Department of Commerce issued another green paper, entitled Cybersecurity,
Innovation and the Internet Economy (the “2011 Green Paper”), addressing the economic
importance of strengthening cybersecurity protection and preserving consumer trust in the
Internet.522 The Task Force recognized that the threat of cybersecurity attacks has grown as Internet
517 FTC to Host Spring Seminars on Emerging Consumer Privacy Issues, Federal Trade Commission, Dec. 2, 2013,
http://www.ftc.gov/news-events/press-releases/2013/12/ftc-host-spring-seminars-emerging-consumer-privacy-issues
518 Spring Privacy Series: Mobile Device Tracking, Feb. 19, 2014, http://www.ftc.gov/news-events/events-calendar/2014/02/spring-privacy-series-mobile-device-tracking.
519 Spring Privacy Series: Alternative Scoring Products, March 19, 2014, http://www.ftc.gov/news-events/events-calendar/2014/03/spring-privacy-series-alternative-scoring-products.
520 Spring Privacy Series: Consumer Generated and Controlled Health Data, May 7, 2014, http://www.ftc.gov/news-events/events-calendar/2014/05/spring-privacy-series-consumer-generated-controlled-health-data.
521 Available at http://www.commerce.gov/node/12471.
522 Available at
http://www.commerce.gov/sites/default/files/documents/2011/june/cybersecurity_green_paper_finalversion_0.pdf.
business has grown. Key recommendations in the 2011 Green Paper include: (i) the establishment
of nationally recognized but voluntary codes of conduct to minimize cybersecurity vulnerabilities;
(ii) the development of incentives to combat cybersecurity threats;523 (iii) the improvement of the
public understanding of cybersecurity vulnerabilities through education and research; and (iv) the
enhancement of international collaboration on cybersecurity best practices to support expanded
global markets for U.S. products.
In March 2013, as part of an effort to prepare a report identifying ways to incentivize companies
and organizations to improve their cybersecurity, the Department of Commerce issued a series of
inquiries for public response.524 Forty-five different entities, including energy companies,
technology companies, governmental agencies and consultants, provided responses.525 This
ultimately led to a series of recommendations526 that were instrumental in NIST’s development of
the Cybersecurity Framework. (See Section III.4.d. on Critical Infrastructure – The NIST
Cybersecurity Framework, above).
(3) Securities and Exchange Commission
In March 2013, the SEC proposed Regulation SCI, which would require certain market participants
to have policies and procedures in place to protect their electronic systems.527 On March 26, 2014,
the SEC held a roundtable discussion on this and related topics, including disclosures of cyber
risk and data breaches, the role of boards of directors with regard to cyber risk, and simulations to
identify problem areas and improve defenses.528 (See Section on Regulation S-P and SEC
Enforcement of Privacy, Data Protection and Cybersecurity, above; see also Section on SEC
Guidance Regarding Public Company Obligations to Disclose Cyber Security Risks and Incidents
to Investors, above).
In April 2014, the SEC’s Office of Compliance Inspections and Examinations (“OCIE”) issued a
Risk Alert to provide additional information concerning its initiative to assess cybersecurity
preparedness in the securities industry, entitled OCIE Cybersecurity Initiative.529 This was
followed by the OCIE’s February 3, 2015 Cybersecurity Examination Sweep Summary reporting on
523 According to the 2011 Green Paper, these incentives could include the reduction of cyber insurance premiums for
companies that adopt best practices and openly share details about cyber attacks for the benefit of other businesses.
524 A Chance to Comment on Commerce’s Report on Cybersecurity Incentives, Mar. 28, 2013, http://www.commerce.gov/blog/2013/03/28/chance-comment-commerce%E2%80%99s-report-cybersecurity-incentives. Notice of Inquiry available at http://www.ntia.doc.gov/federal-register-notice/2013/notice-inquiry-incentives-adopt-improved-cybersecurity-practices.
525 Responses are available at http://www.ntia.doc.gov/federal-register-notice/2013/comments-incentives-adopt-improved-cybersecurity-practices-noi.
526 Recommendations to the President on Incentives for Critical Infrastructure Owners and Operators to Join a Voluntary
Cybersecurity Program, Department of Commerce, Aug. 8, 2013,
http://www.ntia.doc.gov/files/ntia/Commerce_Incentives_Recommendations_Final.pdf.
527 Regulation Systems Compliance and Integrity, Release No. 34-69077; File No. S7-01-13,
https://www.sec.gov/rules/proposed/2013/34-69077.pdf
528 SEC Holds Cybersecurity Roundtable, Digilaw Blog, March 31, 2014,
http://digilaw.edwardswildman.com/blog.aspx?entry=5317.
529 http://www.sec.gov/ocie/announcement/Cybersecurity-Risk-Alert--Appendix---4.15.14.pdf.
the results of the OCIE’s National Examination Program staff’s examination of 57 registered
broker-dealers and 49 registered investment advisors.530
In April 2015, the SEC’s Investment Management Division issued cybersecurity guidance (No.
2015-02) to investment companies and investment managers.531
v. Additional Federal Developments
Cyber security in all senses is clearly a growing concern of the federal government, as demonstrated
by both the legislative and agency developments discussed above, and additional Obama
administration initiatives.
(1) Office of the Cyber Czar
Shortly after taking office, in May 2009, President Obama announced the creation of the office of
“Cyber Czar,” a national cyber security chief to oversee the security of U.S. communications
networks and electronic infrastructure.
That office has continued, with the current holder, Michael Daniel, serving as Special Assistant to the President
and Cybersecurity Coordinator. In that position, he leads the interagency development of national
cybersecurity strategy and policy and oversees the implementation of those policies.532
(2) Government Accountability Office Reports
In early 2013, the U.S. Government Accountability Office (“GAO”) issued a report titled
“Cybersecurity – National Strategy, Roles, and Responsibilities Need to Be Better Defined and
More Effectively Implemented” (the “GAO Report”)533 in which it summarized several key
challenge areas in the federal government’s approach to cybersecurity. The GAO stated that the
increase in risks is demonstrated by the “dramatic increase in reports of security incidents” and the
ease of obtaining and utilizing hacking tools as well as the advances in the effectiveness and
sophistication of the attack technology. The GAO also recognized that cyber attacks could have a
potentially devastating impact on the nation’s computer systems and networks and could disrupt
government and business operations as well as the lives of individuals.
The GAO Report focuses on the increasing threat to sensitive information, which has potentially
serious impacts on federal and military operations and critical infrastructure. According to the
GAO Report, the number of incidents reported by federal agencies to the U.S. Computer
Emergency Readiness Team increased by 782% from 2006 to 2012.
According to the GAO Report, the many continuing cyber security challenges faced by the
federal government underscore the need for a clearly defined oversight process to ensure that
530 https://www.sec.gov/about/offices/ocie/cybersecurity-examination-sweep-summary.pdf.
531 http://www.sec.gov/investment/im-guidance-2015-02.pdf.
532 See https://www.whitehouse.gov/blog/author/Michael%Daniel.
533 United States Government Accountability Office, Cybersecurity – National Strategy, Roles, and Responsibilities Need to
Be Better Defined and More Effectively Implemented, GAO-13-187, Feb 14, 2013.
individual agencies are held accountable for implementing effective information security programs.
However, the Report recognizes that until there is a national cyber security strategy that addresses
all of the necessary key elements, progress is likely to remain limited.
The GAO recommends that the White House develop an overarching federal cyber security
strategy. Additionally, the GAO recommends that the strategy ensure that federal agencies are held
accountable for making significant improvements in cyber security and that Congress consider
legislation to better define roles and responsibilities for implementing and overseeing federal
information security programs.
On November 15, 2013, the GAO released another report entitled Information Resellers: Consumer
Privacy Framework Needs to Reflect Changes in Technology and the Marketplace.534 In this report,
the GAO noted that self-regulation has thus far been inadequate at protecting consumer privacy and
recommended federal legislation to provide such protection. This legislation, the GAO suggested,
should generally give consumers the right to access and correct information held about them by
private companies, and align with Fair Information Practice Principles. However, the GAO stopped
short of recommending specific laws, and acknowledged the challenge in providing sufficient
protection to individuals, without stifling innovation and commerce, which bring their own benefits
to consumers.535
On February 11, 2015, the GAO published an update to its High-Risk Series, which
expanded the section entitled “Ensuring the Security of Federal Information Systems and Cyber
Critical Infrastructure and Protecting the Privacy of Personally Identifiable Information (PII).”536
As the GAO explained,
This risk area is expanded because of the challenges to ensuring the privacy of
personally identifiable information posed by advances in technology. These
advances have allowed both government and private sector entities to collect and
process extensive amounts of PII more effectively. The number of reported
security incidents involving PII at federal agencies has increased dramatically in
recent years.537
While noting the various legislative and executive initiatives to address cyber threats reviewed
above, the GAO found, “cyber threats and incidents to systems supporting the federal government
and national critical infrastructures are increasing,” and noted,
Over the past 8 years, the number of information security incidents reported by
federal agencies to the U.S. Computer Emergency Readiness Team (US-CERT)
has increased from 5,503 in fiscal year 2006 to 67,168 in fiscal year 2014, an
increase of 1,121 percent ....538
534 Nov. 15, 2013, http://www.gao.gov/products/gao-13-663.
535 Information Resellers: Consumer Privacy Framework Needs to Reflect Changes in Technology and the Marketplace, at 46,
http://www.gao.gov/assets/660/658151.pdf.
536 GAO 15-290 (Feb. 2015).
537 Id. at Highlights.
538 Id. at 241.
On April 22, 2015, the Director of Information Security Issues testified before the Committee on
Oversight and Government Reform, House of Representatives, on the GAO findings, in a statement
entitled Cybersecurity: Actions Needed to Address Challenges Facing Federal Systems. 539
This was before the full scope of the high-profile breach of the Office of Personnel Management
was publicly reported.
4. PCI -The Payment Card Industry Standards for Protection of Payment
Card Information
The Payment Card Industry Security Standards Council (the “PCI”) is an open, global forum that
develops and manages data security standards for payment cards and cardholder data used and
transmitted around the world. The PCI members are the major payment networks: American
Express, Discover Financial Services, JCB International, MasterCard, Visa Inc. and Visa Europe
(the “Brands”).
a. PCI-DSS
The PCI has developed the Payment Card Industry Data Security Standards (“PCI-DSS”). PCI-DSS is
a set of industry standards, periodically revised, intended to help protect the security of electronic payment card
transactions that include Personal Information of cardholders. It is
incorporated into contractual agreements binding the various entities involved in the chain of
payment card processing. It is generally enforced by fines, penalties and other assessments imposed
by agreements among the participating entities and passed down the chain of entities involved in
payment card transactions through contractual indemnification. It operates as an industry
requirement for security for all organizations utilizing payment card information. Thus, the
obligation to comply and the imposition of fines and penalties and other assessments are essentially
contractual private arrangements rather than government regulatory schemes. However,
government entities are starting to adopt these standards, as discussed below.
PCI-DSS applies to all entities that store, process or transmit cardholder data, and imposes other
standards for software developers and manufacturers of applications and devices which are used in
applicable payment card transactions.540 It imposes requirements upon those entities for security
management, policies, procedures, network architecture, software design and validation obligations
to ensure entities appropriately protect customer card account data. Certain Brands, including
MasterCard541 and Visa Inc.542, categorize merchants and service providers according to the number
of card transactions they process for that Brand in a twelve-month period and impose different
obligations depending on the category. For example, a Level 1 designation indicates that the entity
539 See http://www.gao.gov/assets/670/669810.pdf.
540 See, e.g., PCI DSS Quick Reference Guide, PCI Security Standards Counsel, available at
https://www.pcisecuritystandards.org/documents/PCIDSS_QRGv3.pdf.
541 See http://www.mastercard.com/us/company/en/whatwedo/determine_merchant.html.
542 See http://usa.visa.com/merchants/protect-your-business/cisp/merchant-pci-dss-compliance.jsp.
is among those with the largest number of transactions. Because it processes more than six million
credit card transactions annually, across all channels, including the Internet, a Level 1 merchant
must perform an annual on-site PCI data security assessment and quarterly network scans.
A large number of malicious data breaches are targeted at obtaining electronically transmitted,
collected or stored payment card information, making PCI-DSS compliance one of the first aspects
investigated when a breach occurs. More companies implement PCI-DSS compliance each year,
but four out of five companies reportedly were still failing their interim assessments in 2014.543
PCI-DSS compliance is an ongoing effort and companies can easily fall short of compliance over
time without constant vigilance. Verizon has reported that every data breach (involving payment
cards) it investigated over a ten-year period involved companies that were found to be non-compliant
at the time of the breach, even if they had been compliant in the past.544
When a data breach occurs involving payment card information maintained by an entity subject to
PCI-DSS, and the breached entity has not satisfied PCI-DSS standards, the consequences can be
substantial. Deviation from PCI-DSS standards can be used as evidence of departure from industry
standards in both industry and third-party investigations, resulting in significant fines and other
contractual assessments and in lawsuits and regulatory investigations. Each Brand has its own
assessment rules that are modified periodically. Such assessments, imposed separately by each
Brand, can include:
Fines for PCI-DSS non-compliance, and possibly additional fines for
prohibited data retention;
Significant additional monthly fines until confirmation of compliance;
Assessments for fraud recovery that the Brand identifies as being potentially tied to a
security data breach;
Assessments by the Brand on its own behalf and/or its members for operational
expense reimbursement; and
Additional administrative assessments.
In addition, the Brands will often adjust the breached merchant’s classification upwards to Level 1,
regardless of the number of payment card transactions it processes. This reclassification results in
the imposition of further obligations and could potentially lead to even greater assessments should
another breach occur. Merchants are responsible for all costs associated with any system
modifications required to achieve and maintain PCI-DSS compliance.
PCI-DSS is reviewed and updated periodically in response to the evolving threats facing the
payment card industry. PCI-DSS 3.0 was published in November 2013 and was made fully
effective for all entities on January 1, 2015. PCI-DSS 3.0 added significant changes for documentation
requirements, current security standards, emerging technologies including tokens and smart cards,
543 Verizon 2015 PCI Compliance Report, Verizon, at p. 2, available at
http://www.verizonenterprise.com/resources/report/rp_ppcci-report-2015_en_xg.pdf.
544 Id. at p. 3
and additional physical protections for portable devices including card swipers. 545 A newer version
with minor modifications, PCI-DSS 3.1, provided updates effective April 15, 2015.546
In response to increasingly high-profile merchant data breaches, the Brands are encouraging a shift
in the U.S. to payment cards using microchips547 rather than the more vulnerable magnetic stripes.
Acquirers, processors and subprocessors are now required to have the capability to process chip-enabled
cards. Effective October 1, 2015, the Brands will impose a shift in the current liability system
whereby the issuing bank or merchant that does not support chip-enabled cards will bear the
liability for any resulting counterfeit card fraud.548 Many large merchants have already made the
technological shift, but the cost of terminal upgrades and training for smaller and mid-sized
merchants is slowing adoption for a large number of entities. 549
In addition, President Barack Obama signed an Executive Order in October 2014 requiring all
federal credit and debit cards to include chip technology and all federal government agencies to
accept chip-enabled cards.550 Interestingly, one of the lawsuits arising out of the Target mega breach
of millions of payment cards was a consumer class action naming as defendants Visa and
MasterCard as well as Target, alleging that Visa and MasterCard should have required the more
secure chip and pin technology earlier in the United States, and that by not doing so they
exposed the consumer plaintiffs to “unnecessary risk.”551 Virginia was the first state to follow when
its Governor issued an Executive Directive titled “Securing Consumer Transactions” requiring the
Commonwealth’s main purchase card program to include advanced chip authentication security
features by December 2015, and directing various state agencies to develop a plan by October 1,
2015 to enhance the security features of merchant and prepaid debit card programs.552
Issues as to the enforceability of Brand assessments, and whether their nature is that of fines and
penalties versus reasonably calculated reimbursement for fraudulent transactions and card
monitoring and replacement costs resulting from a data breach, have been raised in several recent
lawsuits. For instance, in 2011, a Utah restaurant filed a counterclaim in a Utah state court action
against its acquiring bank and payment processor after the bank and processor demanded
indemnification for fines and penalties assessed by Visa and MasterCard arising from an alleged
data breach at the restaurant. The restaurant argued that the bank and card processor improperly
545 https://www.pcisecuritystandards.org/documents/PCI_DSS_v3.pdf.
546 https://www.pcisecuritystandards.org/documents/PCI_DSS_v3-1.pdf.
547 Commonly called “EMV cards” based on their initial development by Europay, MasterCard and Visa.
548 See Cathy Medich, Smart Card Alliance, EMV Migration – Driven by Payment Brand, http://www.emv-connection.com/emv-migration-driven-by-payment-brand-milestones/.
549 See Experian, 2015 Second Annual Data Breach Industry Forecast, discussing potential effects of adoption of “chip and
pin” requirements on payment card breaches.
550 October 17, 2014 Executive Order – Improving The Security of Financial Transactions, https://www.whitehouse.gov/the-press-office/2014/10/17/executive-order-improving-security-consumer-financial-transactions.
551 See Amended Complaint filed 03/03/14 in Christensen, et. al. v. Target Corporation, Visa Corporation, MasterCard
Incorporated, et al., Case 2:13-cv-01136-CW-DBP (U.S. District Court, District of Utah, Central Division).
552 See https://governor.virginia.gov/media/3811/ed-5-securing-consumer-transactions.pdf; see also Allison Grande, Va. Is 1st
State to Mandate Enhanced Payment Security, Law360, May 6, 2015, http://law360.com/articles/652466/print?section=privacy.
collected payment from the restaurant’s bank account and that “the fines and penalties were
punitive in that they bore no relationship to the non-existent harm to Visa or MasterCard.”553
Another merchant (a specialty retailer) brought suit directly against Visa in March 2013 alleging
that Visa assessed approximately $13 million in “non-compliance fines and issuer reimbursement
assessments that Visa wrongfully imposed and collected from” the merchant’s acquiring banks
following a data breach. The complaint also alleged that the acquiring banks in turn collected that
amount from the merchant pursuant to the merchant’s contractual indemnification obligations. The
merchant further alleged that the assessments constituted “unenforceable penalties” and that “Visa
had no reasonable basis for concluding that [the merchant] was non-compliant with the PCI DSS
requirements.” The suit is currently pending in Tennessee federal court.554
Similar questions regarding the characterization of the Brand assessments as losses, fees, fines or
penalties were recently raised by a breached merchant against its payment processor. The
merchant’s agreement with the payment processor had a limitation of liability clause that capped the
merchant’s indemnification obligation for reimbursement of losses claimed by issuing banks, which
limitation did not apply to “fees, fines, and penalties assessed by payment card networks.”555
b. Incorporation of PCI-DSS into State Law
State legislators are increasingly sensitive to the risks associated with payment card breaches and
are beginning to codify PCI-DSS requirements into state data protection laws. Examples of this
include Minnesota, Nevada and Washington as detailed below.
i. Minnesota
The Minnesota Plastic Card Security Act,556 enacted on May 21, 2007, was the first of its kind. This
act prohibits companies doing business in Minnesota from retaining card security code data, PIN
verification code numbers or the full contents of any track of magnetic stripe data following
authorization of a transaction (or, in the case of a PIN debit transaction, for longer than forty-eight
hours following authorization). The act establishes liability of such companies to financial institutions that issue
payment cards (e.g., issuing banks) for certain costs of reasonable actions undertaken by them in
the event of a breach exposing data stored in violation of the Act. This statute was cited in litigation
553 See Elavon Inc. v. Cisero’s Ristorante Inc., No. 100500480 (3rd Dist. Ct., Summit County, Utah). In a March 21, 2013
decision, the court dismissed the merchant restaurant’s second amended answer and counterclaim cause of action for negligence, on
the basis that this claim was barred by the economic loss doctrine under Utah law (the court found that the restaurant could not
establish a duty owed by the bank or payment processor “independent of any contractual obligations between the parties”). The
merchant’s remaining counterclaims, which include breach of contract, conversion, and breach of fiduciary duty, survived. The case
is marked as dismissed with prejudice as of February 2, 2015.
554 See Genesco Inc. v. Visa U.S.A. Inc. et al, Case No. 3:13-cv-00202 (U.S. District Court, Middle District of Tennessee).
Causes of action alleged in the suit by the merchant include breach of contract, breach of the implied covenant of good faith and fair
dealing, violation of the California Unfair Business Practices Act, and unjust enrichment. Motions for summary judgment on
various causes of action and motions in limine in preparation for trial were pending as of May 2015.
555 Schnuck Markets, Inc. v. First Data Merchant Data Services Corp. and Citicorp Payment Services, Inc., Case No. 4:13-
CV-2226-JAR (U.S. District Court, Eastern District of Missouri). In January 2015, the court ruled in favor of the merchant,
instructing the payment processor to return all money in excess of the liability cap. The decision was appealed, and
motions to reargue were also filed. The trial date of April 6, 2015 was vacated, and as of May 2015, the District Court litigation was
stayed.
556 Minn. Stat. § 325E.64.
filed by financial institutions against Target concerning its well-publicized 2013 breach as a ground
for recovery against a breached entity.557
ii. Nevada
A Nevada data protection law amendment that became effective January 1, 2010, requires
companies doing business in Nevada that accept payment cards to comply with PCI-DSS.558 The
amendment also requires other data collectors doing business in Nevada to encrypt personal
information contained in certain kinds of transmissions and when stored on a data storage device.
iii. Washington
Under a Washington law effective July 1, 2010, if a credit or debit card processor or business fails
to take reasonable steps to guard against unauthorized access to account information that is in its
possession, and such failure is found to be the proximate cause of a breach, the processor or
business is liable to the issuing financial institution for reimbursement of its reasonable actual costs
related to the reissuance of credit or debit cards by the financial institution to mitigate potential,
current or future damages to its card holders that reside in the state of Washington as a
consequence of the breach, even if the issuing financial institution has not suffered another injury as
a result of the breach.559 The processor or business may also be liable to the issuing bank for
attorneys’ fees and costs incurred in connection with any legal action. In addition, vendors of card
processing software and equipment may be held liable for the damages incurred by an issuing
financial institution if the vendor’s negligence was the proximate cause of such damages. The new
law exempts processors, businesses and vendors that are compliant with PCI-DSS at the time of the
breach. They are deemed compliant if their PCI-DSS status was validated by an annual assessment
that took place no more than one year prior to the date of the breach. In addition, processors,
businesses and vendors are not liable if the breach involved encrypted card information.
557 See complaint in Trustmark National Bank and Green Bank N.A., on behalf of themselves and all other similarly situated
institutions v. Target Corporation, et al., initially filed in Case 1:14-cv-02069 (U.S. District Court, Northern District of Illinois,
Eastern Division), consolidated as part of In re: Target Corporation Customer Data Security Breach Litigation, Financial Institution
Cases, MDL No. 14-2522 (U.S. District Court, District of Minnesota). In a decision filed December 2, 2014, Judge Paul A.
Magnuson of the U.S. District Court, District of Minnesota, granted in part and denied in part Target’s motion to dismiss the
Consolidated Amended Class Action Complaint in the Financial Institutions Cases, and allowed a count alleging that Target violated
Minnesota’s Plastic Card Security Act to continue (and dismissed the negligent misrepresentation count). In the Spring of 2015,
Target and MasterCard attempted a settlement over the amount Target owed MasterCard’s issuing banks for the December 2013
breach in issue. MasterCard reportedly sent to its issuer banks an estimate of the damages each bank had suffered in the breach and
offered to pay the banks a fixed percentage of the MasterCard-estimated damages, but any bank accepting that payment had to do so by
May 20, 2015 and was required to release its claims against Target in this litigation. Plaintiff financial institutions sought an
injunction against the settlement, on the grounds that, among other things, they were never involved in nor informed of the settlement
before its public announcement. The court denied the injunction, noting it has “almost no authority to oversee such settlements,” but
its order included the statement: “The Court agrees with Plaintiffs’ counsel that the terms of the settlement do not appear altogether
fair or reasonable.” See May 7, 2015 decision of Judge Magnuson, filed in MDL No. 14-2522. However, the settlement failed
anyway when the requisite percentage of financial institutions refused to support it and execute the releases. See Target data breach
settlement with MasterCard falls through, Advisen FPN May 25, 2015,
http://crnfpn.advisen.com/articles/article2392696051985652926.html?user=; Joseph Ax, MasterCard, Target data breach settlement
falls apart, Reuters, May 22, 2015, http://www.reuters.com/article/2015/05/22/us-target-mastercard-settlement-idUSKBN0O71TD20150522.
558 Nev. Rev. Stat. § 603A.215.
559 Wash. Rev. Code § 19.255.
IV. THE REGULATORY AND STATUTORY LANDSCAPE OUTSIDE THE U.S.
1. Introduction to the International Scope of Privacy and Data Protection
Global compliance with data protection laws presents an increasing challenge, as the number of
jurisdictions with such laws increases and the multi-jurisdictional scope of business operations and
customers increases. It is trite to say that the flow of data in today’s digitalized world may not
recognize international borders and yet when data crosses into different legal jurisdictions, the rules
that apply to it may change. Whilst there have been some attempts by privacy regulators to
cooperate on the development of international standards560, there is still no recognized set of
international standards. Moreover, given that over 90 countries have enacted data protection laws561
(a number that is increasing), the regulatory challenges facing multi-national companies are
substantial. Added to this, the penalties for non-compliance with data protection laws also seem set
to increase.562
Many companies’ operations may be affected by the data security laws of multiple countries, apart
from the jurisdiction in which they are domiciled. Many companies have subsidiaries, affiliates or
employees in other countries. Thus, taking the example of breach notification (which is not a central
plank of data protection law in many non-U.S. jurisdictions, including the European Union, although
it is becoming more so and, indeed, is a central tenet of the draft Data
Protection Regulation currently going through the European Union’s “trilogue” legislative
enactment process), a U.S. company that sustains a breach that includes Personal Information of
international customers may need to consider carefully the impact of the data security and breach
notification laws of other countries, and whether they impose reporting or notification obligations
on the U.S. breached company. Breach notification requirements of various countries are described
in the World Law Group Global Guide to Data Breach Notification Requirements.563
In addition to the Member States of the European Union (“EU”), over 45 other countries now have
data protection or privacy laws and others are in the process of developing them. Some of those
with existing laws are contemplating revising them to enhance obligations, and increase penalties
for non-compliance.
Aspects of the EU Data Protection Directive, as well as selected countries’ national laws and
enforcement powers, are considered below.
2. The Dilemma of Whistleblower Hotlines
Many multi-national companies have implemented whistleblower hotlines, which permit employees
and service providers to report allegations of fraud, infractions of codes of conduct or similar
560 For example, the Madrid Resolution on International Standards on the Protection of Personal Data and Privacy of 2009 was
approved by more than 50 countries at the 31st International Conference of Data Protection and Privacy Commissioners, but has
failed to become influential.
561 Greenleaf, Graham, Global Data Privacy Laws: 89 Countries, and Accelerating (February 6, 2012). Privacy Laws &
Business International Report, Issue 115, Special Supplement, February 2012; Queen Mary School of Law Legal Studies Research
Paper No. 98/2012. Available at SSRN: http://ssrn.com/abstract=2000034
562 For example, Article 79 of the Proposed Regulation, in its current draft, allows national DPAs to impose fines of up to €1m
or 2% of worldwide gross revenue of an infringing organization.
563 This is available at www.globaldatabreachguide.com.
complaints. For U.S. companies, such hotlines are often part of compliance with the Sarbanes-
Oxley Act of 2002, the Foreign Corrupt Practices Act of 1977 or other U.S. laws.
Implementing such hotlines in EU Member States gives rise to certain data protection issues which
should be given careful consideration.564 In some Member States, amendments must be made to the
hotline reporting procedure in order to comply with local laws or guidelines. Certain issues which
may arise in selected Member States are considered below. In February 2006, the Article 29
Working Party issued an opinion565 to provide guidance to industry in establishing whistleblower
hotlines throughout the EU that were compliant with both the Sarbanes-Oxley Act and the Data
Protection Directive, although individual Member States’ laws and guidance must still be
considered.
Most EU Member States require notification of hotlines to the relevant data protection authority
(DPA), and in some Member States hotlines cannot be operated until approval has been obtained.
Where hotlines involve the transfer of personal data from the EU to the U.S., Member States will
require certain contractual and technical security measures to be in place. Company works councils
may need to be consulted prior to the implementation of a whistleblower hotline.
The World Law Group has published a Global Guide to Whistleblowing Programs, which provides
a brief overview of legislation governing whistleblowing programs in a number of countries.566
3. The European Union
The EU Data Protection Directive and the rules for determining when a particular Member State’s
laws apply are considered below. Many U.S. companies maintain subsidiaries, affiliates or
employees in the EU, and such companies, whether or not publicly traded, must comply with
relevant EU Member States’ data protection laws and guidelines where “personal data” (as defined
by the pertinent law) is collected, processed or transferred by local Member State operations.
Moreover, because EU data protection law requires that personal data may only be transferred to a
non-European Economic Area country567 where that country ensures an adequate level of protection
for that data or that certain other safeguards have been implemented between the data transferor and
transferee (such as agreement to the standard contract clauses or the approval and implementation
of binding corporate rules), this demands (albeit indirectly) that U.S. companies wishing to engage
with EU consumers or businesses adhere to EU data protection standards. 568
564 See World Law Group Global Guide to Whistleblowing Programs (2012), available at
http://www.theworldlawgroup.com/?cm=Doc&ce=details&primaryKey=53535; see also Mark Schreiber, The Practitioner’s Guide to
The Sarbanes-Oxley Act, Volume II, Chapter 9 – Anonymous Sarbanes-Oxley Hotlines for Multi-National Companies: Compliance
with EU Data Protection Laws, 2009, and http://www.ico.gov.uk/news/blog/2011/half-term-report-on-cookies-compliance.aspx.
565 Opinion 1/2006 on the application of E.U. data protection rules to internal whistleblowing schemes in the fields of
accounting, internal accounting controls, auditing matters, fight against bribery, banking and financial crime.
566 Available at
http://w.theworldlawgroup.com/files/file/WLG%20Global%20Guide%20to%20Whistleblowing%20Programs-2012-Web.pdf.
567 The European Economic Area or EEA comprises the Member States of the European Union (excluding Croatia, whose EEA
membership is pending approval by all EEA states) plus Iceland, Liechtenstein and Norway.
568 See Handbook on European data protection law, published by the European Union Agency for Fundamental Rights, 2013,
Council of Europe, 2013. The Handbook, and any updates to it, are available at the FRA website at fra.europa.eu and at the Council
of Europe website at coe.int/dataprotection, and on the European Court of Human Rights website under Case-Law menu at
echr.coe.int.
The main criteria in determining which Member States’ laws apply are the location of the
establishment of the data controller and, where the data controller is established outside the EEA,
the location of the equipment used by the controller to process the data. This is illustrated by the
following three examples:
(a) Where a controller is established in one Member State, the national law of that Member
State will apply.
(b) Where a controller has establishments in two or more Member States, the national law of
each host Member State will apply to the establishment based therein, provided the
processing is carried out in the context of the activities of that establishment. Each local
establishment must therefore comply with the law of the Member State in which it is located
in relation to the processing taking place within that Member State.
(c) Where a controller is not established in any Member State, the law of each applicable
Member State in which the data controller uses equipment to process the data will apply.
Since it is the location and activity of the data controller that determines the applicability of
the EU Data Protection Directive, the processing of personal data relating to data
subjects who are not EU residents may still fall within the scope of the EU Data Protection
Directive.
a. EU Data Protection Directive
EU Member States’ data protection laws are based on EU Directive 95/46/EC, known as the “Data
Protection Directive.” Since Directives do not have direct applicability in national law, Member
States were required to implement the Data Protection Directive by passing national laws. This has
been done by all Member States of the EU, together with the three member states of the European
Economic Area (the “EEA”)569. There are notable variations between Member States’
interpretation and implementation of the Data Protection Directive, and thus individual Member
States’ laws must be considered as well as the Data Protection Directive.
Under the Data Protection Directive, responsibility for legislative compliance rests with the “data
controller,” who is the natural or legal person who alone or jointly with others determines the
purposes and means of the processing of personal data. Subject to certain exceptions, data
controllers are required to notify their national DPA of their data processing activities.
The Data Protection Directive requires data controllers to process personal data only in accordance
with certain data protection principles, including the requirements that data be processed fairly and
lawfully, that there be a justification for processing, and that there be implementation of appropriate
technical and organizational measures to protect personal data against accidental or unlawful
destruction or accidental loss, alteration, unauthorized disclosure or access.
Under the Data Protection Directive, the meaning of “personal data” is broader than the term
“Personal Information” generally applicable in the U.S. (and broader than as used elsewhere in this
White Paper). It includes any information relating to an identified or identifiable natural (i.e. living)
569 The three member states of the EEA are: Iceland, Liechtenstein and Norway.
person from which that individual may be identified, either by that data alone or that data in
combination with other data.
Controllers are prohibited from transferring personal data to countries outside the EU that do not
ensure an adequate level of protection of personal data. The U.S. is not currently considered to
provide such an adequate level of protection,570 and thus personal data may not be transferred from
the EU to persons in the U.S. without additional protections. Such additional protections include
the transferee being enrolled in the Safe Harbor program,571 under which the transferee voluntarily
agrees to be bound by data protection rules broadly equivalent to those set out in the Data Protection
Directive, or entering into a compliant data transfer agreement. The European Commission has the
authority to determine whether a country ensures an adequate level of protection by reason of its
domestic law or international commitments. It has deemed there to be an adequate level of
protection in certain other jurisdictions (including Andorra, Argentina, Canada, Faroe Islands, Guernsey,
Isle of Man, Israel, Jersey, New Zealand, Eastern Republic of Uruguay and Switzerland).572
Transfers may also be made pursuant to Model Contracts,573 pursuant to “binding corporate rules”574 or pursuant to another
relevant exception.
U.S. companies may encounter such a prohibition on transfer in a wide range of circumstances. For
example, in a U.S. court case,575 a Utah court ordered a U.S. company to disclose customer
complaint data that was relevant to a claim that had been filed against it, notwithstanding that the
data was located in Germany and the transfer to the U.S. might breach German data protection laws.
The court was not sympathetic to the dilemma faced by the U.S. company. One concern is that to
allow other countries’ data transfer restrictions to trump U.S. court directions to produce
information in U.S. legal proceedings could operate to encourage transfer of sensitive and perhaps
unfavorable information outside the U.S. to jurisdictions that render transfer back into the U.S.
difficult.
Traditionally, DPAs of each Member State have tended to enforce data protection legislation
independently of other Member States. However, the actions launched against Google for violation
of EU privacy law576 have been coordinated simultaneously by six national DPAs (France,
Germany, Italy, the Netherlands, Spain, and the UK). This was the first time national DPAs had
launched a coordinated action in this sector.
In January 2012, Viviane Reding, the Vice-President of the European Commission and European
Union Justice Commissioner, formally released the Commission’s Proposed Regulation577. The
570 Opinion 1/99 concerning the level of data protection in the United States and the ongoing discussions between the
European Commission and the U.S. Government, EC Commission Working Party on the Protection of Individuals with regard to the
Processing of Personal Data.
571 Commission Decision 2000/520/EC of 26.7.2000.
572 http://ec.europa.eu/justice/data-protection/document/international-transfers/adequacy/index_en.htm.
573 http://ec.europa.eu/justice/data-protection/document/international-transfers/transfer/index_en.htm.
574 http://ec.europa.eu/justice/data-protection/document/international-transfers/binding-corporate-rules/index_en.htm.
575 AccessData Corp. v. Alste Techn. Gmbh, 2010 WL 318477 (D. Utah Jan. 21, 2010).
576 Google privacy policy: six European data protection authorities to launch coordinated and simultaneous enforcement
action, CNIL, Apr. 2, 2013, http://www.cnil.fr/english/news-and-events/news/article/google-privacy-policy-six-european-data-protection-authorities-to-launch-coordinated-and-simultaneo.
577 Available at http://ec.europa.eu/justice/data-protection/document/review2012/com_2012_11_en.pdf.
Proposed Regulation sets out a comprehensive reform of European data protection laws
intended to strengthen online privacy rights and boost Europe’s digital economy. It seeks to take
into account the realities of modern data flows, particularly in light of the increased use of social
networking sites, cloud computing, location-based services and smart cards. The Proposed
Regulation’s release followed a period of uncertainty after it was understood that at least six EU
policy units had issued negative opinions on the draft Regulation leaked in December 2011, and it is
still subject to discussion (at the time of writing, the Proposed Regulation had been commented on
by all three major EU institutions (the European Council, Parliament and Commission) and a final
draft was being debated and agreed upon in what is known as the “trilogue” process, which is
expected to be finalized by the end of 2015). If and when it is adopted and implemented, the
Proposed Regulation will impact organizations doing business in the EU, including U.S.
organizations that are active in the European Union market and offer their services to EU citizens.
The following are key areas of the reform that will affect privacy and data protection compliance
for organizations:
A Single Set of Rules: The Proposed Regulation provides for a single set of rules for all
organizations processing personal data in the EU. Unlike the Data Protection Directive, the
Proposed Regulation (once finalized) will have direct effect in all Member States, therefore
removing many of the inconsistencies across Member States that have been associated with
the Directive.
Fines: National DPAs will be allowed to impose fines of up to €1m or 2% of the
worldwide gross revenue of an organization. By way of comparison, the current maximum
fine in the UK (which has never been imposed) is £500,000.
“One-Stop Shop”: The Proposed Regulation implements a “one-stop shop” approach to
data protection compliance in the EU, meaning that an organization only needs to comply
with the data protection laws in place in the jurisdiction in which it has its main
establishment.
Data Breach Notification: The Proposed Regulation imposes a general requirement on all
businesses to notify DPAs and data subjects in the event of a data breach. Notice of data
breaches must be provided to the DPA “where feasible” within 24 hours, and to affected
data subjects “without undue delay.” While breach notification has recently become a
requirement for telecommunications and Internet service providers and has always been a
requirement for public sector bodies in the UK, the Proposed Regulation extends this
requirement to all organizations.
Consent: Where consent is to be used as a justification for processing personal data, the
Proposed Regulation requires that it be given explicitly, rather than assumed.
Data Portability: The Proposed Regulation introduces a new individual right of data
portability, which is designed to facilitate an individual’s access to personal data and
improve competition. It requires organizations to permit customers to move their data to
new organizations offering similar products or services.
The “Right to be Forgotten”: The Proposed Regulation also adds a new “right to be
forgotten” or “right to erasure” that allows an individual to require an organization to delete
personal data where there is no longer any legitimate reason for keeping it. This new right is
more stringent than the existing obligation of data controllers not to keep personal data for
longer than is necessary.578 This is not an absolute right, however. There are exceptions
proposed for compliance with legal obligations, which means that data subjects will not
unequivocally be entitled to demand that their personal data be erased in circumstances
where that data is required for legal reasons.
International Transfer of Data: The Proposed Regulation provides for a shift in the rules
to reflect the way that personal data is currently transferred internationally. It seeks to
address the problem that current data protection laws function only within a given territory,
usually defined along national borders, and do not reflect the reality of international
business, and that organizations making use of the cloud may collect data in one territory
and subsequently process it in other territories. The Proposed Regulation will simplify the
requirements for organizations seeking to do this and aims to improve the current system of
“binding corporate rules” (typically a set of intra-corporate global privacy policies that
satisfy the EU standard of adequacy when organizations are seeking to transfer the data
outside of the EEA), by requiring all DPAs to recognize “binding corporate rules” approved
by an individual DPA.
Data Protection by Design and by Default: The Proposed Regulation requires data
controllers to collect and retain personal data only to the minimum extent necessary for
the purposes for which the data are to be processed.
Accountability and Data Protection Officers: The Proposed Regulation seeks to increase
the accountability of data controllers and data processors, including by requiring that they
carry out data protection impact assessments prior to risky data processing activities being
undertaken. In addition, organizations with more than 250 full-time employees will be
required to have a Data Protection Officer.
The Proposed Regulation is subject to approval by the European Parliament and the Council of the
European Union (which comprises representatives of the Member States) following a process of
tripartite negotiations between the European Commission, the European Parliament and the Council
of the European Union. These trialogue negotiations commenced on 24 June 2015. There has been
considerable negotiation and discussion of the Proposed Regulation over the past three and a half
years, which has delayed the original timetable for implementation of the Proposed Regulation
(which was originally intended to be adopted by the end of 2014) and means that adoption of the
Proposed Regulation will be delayed until late 2015 at the earliest. Once the Regulation is adopted,
Member States are likely to be given a two-year period within which to familiarize themselves with
it before it comes into full force and effect.
578 Article 12 of the Data Protection Directive (Directive 95/46/EC).
b. Cookies and other tracking technologies
Tracking of users’ internet usage remains an issue in the EU. In November 2009, an amendment to
the EU E-Privacy Directive579 was adopted that required EU Member States to ensure that the
storing of or access to information such as cookies, spyware or other tracking devices on the
equipment of an Internet user is permitted only if the user has been provided with clear and
comprehensive information about the purposes of the processing and has given his or her consent.
Prior to this amendment, user consent was not required and users had only to be given the
opportunity to refuse the storing of or access to devices (which was commonly achieved by the user
adjusting browser settings to prevent such storage or access). There is an exception to the
requirement to obtain consent where the storage is strictly necessary for a service expressly
requested by the user. EU Member States were required to pass national legislation implementing
the amendment to the E-Privacy Directive by 26 May 2011. Whilst a number of EU Member States
were slow to implement the relevant amendments to the E-Privacy Directive (to the extent that the
European Commission felt compelled to commence legal action against EU Member States for their
failure to implement580), as of January 2013 the amendments had been implemented in all EU
Member States, according to the Article 29 Working Party’s guidance on obtaining consent for
cookies dated 2 October 2013581.
The Article 29 Working Party has taken the position that the requirements for consent are for “prior
consent” and that all information must be provided and consent obtained before any information is
sent or collected from a user’s device.582 This gives rise to complicated and prohibitive pop-ups or
similar notifications for users, particularly where there are numerous third-party advertising
networks involved. On the other hand, advertising networks, advertisers and content providers are
seeking to rely on Recital 66 of the EU E-Privacy Directive, which seems to offer a more pragmatic
approach to consent by implying that prior consent is only required “where it is technically possible
and effective.” In addition, Recital 66 implies that consent may be obtained through the use of
“appropriate settings of a browser or other application.” The Interactive Advertising Bureau (IAB)
Europe and the European Advertising Standards Alliance (EASA) have sought to build on this
pragmatic approach with industry-led solutions providing for a means of opting-out from
tracking.583 However, the Article 29 Working Party has repeatedly been highly critical of these
solutions, raising questions as to whether they comply with the law.584
The UK Information Commissioner’s Office issued guidance585 allowing for a 12-month grace
period, which ended in May 2012, for companies to develop ways of complying with the UK’s
579 Id.
580 Digital Agenda: Commission starts legal action against 20 Member States on late implementation of telecoms rules, Jul.
19, 2011, http://europa.eu/rapid/pressReleasesAction.do?reference=IP/11/905.
581 http://ec.europa.eu/justice/data-protection/article-29/documentation/opinion-recommendation/files/2013/wp208_en.pdf
582 Available at: http://ec.europa.eu/justice/data-protection/article-29/documentation/other-document/files/2011/20110803_letter_to_oba_annexes.pdf.
583 EDAA to be launched, European Advertising Standards Alliance, Dec. 8, 2011, http://www.easa-alliance.org/News/News/page.aspx/46?xf_itemId=146&xf_catId=1.
584 Opinion 16/2011 on EASA/IAB Best Practice Recommendation on Online Behavioural Advertising, Article 29 Data
Protection Working Party, Dec. 8, 2011, http://ec.europa.eu/justice/data-protection/article-29/documentation/opinion-recommendation/files/2011/wp188_en.pdf.
585 Enforcing the revised Privacy and Electronic Communications Regulations (PECR), May 25, 2011.
national legislation implementing the amendment to the E-Privacy Directive. On the eve of the
expiry of this grace period, the UK’s Information Commissioner’s Office updated its Guidance on
the rules on use of cookies and similar technologies to specifically state that implied consent can in
some contexts be considered a valid form of consent.586 As noted in Section 7(b) above, EU laws
regarding the use of cookies are continuously being interpreted and re-interpreted by bodies such as
the Article 29 Working Party and the courts of Member States, in light of new technologies that find
increasingly intrusive ways to use cookies to commercialize personal data.
c. Mobile Privacy
On March 14, 2013, the EU data protection authorities of the Article 29 Working Party
announced587 that they had adopted an opinion588 addressing the key data protection risks of mobile
apps. According to the Working Party, the risks range from a lack of transparency and lack of
awareness among app users to poor security measures, invalid consent mechanisms, a trend towards
data maximization and elasticity of data processing purposes. The Working Party noted in its
Opinion that many of these risks have already been examined by other international regulators, such
as the FTC and the California Attorney General.
The Working Party particularly focused on the obligations of app developers, like Google and
Facebook, but also considered all other parties involved in the development and distribution of
mobile apps in the EU. Such parties include manufacturers of operating systems and devices, app
stores, and other parties involved in the processing of personal data, such as advertisers and
analytics providers. The Working Party claims that, on average, 1,600 new apps are added to app
stores daily and that the average smartphone user downloads 37 apps. These apps collect large
quantities of data, such as photographs and location data. The Working Party has said that
“[t]his often happens without the free and informed consent of users, resulting in a breach of
European data protection laws.”
The Working Party highlighted that apps may pose significant risks to the private lives and
reputations of smartphone users, and added that individuals must be in control of their personal
data. Where the purpose of the data processing is excessive or disproportionate, the app developer
will not have a valid ground for processing the data, even if the user has consented, and would
likely be in violation of EU data protection laws. The Working Party
said that app developers should request user consent before the app collects, processes, or stores
information on a mobile device.
Particular focus has been given to the processing of personal data relating to children. The Working
Party shares the concerns expressed by the FTC in its staff report on mobile apps for kids. The
Working Party also issued conclusions and recommendations for the various parties involved in the
mobile app ecosystem to consider and implement. The Working Party has called on the industry to
586 Guidance on the rules on use of cookies and similar technologies, Information Commissioner’s Office, May 2012,
available at http://ico.org.uk/for_organisations/privacy_and_electronic_communications/the_guide/cookies.
587 Press Release, Article 29 Data Protection Working Party, Mar. 14, 2013, http://ec.europa.eu/justice/data-protection/article-
29/press-material/press-release/art29_press_material/20130314_pr_apps_mobile_en.pdf.
588 Opinion 02/2013 on apps on smart devices, Article 29 Data Protection Working Party, Feb. 27, 2013,
http://ec.europa.eu/justice/data-protection/article-29/documentation/opinion-recommendation/files/2013/wp202_en.pdf.
“use [its] creative talent to deliver more innovative solutions to effectively inform users on mobile
privacy.”589
The Working Party released further guidance on 5th February 2015 to help determine what kind of
personal data constitutes health data in the context of lifestyle and wellbeing apps.590 The Working
Party includes a broad range of data in this category, and even goes so far as to include any personal
data (health data or not) which is used by a controller in order to identify disease risks. Health data
falls within the category of sensitive personal data under the Data Protection Directive: sensitive
personal data is subject to more stringent rules under the Directive than other types of personal data,
including an obligation on the data controller to obtain the explicit consent of the data subject prior
to collecting and using such data. This is a potentially broad interpretation of health data for
lifestyle and wellbeing app owners to consider when evaluating whether to take further steps to
ensure compliance with the Directive in this regard.
4. Selected Countries’ Data Protection Laws
EU Member States have passed their own national data protection laws in order to implement the
Data Protection Directive. Many non-EU countries have also instituted data protection laws in
recent years, addressing what is a worldwide problem and presenting additional compliance issues
for companies with multinational operations.
a. United Kingdom
In the UK, the Data Protection Directive was implemented by the Data Protection Act 1998 (“UK
Act”). The Information Commissioner’s Office (“ICO”) is the relevant DPA for the UK and is
responsible for ensuring compliance with, and bringing enforcement action for breaches of, the UK
Act.591 Since April 2010, the ICO has had the power to impose fines of up to £500,000 where there
has been a serious contravention of the principles set out in the UK Act and certain other
requirements are met. However, the ICO has never imposed the maximum fine, and this has
led to criticism that the fines it issues have not been high enough.
In January 2013, the ICO levied a £250,000 fine against Sony, following the hack of the Sony
network in 2011. The ICO found that the attack on the network, and the subsequent compromise of
the personal data of millions of Sony customers, could have been easily prevented with up-to-date
software.592 Further, in March 2013, the ICO fined DM Design, a Glasgow company, £90,000 for
repeatedly targeting members of the public with nuisance marketing phone calls, and refusing to
remove customer details even when explicitly requested to do so.593
589 Id.
590 http://ec.europa.eu/justice/data-protection/article-29/documentation/other-document/files/2015/20150205_letter_art29wp_ec_health_data_after_plenary_en.pdf and
http://ec.europa.eu/justice/data-protection/article-29/documentation/other-document/files/2015/20150205_letter_art29wp_ec_health_data_after_plenary_annex_en.pdf
591 More information about the UK Act and the ICO is available at www.ico.gov.uk.
592 Sony fined £250,000 after millions of UK gamers’ details compromised, Information Commissioner’s Office, Jan. 24, 2013,
http://ico.org.uk/news/latest_news/2013/ico-news-release-2013.
593 Glasgow company fined £90,000 as ICO tackles nuisance calls, Information Commissioner’s Office, Mar. 20, 2013,
http://ico.org.uk/news/latest_news/2013/glasgow-company-fined-90000-as-ico-tackles-nuisance-calls-20032013.
In March 2014, the British Pregnancy Advice Service was fined £200,000 after a hacker threatened
to publish thousands of names of people who had sought advice on abortion, pregnancy and
contraception. Also in March 2014, Kent Police were fined £100,000 after highly sensitive and
confidential information, including copies of police interview tapes, was left in a basement at the
former site of a police station. Further, in April 2014, the ICO served home improvement company
Amber Windows with a £50,000 fine after an investigation discovered that the company had made
unsolicited marketing calls to people who had registered with the Telephone Preference Service (TPS).594
In March 2015, the ICO fined the Serious Fraud Office (“SFO”) £180,000 after evidence containing
sensitive personal data related to a bribery investigation concerning BAE Systems was mistakenly
sent to the wrong witness, who then disclosed it to a newspaper, The Sunday Times.
Although historically the ICO has tended to impose monetary penalties on the public sector, recent
fines indicate that the ICO is starting to find a balance between enforcement actions in the private
and public sectors. In February 2015, for example, the ICO imposed a fine of £175,000 on
Staysure.co.uk Limited, an online travel insurance company, after hackers accessed the details of
over 100,000 customers, including bank card details. The ICO found that the
company had not taken adequate steps to protect the personal data of its customers. The traditional
emphasis on enforcement actions against the public sector is due, in no small part, to the more
stringent notification requirements that were placed on public authorities for privacy breaches. A
greater number of public sector enforcement actions is an inevitable result of a greater number of
reported public sector breaches.
Fines are likely to increase once the EU data protection regime is overhauled by implementation of
the Proposed Regulation, discussed above, which will take effect two years after it is adopted by the
European Parliament. National DPAs will be allowed to impose fines of up to 2% of the worldwide
gross revenue of an organization or €1 million.
In the UK, regulated financial services firms, such as banks, insurance companies and brokers, must
also comply with the rules prescribed by the financial services regulator, formerly the Financial
Services Authority (“FSA”) and, since April 2013, the Financial Conduct Authority (“FCA”). The
regulator’s enforcement powers include private censure, removal of authorization, withdrawal of
approved person status and potentially large fines. The FSA took a very strict approach when
dealing with weaknesses in information security, in circumstances where there had been a breach of
Principle 3 of its Handbook, which requires an organization to take reasonable care to organize and
control its affairs responsibly and effectively.
b. Germany
In Germany, the Data Protection Directive was implemented by the Federal Data Protection Act
2001595 (“German Act”). Germany has a number of regional DPAs rather than a single national
DPA. Under the German Act, a data controller must notify the relevant German DPAs and affected
data subjects if it determines that certain serious or sensitive categories of personal data have been
recorded, unlawfully transferred or otherwise unlawfully disclosed to third parties, threatening
594 To keep up to date with The ICO’s monetary penalty notices visit http://ico.org.uk/enforcement/fines
595 Federal Data Protection Act (Bundesdatenschutzgesetz), published in the Bundesgesetzblatt I Nr. 23/2001, p. 904 , May 22,
2001.
serious harm to the data subjects’ rights or legitimate interests. If notifying all affected data
subjects individually would require a disproportionate effort, notification can be replaced by public
advertisements in daily newspapers or other effective means.
German DPAs have the power to impose administrative fines of up to €50,000 for simple violations
and €300,000 for serious violations of the German Act and to order organizations to remedy
compliance failures. The German data protection authorities regularly impose fines, including a
€1.5 million fine on the Lidl group in 2008 for using secret cameras and private detectives to spy on
its staff; a €1.1 million fine on Deutsche Bahn in 2009 for repeated breaches of the German Act;
and in 2014 a €1.3 million fine on the German insurance company Debeka for unlawful transfers of
personal data which took place during the 1980s and 1990s. Criminal sanctions of up to two years’
imprisonment or a fine may also be imposed.
In April 2007, a working group of German DPAs adopted a report entitled “Whistleblowing –
Hotlines: Internal Warning Systems and Employee Data Protection”596 that sets out guidelines
allowing companies to introduce whistleblower hotlines that comply with German data
protection law. A company’s works council must be consulted prior to implementation of a
whistleblower hotline, and the works council has a right of co-determination, such that the terms of
the hotline program are to be negotiated with it.
In Germany, the data breach notification regime is wider in scope than the breach notification regime
under the EU E-Privacy Directive and applies to all companies subject to the German Federal
Data Protection Act as well as to companies subject to the German Telecommunications Act and the
German Telemedia Act. This regime came into force in September 2009.
c. France
In France, the Data Protection Directive was implemented through an amendment to the existing
law 78-17 of January 6, 1978 relating to the Protection of Data Subjects as Regards the Processing
of Personal Data.597 The financial sanctions which the French DPA, the Commission nationale de
l’informatique et des libertés (“CNIL”), can impose include fines of up to €150,000 for the first
breach and up to €300,000 in the case of a repeat breach within five years. Criminal sanctions may
also be imposed, of up to a maximum of five years’ imprisonment and fines from €15,000 to
€300,000 for individuals, and up to €1,500,000 for corporate entities.
In November 2005, CNIL published guidelines598 to assist companies in the introduction of
whistleblower programs that are compliant with both the Sarbanes-Oxley Act and French law.
Since then, the French DPA has had a two-tier system of authorization in place, under which
whistleblower programs may be authorized by either: (a) self-certifying to the French DPA through
an automated online process that a whistleblower program complies with certain specified
parameters (the “AU-004 authorization”); or (b) seeking CNIL’s formal approval.
596 Available at http://fhh.hamburg.de/stadt/Aktuell/weitere-einrichtungen/datenschutzbeauftragter/informationsmaterial/wirtschaft/whistleblowing.html.
597 Available at www.cnil.fr/fileadmin/documents/en/Act78-17VA.pdf.
598 Available at http://www.cnil.fr/fileadmin/documents/en/CNIL-recommandations-whistleblowing-VA.pdf.
In late 2010, revised guidance599 issued by CNIL for the AU-004 authorization became mandatory,
narrowing the permitted scope of whistleblower programs that qualify for AU-004 authorization.
Companies wishing to qualify for the AU-004 authorization must now restrict their whistleblower
program scope to concerns about accounting, financial, banking, anti-competitive or corruption
matters. Matters in the “vital interests” of the company or its employees’ physical or mental
integrity, which were permitted under the earlier guidance, are now outside the scope of
whistleblower programs that qualify for AU-004 authorization. These other serious “vital interest”
matters arguably covered matters relating to discrimination, environmental violations, violations of
workplace safety rules and disclosures of trade secrets.600
In April 2011, CNIL announced that it intended to increase inspections of companies transferring
data into and out of France to ensure compliance with French data protection laws.601 The
inspections would include a focus on verifying that U.S. companies enrolled in the Safe Harbor
program are, in fact, compliant with its rules.
In March 2011, CNIL issued a fine of €100,000 against Google with respect to its Street View data
processing, which reportedly recorded information from Wi-Fi networks that Google cars drove
past as part of the mapping process. The resulting revision of Google’s privacy policy continued to
attract the scrutiny of the CNIL as well as of other European DPAs as to whether their data
protection laws have been breached.602 CNIL and DPAs of at least five other EU Member States, as
well as a number of U.S. states, are continuing their scrutiny of Google’s privacy policies.603 In
May 2012, CNIL published guidance604 on breach notification law affecting electronic
communications service providers. The guidance was issued with reference to the EU E-Privacy
Directive, which imposes specific breach notification requirements on electronic communication
service providers. However, given the ongoing discussions on the reform of the EU data protection
regime, it is likely that such an obligation will be extended to all sectors.
In January 2014, CNIL’s Sanctions Committee issued a monetary penalty of €150,000 to Google
Inc. upon finding that it did not comply with several provisions of the French Data Protection
Act605. In its decision, the Sanctions Committee considered that the data processed by the company
about the users of its services in France must be qualified as personal data. It also held that
French law applies to the processing of personal data relating to Internet users established in France,
contrary to the company’s claim. Overall, six EU data protection authorities individually initiated enforcement
599 Available at http://www.cnil.fr/en-savoir-plus/deliberations/deliberation/delib/83/.
600 See Edwards Wildman Client Advisory, Global Whistleblower Hotlines: New French Restrictions Require Immediate
Program Amendments, July 2011, http://www.edwardswildman.com/newsstand/detail.aspx?news=2354.
601 Available at http://www.cnil.fr/la-cnil/actu-cnil/article/article/programme-des-controles-2011-une-ambition-reaffirmee-des-competences-elargies/?tx_ttnews%5bbackPid%5d=2&cHash=91ae300acd.
602 Letter from the President of the CNIL to Larry Page, CEO, Google Inc., dated Feb. 27, 2012.
603 Charles Arthur, Google facing legal threat from six European countries over privacy; The Guardian, Apr. 2 2013; see also
Ian Steadman, Google fined by German regulator over Street View privacy breach, Wired, Apr. 22 2013,
http://www.wired.co.uk/news/archive/2013-04/22/google-germany-fine.
604 Available at http://www.cnil.fr/la-cnil/actualite/article/article/la-notification-des-violations-de-donnees-a-caractere-personnel.
605 The CNIL's Sanctions Committee issues a 150 000 € monetary penalty to GOOGLE Inc., 08 January 2014, at:
http://www.cnil.fr/english/news-and-events/news/article/the-cnils-sanctions-committee-issues-a-150-000-EUR-monetary-penalty-to-google-inc/
proceedings against Google Inc. and the latest French conclusions are similar to those laid down by
the Dutch and Spanish Data Protection Authorities in November and December 2013 on the basis of
their respective national laws.
In August 2014, the CNIL issued a warning against Orange, the telecommunications company,
further to Orange’s notification to the CNIL that it had suffered a data security breach as a result of
the failure of one of its data processors. The CNIL warning noted that Orange had a responsibility
to audit the activities of the data processor (which Orange used to manage its email marketing
activities) before proceeding.
The CNIL has now implemented an online breach reporting mechanism on its website, www.cnil.fr.
While there is no strict legal requirement regarding the method of giving notice, the new online
breach reporting mechanism means that notice should be given using the notice form that may be
downloaded from the website. The notice form may then be submitted online or sent by post.
d. Spain
In Spain, the Data Protection Directive was implemented through Organic Law 15/99 of December
13, 1999 on the Protection of Personal Data (the “Spanish Law”). The Spanish DPA has issued an
opinion to the effect that it considers anonymous reports to be unsuitable and not permissible for a
whistleblower hotline in Spain. Notification to the Spanish DPA is required, so it has the
opportunity to carry out a review of whistleblower programs and confirm compliance with local
law. There may be options for compliance in Spain, outside the hotline system, for making reports,
but this area is still unsettled.
Under one of the data protection principles set out in the Data Protection Directive discussed above,
controllers must process personal data fairly and lawfully (the “fair processing principle”). In most
EU Member States, controllers may seek to comply with the fair processing principle on the basis
that processing is for the purposes of legitimate interests pursued by the data controller (the
“legitimate interests condition”). The legitimate interests condition gives controllers a broad basis
on which to comply with the fair processing principle. Under Spanish law, the legitimate interests
condition is not available to data controllers and so other, less flexible, conditions must be relied
upon instead.
In accordance with the Spanish Law, the authorities have the power to impose administrative fines
from €900 up to €600,000 for each individual infringement. Whilst the data protection authorities
are not authorized to impose criminal sanctions, the criminal courts may impose fines from €720 to
€288,000 in respect of individuals and from €10,800 to €3.6 million in respect of companies, as
well as imprisonment for up to four years.
In early 2011, the Spanish Data Protection Agency fined a Spanish bank €150,000 following a court
ruling that the bank improperly included the names of a couple in two of Spain’s most widely used
debtors databases (Asnef and Badexcug). The couple had been victims of a scam and had found
themselves obliged to make payments on a bank loan. The loan was subsequently declared void by
the courts. The couple informed the bank of their intention to withhold any further payments and
also sent written notice to the bank stating that if the withholding of payments led to their inclusion
in the debtors databases, that would be considered a breach of personal data protection rules. The
bank, ignoring both the judgment voiding the loan and the couple’s notice, proceeded to
include them in the aforementioned registries. As a result, the Spanish Data Protection Agency
imposed the fine. The Spanish DPA also levied subsequent fines of €50,000 on three other banks
and financial institutions for the same offence of improperly adding the data subjects to their
respective debtor files.
The Spanish DPA continued to be active in other areas throughout 2012, most notably levying a
€100,000 fine on Telefónica Móviles for processing data without the data subject’s consent and for
charging to the data subject’s account invoices for services to which the data subject had not contracted.
In January 2014, the Spanish Data Protection Authority (AEPD) issued fines totaling €5,000 against
two jewelry companies, Navas Joyeros S.L. and Luxury Experience S.L., for not providing clear and
comprehensive information about the tracking programs they used, in violation of the Spanish
'cookie consent' requirement606. These are the first monetary penalties imposed
under Spain's Law of Information Society Services and Electronic Communications (LSSI-CE)
which implements the e-Privacy Directive (Directive 2002/58). The Directive obliges website
owners to give clear and comprehensive information about the tracking programs they use, and to
gain consent from users. The AEPD began investigations in July 2013, four months after releasing
its guidelines on the use of cookies. The Spanish legislature has also drafted a General
Telecommunications Act, which allows the AEPD to pursue enforcement against site owners who
fail to collect prior consent from users and will widen its range of tools in applying the cookie
regime. The General Telecommunications Act also includes compulsory notification obligations to
the DPA and to data subjects in cases of breaches or violations of security.
e. Sweden
In Sweden, the Data Protection Directive was implemented through the Personal Data Act 1998607
(“Swedish Act”). Under the Swedish Act, it is generally prohibited for companies to process data
relating to criminal allegations or violations of law, including in a hotline. Companies wishing to
operate a hotline in Sweden must therefore apply to the Swedish DPA for an exemption from such
prohibition. The Swedish DPA has a policy of granting such exemptions subject to certain
restrictions, including that only key personnel and employees in a management position may be
reported and personal data relating to other groups of employees may not be processed through the
hotline. This may require certain language in the notice to employees in Sweden that the hotline
should be used only where the report relates to a member of management or a key employee of the
company. In some cases, it may not be possible to impose such a limitation, or the
boundaries may inevitably become blurred.
The Swedish courts may impose fines as well as prison sentences of up to six months (or two years
for serious offences), and require the breaching entity to pay damages to the victim of the data
breach. In practice, the courts rarely impose prison sentences, and on the few occasions that they
have done so, the matter has involved additional offences. For example, two individuals who aligned themselves
with the Nazis set up a register containing details of religious beliefs, political beliefs, sexual life
and race in respect of a large number of people. The imprisonment sentence took into account both
606 Spain: AEPD issues first European cookie fine, Updated: 06/02/2014 , at:
http://www.dataguidance.com/dataguidance_privacy_this_week.asp?id=2203
607 Available at http://www.regeringen.se/content/1/c6/01/55/42/b451922d.pdf.
the data protection aspects of the matter and the defamation concerns. The Swedish courts
are more active in requiring breaching entities to pay damages to victims of breaches, including,
in the case just mentioned, an award of SEK 10,000 to one of the victims.
f. Austria
Under Austria’s Federal Act concerning the Protection of Personal Data (the “Austrian Act”), data
controllers that process certain personal data must notify the Austrian Data Protection Authority,
which keeps a register of all data applications that is accessible by the data subjects.608
Additionally, if a data collector learns that personal data from his data application are
“systematically and seriously misused” and the data subject may suffer damages, the collector is
required to immediately inform the data subject in an “appropriate manner.” This obligation does
not exist if, taking into consideration that only minor damage to the data subject is likely and the
cost of informing all persons concerned, the notification would require a disproportionate
effort.
The authorities in Austria have the power to impose fines of up to €25,000 for breaches of the
Austrian Act relating to data secrecy, and up to €10,000 for breaches relating to the notification and
information obligations. The courts may also impose prison sentences of up to six months for the
illegal access of computer systems. The Austrian authorities tend to take a more remedial approach
to breaches than other European countries: taking the Google Street View example mentioned under
the section on France above, whilst France imposed a monetary fine, the Austrian authority issued a
number of recommendations to Google regarding the steps that it should take in order to be
compliant with the Austrian Act (including, for example, the blurring of individuals’ faces and
vehicle number plates).
g. Canada
Canada has federal, provincial and territorial privacy statutes that apply to the collection, use,
disclosure and management of personal information in the private, public and health sectors, each
with some variation in provisions.609
The Privacy Act governs the personal information handling practices of federal departments and
agencies.610 In the private sector, the four main privacy statutes are: the federal Personal
Information Protection and Electronic Documents Act (“PIPEDA”);611 Alberta’s Personal
Information Protection Act (“PIPA”);612 British Columbia’s Personal Information Protection Act;613 and
Quebec’s An Act Respecting the Protection of Personal Information in the Private Sector.614
608 Federal Act concerning the Protection of Personal Data (2000), available at http://www.dsk.gv.at/site/6274/default.aspx.
609 See http://www.priv.gc.ca/resource/pb-avp/pb-pa_e.asp
610 Privacy Act (R.S.C., 1985, c. P-21).
611 S.C. 2000, ch. 5 (“PIPEDA”).
612 S.A. 2003, ch. P-6.5 (“PIPA Alberta”).
613 S.B.C. 2003, ch. 63 (“PIPA BC”).
614 R.S.Q. ch. P-39.1 (“Quebec Privacy Act”).
In May 2010, Alberta became the first Canadian province to pass a general data breach notification
law,615 amending its Personal Information Protection Act (“PIPA”), to require notice of a data
breach to Alberta’s Information and Privacy Commissioner (“Alberta Commissioner”) under certain
circumstances. The amendment also granted the Alberta Commissioner authority to require
organizations to notify individuals who face a “real risk of significant harm” as a result of the
breach. The Alberta Commissioner is also required to issue a Notification Decision, which is
published on the Alberta Commissioner’s website.
Ontario, New Brunswick, and Newfoundland and Labrador have privacy legislation that applies to
health information and that has been declared substantially similar to PIPEDA with respect to health
information custodians.616
On June 18, 2015, Canada passed into law Bill S-4, the Digital Privacy Act, which amended
PIPEDA, including by adding a requirement for organizations to give notice to affected individuals,
as well as to the Office of the Privacy Commissioner of Canada, about data breaches in certain
circumstances, such as where it is reasonable to believe that the breach creates “a real risk of
significant harm to the individual.” There are some exemptions to this requirement in certain
instances, such as when the information is business contact information (e.g. an email address).
There are also record-keeping requirements for organizations regarding breaches, and new consent
requirements regarding organizations’ collection of information about individuals, with an exception
for business transactions, such as M&A, a partial sale of assets or a transfer upon insolvency,
provided certain conditions are met.617
The PIPEDA amendments make it a criminal offence for an organization to knowingly fail to
comply with the notification and record-keeping requirements following a breach of data security,
with potential fines of up to CAD $100,000. The changes place more onerous obligations
on organizations with regard to their handling of personal data and carry greater implications for
non-compliance with PIPEDA, but they also serve to expand the permissible scope of information
sharing.
h. China
China does not presently have a single comprehensive data protection law. Instead, data privacy
provisions, albeit with some variations, are found in various PRC laws, regulations, and
non-binding standards. Rights to privacy may be traced
back to the Constitution of the People’s Republic of China (“Constitution”).618 Article 40 of the
Constitution provides that no organizations or individuals are permitted to infringe upon, among
other things, the freedom and privacy of a citizen’s communications for any reason, except in the
615 Mexico and Alberta Pass New Data Protection Laws, InsureReinsure.com, June 10, 2010.
http://www.insurereinsure.com/?entry=2507.
616 See http://www.priv.gc.ca/resource/pb-avp/pb-pa_e.asp.
617 The Act is available at https://www.cba.org/cba/submissions/pdf/14-34-eng.pdf. For discussion of new provisions, see
Cameron, Alex, Digital Privacy Act: Mandatory breach notification and other important changes to Canadian privacy law, Fasken
Martineau Bulletin, June 23, 2015, http://www.fasken.com/mandatory-breach-notification-and-other-important-changes-to-canadian-privacy-law/.
618 Available at: http://www.npc.gov.cn/englishnpc/Constitution/2007-11/15/content_1372964.htm.
case of national security, investigation of a criminal offence or monitoring by the public security or
prosecutorial authorities in accordance with legally-prescribed procedures.
On December 31, 2011, the Ministry of Industry and Information Technology (“MIIT”), the
Chinese internet and telecommunications industry regulator, issued the Several Provisions on
Regulating the Internet Information Service Market Order (“IIS Provisions”).619 The IIS Provisions
apply to entities in mainland China providing information services through the internet or engaging
in related activities and have a special focus on protecting internet users’ legitimate expectation of
privacy from perceived abuses. The IIS Provisions define “users’ personal information” to mean
any information associated with a user from which, either independently or when combined with
other information, such user can be identified. Under the IIS Provisions, information services
providers are (i) prohibited from collecting personal information without prior consent of the user;
(ii) required to expressly inform users of the method, content and purposes of collecting and using
their personal information; (iii) prohibited from collecting personal information other than as is
necessary in connection with the product or service provided; (iv) prohibited from disclosing or
transferring users’ personal information to a third party without the consent of the user, unless the
laws and regulations provide otherwise; and (v) prohibited from deceiving, misleading or coercing a
user into transferring any information the user has uploaded.
In February 2013, the People’s Republic of China’s General Administration for Quality
Supervision, Inspection, and Quarantine and the Commission for the Administration of
Standardization jointly issued the Guidelines for Personal Information Protection within
Information System for Public and Commercial Services on Information Security Technology
(“Guidelines”).620 The Guidelines are intended to regulate all organizations and entities with regard
to protection of personal information; however, they do not have the force of law. The Guidelines
contain rules and principles for collecting, processing, transferring and deleting personal information on
"computer information systems" (as opposed to other data storage media in hard copy form). The
Guidelines divide personal information into “sensitive personal information” and “general personal
information,” similar to the distinction in the EU data privacy regime. The collection and use of
sensitive personal information requires the relevant owner's express consent, and evidence of such
consent must be kept. The collection and use of general personal information only requires implied
consent (that is, where the owner raises no objection to its collection). In either case, express
consent is required for transfer of any personal information outside of mainland China.
On 16 July, 2013, following the Decision on the Strengthening of the Protection of Network
Information passed by the Standing Committee of the National People’s Congress (“NPC”), the
MIIT promulgated the Provisions on Protection of Personal Information of Telecommunications
and Internet Users (“Personal Information Provisions”) effective from 1 September 2013.621 The
Personal Information Provisions address collection and use of personal information of individual
users such as passwords, names, dates of birth, addresses, account numbers and so forth by
providers of telecommunications and internet information services within mainland China. The
Personal Information Provisions also include standards, security measures and penalties concerning
619 Available at: http://www.miit.gov.cn/n11293472/n11293832/n12843926/n13917012/14414975.html.
620 Available at: http://www.cinic.org.cn/site951/zcdt/2013-03-29/636814.shtml.
621 Available at: http://www.miit.gov.cn/n11293472/n11294912/n11296542/15514014.html
collection and use of information by service providers and third parties engaged to handle collection
and use of such information (i.e. outsourcing).
In addition, amendments to the Consumer Protection Law (“Amended Consumer Protection
Law”)622 issued by the State Administration of Industry and Commerce (“SAIC”) came into force
on 15 March, 2014. The Amended Consumer Protection Law covers all businesses that provide
goods or services to consumers, and extends to all means of collection of personal data (such as
membership enrollments at supermarkets, credit card applications made at shopping malls and
patient details provided at clinics). For the first time, it establishes the right of consumers to have
their personal information protected, and consumers may obtain compensation from business
operators which infringe upon this right.
On January 1, 2015, the SAIC issued Measures for Penalties for Infringing Upon the Rights and
Interests of Consumers,623 which came into effect on March 15. The Measures generally define
personal information as information that identifies a consumer in the context of consumer transactions,
both online and offline, and list the following data categories: “a consumer’s name, gender, occupation, date
of birth, identification card number, address, contact information, status of income and assets,
health status, and consumption habits.”
Five categories in the list go beyond the categories found in the Provisions on Protecting the
Personal Information of Telecommunications and Internet Users and the Guidelines: gender,
occupation, status of income and assets, health status and consumption habits.
The definition of personal information is significant because businesses must treat personal
information they collect and use in accordance with their obligations under applicable law;
specifically, under the SAIC Measures and the Consumer Rights Protection Law. These SAIC
consumer protection data privacy requirements are generally consistent with, though do not
completely reflect, the requirements found in other data privacy legislation in China.
A failure to comply, whether stemming from the Consumer Rights Protection Law or the SAIC
Measures, is subject to the varied and potentially onerous penalties set out in Article 56 of the
Consumer Rights Protection Law (except in those situations where the penalty for a failure to
comply is set out in another relevant law or regulation). Such penalties are separate from any civil
liabilities that may also arise due to the compliance failure.
In sum, the SAIC Measures define and clarify which categories of information are protected as
consumer personal information in China: an expansive list that covers more categories than, but
not exactly the same categories as, the Provisions on Protecting the Personal Information of
Telecommunications and Internet Users and the Guidelines.
622 Available at: http://www.saic.gov.cn/zcfg/fl/xxb/201310/t20131030_139167.html.
623 Available at: http://www.saic.gov.cn/zwgk/zyfb/zjl/xfzbhj/201501/t20150114_151320.html.
The Chinese regulatory environment is already complicated, with the Ministry of Commerce, the
National Development and Reform Commission and the State Administration for Industry &
Commerce frequently having overlapping jurisdiction. MIIT’s involvement as the regulator of IIS
adds further regulatory complexity.
i. Hong Kong
The principal privacy law in Hong Kong is the Personal Data (Privacy) Ordinance (Cap 486)
(“Ordinance”),624 enacted in 1995 and amended in 2013 to protect personal data, i.e. data relating
directly or indirectly to a living individual (data subject), from which it is practicable to ascertain
(directly or indirectly) the identity of the individual; and in a form in which access to or processing
of the data is practicable. The Ordinance applies to any person (data user, including private sector,
public sector and government department) who controls the collection, retention, processing or use
of personal data. The Ordinance sets out six data protection principles625 governing the proper
collection, accuracy, retention, use, security, access and correction of personal data, the
contravention of which is not per se an offense.
The independent Office of the Privacy Commissioner for Personal Data (“PCPD”)626 was
established in 1996 with the mandate to promote data protection practices and to oversee data users’
compliance with the Ordinance. Since its establishment, the PCPD has issued various guidance
notes to data users on different topics to promote good data protection practices. In February 2014,
the PCPD issued
“Privacy Management Programme: A Best Practice Guide” calling for businesses to adopt
comprehensive privacy management programs for achieving compliance in all aspects of business.
In Hong Kong, personal data cannot be transferred to another data user (even if it is within a group
of companies) without the data subject’s prior consent. The Ordinance also prohibits transfers of
personal data outside Hong Kong except in specified circumstances; however, that provision has not
yet gone into effect.627
Hong Kong heavily regulates the use of personal data and provision of data for use in direct
marketing. Data users are required to inform data subjects of the kinds of personal data they will be
using for direct marketing purposes and the classes of goods or services that will be marketed. Data
users may not use personal data in direct marketing or provide data to another person for use in
direct marketing unless they have obtained the data subject’s consent. Silence is not sufficient.
Hong Kong maintains “do not call” registries for commercial electronic messages communicated
within Hong Kong to Hong Kong recipients by email, fax, SMS, MMS or by pre-recorded voice
message. These registries are separately provided for under the Unsolicited Electronic Messages
Ordinance (Cap.593).628
624
625 See Schedule 1 to the Personal Data (Privacy) Ordinance (Cap 486).
626 More information about the PCPD is available at http://www.pcpd.org.hk/.
627 Section 33 of the Personal Data (Privacy) Ordinance (Cap 486).
628 Available at:
http://www.legislation.gov.hk/blis_pdf.nsf/CurAllEngDoc/BE5AA57E2A0358C7482575EF00201941/$FILE/CAP_593_e_b5.pdf
While data users are not statutorily required to inform the PCPD about a data breach incident, the
Guidance on Data Breach Handling and the Giving of Breach Notifications issued by the PCPD
advises data users to provide such notice as a recommended practice for proper handling of such
incidents. In 2014, there were 70 known data breach incidents (compared with 61 incidents in 2013
and 50 in 2012), affecting 47,000 individuals.629 The PCPD was made aware of these incidents
through voluntary notifications from the data users as well as reports from the media and the
general public. These incidents ranged from unauthorised disclosure of personal data through
hacking to inadvertent circulation of lists of personal data to unrelated third parties.
The PCPD may investigate suspected breaches of the Ordinance, either in response to a complaint
or at its own initiative. If the PCPD concludes a contravention is likely to be repeated, it may issue
an enforcement notice and impose a penalty. Individuals may also claim compensation through
civil proceedings for damage caused to them as a result of a contravention of the Ordinance.
In 2014, the PCPD received a total of 1,702 complaints, which represented a slight decrease of 5%
compared with the record high figure of 1,792 for 2013. Seventy-four percent were made against the
private sector (1,264 cases), 10% against the public sector/government departments (176 cases) and
16% against individuals (262 cases). Forty-one percent of those complaints concerned the use of
personal data without the consent of data subjects (694 cases), 37% were about the purpose and
manner of data collection (633 cases), 12% were related to data security (197 cases) and 6% were
about data access/correction requests (112 cases).
The PCPD issued 20 warnings and 90 enforcement notices (compared with 32 warnings and 25
enforcement notices in 2013). It referred 20 cases, the same number as in 2013, to the Police for
consideration of prosecution, an increase of 33% compared to 2012. Of these, 17 cases related to
suspected contraventions of the provisions governing direct marketing, including the making of
repeated telemarketing calls despite the complainants’ requests to opt out of such marketing and
failing to take specified steps before using individuals’ personal data for direct marketing. Only one
conviction was recorded in 2014: an insurance agent contravened section 50B(1)(c)(i) of the
Ordinance by making false statements to the Commissioner during an investigation into his
misleading the complainant as to the identity of the issuer of the insurance policy to be sold to the
complainant. Together with convictions under other charges, the accused was sentenced to 4 weeks’
imprisonment. This was the first conviction, since the Ordinance came into force in 1996, for
misleading the Commissioner in the discharge of his statutory functions and the first conviction
resulting in a custodial sentence.630
Hong Kong’s current protection for personal data transferred to overseas jurisdictions is far from
comprehensive, as section 33 has never been brought into force since its enactment in 1995.
However, to regulate cross-border flows of personal data, the PCPD published a Guidance on
December 29, 2014 to assist organizations in preparing for the eventual implementation of section 33,
which expressly prohibits all transfers of personal data 'to a place outside Hong Kong' except in
specified circumstances. The Guidance contains a set of recommended model data transfer clauses
organizations are encouraged to adopt as part of their corporate governance responsibility to
629 See http://www.pcpd.org.hk/english/infocentre/press_20140123a.htm.
630 See https://www.pcpd.org.hk/english/news_events/media_statements/press_20150127.html.
enhance privacy protection for cross-border data transfer agreements with an overseas data
recipient.631
Going forward, the PCPD’s aim is to increase privacy and data protection efforts in enforcement in
addition to public education. Its strategic focus in 2015 includes issues associated with the
prevalent use of mobile apps; surveys on public perception of the PCPD and privacy issues, and on
protection of personal data contained in public registers maintained by the Government; assisting
the Government and the private sector in administering privacy management programs; and
assisting the Bills Committee in the deliberations of the Electronic Health Record Sharing System
Bill as they relate to privacy and data protection.
j. India
In April 2011, India adopted new privacy regulations, known as the Information Technology
(Reasonable Security Practices and Procedures and Sensitive Personal Data or Information) Rules,
2011 (“Indian Rules”).632 The Indian Rules impose a number of obligations on data controllers,
including requirements to have a privacy policy, to obtain the consent of data subjects before
collecting or processing sensitive personal data, and to comply with reasonable security practices
and procedures, as well as restrictions on disclosing personal data to third parties.
Guidance issued by the Ministry of Communications and Information Technology clarifies that the
Indian Rules only apply to Indian entities and that several provisions, including those relating to the
collection and disclosure of personal information, do not apply to Indian outsourcing service
providers, other than in relation to the data of their own India-based personnel or customers, or to
individuals who contract directly with them.633
In April 2012, Indian authorities requested that the European Commission designate India as a
“white listed” country for transferring personal data outside of the EEA. This would allow the
transfer of personal data from the EEA to India on the basis that India would be deemed to “have an
adequate level of protection for personal data.” The issue was raised during the negotiation of a
bilateral trade agreement.634 For many organizations outsourcing services to India, the fact that the
European Commission has not designated India as a secure country means that complex procedures,
consents or, in some cases, prior authorizations are required before personal data can be transferred
to India, which is a significant barrier to the continuing success of outsourcing services to India-based
companies.
631 See https://www.pcpd.org.hk/english/news_events/media_statements/press_20141229.html.
632 Available at www.mit.gov.in/sites/upload_files/dit/files/GSR313E_10511(1).pdf.
633 See Clarification on Information Technology (Reasonable Security Practices and Procedures and Sensitive Personal Data
or Information) Rules, 2011 Under Section 43A of the Information Technology ACT, 2000, Ministry of Communications &
Information Technology Press Note, Aug. 24, 2011, http://pib.nic.in/newsite/erelease.aspx?relid=74990.
634 Amiti Sen & Harsimran Julka, India seeks 'Data Secure Nation' status, more Hi-end business from European Union, The
Economic Times, Apr. 16, 2012, http://articles.economictimes.indiatimes.com/2012-04-16/news/31349813_1_data-security-council-data-protection-laws-standard-contractual-clauses.
k. Mexico
On 6 July 2010, the Federal Law on Protection of Personal Data held by Private Parties (Ley
Federal de Protección de Datos Personales en Posesión de Particulares) (“FLPPDPP”) came into
force. It addresses how private entities handle the collection, use and disclosure of
personal information of Mexican residents.635 The new law expands the authority of Mexico’s
DPA, now called the Federal Institute of Access to Information and Data Protection (“IFAI”). In
December 2011, a second draft of regulations implementing the new data protection law came into
force, establishing principles relating to the clarification of notice and consent requirements,
changes to restrictions on cloud computing, updates to requirements regarding data transfers, and
clarifications regarding data subjects’ rights.636
The FLPPDPP regulates personal data and sensitive personal data. It covers all private-sector data protection issues arising in Mexico and applies to any private person or entity that stores and handles personal data for commercial exploitation and use. Under this law, a data subject has the right to access, rectify, cancel or object to the processing of their personal data. It does not, however, apply to (i) credit information societies; or (ii) any private person who stores and handles personal data without the intent to commercially exploit or use that data.
Controllers of personal data must issue a privacy notification to the person whose data will be
processed. Express or tacit consent must be obtained from any individual whose personal data will
be processed. Written or online consent is permitted, and if a person does not respond to the privacy
notification, tacit consent is presumed. Where sensitive personal data is being processed, however,
express, not tacit, consent must be obtained. All transfers of personal data to domestic or foreign
third parties must be approved by the data subject, except for instances specifically provided for in
Article 37 of the FLPPDPP.
The exceptions arise where the transfer is: (i) permitted by domestic law or a treaty to which
Mexico is a party; (ii) necessary for medical diagnosis or prevention; (iii) made within the same
group of companies operating under the same internal processes and policies; (iv) necessary
pursuant to a contract entered into, or to be entered into, for the benefit of the data subject, by the
data controller and a third party; or (v) necessary to safeguard public interest or for the pursuit of
justice.
The main sanctions for data breaches are economic fines, though criminal offences are also included in the FLPPDPP. Penalties may be doubled where sensitive personal data is involved.
l. Turkey
Turkey does not have a unified data protection code; data protection is regulated in an ad hoc manner under scattered provisions of various laws and regulations. The right to privacy is protected under the Turkish Constitution as a fundamental human right, whilst privacy and the protection of
635 Mexico and Alberta Pass New Data Protection Laws, InsureReinsure.com, Jun. 10, 2010,
http://www.insurereinsure.com/?entry=2507.
636 Available at: http://dof.gob.mx/nota_detalle.php?codigo=5226005&fecha=21/12/2011.
personal information are addressed under other fundamental codes such as the Turkish Civil Code, the Penal Code and the Labor Code. According to Article 20 of the Turkish Constitution, personal data may be processed only in the cases envisaged by law or with the person's explicit consent, and the principles and procedures governing the protection of personal data are to be laid down in ancillary legislation. Furthermore, the illegal processing of personal data is subject to civil and criminal sanctions; however, in the absence of a specific code regulating the protection of personal data and of a national data protection authority, it remains unclear how compliance with these limitations is to be assured or how the stipulated sanctions are to be enforced.
‘Illegal processing’ is not defined in these general fundamental laws, and there is therefore no specific restriction on the collection, maintenance, processing or transfer of personal data. However, in addition to the fundamental codes listed above, certain sectoral laws provide a regulatory framework and impose limitations in their respective sectors, such as banking, telecommunications, e-commerce and healthcare.
In 2012, Turkey's telecommunications sector regulator, the Information Technologies and Communication Authority ("ICTA"), issued a new Regulation on the Processing of Personal Data and the Protection of Privacy in the Electronic Communications Sector (the "Regulation"), which prohibited the offshore transfer of personal data without exception. However, in April 2014, only three months after the Regulation entered into force in January 2014, the Turkish Constitutional Court annulled Article 51 of the Electronic Communications Act, the main provision empowering ICTA to enforce the Regulation. The Constitutional Court's ruling took effect in January 2015, leaving the telecommunications sector without a law regulating the area until April 2015, when a new Article 51 of the Electronic Communications Act was adopted by the Grand National Assembly of Turkey, providing that the transfer of sector-specific data (e.g., traffic and location data) abroad is permitted only with the data subject's explicit consent. The new Article 51 also re-introduced several data protection restrictions applicable to electronic communication service providers that had previously been set out in the annulled Regulation: (i) the recording, retention, interception or tracking of electronic communications is prohibited; (ii) data in users' terminal equipment cannot be retained or accessed without the explicit consent of the user; and (iii) personal data may be processed only where explicitly permitted by law.
The e-commerce sector is also among the sectors for which a sector-specific law containing personal data protection restrictions has been introduced. The E-Commerce Law, adopted in October 2013 and enforceable as of May 2015, prohibits e-commerce service providers from sending commercial messages to consumers by email, text message (SMS), fax or autodial machines (robocalls) without their prior approval. Although a governmental "do not call" registry has not yet been established, the service is provided on a professional basis by certain NGOs.
As part of Turkey's EU accession process and its efforts to adopt the EU acquis (Turkey's National Program637), the Draft Law on the Protection of Personal Data (the "Draft Law") was introduced by the Ministry of Justice in 2005. The Draft Law has been on the agenda of the Grand National Assembly of Turkey since 2008; it was first approved in 2014 and sent to the relevant parliamentary commissions for further negotiation. The Draft Law is intended to harmonize Turkish data
637 Available at: http://ec.europa.eu/enlargement/pdf/turkey/npaa_full_en.pdf
protection laws with the Council of Europe's Convention for the Protection of Individuals with Regard to Automatic Processing of Personal Data (i.e., the European Data Protection Convention, which Turkey signed in 1981 but never ratified and therefore never implemented domestically) and EU Directive 95/46/EC. Although it has not yet been enacted, if and when it comes into force the Draft Law will be Turkey's first and primary data protection law.
V. The Exposures Presented by Data Breaches
1. The Breadth of the Problem
The costly and growing exposure presented by data breaches is demonstrated by the following
recently reported statistics. As breaches of Personal Information are the most reported and studied,
these statistics focus on the costs associated with data incidents involving Personal Information.
Less quantified are the increasing breaches involving theft of other types of company information,
such as trade secrets and other confidential business information, and denial of service attacks.638
a. The Big Picture: Number of Breaches and Associated Costs
According to a recent study of 350 organizations in 11 countries, the average total cost of a data breach was $3.79 million (up from $3.52 million in the prior year's study), with an average cost per lost or stolen record of $154 (up from $145). The higher costs were attributed to an increase in malicious and criminal attacks, which are more costly; an increase in lost business as a consequence of a breach; and an increase in costs associated with forensic and investigative activities, assessment and audit services, crisis team management, and communications to executive management and the board of directors.639 Data breaches cost the most in the U.S. and Germany, and the least in Brazil and India.640
In the U.S., the average cost per record of a data breach has been reported as $217 (up from $201), of which $143 pertains to indirect costs such as abnormal customer turnover, and $74 represents direct costs incurred to resolve the data breach, such as investments in legal fees and technology. The average total cost was $6.5 million (up from $5.9 million).641 Heavily regulated industries such as healthcare and financial services have higher than average costs, while the public sector and hospitality reportedly have lower than average costs.642
638 See, e.g., Prolexic, Quarterly Global DDoS Report for information on DDoS attacks, whose Q4 2014 report noted that there
were approximately 90% more DDoS attacks in Q4 2014 compared to Q4 2013. Reports are available at www.prolexic.com.
639 Ponemon Institute LLC, 2015 Cost of Data Breach Study: Global Analysis, May 2015, sponsored by IBM. The 11
countries studied were: U.S., U.K., Germany, Australia, France, Brazil, Japan, Italy, India, the Arabian Region (United Arab
Emirates and Saudi Arabia), and Canada.
640 Id.
641 Ponemon Institute LLC, 2015 Cost of Data Breach Study: United States, May 2015, sponsored by IBM.
642 Id.
One study of 2014 per capita costs noted that the Healthcare industry reported the highest per capita data breach cost of any surveyed industry at $398; Pharmaceutical came in second at $298; Financial was third at $259; and Energy was fourth at $256.643
In the UK, a study of 39 companies reported an average per capita cost of a data breach of £104, and a total average organizational cost of £2.37 million.644
The average cost of cyber-crime generally for 257 large organizations in six
countries (U.S., UK, Australia, Germany, France and Japan) was $7.6 million per
year, with a range of $500,000 to $61 million; business disruption represented the
highest external cost at 38% of external costs (defined as a cost created by external
factors such as fines, litigation and marketability of stolen information), with costs
associated with information loss accounting for 35% of external costs.645
In 2014, there were 783 reported breaches in the U.S., a significant increase of 27.5% over the total number of tracked breaches in 2013.646
Globally, 79,790 security incidents with 2,122 confirmed data breaches were
reported by a group of 70 organizations tracking such incidents worldwide.647
A late 2014 analysis of claims payout data submitted by cyber insurers for 117 incidents produced somewhat different results than studies of costs that do not take into account what was paid by insurers. The mean claim payout was $733,109, and the median was $144,000. The average cost per record was $956.21 and the median cost was $19.84; the average number of records exposed was 2.4 million, with a median of 3,500; the average cost for legal defense was $698,797, for legal settlement $150,000, and for crisis services (including forensics, notification, legal guidance and miscellaneous other costs) $366,484, with a median of $110,594.648
Also of note are the preliminary findings of an updated study of payout data by the same organization, reported in early June 2015, which showed an average cost per record of $1,094 and a median cost of $10, with a range from 2¢ to $35,000; the average total cost was $805,000, with a median of $94,000 and a range of $0 to $15 million.649
Another global study reported costs of a breach as a range, which varied with the
size of the breach, noting that larger organizations tend to have higher losses per
breach but also tend to have more records involved in a breach. The predicted
643 Ponemon, 2015 Cost of Data Breach Study: United States, supra.
644 Ponemon Institute LLC, 2015 Cost of a Data Breach Study: United Kingdom, May 2015, sponsored by IBM.
645 Ponemon Institute LLC, 2014 Global Report on the Cost of Cyber Crime, October 2014.
646 See Identity Theft Resource Center, http://www.idtheftcenter.org/ITRC-Surveys-Studies/2014databreaches.html. This
number reflects data breaches that the ITRC considers published by a reliable source. Additional breaches may have occurred, and
not all reported breaches identify the number of records exposed.
647 Verizon, 2015 Data Breach Investigations Report, April 2015 (“DBIR”).
648 NetDiligence, Cyber Claims Study 2014, Dec. 2014.
649 NetDiligence, Claims Study (preliminary findings), reported at HB Litigation Conference, June 2015.
average range for the cost of a breach included: for 100 records, $18,120 to $35,730 (although predictions also noted it could be as low as $1,170 or as high as $555,660); for 1,000 records, $52,260 to $87,140 (with a range as low as $3,110 and as high as $1,461,730); for 10,000 records, $366,500 to $614,600 (with a range of $21,900 to $10,283,200); and for 1,000,000 records, $892,400 to $1,775,350 (with a range of $57,600 to $27,500,090).650
The total economic burden created by data breaches in the healthcare industry has
been estimated as $6 billion annually, with the average cost of a data breach for a
healthcare organization estimated to be more than $2.1 million. The same study
found that medical identity theft has nearly doubled in the last five years, with over
2.3 million adult victims in 2014.651
A recent study found that, although 52% of respondents believe their companies’
exposure to cyber risk will increase over the next 2 years, only 19% of respondents
say their company has cyber insurance coverage.652
One study noted that on average, each respondent encountered 1.6 successful cyberattacks on their systems per week. This represented an increase of 21% over the frequency of attacks in 2013. The most costly of such attacks were caused by malicious insiders, denial of services, and web-based attacks.653
“Mega Breaches” (single incidents exposing personal details of at least 10 million identities) decreased from 8 in 2013 to 4 in 2014.654
One study found that 76% of all websites in 2014 contained “vulnerabilities” that could potentially be exploited by attackers, compared to 77% in 2013.655
In a 2015 survey, companies that reported a “material or significantly disruptive
security exploit or data breach” one or more times in the past 2 years reported that
the total financial impact (including all costs, out-of-pocket expenditures such as
consultant and legal fees, indirect business costs such as productivity losses,
diminished revenues, legal actions, customer turnover and reputation damages) from
such exploits/breaches was more than $2.1 million.656
In 2014, 12.7 million consumers suffered identity fraud.657
650 Verizon, 2015 DBIR, supra.
651 Ponemon Institute LLC, Report, May 2015, Fifth Annual Benchmark Study on Privacy & Security of Healthcare Data.
652 Ponemon, 2015 Global Cyber Impact Report.
653 Ponemon 2014 Cost of Cyber Crime Report, supra.
654 Symantec, Internet Security Threat Report 2015, April 2015.
655 Symantec, Internet Security Threat Report 2015, supra.
656 Ponemon, 2015 Global Cyber Impact Report, supra.
657 Javelin, 2015 Identity Fraud: Protecting Vulnerable Populations, 2015. https://www.javelinstrategy.com/brochure/347
b. The Industries, Assets, and Types of Data Most Frequently
Compromised
Large scale data security incidents continue to be front page news, with hundreds of lesser but still
costly breaches each year affecting companies in a broad range of industries.658 Indeed, any entity
that has Personal Information in its possession, whether that of employees, customers, clients or
third parties, is a potential target for data breaches, either malicious or accidental. Companies with
intellectual property or other confidential business information can also be targets of espionage, and trade secrets and other confidential business information are frequently targets of malicious cyber attacks. Moreover, virtually every company is also susceptible to the more garden-variety data
security incident that results from lost laptops and smart phones, improperly disposed of paper
records, and lack of security measures with vendors to whom businesses provide access to Personal
Information or company networks.
As demonstrated by the recent reports of large scale data breaches involving retailers, breaches
involving theft of Personal Information remain a major exposure, as cyber criminals target points of
data concentration to acquire large amounts of consumer information, such as personal
identification numbers (PINs) with associated debit and credit card numbers, usually for resale.
Social Security numbers are also a prime target, due to their usefulness in identity theft and the fact
that, unlike credit cards, they are not easily cancelled and removed from usage.
While the industry holding the top spot for publicly reported breaches can vary somewhat from year to year, certain industries are always on the top ten list, such as retail, healthcare, hospitality, financial services and educational institutions, and others may be under the radar but also exposed, such as professional service firms. Below are some of the industries with those exposures, as well as some recent statistics on the types of data and number of records exposed in recent breaches.
Targeted industries include:
Retailers
Because of their heavy use of credit and debit card transactions, and the
financial value of credit and debit cards for use in fraudulent charges,
retailers have long been targets of cyber criminals. Hackers have attacked
online networks as well as in-store PIN pads and registers. The continued
exposure of both brick-and-mortar and online retailers to massive data
breaches affecting millions of consumers has been vividly demonstrated in
recent months.
Many attacks may be automated searches for vulnerable networks containing
payment card information, rather than deliberate targeting of a particular
store or retailer.
Retailers are subject not only to data breaches, but also to non-breach related
consumer litigation directed at business practices in the collection and usage
658 This does not include the vast number of incidents where Personal Information is stolen directly from individuals, or in which the breach involves theft of information that does not qualify as Personal Information subject to mandatory breach reporting.
of Personal Information, thus rendering retailers a target of exposures on both breach and other privacy-related fronts.
One of the most publicized of recent retail breaches was that sustained by Target in late 2013, which demonstrated the vulnerability presented by vendors and continues to demonstrate the breadth of exposures that a large data breach can present. The breach made headlines around the world, compromising over 40 million customers' payment card accounts and other information of an additional 70 million customers, including names, mailing addresses, phone numbers and email addresses. The U.S. Senate Committee on Commerce, Science, and Transportation identified issues that are of concern to many others, including the provision of network access to a third-party vendor; the adequacy of the response to automated warnings from the company's anti-intrusion software; and whether there was proper isolation of the most sensitive network assets.659 The large losses resulting from the breach have generated dozens of lawsuits, including those by banks that issued credit cards as well as by consumers, the resignation of a CEO, and proposed legislation at both the state and federal levels in the U.S. seeking to impose upon retailers greater liability (holding retailers responsible for reimbursement of costs sustained by customers as a result of the breach) and security obligations.660
In 2014, eBay, the quintessential ecommerce site, reportedly sustained a security breach potentially impacting over 145 million people around the world, although the exact nature of the information accessed and the circumstances remain to be determined, with potential investigations announced in the UK as well as the U.S. Reports indicate that email addresses and passwords were accessed, but so far not financial account information.661
Retailers remain a significant target for data breaches. Retailers comprised
58% of the compromises investigated by Trustwave in 2014.662
Hospitality/Food and Beverage
The heavy use of credit and debit card transactions in the hospitality industry,
which includes hotels, restaurants, and food retailers, makes businesses in
this industry a target for cyber criminals as well as more garden-variety theft
659 United States Senate Committee on Commerce, Science and Transportation, A “Kill Chain” Analysis of the 2013 Target
Data Breach, March 26, 2014.
660 See Allison Grande, Retailers Take Brunt of Breach Liability Under New Bills, Law360, May 30, 2014, http://www.lw360.com/articles/540367/print?selection=consumerprotection (discussing recent legislative proposals in California and Minnesota).
661 Harry Wallop, eBay hacking: online gangs are after you, Telegraph, 23 May 2014,
http://www.telegraph.co.uk/technology/internet-security/10849689/eBay-hacking-online-gangs-are-after-you.html; Lauren Hertzler,
Scams expected to hit customers hard after eBay data breach, Philadelphia Business Journal, May 25, 2014,
http://www.bizjournals.com/philadelphia/news2014/05/25/scams-expected-to-
662 2015 Trustwave Global Security Report.
or inadvertent disclosure of Personal Information.663 Hotels were a particular
target in 2014 and early 2015.664 For restaurants and others in this industry,
breaches can simply be the result of careless use or disposal of credit and debit card information, or these businesses can be the target of cyber criminals seeking to obtain credit and debit card numbers as they are transmitted by
customers for payment.665
Trustwave reported that 95% of breaches in the food and beverage industry, and 65% of those in hospitality, were compromises of point-of-sale (POS) assets.666
Further, this industry, like all others, is susceptible to lost laptops and other breaches of security involving employee and customer Personal Information.
Healthcare Providers and Healthcare Insurers
The healthcare industry has one of the highest rates of reported breaches,
with the new rules governing healthcare breach reporting that went into effect
in 2013 increasing the likelihood that unauthorized access to information
about a patient will be reported. (See discussion of breach notification
obligations under HIPAA and the HITECH Act above, Section III. 2.e and f).
The U.S. Department of Health and Human Services reports that as of April
2015, the number of breaches of patient Personal Information affecting 500
or more people since it began keeping records in 2009 reached 1,163, with
more than 120 million patients affected.667 Another source tracking breaches reported that, as of April 28, 2015, the healthcare industry accounted for 34.4% of all data breaches, with 99,422,874 records exposed. This represents 97.1% of
all records exposed, with an overwhelming majority of the records exposed
as a result of the recent Anthem data breach.668
663 Joe Sharkey, Credit Card Hackers Visit Hotels All Too Often, The New York Times, Jul. 5, 2010 (citing study released by
SpiderLabs); see also Hospitality Industry Data Theft: Hotel Owners Must Prevent Breaches of Credit Card Processing Systems,
Aug. 7, 2010, http://hospitalityrisksolutions.com/2010/08/07/hospitality-industry-data-theft-hotel-owners-must-prevent-breaches-ofcredit-
card-processing-systems-by-cyber-criminals-who-install-malicious-programs-to-steal-data/.
664 See, e.g., Rinehart, Geneva, Hotel Industry: When it comes to Data Breach Incidents - Follow the Money Focus on POS,
Hospitality Upgrade, April 22, 2015; Sources: Data breach shows industry liability, Hotel News Now, February 5, 2014,
http://www.hotelnewsnow.com/Article/13077/Sources-Data-breach-shows-industry-liability; Langfield, Amy, Hotel data breach
went undiscovered for nine months, CNBC, Feb. 6, 2014, http://www.cnbc.com/2014/02/06;
665 See Will Oremus, A Burger, An Order of Fries, and Your Credit Card Number, Slate, Mar. 22, 2012 (discussing that
restaurants are prime targets for hackers as small businesses such as most restaurants often don’t set up unique passwords after they
install point-of-sale charge systems).
666 2015 Trustwave Global Security Report, supra.
667 HIPAA & Breach Enforcement Statistics for April 2015, produced by Health Information Privacy/Security Alert, published
by Melamedia, LLC. See also Health Information Privacy, U.S. Department of Health & Human Services,
http://www.hhs.gov/ocr/privacy/hipaa/administrative/breachnotificationrule/breachtool.html.
668 See Identity Theft Research Center 2015 Data Breach Stats, April 2015.
http://www.idtheftcenter.org/images/breach/ITRCBreachStatsReportSummary2015.pdf
The well-publicized breach of health insurer Anthem contributed to both the large number of records reportedly impacted and the attention on the industry.669
Criminal attacks on healthcare systems have risen 125% since 2010.670
The economic impact of breaches has remained consistent; in 2010 it was
$2.1 million and in a 2015 report it remained $2.1 million.671
Data breaches involving healthcare institutions and health insurers can range from simple loss of a laptop, to a systemic electronic breach of patient Personal Information, to a reported incident of a worm infecting medical equipment operated with the assistance of computer systems. A concern with
compromise of medical center systems that is not an issue with most other
industries is that it has the potential to negatively affect patient care, either by
directly affecting operation of equipment or by interrupting the systems that
provide information used in the rendering of care, with resultant bodily
injury. Healthcare service providers and their vendors include not only major
medical centers but also small groups that, as with other small businesses,
cannot easily bear the costs of a major data breach. The financial burden that
a breach can place on a small vendor was demonstrated by the reported
bankruptcy of a medical records vendor faced with a break-in to its offices
that resulted in a breach of electronic medical records that included Personal
Information and medical diagnoses of 14,000 people. As a result of the costs
of responding to the breach, the vendor filed for bankruptcy.672
Healthcare insurance identification information is apparently worth many times more than payment card information on the black market, and this makes healthcare institutions a target. According to one research firm, criminals tend to use
information stolen from medical records for an average of 320 days versus 81
days for data stolen from other sources, and it takes twice as long to detect a
medical data breach compared with other kinds of thefts of Personal
Information.673
Increasingly, healthcare providers faced with a data breach are also being subjected to large regulatory fines, particularly when post-breach regulatory scrutiny reveals issues with pre-breach security procedures.
669 See reports of breach announced by Anthem, Inc. in early 2015, potentially affecting as many as 80 million individuals.
https://www.anthemfacts.com; http://www.naic.org/documents/anthem_data_breach.htm.
670 Ponemon Institute LLC, Report, May 2015, Fifth Annual Benchmark Study on Privacy & Security of Healthcare Data.
671 Ponemon Institute LLC, Report, May 2015, Fifth Annual Benchmark Study on Privacy & Security of Healthcare Data.
672 Katy Stech, Burglary Triggers Medical Records Firm’s Collapse, The Wall Street Journal, Mar. 12, 2012.
673 Neil Versel, Report: Medical data theft growing as more adopt EMRs, Fierce EMR, Apr. 1, 2010,
http://www.fierceemr.com/story/report-medical-data-theft-growing-more-adopt-emrs/2010-04-01.
Financial Institutions
Financial institutions no longer hold the top spots among industry targets, perhaps because they generally have among the most sophisticated security and response capabilities. However, they were reported to be among the top three industries by frequency of reported breaches,674 and they remain a target of cyber criminals due to the volume and nature of the information they collect and maintain, as well as being subject to less malicious exposures (as Willie Sutton, an American bank robber, is apocryphally known to have said, he robbed banks "because that is where the money is"). They are also subject to losses indirectly from payment card breaches of companies in other industries, as card-issuing banks and credit unions often bear losses from card replacement costs and fraudulent transactions using stolen Personal Information. One of the main routes for data breaches at financial institutions remains payment card skimmers on ATMs.675
Average costs of a data breach vary by industry; because financial services is a heavily regulated industry, the cost of a breach involving a financial services company can exceed the average cost of data breaches generally, and such breaches are also reported to result in more than average customer churn.676
Banks have been the target not only of hackers seeking to obtain Personal Information of customers for financial gain, but also of those seeking to disrupt the bank's operations,677 often for political reasons.678 This has not been limited to U.S. banks, as demonstrated by the well-publicized attacks on banks in South Korea, with the culprits suspected to be from North Korea, reportedly using malware intended to render computers unusable.679
Hacking of financial institutions raises concerns not only of large-scale theft of Personal Information, but also of politically motivated hackers deliberately trying to wreak havoc on global financial markets.
Payment Processors
Payment processors of credit card transactions are a major target of malicious
attacks, as a successful attack on their systems can yield large amounts of
Personal Information, particularly credit card information of consumers as
well as of merchants, potentially for use in fraudulent financial transactions.
674 Verizon, 2015 DBIR, supra, at p.3.
675 Verizon, 2015 DBIR, supra.
676 Ponemon, 2015 Cost of Data Breach Study: United States, supra.
677 Verizon, 2015 DBIR, supra at page 44.
678 Ellen Nakashima, U.S. response to bank cyberattacks reflects diplomatic caution, vexes bank industry, Washington Post,
Apr. 27, 2013.
679 Chris Strohm & Eric Engleman, Cyber Attacks on U.S. Banks Expose Computer Vulnerability, Bloomberg,
http://www.bloomberg.com/news/2012-09-28/cyber-attacks-on-u-s-banks-expose-computer-vulnerability.html, Sept. 28, 2012; Choe
Sang-Hun, Computer Networks in South Korea Are Paralyzed in Cyberattacks, The New York Times, Mar. 21, 2013.
Among the best known of these data breaches are those of Global Payments
(March 2012); Heartland Payment Systems (January 2009); and RBS World
Pay (December 2008). In early 2013, a ring of hackers conducted a large and
complex international theft, by utilizing malware to breach two card
processors used by banks in the United Arab Emirates and Oman, one in the
U.S. and one in India. The criminals reportedly overrode security protocols,
found prepaid debit cards and deleted the limits on those accounts, and loaded the account information onto magnetic stripes, which were then used by a
“cashing crew” to withdraw over $45 million in cash from ATMs.680 Several
cashing crew members were identified and indicted shortly after the
withdrawals.681
The reported costs resulting from the Heartland breach demonstrate the
potential size of exposures presented by data breaches of payment processors,
who by the nature of their business have Personal Information of thousands
(or millions) on their systems. In May 2009, Heartland disclosed it had spent
or set aside more than $12.6 million to cover legal costs and fines related to
the data breach. Apart from its settlements with the class of affected consumers and settlements with other third parties, settlements
reached by Heartland with the card brands represent an additional significant
cost. It reportedly reached a settlement with American Express for $3.6
million;682 settled with Visa for up to $60 million;683 and with MasterCard for
$41.4 million.684 As discussed below, it is still in litigation with several
issuing banks for reimbursement of losses they allegedly sustained. A number
of financial institutions were reportedly affected by the Heartland data
breach, including banks in 40 states. Many banks apparently had credit or
debit cards they had issued compromised by the incident. Heartland
shareholder litigation was also commenced, although it was unsuccessful. The
Heartland breach demonstrates the wide range of third-party claims that may
be asserted when there is a large breach resulting in unauthorized access of
credit card numbers, as well as the significant costs to which a company that
has a large breach is subject.
Universities and Other Educational Institutions
Universities have been among the major sources and targets of data breaches, as have lower-level educational institutions. This may be because of the large
number of computer terminals accessible by a myriad of students and
680 See, e.g., Penny Crosman, Data Breaches Back in Spotlight After $45 M ATM Heist, American Banker, May 14, 2013.
681 United States of America against Alberto Yusi Lajud-Pena, et al. Indictment CR13-0259, United States District Court,
Eastern District of New York, Apr. 25, 2013.
682 Robert McMillan, Heartland Pays Amex $3.6 Million Over 2008 Data Breach, PC World, Dec. 17, 2009,
http://www.pcworld.com/article/185052/article.html.
683 Grant Gross, Heartland to Pay up to $60 Million to Visa Over Breach, Computer World, Jan. 8, 2010,
http://www.computerworld.com/s/article/9143480/Heartland_to_pay_up_to_60M_to_Visa_over_breach.
684 Nancy Gohring, Heartland, MasterCard Settle Over Data Breach, Computer World, May 9, 2010,
http://www.computerworld.com/s/article/9176999/Heartland_MasterCard_settle_over_data_breach.
employees and a more casual attitude toward computer security at some
educational institutions, or because many universities have research facilities
and programs whose information is a target for those with financial or
political motives. In any event, almost every year educational institutions rank relatively high on the list of industries with reported breaches.685 Reports summarizing 2014 breaches note that there were at least 28 in 2014686 (some counted 57687), with the first half of 2015 keeping pace, if not exceeding it.688
Law Firms
Law firms are a repository of clients’ confidential and Personal Information,
and the Personal Information of claimants in litigations they handle, as well
as of their own employees’ Personal Information. Thus, they are a significant
potential source of inadvertent data breaches as well as a potential target of
malicious ones.689
Many breaches are simply due to improper information disposal practices of
paper and electronic devices, or lost laptops and other mobile devices. There
is, however, increasing concern about law firms being targeted by hackers to
obtain information about firm clients, particularly clients whose security
procedures have made intrusion more difficult. The FBI and other
organizations following the issue of law firms as the target of cyber attacks
have tried to educate and warn law firms as an industry that they are being
targeted by hackers.690 Other security experts have also warned that law firms may be increasingly targeted by those seeking to obtain confidential information of their clients, either for political motivation or economic advantage (e.g., bidding information or trade secrets).691 Some firms are obtaining special cyber security certifications, such as certification under ISO 27001.692 The ABA
685 See www.idtheftcenter.org, which lists reported breaches by industry each year.
686 See list of reported breaches maintained by The Privacy Rights Clearinghouse, at www.privacyrights.org. See also
McCarthy, Kyle, 5 Colleges With Data Breaches Larger Than Sony’s in 2014, March 17, 2015, www.huffingtonpost.com/kylemccarthy/
five-colleges-with-data-b_b_647480.html.
687 See idtheftcenter.org list of 2014 breaches, which lists 57 for the Educational industry.
688 Id.
689 See Law Firms Now Prime Targets for Hackers, Today’s General Counsel, October 31, 2014,
http://www.todaysgeneralcounsel.com/law-firms-now-prime-targets-hackers/?utm_source, Matthew Goldstein, Law Firms Are
Pressed on Security for Data, The New York Times, March 26, 2014, http://dealbook.nytimes.com/2014/03/26/law-firmsscrutinized-
as-hacking-increases/?_php=
690 Goldstein, Matthew, Citigroup Report Chides Law Firms for Silence on Hackings, The New York Times, March 26, 2015;
Jennifer Smith, Lawyers Get Vigilant on Cybersecurity, The Wall Street Journal, Jun. 26, 2012; ABA Law Practice Management
Newsletter, Preventing Law Firm Breaches, Vol. 38, No. 1.
691 See, e.g., Verizon, 2015 DBIR, supra at p. 52 (listing Professionals as among the top three industries affected by cyberespionage);
Legal Week Benchmarker study in association with Stroz Friedberg, Locked Down? A Closer Look at the Rise of
Cybercrime and the Impact on Law Firms, May 2013.
692 Gluckman, Neil, To Satisfy Clients, Law Firms Submit to Cybersecurity Scrutiny, The American Lawyer, March 12, 2015,
http://www.americanlawyer.com/id=1202720468020.
has issued ethics opinions on lawyer obligations with regard to metadata, use
of cloud computing, and other related issues.
In addition, law firms frequently allow their employees to use their own devices to access firm databases, giving rise to the security risks attendant to BYOD (bring your own device) practices. Many law firms also do a
significant amount of business with companies in the healthcare industry and
may qualify as “business associates” of entities covered by HIPAA, and thus
be subject to its breach notification requirements. Furthermore, law firms
that perform large-scale document review and productions often use outside
vendors and Internet-based data storage systems. As this practice continues
to grow, law firms increase their exposure to potential cyber attacks as well as inadvertent data breaches involving both their vendors and themselves.
Real Estate Agents
Real estate and rental agents and others involved in the sale or rental of
properties, particularly residential properties, often collect and maintain
applications that contain financial as well as other Personal Information of
applicants. Thus, those involved in such real estate transactions are a source of data breaches that is often not fully considered. Some reported breaches in this industry are due to disgruntled employees, and others to improper disposal of records as well as thefts.
Government Entities
2015 may well become known as the year of the government agency breach, due to the discovery of mega breaches of records containing the Personal Information of millions at the federal Office of Personnel Management. Government agencies at both the local and national levels
aggregate vast amounts of sensitive information about individuals, and they
can be as susceptible to breach of such data as any private entity, as
demonstrated by reports of breaches of government agencies in the U.S. and
UK. Apart from other causes and motives for breaches, public entities have
been identified as among the top three industries for cyber espionage. 693
Vendors
Breaches by companies’ third-party service providers such as outsourcers,
contractors, consultants and business partners remain a concern, with reports
of large breaches such as Target noting that vendors can be a point of access
to company networks.694
693 Verizon, 2015 DBIR, supra at p. 52.
694 United States Senate Committee on Commerce, Science and Transportation, A “Kill Chain” Analysis of the 2013 Target
Data Breach, March 26, 2014.
Outsourcing of services is done to some extent in almost every business, and often involves transferring, or allowing access to, Personal Information from a company to its vendors, such as IT, payroll, accounting, pension and other financial services vendors, and operations vendors that obtain access to company networks even if their function does not directly involve Personal Information. Entities that provide vendor services to other companies are a potential source of data breach risk for their clients, and their data protection procedures and standards can be as important as the companies' own. The data of their clients' employees or customers that they hold or can access is subject to loss or malicious theft, from insiders and outsiders. Even when
data protection security standards are in place, vendors with access to large
amounts of Personal Information or other confidential or sensitive
information can make attractive targets for hackers.
The risk presented by vendors is generally recognized, but not always
addressed. However, state and federal legislatures and agencies are
increasingly aware of the potential vulnerability that vendors can present.
Massachusetts was one of the first states to encourage that it be addressed
when it included in the Massachusetts Regulation a requirement that
companies require by contract that their vendors implement and maintain
appropriate security measures for Personal Information (see Section III. 2.e.
above on Data Security Requirements: Massachusetts Remains at the
Forefront in the U.S.), and other states also have contract requirements.
Employers of All Varieties
Many reported data breaches involve not the data of a company’s customers,
but that of its own employees. Employers retain Personal Information of their employees for a variety of reasons, including payroll and benefits.
Breached information in some cases involved the data of former employees,
as well as current ones, illustrating the long-term hazard that may have
prompted many regulators overseeing data security to scrutinize the period of
time that companies retain data and whether the retention time is necessary
for business operations. Compromised employee data also illustrates that
virtually any type of entity that employs a staff, whether for profit or not, is
potentially at risk for a data breach.
c. Causes
Motives generally vary by industry; for example, financial motive tends to be
primary in incidents involving the retail industry, and espionage is more often the
motive in incidents involving manufacturing companies.695
Malicious and criminal attacks continue to be the primary cause of data breaches.
One study found 49% of incidents studied to involve such attacks.696
695 Verizon, 2015 DBIR, supra.
In 2014, cyber-espionage attacks remained common, with 548 such incidents. The
most common targets were manufacturing firms.697
A 2015 study found that the most common attack (70%) on social networks was a
“manual sharing” scam, which relies on victims to share the scam by presenting
them with intriguing videos, fake offers or messages that they share with their
friends.698
A recent preliminary study of claims submitted to insurers found that lost or stolen laptops and other devices were no longer the leading cause of breaches, with hackers now the most frequent cause.699
At least for breaches involving credit cards, less than 1% of perpetrators used tactics rated as highly difficult, and 78% of the techniques reviewed were in the low or very low categories of difficulty for initial compromise. Breached organizations tended to be less compliant with the Payment Card Industry Data Security Standards than average organizations.700
The types of malware being used in attacks show an interesting trend: today’s
malware is extremely opportunistic and relatively short-lived. 95% of the malware
types examined in one report showed up for less than a month, while four out of five
didn’t last beyond a week.701
The industries most commonly affected by Point-of-Sale intrusions remain
restaurants, hotels, grocery stores and other brick-and-mortar retailers. For web-based attacks, the top industries are information, utilities, manufacturing and
retail.702
Of the major mobile devices, a 2015 study found that Apple iOS iPhone/iPad had by
far the most documented vulnerabilities in 2014 with 84% of all mobile
vulnerabilities, up 2% from the previous year. Android came in second at
11%.703
One study found that one of the greatest mobile threats, and a significant trend in 2014, is scam campaigns, whereby scammers send automated inquiries offering fictitious items for sale, such as jobs and houses for rent, and interact with potential
victims. They typically use fake checks or spoofed payment notifications to make
696 Ponemon, 2015 Cost of Data Breach Study, United States, supra.
697 Verizon, 2015 Data Breach Investigations Report, supra.
698 Symantec, Internet Security Threat Report 2015, supra.
699 NetDiligence Claims Study (preliminary), supra.
700 Verizon 2014 PCI Compliance Report.
701 Verizon, 2015 Data Breach Investigations Report, supra.
702 Verizon, 2015 Data Breach Investigations Report, supra.
703 Symantec, Internet Security Threat Report 2015, supra.
victims ship their items or to take victims’ deposits, but of course, the victims never
hear back from them. 704
One 2014 study of over 6,500 applications found the majority of all such
applications were vulnerable to problems related to server misconfiguration. Cookie
security, system information leak, privacy violations and cross-frame scripting
rounded out the top-five list of web-application vulnerabilities.705
According to one study, the top five mobile vulnerabilities in 2014 were (in
descending order) privacy violations, insecure storage, insecure transport, insecure
deployment and poor logging practice.706
One study noted that in 2014, approximately 23% of all users opened phishing messages, while 11% of users clicked on attachments.707
In 2014, 1 in 244 emails contained a virus, down from 1 in 196 in 2013.708
A 2015 study found that hackers accounted for 49% of all exposed identities,
with 22% of exposed identities attributable to accidents and 21% attributable to
thefts or losses of computers or hard drives.709
The use of “bots” to compromise computers declined in 2014, in large measure
because the FBI, European Cybercrime Centre (EC3) at Europol, and other
international law enforcement agencies have been actively disrupting and shutting
bots down. China now has the world’s highest rate of malicious bot activity, with the
U.S. coming in second.710
d. Breach Discovery and Response
Reports of breaches provide the following information about the discovery of, and response to, data
breaches:
In 2014, over 90% of all attackers were able to compromise their targets within a day
or less, whereas only approximately 25% of victims were able to detect such
breaches within a day. This disparity has been increasing year-to-year over the last
decade.711
704 Symantec, Internet Security Threat Report 2015, supra.
705 Hewlett-Packard, Cyber Risk Report 2015. http://images.info.arcsight.com/Web/ArcSight/%7Bfe8772f5-04dc-42f2-900c-
68907be21770%7D_Cyber_Risk_Report_2015_EN_final.PDF
706 Hewlett-Packard, supra.
707 Verizon, 2015 Data Breach Investigations Report, supra.
708 Symantec, Internet Security Threat Report 2015 supra.
709 Symantec, Internet Security Threat Report 2015, supra.
710 Symantec, Internet Security Threat Report 2015, supra.
711 Verizon, 2015 DBIR Report, supra.
In one study, the time from intrusion to detection of compromises investigated
ranged from one day to 1,655 days (4.5 years), with the average 188 days (6.25
months) and the median 86 days (just under 3 months).712 Another study found that
in the incidents it reviewed, the average time to identify a data breach was 206 days,
with a range of 20 to 582 days, and the average time to contain it was 69 days with a
range of 7 to 175 days.713
One study found that 10% fewer victims detected a breach themselves in 2014 as compared to the year prior, but when a company was capable of detecting a breach on its own, or partnered with a managed security services provider that could do so on its behalf, detection and containment were quicker.
Mitigation of certain attacks, such as denial of service, malicious insiders and web-based attacks, will often require enabling technologies such as security information and event management (SIEM), intrusion prevention systems, application security testing, and enterprise governance, risk management and compliance (GRC) solutions.714
Having an incident response plan and team in place reduces the cost of a data breach,
as does CISO leadership, employee training, board level involvement and insurance
protection.715
The reality is that any entity that obtains, maintains or transmits Personal Information of employees,
customers, clients, or any other third party is potentially exposed to a data security incident and
related costs. These costs include direct expenses such as engaging forensic experts, obtaining legal
advice as to whether notifications are required and if so to whom and their content, payment of any
fines imposed, and the defense and resolution of third-party claims, as well as the indirect costs of
in-house time spent addressing the incident and supporting the resulting investigations, the damage
to reputation, and the loss of customers, business and related revenue. While not all incidents of
penetration of networks or breach of data security are confirmed to be data breaches as defined by
applicable law, those that are generally involve substantial costs of providing notifications and
outsourced call center support, offers of free credit monitoring subscriptions and identity theft
insurance and discounts for future products and services. It should be no surprise that being prepared for, or at least planning in advance for, a data breach leads to a less expensive response.
2. The Importance of Timely and Proper Notification
A poorly executed breach response can harm a company’s reputation and increase its out-of-pocket
costs, including exposure to fines and lawsuits arising from non-compliance with data security laws
and regulations.
712 2015 Trustwave Global Security Report, supra.
713 Ponemon, 2015 Cost of Data Breach Study: Global, supra.
714 Ponemon, 2013 Cost of Cyber Crime Report, supra.
715 Ponemon, 2015 Cost of Data Breach Study: Global, supra.
One study noted that quick notification is actually a factor that increases the cost of a data breach, by $10.45 per record.716 This indicates that a thoughtful, unrushed response is important in avoiding cost inefficiencies and the potential need to supplement notifications, with the resulting duplication of costs.
Loss of customers remains a major cost of a data breach, as discussed in many of the studies of the
cost of data breaches this past year cited here. According to one study, 83% of respondents said,
“organizations that fail to protect my personal information are untrustworthy,” and 82% said “the
privacy and security of my personal information is important to me.”717 Not surprisingly,
following a breach, 62% of respondents said the breach decreased their trust and confidence in the
breached entity.718 This disruption to the relationship is costly, as 15% of respondents said they
already had discontinued or will discontinue their relationship with the breached entity, 39% said
they might do so, and 35% said they would continue the relationship as long as it does not happen
again.719 Unfortunately, the handling of incidents does not appear to be helping preserve the customer relationship, as the percentage of respondents who find breach notices believable declined from 61% in 2005 to 55% in 2012.720 The percentage of respondents who believe the breach notice was easy to understand declined from 48% in 2005 to 39% in 2012.721 According
to a recent study, merely having an incident response plan in place lowered the cost of a breach by
$12.60 per record.722 These statistics reinforce the importance of companies establishing a good
response plan before a breach occurs so they can address a breach promptly and properly. This is
critical both for maintaining regulatory compliance and for minimizing the negative impact on
customer relationships and business reputation.
Also to be taken into account is that lawsuits arising from data breaches often include causes of
action alleging the breached company failed to timely notify customers and others whose Personal
Information was compromised by the breach, proximately causing damages that allegedly would
have been avoided or minimized with a more timely response. Regulatory investigations of data
breaches, and related enforcement actions, continue to focus on the length of time the company
suffering a breach took to notify those affected.
Having an incident response plan in place unquestionably improves a company’s ability to respond
to a breach both appropriately and within a reasonable time frame, thereby mitigating the negative
effects on customer and other relationships, and supporting a company's legal defense against third-party lawsuits, regulatory investigations, and enforcement actions arising from the breach.
716 Ponemon, 2014 Cost of Data Breach Study: Global Analysis, supra. Similar results of quick notification increasing costs were found in prior years' studies as well. See, e.g., 2010 Annual Study: U.S. Cost of a Data Breach.
717 Ponemon, 2012 Consumer Study on Breach Notification, p.2.
718 Id. at p.9.
719 Id.
720 Id. at p. 11.
721 Ponemon, 2012 Consumer Study on Breach Notification, at p. 10.
722 Ponemon, 2015 Costs of Data Breach Study: Global Analysis, at p. 13.
3. The Potential Costs and Damages of a Breach
The costs to a breached entity of a data breach include both the direct costs of immediate investigation, response, notification and remediation, and the indirect and at times longer-term costs of reputational damage, loss of customers, and business interruption that can result from a
publicized data breach. Costs of a breach can also often include liability to third parties whose
Personal Information is acquired without authorization causing them financial detriment, or who
sustain other losses as a result of a data breach that can be attributed to the negligence of the
breached entity. Even if such third-party claims do not ultimately succeed, they can involve very
substantial litigation costs to investigate and defend. For publicly traded corporations, there can also be an effect on the stock price when a breach of their data security is reported, as well as shareholder derivative suits; as discussed above, recent SEC Guidance identifies cyber risks and incidents as potentially material information to be disclosed by publicly traded companies.
For insurers of companies that sustain a data breach, there are often claims under a variety of
policies ranging from traditional lines policies such as general liability policies, D&O policies,
professional liability and errors and omissions policies, and crime policies, to specialty data breach
and cyber risk policies, as insureds seek recovery of at least some of the substantial financial costs
that they incur when they are involved in a data breach.
As demonstrated above, while studies vary in their methodologies and results in calculating the actual cost of a data breach, under all of them the costs can be substantial, both in short-term out-of-pocket costs and in long-term impact on a business. One recent study found that the average cost of data breach incidents for companies located in the United States increased again from the prior year
(there had been an increase the year before as well), from $5.9 million in 2013 to $6.5 million in
2014 (the prior year there had been an increase reported from $5.4 million in 2012 to $5.9 million),
and the average cost for each lost or stolen record containing sensitive and confidential information
increased from $201 to $217 (for breaches of between 5,655 to 96,550 records).723 Another study
notes that the costs per record generally decrease the larger the number of records involved in a
breach, as costs are spread among a larger number of records, and forecasts an average loss for a
breach of 1,000 records as between $52,000 and $87,000; for a breach of 100,000 records as
between $366,500 and $614,600, and for 1,000,000 records between $892,400 and $1,775,350.724
A study of payouts by insurers for covered breaches reported a mean (average) claim payout of $733,109 but a median of $144,000, with a range of $1,000 to $13.7 million; it also noted the impact of the number of records and the inclusion of outlier large breaches on the calculation of cost per record, as the average cost per record was $956.21 but the median cost was $19.84, in a study that included breaches that ranged from 0 to over 2.4 million exposed records.725 A
723 Ponemon Institute, 2015 Cost of Data Breach Study: United States, supra. This study appears not to include breaches of
more than 100,000 records, under the assumption that such breaches do not affect most organizations. The study found that the
number of breached records per incident this year ranged from approximately 5,000 to slightly more than 100,000 records, with an
average number of breached records of 29,087.
724 Verizon, 2015 Data Breach Investigations Report, supra.
725 NetDiligence 2014 Cyber Claims Study, supra.
preliminary update in early June 2015 noted an average cost per record of $1,094, but the
median was only $10, with a range from 2¢ to $35,000.726
Although averages may be driven up by a few outlier breaches involving either an extraordinarily large
number of records or unusually large costs per record, the unavoidable reality is that a breach results
in substantial costs, due to forensic investigation of the incident and retention of legal consultants,
mandatory reporting requirements for breaches involving Personal Information and third-party
claims that many breaches trigger, and the reputational damage and business disruption to the entity
sustaining the breach.
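To illustrate the arithmetic behind that caution, the brief sketch below (in Python, using purely hypothetical per-record cost figures that are not drawn from any of the studies cited above) shows how a single outlier incident can pull the mean cost per record far above the median, the pattern reflected in the claims figures discussed above.

    # Illustrative sketch only: the per-record cost figures below are hypothetical
    # and are not taken from the Ponemon, Verizon, or NetDiligence studies.
    from statistics import mean, median

    # Nine modest incidents plus one outlier with an unusually high cost per record.
    per_record_costs = [8, 12, 15, 18, 20, 22, 25, 30, 40, 9500]

    print(f"Mean cost per record:   ${mean(per_record_costs):,.2f}")   # inflated by the outlier
    print(f"Median cost per record: ${median(per_record_costs):,.2f}") # largely unaffected

Run as written, the mean works out to $969.00 per record while the median is only $21.00, echoing (though not reproducing) the wide mean-to-median gaps reported in the studies cited above.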
a. First-Party Costs
The range of immediate economic costs to entities sustaining a breach involving
Personal Information often include:
•	Payment of forensic experts to find the cause of the breach and what needs to be done to stop it or prevent recurrence, and to evaluate whether the cause was due to any noncompliance with applicable law or standards;
•	Obtaining legal advice on whether notice requirements are triggered and, if so, which ones and the types and content of notice required;
•	The cost of providing notice, including printing and mailing of letters;
•	The cost of providing a call center to answer inquiries by individuals receiving the notice;
•	The cost of credit monitoring services and identity theft insurance, if offered; and
•	Payment of public relations consultants for publicity control.
Added to these are the “indirect” costs of lost business and reputational damage, which some of the
studies cited above quantify as over half the cost of a data breach. (See Section V above on the
Exposures Presented by Data Breaches, which cites to numerous recent studies that have attempted
to quantify the cost of a data breach).
b. Fines and Penalties
Additional significant costs to entities subject to data breaches are contractual and regulatory
assessments, often referred to as fines and penalties, although the legal nature of such assessments
and whether they qualify as fines, penalties or compensatory damages has been the subject of
dispute and litigation.727
726 NetDiligence, 2015 preliminary report, supra.
727 As discussed above in Section III.3, on PCI Standards for Protection of Credit Cards, recent litigation about the nature of
the PCI assessments include: Elavon Inc. v. Cisero’s Ristorante, No. 100500480 (3d Dist. Ct., Summit County, Utah); Genesco Inc. v.
Visa USA Inc., et al., Case No. 3:13-cv-00201 (U.S. District Court, Middle District, Tennessee); Schnuck Markets, Inc. v. First Data
For entities subject to payment card breaches, there are often contractual fines and other
assessments imposed under the Payment Card Industry rules, regulations and contractual
agreements if there is a failure to comply with their standards for protection of payment cardholder
information. Such assessments are in various categories, with some for non-compliance with PCI-DSS
(the Payment Card Industry Data Security Standards) often expressly labeled as a fine, with
other categories labeled in PCI industry contracts as for fraud reimbursement and for
operational/administrative costs. (See Section on the Regulatory and Statutory Landscape in the
U.S., subsection on PCI Standards for Protection of Credit Cards, above).
Additionally, breached entities are often subject to regulatory fines and penalties that may be
imposed by regulatory agencies and state attorneys general, as well as statutory imposition of
assessments per violation that raise the issue of whether they are in the nature of fines, punitive
damages or compensation. These often raise insurance coverage issues, as many policies preclude
or limit coverage for fines, penalties and damages that are punitive, exemplary, or multipliers of
compensatory damages. (See section on Insurance Company Exposures, below).
c. Third-Party Claims
Third-party claims by those who have allegedly been damaged by a data breach trigger longer-term
costs to the breached entity, and those generally include substantial defense costs even when the
claims are defeated. Section VII of this White Paper, Privacy Litigation: Current Issues, discusses
trends in privacy-related litigation. However, we also identify here some of the exposures to third-party
claims faced by entities that have been the subject of a data breach of Personal Information.
i. Consumer Claims
Consumer claims in the breach context in the past have had only limited success, as they face a
number of obstacles, although recently there have been some successes by plaintiffs in avoiding
early dismissals.
At the inception of a lawsuit, courts scrutinize whether the consumers have Article III
Constitutional standing to pursue their claims, which requires an injury in fact (see discussion on
Standing in Litigation section below). Courts will also analyze whether the consumers have
sustained a legally cognizable injury under the applicable state’s law. While many data breaches
involve unauthorized access to Personal Information, affected individuals have not always been able
to demonstrate that they sustained the requisite injury or recoverable damages.728
However, some recent decisions indicate that while consumers will likely still have difficulty
ultimately prevailing in most claims absent demonstrated actual identity theft and resulting financial
Merchant Data Services Corp. and Citicorp Payments Services, Inc., Case No. 4:13-CV-2226-JAR (U.S. District Court, Eastern
District of Missouri).
728 Consumer liability for fraudulent credit card charges is limited by federal statutes, such as The Electronic Fund Transfers
Act (EFTA), see EFTA, Pub. L. No. 95-630 (Title XX § 2001), 92 Stat. 3728 (Nov. 10, 1978), codified at 15 U.S.C. §1693 et seq.
See also Truth in Lending Act (TILA), 15 U.S.C. §1643. Moreover, at least some of the card brands have a “zero liability policy”
under which the card issuer will not hold the cardholder responsible for unauthorized purchases under many circumstances.
MasterCard is reportedly extending its zero liability policy in the United States to include all PIN based and ATM transactions,
Reuters, MasterCard extends zero liability policy to ATM transactions, May 28, 2014,
http://www.businessinsurance.com/article/20140528/NEWS07/140529863?tags=
losses, defendants may not be able to obtain early pre-discovery dismissals of consumer claims as
readily as they could in the past. Many of the early battles in consumer actions against breached
entities focused on whether the consumers had the requisite injury in fact to establish standing and
survive a motion to dismiss, as well as a legally cognizable injury under the applicable state law.
Often, consumers do not actually sustain identity theft, and thus there have been a growing number
of somewhat varying decisions as to what constitutes sufficient injury, and increasing claims of
violation of consumer protection statutes under which there are assessments per violation without
regard to actual economic loss to the consumer. These issues and a number of these cases are
discussed in the section on Privacy Litigation: Current Issues below.
Yet another obstacle to consumer claims is the prospect that the consumer’s application for class
certification may be denied. The losses claimed by an individual consumer will generally be
minimal. On the other hand, certification of a class of thousands, or millions, of affected consumers
can multiply such losses and thereby create the incentive for plaintiffs’ lawyers to pursue litigation.
Moreover, consumer plaintiffs may have difficulty obtaining certification of their lawsuits as class
actions due to the highly individualized proof of loss and of causation of loss by the breach in issue
required for each plaintiff, and the difficulties in demonstrating that questions of fact and law
common to class members predominate over questions affecting only individual members.729
ii. Bank Claims
Added to the list of potential third-party claims are efforts by banks and credit unions that sustained
losses as a result of their customers’ payment cards being canceled and replaced, and of fraudulent
charges they absorbed, to recover such financial losses from the entity breached. While often
consumers cannot demonstrate actual financial loss if they did not sustain or pay unauthorized
charges, banks have been increasing pressure on state and federal lawmakers as well as on courts to
allow them the right to reimbursement from breached entities for the costs that banks, particularly
payment card issuing banks, sustain from absorbing fraudulent charges and reissuing debit and credit
cards.
While initially the legal basis for efforts by banks and other financial institutions to recover their
costs from a breached company was very limited, efforts are underway to provide routes for legal
recourse. As noted above, Washington State passed legislation that provides for liability of a credit
or debit card processor or business to a financial institution if the processor or business fails to take
reasonable steps to guard against unauthorized access to account information that is in its
possession, and such failure is found to be the proximate cause of a breach. Similarly and as also
discussed above, the Minnesota Plastic Card Security Act provides that financial institutions may
recover from a company that accepts payment cards if the company retains card security code data,
PIN verification code numbers or the full contents of any track of magnetic stripe data for longer
729 As confirmed by the U.S. Supreme Court in Wal-Mart Stores, Inc. v. Dukes, 131 S.Ct. 2541 (Jun. 20, 2011), there must be
a certain degree of commonality among members of a plaintiff class, which requires more than an alleged violation of the same law
(reversing class certification, noting that millions of employment decisions were in issue, and holding that “[c]ommonality requires
the plaintiff to demonstrate that class members ‘have suffered the same injury’” and that the common contention “must be of such a
nature that it is capable of classwide resolution, which means that determination of its truth or falsity will resolve an issue that is
central to the validity of each one of the claims in one stroke” and noting that the trial court is required to undertake a “rigorous
analysis”).
See, as an example of the difficulties of overcoming the hurdles to class certification, Stollenwerk v. TriWest Healthcare
Alliance, No. 03-0185 (D. Ariz., Jun. 10, 2008).
than 48 hours after authorization of a transaction and there has been a security breach exposing
payment card data. (See Section III.3.b. above, on Incorporation of Payment Card Industry Data
Security Standards into State Law, discussing laws enacted by several states that provide, in certain
circumstances involving breaches of payment card information, that the breached entity is directly
liable to the financial institution that issued the payment cards for certain costs sustained by the
financial institution as a result of the breach).
Many issuing banks forgo litigation, relying for recovery on the contractual indemnification
provisions in their payment card processing agreements with entities in the chain of accepting,
processing, and paying payment card charges. (See Section III.3 on PCI-DSS above). However,
some financial institutions who may not be fully reimbursed by that contractual system have sought
recovery in litigation directly from breached merchants, payment processors, and others in the chain
of payment card processing who may have contributed to the breach and resultant financial losses to
banks and credit unions. One of the trends arising from recent mega retail breaches involving the
compromise of many millions of payment cards is an increasing number of lawsuits by credit
unions and banks against breached entities seeking direct recovery for losses sustained as a result of
the breach.730
While there are few decisions yet, several cases illustrate the courts’ approach to whether such
claims by banks constitute cognizable injuries under common law, and which causes of action
courts are likely to recognize and which they tend to dismiss.
In one of the earliest of these cases,731 the First Circuit Court of Appeals held that, under
Massachusetts law, banks issuing credit and debit cards to customers who subsequently had that
card information stolen from a merchant’s computer systems and used for fraudulent transactions,
stated a claim against the store operator and the bank serving as its “processing bank” for the store’s
payment transactions. The banks claimed that both the merchant and its processing banks were
negligent in failing to follow PCI-DSS security protocol and in delaying notice after the breaches
had been discovered, and that as a result they had sustained financial losses from reimbursing the
customers for fraudulent charges, monitoring their accounts, and cancelling and reissuing payment
cards. Their complaint included claims for negligence, breach of contract, and unfair or deceptive
practices, and also sought to assert a claim for conversion. The First Circuit upheld the denial of the
dismissal of the negligent misrepresentation claim that was based on the argument that by accepting
and processing credit card transactions, the merchant and its processing bank impliedly represented
that they would comply with MasterCard and Visa data security requirements, although it noted that
“the present claim survives, but on life support.” Similarly, the claim for unfair or deceptive trade
practices survived dismissal, but primarily based on the lack of discovery of the defendant’s
conduct in issue and with reference to the merchant’s argument for dismissal having to “await
discovery and perhaps a summary judgment motion.” However, the dismissal of the tort-based
730 See, e.g., Community Bank of Trenton v. Schnuck Markets, Inc., Case No. 3:14-cv-01361 (U.S. District Court for the
Southern District of Illinois); In Re Target Corporation Customer Data Security Breach Litigation, Financial Institutions Cases,
Case 0:14-md-02522-PAM (United States District Court, District of Minnesota); First Choice Federal Credit Union, et al. v. The
Home Depot, Inc., Case 1:14-cv-02975-AT (U.S. District Court for the Northern District of Georgia, Atlanta Division); see also
Winsouth Credit Union v. Mapco Express, Inc. and Delek US Holdings, Inc., Case 3:14-cv-01573 (U.S. District Court, Middle
District of Tennessee) (alleging in a complaint filed July 31, 2014 damages from a breach that include costs of cancelling and
reissuing customers’ cards, reimbursing or reversing fraudulent charges; lost interest and transaction fees; customer service,
monitoring and fraud prevention expenses, and lost customers due to damage to reputation).
731 In Re TJX Cos. Retail Sec. Breach Litig., 564 F.3d 489 (1st Cir. 2009) (as amended on rehearing in part May 5, 2009).
negligence claim was upheld on the grounds that Massachusetts, like so many states, holds that
“purely economic losses are unrecoverable in tort and strict liability actions in the absence of
personal injury or property damage.” Efforts by one bank to claim “property damage” based on
property interest in the payment card information failed on the grounds that it was not a result of
physical destruction of property. The dismissal of the breach of contract claim was also upheld, as
while the merchant and its processing bank had agreements with Visa and MasterCard to comply
with certain security procedures, the claimant banks were not parties to those contracts and did not
demonstrate that they were third-party beneficiaries of those contracts. The First Circuit also
upheld the denial of the addition of a claim for conversion, although in wording that arguably left
the door open for it to be more successfully pleaded in another matter. Subsequent to the First
Circuit’s decision, the remaining parties settled their claims and the District of Massachusetts
dismissed the case. 732
In another case,733 the Supreme Judicial Court of Massachusetts demonstrated that even if a claim
survives a motion to dismiss, it may not survive a motion for summary judgment. The court upheld
two lower court decisions dismissing claims by credit unions and their insurer for damages arising
from an alleged data security breach in which third parties obtained and fraudulently used debit and
credit card information of cards issued by the credit unions, for which fraudulent charges the credit
unions reimbursed their customers and the credit unions’ insurer then reimbursed the credit unions.
Like the First Circuit, the Court upheld dismissal of the third-party beneficiary contract claims
because the plaintiffs could not show that they were intended beneficiaries, and upheld dismissal of
the negligence claims under the economic loss doctrine. With regard to claims for fraud and
negligent misrepresentation, which were based on allegations that in accepting credit and debit
cards for payment the defendants represented that they were in compliance with Visa and
MasterCard regulations prohibiting them from storing data, the court upheld summary judgment
dismissing those claims after finding that the plaintiffs had never seen the defendants’ agreements
with Visa and MasterCard and thus they could not establish that the defendants’ representations
induced them to become or remain card issuers. The court also found that the plaintiffs could not
establish that they would have altered their participation in the card system after becoming aware of
the defendants’ breach. Additionally, the court found that any reliance on the alleged
misrepresentations would have been unreasonable.
Banks continue to pursue litigation against companies that have suffered a data breach of debit and
credit card information. After Heartland Payment Systems, a processor of debit and credit card
transactions, reported that debit and credit card data had been stolen from its system, a number of
issuing banks that paid fraudulent transactions and replaced credit cards of customers filed lawsuits
against Heartland and its acquiring banks in federal court in Texas.734 The plaintiffs asserted claims
for negligence, negligence per se, negligent and intentional misrepresentation, violation of
consumer protection statutes, and breach of contract. The U.S. District Court for the Southern
District of Texas decided Heartland’s motion to dismiss the claims of the Financial Institution
plaintiffs, which were nine issuer banks (banks that provided the credit/payment cards to
732 See also Community Bank of Trenton v. Schnuck Markets, Inc., No. 3:14-cv-01361 (S.D. Ill. 2014) (Schnuck argued that
the economic loss doctrine barred recovery by the banks, where there was no contract allowing for recovery. The case was dismissed
without prejudice in March 2015).
733 Cumis Ins. Soc’y, Inc. et al. v. BJ’s Wholesale Club, et al., 918 N.E.2d 36 (Mass. 2009).
734 In re Heartland Payment Sys., Inc. Customer Data Sec. Breach Litig., Case No. 09-MD-2046-LHR (S.D. Tex.).
consumers) that alleged that the data breach resulted from a failure by Heartland to follow industry
security standards (PCI-DSS), resulting in the issuing banks incurring significant expenses
replacing payment cards and reimbursing fraudulent transactions. The court initially granted the
motion to dismiss in part and denied it in part, holding that (1) the claims for negligence and
violation of New Jersey, New York and Washington states’ consumer protection laws were
dismissed with prejudice; (2) the claims for breach of contract, breach of implied contract, express
misrepresentation, negligent misrepresentation based on nondisclosure, and violation of California,
Colorado, Illinois and Texas consumer protection statutes were dismissed without prejudice and
with leave to amend; and (3) the motion to dismiss the claims brought under the Florida Deceptive
and Unfair Trade Practices Act was denied.735 A later decision held that the Amended Complaint
failed, and the District Court dismissed the action in its entirety.736 Attempts by credit card issuer
banks affected by the Heartland breach to obtain additional recoveries continued, and the card
issuing banks appealed the dismissal of their claims. This resulted in a decision by the Fifth Circuit
reversing the dismissal and remanding the case for further proceedings, on the grounds that the law
of the applicable jurisdiction (New Jersey) did not bar a negligence claim by the banks against the
breached card processor, Heartland, although part of the basis for the decision was that the record
was not clear whether Heartland’s contracts with its banks would require it to comply with the Visa
and MasterCard rules and regulations providing contractual dispute resolution and compensation
mechanisms for losses, and whether it had contracts directly with Visa and MasterCard that would
govern.737 On February 26, 2015, the Financial Institution Plaintiffs and Defendant Heartland
Payment Systems, Inc. filed a stipulation of dismissal. On March 3, 2015, the Court entered an
order granting the Stipulation of Dismissal and it appears final judgment will enter.
Recently, the large losses arising from the Target retail breach have also generated litigation directly
by banks and credit unions against the breached entity for costs such as card replacement and fraud
losses and monitoring, including one Minnesota credit union reportedly relying upon the Minnesota
statute 738 that provides for liability by a breached entity to financial institutions that issued a
payment card (e.g., issuing banks) for certain costs of reasonable actions undertaken by them in the
event a breached company doing business in Minnesota retained certain card data in violation of the
Act, apparently for losses allegedly not subject to the PCI recovery program.739 In December 2014,
a federal judge denied in part Target’s motion to dismiss the financial institutions’ claims, ruling
(under Minnesota law) that plausible claims had been alleged against Target. The court denied the
dismissal of the claim for negligence, noting that “although the third-party hackers’ activities caused
harm, Target played a key role in allowing the harm to occur” based on allegations that Target
735 In re Heartland Payment Sys., Inc. Customer Data Sec. Breach Litig./Fin. Inst. Track Litig., 834 F. Supp. 2d 566 (S.D.
Tex. 2011).
736 In re Heartland Payment Sys., Inc. Customer Data Sec. Breach Litig./Fin. Inst. Track Litig., No. H-10-171, 2012 WL
896256 (S.D. Tex. Mar. 14, 2012), rev’d in part, Lone Star National Bank v. Heartland Payment Systems, Inc., 729 F.3d 421 (5th
Cir. 2013).
737 Lone Star National Bank v. Heartland Payment Systems, Inc., 729 F.3d 421 (5th Cir. 2013).
738 Minn. Stat. § 325E.64.
739 The dozens of lawsuits filed against Target by consumers, shareholders, banks and credit unions have been consolidated
before a U.S. District Court sitting in St. Paul, Minnesota, In Re Target Corporation Customer Data Security Breach Litigation, Case
No. 0:14-md-02522-PAM. See also Tracy Kitten, Bank Files Unique Suit Against Target: Umqua Bank Alleges Violations of
Minnesota Statute, Bank Info Security, March 17, 2014, http://www.bankinfosecurity.com/bank-files-unique-suit-against-target-a-
6639/p-2 ; David Morrison, Price is Right for Credit Union to Join Target Data Breach Lawsuits, Credit Union Times, March 26,
2014, http://www.cutimes.com/2014/03/23/price-is-right-for-credit-union-to-join-target-dat; Advisen, 100 lawyers in a room Target
case draws the suits to St. Paul, May 15, 2014, http://fpn.advisen.com/articles/article218095276-888238653.html?user=.
purposely disabled one of the security features that would have prevented the harm, conduct that is
plausibly alleged to have caused foreseeable harm to the plaintiff financial institutions. The court
also allowed to proceed the claims for violation of Minnesota’s Plastic Card Security Act and
negligence per se. It dismissed the negligent misrepresentation claim, but with leave to replead.740
Target’s attempts to settle the matter with MasterCard failed when the financial institutions refused to
support the negotiated settlement.741
Banks have alleged a number of theories to try to obtain recovery from breached merchants,
including attempts to allege that they are equitably subrogated to claims that consumers may have,
but mostly without success.742 While banks have struggled to avoid dismissal of common law
claims, the legislation passed in states such as Washington, Nevada and Minnesota, discussed
above, is providing banks with statutory grounds for seeking damages even where common law
grounds may fail. However, it is still to be seen whether any of these financial institutions will
ultimately prevail in a direct action against a breached entity.
iii. Other Third-Party Claims
As breaches continue, an increasing range of potential third-party claims can be expected, by
individuals and entities purportedly affected by breaches, as well as by regulators. Lawsuits arising
from data breaches are no longer just by consumers, or even by their affected issuing banks.
In the wake of recent large data breaches, claims have been made by an increasingly widening range
of types of claimants alleging both traditional and novel theories of liability. Lawsuits have been
filed by regulators,743 by disaffected shareholders,744 and, as identified in this paper, by an array of
740 In re Target Corp. Customer Data Sec. Breach Litig., 2014 U.S. Dist. LEXIS 167802 (D. Minn. Dec. 2, 2014); In re Target
Corp. Customer Data Sec. Breach Litig., 2014 U.S. Dist. LEXIS 175768 (D. Minn. Dec. 18, 2014).
741 The reaction of the plaintiff financial institutions to the settlement between Target and MasterCard demonstrates the
difficulties in reaching such settlements and the frustration that some financial institutions experience (particularly smaller banks and
lending institutions that may face higher per-card costs and a lesser share of recoveries), when the card brands dictate the terms of
settlements with companies that have incurred a breach. In the Target litigation, reportedly MasterCard sent banks on April 16, 2015
an estimate on how much damage each bank had sustained and gave the banks only until May 20th to opt into the settlement under
which they would receive a portion of the recovery to reimburse them for some of their costs incurred as a result of the breach, in
exchange for a release of their claims against Target. Plaintiff financial institutions, angry at their lack of inclusion in the
negotiations and the short deadline to respond, sought a “wide-ranging injunction against the settlement, asking the Court to void
any releases MasterCard has received from putative plaintiffs, to enjoin Target and MasterCard from invoking the jurisdiction of a
court other than this Court to enforce the terms of their settlement, to enjoin MasterCard and Target from communicating with the
putative class absent prior approval of the Court, and to order Target to issue a curative notice incorporating Plaintiffs’ lead counsel’s
criticisms of the settlement.” No. 0:14-md-02522-PAM, Docket No. 414 at 2. The judge, while indicating sympathy with the
financial institutions’ position, denied their request, observing that the law permits a defendant or a non-party to communicate with
and to settle with putative class members “at any time before class certification without Court approval or input as long as those
communications are not misleading or coercive.” Id. However, the settlement failed anyway when Target and MasterCard were
unable to obtain the requisite support of 90% of the financial institutions. See Target data breach settlement with MasterCard falls
through, Advisen FPN, May 25, 2015, http://crnfpn.advsisen.com/articles/article2393696051985652926.html?user=; Joseph Ax,
MasterCard, Target data breach settlement falls apart, Reuters, May 22, 2015, http://www.reuters.com/article/2015/05/22/us-targetmastercard-
settlement-idUSKBN0O71TD20150522.
742 See, e.g., BankNorth, N.A. v. BJ’s Wholesale Club, Inc., 442 F. Supp. 2d 206 (M.D. Pa. 2006) (dismissing the issuing
banks’ suit against a breached merchant to recover for unauthorized charges to customer accounts based on claims of negligent
failure to protect cardholder information and equitable subrogation); Sovereign Bank v. BJ’s Wholesale Club, Inc., 533 F. 3d 162 (3d
Cir. 2008) (dismissing negligence claim by a bank that issued credit cards against the merchant and its payment processor for costs
associated with replacement of customer credit cards and reimbursement for fraudulent purchases).
743 See, e.g., FTC v. Wyndham, supra.
744 See discussion of D&O litigation in Section VI., 1.f, D&O, below.
businesses affected by payment card breaches against each other. Further, there may soon be an
increase in B2B claims, as businesses that sustain substantial costs and other losses as a result of a
breach seek indemnity from business partners involved in the occurrence, prevention or response to
a breach, although some of that cost shifting may be outside the public forum of litigation as
companies negotiate with business partners (and those businesses’ insurers) to pursue contractual
remedies or tort theories of liability.
Moreover, as insurers pay out on losses sustained by their insureds arising from a data breach, there
is likely to be an increase in subrogation claims against the breached entities’ business partners and
other companies responsible for or contributing to the losses sustained, in order to recoup at least
some of the costs paid, although such claims may face some of the battles in circumventing
contractual and legal limitations on liability that are currently being waged against bank claims. For
example, in Travelers v. Ignition Studio, Inc., No. 1:15-cv-00608 (N.D. Ill. filed January 21, 2015),
Travelers insured a community bank that hosted its website with Ignition Studio, Inc. Following a
data breach, Travelers paid the bank’s claim and initiated a subrogation action against Ignition,
alleging that Ignition had failed to take reasonable steps to protect the bank’s website against hackers. Ignition filed a
motion to dismiss for failure to state a claim, arguing that the economic loss doctrine precluded
Travelers’ negligence claim, and that the breach of contract claim was identical to the negligence
claim. The case was reportedly settled. 745
One study identified over 86 different causes of action cited in 231 cases arising from unauthorized
disclosure of Personal Information, including a wide variety of tort and contract claims, and alleged
violations of state and federal statutes.746
VI. Insurance Company Exposures
1. Exposure of Companies in the Insurance Industry as Entities Subject to Data
Breaches
While insurers generally focus on the exposures of their insureds, they are themselves in an industry
in which companies have potential exposure to data breaches. Insurance industry companies have
the same vulnerabilities to data breach as other institutions. Some may even have an elevated risk
due to their heavy dependence on computer systems and the nature of the information stored on
their systems. As stated by the New York Department of Financial Services, “The extraordinarily
sensitive health, personal and financial information that [people] entrust to their insurance
companies is a virtual treasure trove for hackers.”747 In 2015, the NYDFS sharpened its
focus on insurers’ cybersecurity, saying it will, for each insurer, expand the scope of inquiry into
745 Travelers, Web Design Co. Settle Bank Data Breach Battle, Law360, April 14, 2015,
http://www.law360.com/cases/54c141b23a288530b5000001.
746 Sasha Romanosky, David A. Hoffman, Alessandro Acquisti, Empirical Analysis of Data Breach Litigation, Temple
University Beasley School of Law Legal Studies Research Paper No. 2012-29, available at http://ssrn.com/abstract=1986461; see
also Sasha Romanosky, David A. Hoffman, and Alessandro Acquisti, Empirical Analysis of Data Breach Litigation, 11 Journal of
Empirical Legal Studies 74 (2014).
747 See May 28, 2013 NYDFS press release, Governor Cuomo Launches Inquiry Into Cyber Threats at Largest Insurance
Companies, available at http://www.governor.ny.gov/press/05282013-cuomo-launches-inquiry-cyber-threats-insurance-companies.
protocols, prepare a risk assessment, and conduct a cybersecurity examination.748 The National
Association of Insurance Commissioners (“NAIC”) increased the focus of state insurance regulators
on cybersecurity in April 2015 when, as part of its plan to help the insurance sector develop an
effective cybersecurity framework, the NAIC’s Cybersecurity Task Force adopted 12 principles for
effective cybersecurity insurance regulatory guidance, intended to guide state insurance regulators
in creating regulations protecting confidential and personally identifiable consumer
information. 749 On a federal level, the Federal Insurance Office (“FIO”) within the U.S.
Department of Treasury also has an interest in the issue of cybersecurity and in encouraging the
cyber security of companies in the insurance industry. 750
First, at risk is their own employee information. As large-scale employers, often of employees
residing in many different states (including Massachusetts with its rigorous data security
requirements), insurers, reinsurers, brokers and companies servicing the insurance industry are
subject to breach of their own employees’ Personal Information, including payroll, personnel,
pension, workers’ compensation and disability claim information.
Second, at risk is the Personal Information insurers have of policy applicants, insureds, claimants
and beneficiaries.751 Liability insurers often have claimant information, ranging from medical
748 See March 26, 2015 NYDFS press release, Superintendent Lawsky Letter to Insurers on Cyber Security, available at
http://www.dfs.ny.gov/about/press2015/pr150326-ltr.pdf.
749 Principles for Effective Cybersecurity: Insurance Regulatory Guidance, available at
http://www.naic.org/documents/committees_ex_cybersecurity_tf_final_principles_for_cybersecurity_guidance.pdf.
750 See Holmer, Mark, Feds Support Insurers Seeking Protection From Cyber Attacks, Claims Journal, April 9, 2015,
reporting on remarks of Director McRaith of the FIO, http://www.claimsjournal.com/news/national/2015/04/09/262735.htm; see
also http://www.treasury.gov/initiatives/fio/reports-and-notices/Pages/default.aspx; FIO’s McRaith Announces Upcoming Reports
from Advisory Committee on …Cybersecurity, www.insurereinsure.com/?3ntry+5544.
751 One of the largest reported data breaches is that of Anthem, a health insurer, which reportedly was the subject of a cyber
attack involving access to information it held on as many as 80 million Americans, including current and former members of Anthem
health plans, some nonmembers (since Anthem manages paperwork for some independent insurance companies), and its own
employees. Lawsuits were quickly filed. Matthews, Anna Wilde and Yadron, Danny, Health Insurer Anthem Hit by Hackers, The
Wall Street Journal, Feb. 4, 2015, http://ww.wsj.com/articles/health-insurer-anthem-hit-by-hackers-1423103720; Huddleston, Tom
Jr., Anthem’s big data breach is already sparking lawsuits, Fortune, February 6, 2015, http://fortune.com.2015/02/06 /anthems-bigdata-
breach-is-already-sparking-lawsuits/; https://www.anthemfacts.com.
For example, in September and October 2013, putative class actions were filed against an insurance company following a
July 2013 data breach that was alleged to include approximately 4 million people and involve compromise of their names,
addresses, dates of birth, social security numbers, health insurance data, Medicare and Medicaid data, medical diagnoses, diagnoses
codes, and medical record numbers. Maglio, et. al. v. Advocate Health and Hospitals Corporation, et al., Gen. No. 13 L 538, The
Circuit Court for the Sixteenth Judicial Circuit, Kane County, Illinois. The complaint was dismissed for lack of standing, which
decision was upheld on appeal, as “plaintiffs did not allege that any of their personal information was used in any unauthorized
manner, where they asserted only an increased risk of such, and where their allegations of injury were conclusory and speculative.”
2015 IL App (2d) 140782-U. A similar result was reached in a putative class action arising from a breach of a health insurer
involving stolen laptops in In re: Horizon Healthcare Services Inc. Data Breach Litigation, No. 2:13-cv-07418, United States
District Court, District of New Jersey, in which the court dismissed the complaint, 2015 WL 1472483 (D.N.J., filed March 31, 2015)
(holding no standing where only generalized allegations of harm and rejecting theory of economic injury based on portion of
insurance premium being for data protection); but see Resnick v. AvMed, Inc., 693 F.3d 1317 (11th Cir. 2012) (accepting argument
that portion of premium was for data security and constituted economic harm, but where plaintiffs alleged identity theft). Recent
decisions have been issued in breach litigation involving a putative class action filed on behalf of 1.1 million people who sought
insurance products from Nationwide following an October 2012 reported data breach of PI due to a hack into a portion of the insurer’s
computer network. Galaria et al. v. Nationwide Mutual Insurance Company, No. 2:13-cv-118 (S.D. Ohio, filed Feb. 8, 2013). The
proposed class was alleged to include approximately 1.1 million people who had purchased insurance products or sought a quote
from the insurer defendant. On February 10, 2014, the court granted the defendant insurance company’s 12(b)(6) motion to dismiss
the action, based in part on the plaintiffs’ failure to allege any cognizable harm from the intrusion or that any third party used any of their personal
information. See Galaria v. Nationwide Mut. Ins. Co., No. 2:13-cv-118, 2014 WL 689703 (S.D. Ohio, Feb. 10, 2014)
(finding even if deprivation of value of personally identifiable information (“PII”) was an injury-in-fact, the plaintiffs failed to allege
records and financial documents to claimants identified by name and Social Security number which,
if lost or improperly accessed, would be a data breach of Personal Information. Personal lines and
life and health insurers may maintain Personal Information of policyholders and of beneficiaries,
which are also subject to data breaches. Such Personal Information may remain stored by insurers,
reinsurers, brokers, and third-party administrators as well as vendors of such entities, either in paper
or electronic form, for decades.
Insurers are also subject to extensive state and federal regulation that includes requirements for
safeguarding Personal Information, including pursuant to the Gramm-Leach-Bliley Act and
implementing regulations promulgated by state insurance departments, as well as common law
standards for protecting confidential information.752 In addition, the departments of insurance of
several states have issued bulletins and regulations requiring insurers and certain other of their
licensees to send data breach notifications to the departments of insurance, in some cases under
shorter timelines and under different definitions of “breach” than most other U.S. breach
notification requirements. For example, the Connecticut Insurance Department issued Bulletin IC-
25 in 2010 to require its licensees to notify the Department of any information security incident as
soon as the incident is identified, but no later than five calendar days afterward, and requiring
certain uncommon content and regulatory consultation.753 The Washington State Office of the
Insurance Commissioner promulgated a regulation effective June 1, 2013 requiring licensees to
notify the insurance commissioner within two business days after determining that notification must
be sent to consumers or customers pursuant to HIPAA or the Washington State breach notification
requirement (Wash. Rev. Code 19.255.010).754
Insurers are also subject to federal and state regulations of Personal Information and Protected
Health Information that are not specifically directed at the insurance industry, but apply to all
companies that obtain and maintain Personal Information (such as state data breach notification
laws) or, with respect to Protected Health Information, to all entities subject to HIPAA755 Thus, for
example, the broad-ranging Massachusetts Regulation discussed above affects any entity that has
Personal Information of a Massachusetts resident, and thus is likely to affect a significant number of
insurers. It technically applies to liability insurers with Personal Information of Massachusetts
facts supporting their assertion that they were deprived of the value of their PII and therefore lacked standing). The plaintiffs’ motion
for reconsideration and filing of an amended complaint was denied on March 11, 2015, and an appeal has been filed.
752 E.g., in Daly v. Metro. Life Ins. Co., 4 Misc. 3d 887, 782 N.Y.S.2d 530 (2004), a New York state court denied a motion to
dismiss claims brought by a life insurance applicant against a life insurer arising from the purported theft of her personal information
by a janitor who cleaned the insurer’s premises and which resulted in fraudulent use of her personal information to create credit
accounts. The court noted that after completing her application, the applicant had received a Privacy Notice from the insurer
detailing the company’s privacy policy and stating that confidential information would be safeguarded. The court found that the
gravamen of the plaintiff’s claim was that in order to obtain a life insurance policy the plaintiff had to provide sensitive personal
information and the insurer represented that information would be protected and remain confidential. Thus, the court found that the
insurer had a common law duty to protect the confidential personal information provided by the applicant and, in light of questions of
fact concerning precautions taken by the insurer to safeguard that information, it denied summary judgment of claims at that juncture.
753 The Connecticut Bulletin is available at http://www.ct.gov/cid/lib/cid/Bulletin_IC_25_Data_Breach_Notification.pdf.
754 Wash. Admin. Code 284-04-625.
755 For instance, in July 2011, Wellpoint Inc. (an Indiana-based insurer) reportedly agreed to pay the State of Indiana $100,000
for failure to promptly notify consumers and the Indiana Attorney General after the Personal Information of thousands of Wellpoint
customers was potentially accessible through an unsecured website. This settlement followed a 2010 lawsuit brought by the Indiana
Attorney General against Wellpoint under Indiana’s data breach notification statute. See Press Release, Attorney General reaches
settlement with WellPoint in consumer data breach, Jul. 5, 2011, http://www.in.gov/portal/news_events/71252.htm.
claimants and to life insurers that have Personal Information of non-policyholder beneficiaries, as
well as to those with employees or insureds who are Massachusetts residents.
Accordingly, in addition to the exposures insurers face as the issuers of policies that may cover the
costs of data breach incurred by their insureds and claims asserted against insureds arising from data
breaches, insurers and other entities in the insurance industry have their own risk of data breaches.
2. Potential Insurance Coverages for Data Breaches and Privacy Related Claims
The increasing range of costs incurred by entities that sustain a breach and the third-party claims
against them have given rise to efforts by such entities to seek coverage for those costs and claims.
Specialty insurance products have been developed to specifically address data breach and other
cyber related risks, although not all address the full scope of costs and claims. Moreover, entities
that sustain a breach that have not purchased policies directed at providing data breach coverage
often look with varying success and failure to the more traditional types of policies they have in
place for coverage of at least some of the costs, defense expenses and indemnity payments they
incur.
A number of different types of insurance policies have the potential to be implicated in the event of
a data breach and other types of cyber attacks that disrupt business operations and result in costs to
and claims against the entity that sustained the breach or attack – or at least have the potential to be
subject to a request for defense and/or indemnity – depending on factors such as the type of breach
or attack, the relationship of the parties, the nature of the information in issue (Personal
Information, Intellectual Property), the type of costs or damages in issue, the type of policy and, if
for third-party liability, the allegations asserted and the type of damages in issue. As in all requests
for coverage, the determination of coverage turns on policy terms, including both grants of coverage
and exclusions, as well as on the specifics of the claim.
As the risk of data breaches and statutory privacy violations becomes increasingly recognized,
policy definitions and exclusions are being added and tightened to reduce the exposure of policies
not intended to apply to those risks, and sublimits for some types of costs are often included even in
those policies expressly directed at insuring the risks of data breach, network security failures, and
the claims arising from collection and usage of information about individuals. Many insurers
impose application procedures directed at identifying the risks and the security procedures of the
applicant entities, and some impose risk management conditions before agreeing to issue a policy
that provides coverage for these types of claims.
As the field of privacy develops, so do the types of claims made, the effect of data breaches and
privacy violations on individuals and companies, and the information available as to the nature and
source of the cyber attacks and alleged privacy violations. These, in turn, raise new issues and
exposures for insurers and their insureds. Thus, questions are increasingly arising as to, e.g., whether
cyber attacks from foreign sources are government-sponsored and potentially subject to terrorism
exclusions, whether attacks result in physical damage or loss of use of tangible property, whether
information collection practices constitute knowing and deliberate conduct, and whether resultant
business losses can be accurately measured and insured, among other issues.
Some of the issues that may be presented by a claim for coverage are identified below, although of
course the issues can vary depending on the claim and the policy wording.
a. Cyber Risk/Data Breach/Privacy/Network Security Policies
A growing number of insurers are offering policies – or endorsements – specially tailored to
provide coverage for a variety of cyber risks, ranging from breaches of Personal Information, to
cyber extortion, to business interruption and reputational damage arising from cyber attacks, to
claims of wrongful collection, usage or disclosure of information about individuals. Coverage has
also been developed for liability associated with social media, such as posting of a defamatory
comment on a blog. Some of these policies and endorsements are industry-specific, such as cyber
risk insurance designed for technology companies, restaurants, healthcare entities, or financial
institutions. In the current market, coverages are often expanded and new coverages developed,
including express coverage for the Payment Card Industry (PCI) contractual assessments that are
often associated with breaches of Personal Information involving credit card numbers. As data
protection regulations and statutes, with concomitant response requirements, continue to be enacted
and expanded in the U.S., EU, and elsewhere, the market for such specialty products is expanding
and new products are likely to be developed.756
Policies designed to provide data breach coverage do not necessarily restrict themselves to
electronic breaches of statutorily defined Personal Information. These policies may also broadly
encompass coverage for costs and claims arising from other types of data breaches and cyber
attacks, including loss or theft of Personal Information contained in paper records and other types of
confidential information that, while not itself Personal Information, can be used to obtain Personal
Information or interfere with the business operations of a breached company or its clients. In
addition to providing insurance coverage in the event of a breach, many insurers offer breach
prevention services to their clients.
Some of these specialty policies have both first and third-party coverages. First-party coverages in
such policies are generally designed to pay or reimburse an insured that has sustained a breach for
its own costs incurred in addressing a breach, such as notification costs, although some such
policies limit coverage of notification costs to situations in which the insured is legally obligated to
provide notice of data breach under state or federal statutes or to a maximum number of individuals.
Policies directed at providing coverage for data breaches may also provide some coverage for costs
directed at mitigating loss or reducing the likelihood of third-party claims, such as legal advice as to
the company’s notice obligations, credit monitoring offered to those whose Personal Information is
compromised, and forensic investigation as to the cause of the breach. Some policies offer first-party
coverage for business interruption losses related to data breaches, even in the absence of
physical damage to tangible property. Liability coverages for defense costs and losses arising from
a claim by a third party for damages arising from a data breach are also generally the subject of
756 “It is estimated that more than 50 insurers domiciled mainly in the US and the Lloyd’s of London marketplace provide
dedicated cyber products and solutions today. Buyers are overwhelmingly concentrated in the US with little take-up to date
internationally. Annual premium spend at the end of 2014 was estimated to be in excess of $2 billion with the potential to grow to $5
billion. Total capacity (the maximum amount of insurance available to any single buyer) is currently at about $300,000,000.” See
Testimony of Ben Beeson, Lockton, Hearing, “Examining the Evolving Cyber Insurance Marketplace,” Senate Committee on
Commerce, Science, and Transportation, March 19, 2015, available at
http://www.commerce.senate.gov/public/?a=Files.Serve&File_id=68d2a98e-ba98-4aca-a034-503d67ab6604.
express coverages under such policies. Some cyber risk policies now also integrate coverage for
online media liability.
However, even policies directed at providing coverage for data breaches of Personal Information
and other privacy exposures vary in the scope of coverages provided and often have sublimits for
certain types of costs or damages, and exclusions for others. Issues can arise as to whether there is
coverage of costs incurred by an insured that are not legally required but are undertaken to preserve
an insured company’s reputation or reduce the likelihood of a third-party claim; of contractual
indemnity obligations; of contractual fines and penalties as well as fines and penalties imposed by
regulatory authorities; of breaches due to insured/employee dishonesty; of business interruption
loss; of losses due to reputational harm; and of other types of claims or costs.
Moreover, the focus of such specialty policies is no longer just on data breaches and traditional
out-of-pocket costs. There is increasing recognition of the exposures presented to companies by
regulatory and legal proceedings asserting wrongful collection, usage and disclosure of information
about individuals. Such information is often one of the most valued assets of companies, and a key
component of targeted marketing, but recent increasing regulatory scrutiny from states and
countries around the globe on company practices and disclosures of their collection and usage of
such information has made both insurers and insureds consider the insurability of the exposures
generated by such practices.
The terms of these policies are still largely untested by the courts, and their terms, conditions and
exclusions are still in flux. However, the cases have begun.
For example, an insurer sued its insured, a payment processing center, in federal district court in
Utah, contending that it had no duty to defend its insured in a lawsuit filed against the
insured by one of its customers. The customer, a fitness center, alleged that the payment processor
intentionally refused to return certain credit card and bank account information to the center,
allegedly seeking additional compensation to do so. Because the customer’s complaint asserted
only intentional acts by the insured, the federal district court agreed with the insurer that it had no
duty to defend. The court said that the insured’s cyber policy required an “error, omission or
negligent act”, none of which was alleged against the insured in the underlying lawsuit. The
coverage in issue was a technology errors and omissions form that was part of the cyber liability
policy.757
Also, in a recently filed lawsuit, an insurer is contending that coverage for a data breach claim is
barred by an exclusion for an insured’s “failure to follow minimum required practices” for
cybersecurity. The insurer alleges that the insured’s application for the policy falsely identified
various measures that the insured did not in fact put into effect. The underlying data breach was the
subject of class action litigation against the insured and involved the storage of medical records for
30,000 people on an internet-accessible system without encryption or other protection. Prior to
757 See May 11, 2015 decision in Travelers Prop. Cas. Co. v. Federal Recovery Srvcs., Inc., No. 2:14-CV-170 TS , United
States District Court, District of Utah.
filing its coverage action, the insurer had, under a reservation of rights, paid for the insured’s costs
to defend the litigation and for a settlement. 758
b. Property Policies – First-Party
First-party property policies, which usually cover physical damage to real and personal property and
may (depending on their terms) also provide coverage for resulting business interruption, may be
scrutinized by insureds looking for potential insurance coverage, particularly those who sustain not
only a data breach, but also business interruption losses, or costs for replacement of a computer
system or data storage unit as a result of a breach.
However, such claims generally fail in the absence of some indication of physical damage to the
computer system involved, or an express provision for coverage of replacement costs for loss of
electronic data (which at times is offered, although usually on a sublimited basis). Such policies
generally cover “direct physical loss or damage” to insured property caused by a covered cause of
loss. “Physical” is generally construed to mean “tangible.”759 Case law generally maintains that
electronic data is not tangible property.760
Further, policy exclusions often specifically exclude or limit coverage of electronic data and other
“valuable papers and records.” Business interruption coverage is generally required to result from
damage to or destruction of property caused by a loss otherwise covered under the policy, and thus
if there is no physical loss or damage to tangible property in a data breach, the resultant business
interruption losses are also generally not covered under a traditional property policy.
Non-coverage of a claim under a policy, though, cannot always be assumed. If a computer becomes
unusable due to the installation of malware, a policyholder may be able to seek recovery under a
coverage for loss of use of tangible property that is not physically injured.761 There can also be
claims involving destruction or corruption of electronic data on the system of the insured due to
viruses which may be covered under the limited electronic data additional coverage provided by
758 See Columbia Cas. Co. v. Cottage Health System, No.: 2:15-cv-03432 (C.D. Cal., filed May 7, 2015). While this paper
discusses developments only as of June 2015, before this paper was closed in July 2015, the court in this matter granted the
policyholder defendant’s motion to dismiss, on procedural grounds and without prejudice, noting that the policy required disputes
under or in connection with the policy to be submitted to mediation before an action is commenced. If mediation is unsuccessful, the case
may be recommenced. See July 17, 2015 decision.
759 See, e.g., Florists’ Mut. Ins. Co. v. Ludy Greenhouse Mfg. Corp., 521 F.Supp. 2d 661, 680 (S.D. Ohio 2007); Philadelphia
Parking Auth. v. Fed. Ins. Co., 385 F. Supp. 2d 280, 288 (S.D.N.Y. 2005).
760 See, e.g., Ward Gen. Servs., Inc. v. Employers Fire Ins. Co., 114 Cal. App. 4th 548, 556-57 (Cal. App. 4 Dist. 2003); Se.
Mental Healthcare Ctr., Inc. v. Pac. Ins. Co., LTD, 439 F. Supp. 2d 831, 838-839 (W.D. Tenn. 2006); Am. Online, Inc. v. St. Paul
Mercury Ins. Co., 347 F.3d 89, 93-98 (4th Cir. 2003); State Auto Prop. & Cas. Ins. Co. v. Midwest Computers & More, 147 F. Supp.
2d 1113 (W.D. Okla. 2001). Courts reaching a different conclusion have done so where the data is permanently lost to its owner, not
merely improperly accessed. See Computer Corner, Inc. v. Fireman’s Fund Ins. Co., 46 P.3d 1264 (N.M. 2002) (holding that loss of
the pre-existing electronic data was tangible property damage covered by CGL policy where computer store repairing customer’s
computer permanently lost all the data); Am. Guar. & Liab. Ins. Co. v. Ingram Micro, Inc., No. 99-185, 2000 WL 726789, 2000 U.S.
Dist. LEXIS 7299 (D. Ariz. Apr. 18, 2000) (holding that computer data permanently lost during a power outage constituted “direct
physical loss or damage from any cause” covered by first-party insurance policy); NMS Servs. Inc. v. Hartford, 62 Fed. Appx. 511
(4th Cir. 2003) (characterizing the erasure of vital computer files and databases as direct physical loss or damage to property for
purposes of business income coverage).
761 See, e.g., Eyeblaster, Inc. v. Fed. Ins. Co., 613 F.3d 797 (8th Cir. 2010).
-170-
some property policy forms.762 Further, there can be endorsements and other manuscript provisions
added to more traditional business property forms that expressly provide some additional limited
coverage for impairment of data systems and papers and other losses implicated in a data breach
claim. Should there be potential coverage of any portion of a loss under a property policy, loss
mitigation provisions may also be targeted by policyholders as a basis for requests for coverage of
loss mitigation costs.
c. Fidelity / Commercial Crime Insurance
In the 1990 film Ghost, one of the characters, who works at a financial institution, sets up a dummy
account to facilitate a money-laundering scheme. In a comparable real-world scenario, in which an insider steals customer
account data in order to siphon money out of customers’ accounts – and in the absence of a Patrick Swayze to change the
password and thwart the crime – the financial institution might be able to bring a claim under its Fidelity and Crime
insurance policy.
Such policies generally protect organizations from the loss of money, securities, or inventory
resulting from employee crime. “Common Fidelity/Crime insurance claims allege employee
dishonesty, embezzlement, forgery, robbery, safe burglary, computer fraud, wire transfer fraud,
counterfeiting, and other criminal acts.”763
Many data breaches involve theft and other criminal conduct by employees, e.g., theft of laptops or
other computer equipment containing Personal Information or other confidential data. Thus,
depending on its terms and exclusions, the company’s fidelity insurance may be triggered.
Moreover, some fidelity or crime insurance policies may expressly provide for computer crime
coverage in the form of a computer fraud endorsement, while others may contain exclusions that
limit or preclude such coverage. Whether such an endorsement would provide coverage to the
insured company for its losses and claim expenses arising from a data breach will depend on the
policy terms, including if there is a loss of electronic data exclusion, and the jurisdiction considering
the issue of coverage.764
762 See, e.g., Lambrecht & Assocs., Inc. v. State Farm Lloyds, 119 S.W.3d 16 (Tex. App. 2003) (holding that a property policy
covered loss of business income due to damage to software and electronic data by a virus, where the section of the policy defining
coverage for loss of income included “electronic media and records,” defined to include electronically stored data); see also Se.
Mental Health Ctr., Inc. v. Pac. Ins. Co., Ltd., 439 F. Supp. 2d 831, 837-39 (W.D. Tenn. 2006) (finding corruption of a commercial
insured’s pharmacy computer after a storm and power outage constituted “direct physical loss of or damage to property” under
business interruption policy).
763 Hossein Bidgoli, Handbook of Information Security, 820 (John Wiley and Sons, 2006).
764 For example, in Retail Ventures, Inc. v. Nat’l. Union Fire Ins. Co. of Pittsburgh, PA, No. 2:06-CV-00443 (S.D. Ohio Mar.
30, 2009), aff’d, Nos. 10-4576, 10-4608, 691 F.3d 821 (6th Cir. Aug. 23, 2012, decided under Ohio law), coverage was found to be
available for a data breach under a “Computer & Funds Transfer Fraud” endorsement of a commercial crime policy. There, a hacker
fraudulently accessed a national retail company’s computer system and stole data for approximately 1.4 million customers, including
credit card and checking account information. As a result of the breach, among other costs, the U.S. Secret Service initiated an
investigation; the company paid the cost of reissuance of credit cards for customers whose account information was fraudulently
used; the Ohio Attorney General brought suit; and four class action lawsuits were brought by customers. The insurer argued, in part,
that (1) the theft of the customers’ data did not result in a “direct loss” to the store under the endorsement language, which only
covered “loss . . . resulting directly from” theft of insured property, and (2) the following exclusion was applicable: “Coverage does
not apply to any loss of proprietary information, Trade Secrets, Confidential Processing Methods, or other confidential information of
any kind.” The district court, however, disagreed with both points. It determined that the “direct loss” language of the endorsement
required only application of the traditional proximate cause standard, and found that there was a “sufficient link between the
computer hacker’s infiltration of [the company’s] computer system and [the company’s] financial loss to require coverage . . . .”
Second, the district court found the exclusion inapplicable, in part, because the information obtained in the hacking theft did not
constitute “proprietary information” or even “other confidential information of any kind” within the meaning of the exclusion. On
-171-
d. CGL – Third-Party Claims
An insured entity subjected to a lawsuit in connection with a data breach it suffers may tender the
defense of that suit under its commercial general liability (“CGL”) policy. While privacy and data
security are developing areas of the law, there are a few judicial decisions indicating the likely
issues on which a coverage dispute will focus when a claim for coverage is made under a CGL
policy. However, in response to attempts to obtain coverage (or at least a defense) for breach-related claims under CGL
policy language developed before the prevalence of data breaches, ISO765 has recently issued new endorsements to amend policies and add provisions
expressly directed at precluding or limiting the application of CGL policies to data breach and other
types of cyber claims. Some insurers have developed manuscript policy forms of their own with
provisions that preclude or in some cases affirmatively provide coverage for data breaches or other
types of cyber risks. Thus, case law that is based on versions of CGL policies that do not have such
amendments is probably not a good indicator of how a court would decide a claim for coverage
under a CGL policy that does incorporate such amendments.
appeal, the Court of Appeals for the Sixth Circuit recently affirmed. See Retail Ventures, Inc., 691 F.3d 821 (6th Cir. Aug. 23, 2012)
(finding that the district court correctly applied the proximate cause standard, and that “stored data consisting of customer credit card
and checking account information would not come within the plain and ordinary meaning of ‘proprietary information’”). However,
the policy in issue apparently did not include an electronic data exclusion or other terms that, if present, might well have led to a
different result.
Results can vary depending on the facts as well as the policy wording. In another case involving a computer systems fraud
rider to a fidelity policy, Universal Am. Corp. v. National Union Fire Ins. Co. of Pittsburgh, PA, Docket No. 1:12-cv-03010-ODE,
filed in Supreme Court of New York, New York County, a decision granting summary judgment to the insurer was upheld in a
situation in which fraudulent claim information was entered by criminals enrolling people into the medical plan without their
knowledge, resulting in payment for health services not rendered, but the entries were made by authorized users. Universal Am. v. National
Union, aff’d, 110 A.D.3d 434 (1st Dept. 2013). (While this paper addresses developments only as of June 1, 2015, we note that prior
to its issuance, the New York Court of Appeals affirmed this decision. 2015 N.Y. Slip Op. 05516 (June 25, 2015).) See also the
Eleventh Circuit’s unpublished decision, issued March 5, 2015 on the Non-Argument Calendar, in Metro Brokers Inc. v.
Transportation Ins. Co., No. 14-12969, concerning coverage under a Fraud and Alteration Endorsement of a claim arising when
thieves stole the access ID and password of an employee of the insured, gained access to the insured’s bank accounts, and withdrew
funds. The court distinguished this from an electronic funds transfer for a covered fund transfer made by check, draft or bill of
exchange. The court also noted that although the thieves used the stolen access ID and password to access the insured’s bank
account, it was not equivalent to the fraudulent signing of another’s name within the meaning of the Policy and its definition of
forgery.
765 ISO is the Insurance Services Office, Inc., which provides policy language, statistical information, and other services
to member property and casualty insurers. Its policy forms are filed for approval with U.S. state insurance departments, for use by
admitted insurers. The ISO exclusions made available in 2014 are: (1) CG 21 06 05 14 (Exclusion – Access Or Disclosure Of
Confidential Or Personal Information And Data-Related Liability – With Bodily Injury Exception) — excludes coverage, under
Coverages A and B, for injury or damage arising out of any access to or disclosure of any person’s or organization’s confidential or
personal information, including patents, trade secrets, processing methods, customer lists, financial information, credit card
information, health information or any other type of nonpublic information. The endorsement also provides that the exclusion will
apply even if damages are claimed for notification costs, credit monitoring expenses, forensic expenses, public relations expenses or any
other loss, cost or expense incurred by the named insured or others with respect to that which is subject to the exclusion. This
endorsement also includes a limited bodily injury exception arising out of the loss of, loss of use of, damage to, corruption of,
inability to access, or inability to manipulate electronic data, (2) CG 21 07 05 14 (Exclusion – Access Or Disclosure Of Confidential
Or Personal Information And Data-Related Liability – Limited Bodily Injury Exception Not Included) — which is very similar to CG
21 06 but does not include the bodily injury exception described above, and (3) CG 21 08 05 14 (Exclusion – Access Or Disclosure
Of Confidential Or Personal Information (Coverage B Only)) — the exclusion with respect to any access to or disclosure of any person’s
or organization’s confidential or personal information is limited to personal and advertising injury. See “ISO Comments on CGL
Endorsements for Data Breach Liability Exclusions,” Insurance Journal, July 18, 2014, available at
http://www.insurancejournal.com/news/east/2014/07/18/332655.htm.
-172-
While CGL coverage issues have recently become a battleground,766 the field is not likely to be a
static one. Insurers are amending policy forms, and policyholders will likely continue to attempt to
find loopholes in CGL policies to trigger at least a duty to defend data breach claims in situations
not contemplated by insurers or intended to be covered by such policies. Any success by
policyholders will likely result in insurers again responding by drafting and including in policies
additional exclusions and limitations on coverage directed at preventing any unintended coverage
from being found.
i. Coverage A – Bodily Injury and Property Damage
Coverage A of a CGL policy typically provides that “we will pay those sums that the insured
becomes legally obligated to pay as damages because of ‘bodily injury’ or ‘property damage’ to
which this insurance applies.” “Property damage” is typically defined as “physical injury to
tangible property, including all resulting loss of use of that property,” and “loss of use of tangible
property that is not physically injured.”767
Generally in data breach cases, the focus of analysis as to whether there is coverage, or at least
sufficient allegations to trigger a duty to defend, under Coverage A is on its “property damage”
prong. Because of the required component of “tangible property,” it is usually considered unlikely
that lawsuits related to a typical breach of electronic data security would be covered under Coverage
A.768 As in the first-party property policy context, case law generally maintains that electronic data
is not tangible property.769 Additionally, ISO’s 2004 form and other CGL forms include in the
definition of “property damage” the provision that “for the purpose of this insurance, electronic data
is not tangible property.”770
766 See cases identified in footnotes in this section, and in the section below about Privacy Litigation.
767 This is standard policy language in recent ISO form policies (see CG 00 01 12 04). While there is variance in language
among different insurers’ CGL policies, the ISO language is in widespread use and there are judicial decisions dealing directly with
ISO wordings.
768 If tangible property is actually stolen, however, such as a CD containing personal information, it is possible that a court
may find the “property damage” requirement satisfied (depending upon the precise definition of “property damage” in the policy at
issue), at least for purposes of a duty to defend, although exclusions may nonetheless operate to preclude coverage. See, e.g.,
Nationwide Ins. Co. v. Cent. Laborers’ Pension Fund, No. 11-cv-618, 2012 WL 734193 (S.D. Ill. Mar. 6, 2012) (employee of an
accounting firm left in her automobile a laptop with a CD containing personal information of approximately 30,000 participants and
beneficiaries of several pension funds for which the accounting firm was performing audit work; following theft of the CD, and claims
by the pension funds against the employee to recover costs incurred as a result of the theft such as credit monitoring, the employee
submitted a claim for coverage under her homeowner’s policy, which provided coverage “[i]f a . . . suit is brought against an
‘insured’ for damages because of . . . ‘property damage’ caused by an ‘occurrence’ to which this coverage applies,” and defined
“property damage” as “physical injury to, destruction of, or loss of use of tangible property”; the district court found, under Illinois
law, and for purposes of a duty to defend, that the property damage requirement was satisfied because the employee suffered a “loss
of use of tangible property,” but nonetheless found coverage excluded because the policy did not cover “property damage to property
rented to, occupied or used by or in the care of the insured”), aff’d, 704 F.3d 522 (7th Cir. 2013) (finding that the exclusion for “in
care of” the insured applied, as well as alternatively an exclusion for “property damage arising out of or in connection with a business
engaged in by an insured”).
769 But see, e.g., Eyeblaster, Inc. v. Fed. Ins. Co., 613 F.3d 797, 801-02 (8th Cir. 2010) (underlying allegations of loss of use of
a computer – e.g., that the computer “froze,” was “taken over and could not operate,” and was otherwise “no longer usable” due to
software installed by the insured – found sufficient to satisfy the “loss of use of tangible property that is not physically injured” prong
of the definition of “property damage”).
770 The ISO definition of “property damage” also defines “electronic data” for purposes of applying the policy: “As used in
this definition, electronic data means information, facts or programs stored as or on, created or used on, transmitted to or from
-173-
In addition, the 2004 ISO form (and many other CGL forms) include an Electronic Data Exclusion,
according to which “this insurance does not apply to… damages arising out of the loss of, loss of
use of, damage to, corruption of, inability to access, or inability to manipulate electronic data.”
Under policies containing such an exclusion, for there to be any coverage there would need to be
damages caused by physical injury to, or the loss of use of, “tangible property,” which must be
something other than electronic data. However, there may be data breaches involving damage other
than to electronic data for which insureds may be able to satisfy the “tangible property” requirement
as well as the “occurrence” requirement, and demonstrate either physical injury to that property or
loss of use of the property containing the data, such as malware attacks that cause damage to
computer hardware.
In 2013, ISO announced that it was issuing an optional endorsement amending the Electronic Data Exclusion,771 and in 2014 it
issued “mandatory” endorsements for its filed CGL forms effective May 2014. It relabeled the Electronic Data exclusion
(traditionally exclusion p. to CGL Coverage A) to entitle it “Access or Disclosure of Confidential or Personal Information and Data-Related
Liability.” In one version, it carved out from the exclusion for “loss of, loss of use of, damage to, corruption of, inability to
access, or inability to manipulate electronic data” those damages that are because of “bodily injury”; another version omits that
exception. It also added a prong to the overall exclusion for access to or disclosure of a person’s or organization’s confidential
or personal information.772
Further, while analyses of whether Coverage A applies have focused on the property damage aspect
of that Coverage Part, Coverage A also applies to “bodily injury.” The recent spate of consumer
third-party claims has often included an emotional distress component. Thus, if a policy or
governing law defines “bodily injury” as including emotional distress even when there is no
physical injury, there potentially could be a claim for coverage for that aspect of the alleged
damages. However, while the “tangible property” barrier would not apply to such a claim, the
insured would still have to demonstrate that the “bodily injury” was caused by an “occurrence,” and
that the Electronic Data Exclusion, in whatever form it is present in the policy, did not apply, and would also have to overcome
any other provisions that may preclude coverage for the claim. The potential for
coverage may be more likely for data breaches and other cyber incidents directly causing
demonstrable bodily injury, such as those involving computer-controlled medical equipment that
impact medical care of individuals, rather than for the typical electronic data breach involving
Personal Information.
computer software, including systems and applications software, hard or floppy disks, CD-ROMS, tapes, drives, cells, data
processing devices or any other media which are used with electronically controlled equipment.”
771 As discussed above, ISO exclusion CG 21 06 05 14 is expressly inapplicable to bodily injury claims. The proposal for this
exclusion discussed the intent as follows: “The exclusion is being revised to make it inapplicable to bodily injury claims, meaning
that only consequential property damage resulting from an electronic data loss is excluded. So, for example, loss of production on a
computerized manufacturing assembly line caused by damage to the software that runs it would be excluded from CGL coverage.
Injury to a patient in a hospital caused by the accidental corruption of electronic medical records would not be excluded.” See
Changes to the CGL Coverage Form, International Risk Management Institute, Inc., Feb. 2013.
772 See ISO forms CG 21 06 05 14 (Exclusion - Access or Disclosure of Confidential or Personal Information and Data-
Related Liability – With Limited Bodily Injury Exception) and CG 21 07 05 14 (Exclusion – Access or Disclosure of Confidential or
Personal Information and Data-Related Liability – Limited Bodily Injury Exception Not Included). The exclusions expressly provide
that they apply “even if damages are claimed for notification costs, credit monitoring expenses, forensic expense, public relations
expenses or any other loss, cost or expense incurred by you or others arising out of that which is described … above.”
-174-
ii. Coverage B – Personal and Advertising Injury
Attempts at seeking coverage, or at least obtaining a defense, under CGL policies have also been made under
Coverage B, Personal and Advertising Injury. Results have varied depending on
jurisdiction and claim.
Personal and Advertising Injury coverage under Coverage B is limited to injuries arising out of
certain enumerated offenses.773 Standard versions of Coverage B provide (prior to recent
amendments to ISO forms): “we will pay those sums that the insured becomes legally obligated to
pay as damages because of ‘personal and advertising injury’ to which this insurance applies,” and
the policy’s definition of personal and advertising injury generally lists the enumerated offenses for
which coverage is provided. Although “personal injury” and “advertising injury” used to be
separately defined as two different sets of enumerated offenses within Coverage B, the industry
began merging the terms into one consolidated set of enumerated offenses in 1998.774 Among those
enumerated offenses is typically “injury … arising out of … oral or written publication, in any
manner, of material that violates a person’s right of privacy.” This is the offense that is often
alleged to apply when a claim for coverage for a data breach is made.
In response to efforts to obtain coverage under Coverage B based on this prong of the definition of
personal and advertising injury, some insurers have recently amended the definition to delete this
prong, in an effort to avoid costly coverage disputes. 775 Moreover, effective May 2014, ISO issued
an endorsement including an exclusion applicable to Coverage B – Personal and Advertising Injury,
which provides that the policy does not apply to “’Personal and Advertising Injury’ arising out of
any access to or disclosure of any person’s or organization’s confidential or personal information,
including patents, trade secrets, processing methods, customer lists, financial information, credit
card information, health information or any other type of non-public information….”776
However, there are still policy forms without those recently introduced amendments, and claims
that could be submitted under those forms. Thus, case law based on versions of CGL policies that
do not include the new amendments can still be relevant to many coverage disputes arising out of
requests for coverage for a breach claim. To successfully tender a data breach claim under a CGL
policy that has the prong of Coverage B that includes “injury … arising out of … oral or written
publication, in any manner, of material that violates a person’s right of privacy” (and does not
include an exclusion for access or disclosure of personal or confidential information), an
insured would have to demonstrate, among other things, at least a potential that the data breach in
773 This is in contrast to Coverage A, which is typically triggered by an accidental occurrence. Accord, e.g., Stonelight Tile,
Inc. v. Ca. Ins. Guarantee Ass’n, 58 Cal. Rptr. 3d 74, 89 (Cal. Ct. App. 2007) (“Personal injury liability is a term of art that covers
certain enumerated offenses. Unlike liability coverage for property damage or bodily injury, personal injury coverage is not based on
an accidental occurrence.”).
774 See CGL Policy Handbook, § 9.01.
775 See discussion above as to ISO form CG 21 08 05 14 that, for Coverage B only, excludes coverage for personal and
advertising injury. Also available and used in some CGL policies are ISO revisions to its 2013 CGL form that include the option of
an endorsement that deletes the prong of “oral or written publication, in any manner, of material that violates a person’s right of
privacy” from the list of covered offenses in Coverage B. See ISO form CG 24 13 04 13, Amendment of Personal and Advertising
Injury Definition, effective April 2013; see also, Chris Boggs, ISO’s CGL Changes for 2013 – Part III, Claims Journal, Apr. 9,
2013, www.claimsjournal.com/new/national/2013/04/09/226615.htm; Changes to the CGL Coverage Form, IRMI, Feb. 2013,
supra; Ted A. Kinney, 2013 Change in the Commercial General Liability Program.
776 See ISO forms CG 21 06 05 14 and CG 21 07 05 14, discussed above with regard to Coverage A.
-175-
issue constituted a “publication” that violated the data owner’s “right of privacy.” The standard
ISO insurance form including this prong does not define the terms “publication” or “right of
privacy.” Courts ruling on the applicability of Coverage B to privacy claims have found some types
of personal data, but not others, to be within the data owner’s “right of privacy,” and the result can
vary depending on the information and the jurisdiction’s law that applies, as well as the specific
policy’s provisions and exclusions. Thus, some courts have found privacy rights implicated for
purposes of Coverage B where the issue was an insured’s improper access and use of certain types
of information that are statutorily protected, such as access and use of credit reports in violation of
the Fair Credit Reporting Act (FCRA expressly states that it is intended to protect consumers’ right
to privacy).777 Similarly, the personal data at issue in data breach scenarios is sometimes also
protected by statutes designed to keep that data private. However, to the extent that the right to
privacy is based on a statute, there are often other exclusions that serve to preclude coverage.778
Moreover, to the extent that a claim is based on a common law or constitutional right to privacy,
under some states’ law, only information that is of an embarrassing nature and published under
egregious circumstances is considered to be in violation of a right to privacy.779
Even apart from the content of the information involved, the application of a “publication”
requirement under Coverage B presents a significant hurdle in data breach cases, particularly those
involving theft of information by a third party from the breached insured. Decisions in some
jurisdictions have found a sufficient issue of publication, at least for purposes of triggering a duty to defend, in fact
situations involving violations of privacy rights that, among other things, have involved the insured’s alleged distribution
of the Personal Information in issue; however,
777 See Pietras v. Sentry Ins. Co., No. 06 C 3576, 2007 WL 715759, 2007 U.S. Dist. LEXIS 67013 (N.D. Ill. Mar. 6, 2007)
(holding under Illinois law that the insurer had a duty to provide a defense); American Family Mutual Ins. Co. v. C.M.A. Mortgage,
Inc., 2008 WL 906230 (S.D. Ind. Mar. 31, 2008) (holding under Indiana law that a claim involving improper use of credit reports in
violation of FCRA states a potentially covered claim and thus triggers the insurer’s duty to defend) (order rescinded in part due to
docketing error, 2008 WL 5069825); Zurich Am. Ins. Co. v. Fieldstone Mortgage Co., No. 06-cv-2055, 2007 WL 3268460 (D. Md.
Oct. 26, 2007) (holding under Maryland law that a FCRA claim based upon improper access and use of others’ credit information
triggered a duty to defend).
778 As mentioned below, to the extent statutes create a “right of privacy” in the type of Personal Information in issue, CGL
policies typically also include an exclusion applicable to Coverage B for Violation of Information Law that may preclude coverage
for claims based on violations of such statutes. See, e.g., OneBeacon America Ins. Co. v. Urban Outfitters, et al., 21 F. Supp. 3d 426
(E.D. Pa., May 15, 2014) (holding that a statutory violation exclusion relieved the insurer of a duty to defend two underlying matters in which
zip codes were collected for marketing purposes); National Union Fire Ins. Co. of Pittsburgh, Pa. v. Coinstar, Inc., 2014 WL 868584
(W.D. Wash., Feb. 28, 2014) (holding applicable an exclusion that barred coverage for injury arising out of a violation of a statutorily-created
right to privacy). Such an exclusion, however, may by express exception or court interpretation not apply if the privacy
right exists independent of the statute. See, e.g., Hartford Casualty v. Corcino & Associates, No. CV-13-3728, 2013 WL 5687527 at
*5 (C.D. Cal. Oct. 7, 2013) (because the subject statutes “do not create new privacy rights” and because the Policy exclusion by its terms “does
not apply to liability for damages that the insured would have in absence of such state or federal act,” the relief sought under these
statutes “can reasonably be interpreted to fall outside of Hartford’s Policy exclusion”).
779 See, e.g., Allstate Ins. Co. v. Ginsberg, 863 So.2d 156 (Fla. 2003) (finding absence of personal injury coverage because
underlying claims did not allege common law violation of privacy); Lextron, Inc. v. Travelers Cas. and Sur. Co. of Am., 267 F. Supp.
2d 1041, 1047 (D. Colo. 2003) (looking to the Restatement (Second) of Torts for guidance); A & B Ingredients, Inc. v. Hartford Fire
Ins. Co., No. 08-6264, 2010 WL 5094419 (D.N.J. Dec. 8, 2010) (finding absence of personal and advertising injury coverage on the
basis of a broad statutory exclusion and a finding that the jurisdiction in which the underlying claims arose apparently did not
recognize common law privacy violations in that context); Ananda Church of Self Realization v. Everest Nat. Ins. Co., No. C038570,
2003 WL 205144, 2003 Ca. App. Unpub. LEXIS 1095 (Cal. Ct. App. Jan. 31, 2003) (unpublished) (finding absence of Coverage B
coverage, in part, on the basis that the information at issue, while confidential, did not involve facts that “the average person would
find offensive or objectionable”); Ruiz v. Gap, Inc., 540 F. Supp. 2d 1121 (N.D. Cal. 2008), aff’d, 380 Fed. Appx. 689 (9th Cir. 2010)
(holding that the employer’s possible negligence (i.e., in allowing the computers containing unencrypted personal information of job
applicants to be stolen) did not rise to the level of egregiousness required). See also State Farm Fire and Cas. Co. v. Nat’l Research
Center for Coll. and Univ. Admissions, 445 F.3d 1100, 1103 (8th Cir. 2006) (deciding under Missouri law and defining “privacy” as
“isolation, seclusion, or freedom from unauthorized oversight or observation.”)
-176-
others have held there to be no coverage as a matter of law in instances where there is no
publication by an insured. Thus, in one well-publicized coverage case arising out of the 2011 Sony
PlayStation breach, a court found that the “publication” must be by an insured and publication by a
third party hacker does not fall within the scope of “publication” under Coverage B. 780
In Fair Credit Reporting Act cases several courts took the view that “publication” can occur when
information is revealed by the insured to others, including the owner of the information.781 One
court, relying on a dictionary, found “publication” to mean “to produce or release for
780 One of the most publicized cases on the issue of whether there is coverage under a CGL policy for a data breach is the
coverage litigation arising out of the 2011 Sony PlayStation data breach. Hackers stole the PII of PlayStation users, and the users in
turn filed approximately 60 lawsuits against Sony, including consumer class actions. Sony sought coverage under Coverage B of
its tower of CGL policies, and a declaratory judgment action resulted to determine the CGL insurers’ coverage
obligations. On February 21, 2014, Judge Oing of the New York State Supreme Court, New York County, ruled that the CGL
insurers did not have a duty to defend Sony Corporation in lawsuits relating to the data breach. The court found that coverage under
the prong of “personal and advertising injury” coverage for publication in violation of a right to privacy requires that the insured
“commits or perpetrates the act of publicizing the information,” and in the data breach in issue it was the hackers, not Sony, who published the
Personal Information in issue. Zurich Am. Ins. Co. v. Sony Corp. of Am., et al., Index No. 651982/2011 (Supreme Court of the State
of New York, County of New York, Feb. 21, 2014). Sony filed an appeal of Judge Oing’s decision on April 9, 2014, and oral argument
was held in February 2015. In April 2015, before any decision was issued by the appellate court, the parties entered into a confidential
settlement and all claims were dismissed. See Young Ha, “Sony, Zurich Reach Settlement
in PlayStation Data Breach Case in New York,” Insurance Journal, May 1, 2015, available at
http://www.insurancejournal.com/news/east/2015/05/01/366600.htm.
Another recent decision supporting this view is the Connecticut Supreme Court affirmation of Recall Total Info. Mgmt. Inc.
v. Fed. Ins. Co., No. X07CV095031734S, 2012 WL 469988, at *6-7 (Conn. Super. Ct. Jan. 17, 2012), aff’d 147 Conn. App. 450, 83
A.3d 664 (Conn. App. Ct. 2014), aff’d No. SC 19291 (Conn. Sup. Ct. May 26, 2015) (130 computer data tapes, containing personal
information for more than 500,000 employees of the insured, fell from the back of a transport truck and were then removed by an
unknown person and never recovered; the court found “publication” for purposes of Coverage B did not occur because there was “no
evidence of communication to a third party,” finding “the loss and the subsequent theft of the tapes . . . is not the offense, publication
. . . that the policy contemplates to trigger personal injury coverage.”) (emphasis added). Other cases supporting the “no publication”
position include: OneBeacon America Ins. Co. v. Urban Outfitters, et al., 21 F. Supp. 3d 426 (E.D. Pa., May 15, 2014) (no
publication in connection with one underlying matter in which zip codes are collected as part of credit card transaction as between
retailer and customer and not broadly disseminated thereafter); Butts v. Royal Vendors, Inc., 202 W.Va. 448, 504 S.E.2d 911 (W. Va.
1998) (per curiam) (employee filed civil action against his employer for wrongful inducement after the employee’s physician made
certain statements in alleged breach of the patient’s privacy; employer then sought coverage under its CGL policy that provided
coverage for “oral or written publication of material that violates a person’s right of privacy”; court found that no coverage existed
under this section of the policy because there was no allegation that the insured affirmatively disseminated any statements in
violation of the employee’s privacy; rather, the complaint alleged that the employer “induced” a third party – i.e., the employee’s
treating physician – to do so; the court specifically stated that the Coverage B publication offense was “not written to cover
publication by a third party”); see also Harrow Prods., Inc. v. Liberty Mut. Ins. Co., 64 F.3d 1015, 1025 (6th Cir. 1995) (stating that
“each enumerated tort in the personal injury clause requires an intentional act” under a policy that included coverage for “publication
. . . in violation of an individual’s right of privacy”); Gregory v. Tennessee Gas Pipeline Co., 948 F.2d 203, 209 (5th Cir. 1991)
(stating that “[e]ach of the enumerated risks specifically assumed requires active, intentional conduct by the insured” in relation to a
policy that included coverage for “oral or written publication of material that violates a person’s right of privacy”); Buell Indus., Inc.
v. Greater New York Mut. Ins. Co., 259 Conn. 527, 562, 791 A.2d 489, 510-11 (Conn. 2002) (stating that a policy’s “personal injury
provisions were intended to reach only intentional acts by the insured” in relation to a policy that included coverage for “a
publication . . . in violation of an individual’s right of privacy”); Cnty. of Columbia v. Cont’l Ins. Co., 83 N.Y.2d 618, 634 N.E.2d
946 (N.Y. 1994) (stating that “the coverage under the personal injury endorsement provision in question was intended to reach only
purposeful acts undertaken by the insured or its agents” under a personal injury endorsement that provided coverage for “publication”
that constituted an invasion of an individual’s right of privacy).
But see Travelers Indem. Co. of Am. v. Portal Healthcare Solutions, LLC, 35 F. Supp. 3d 765, 770-71 (E.D. Va.
2014) (holding Travelers had a duty to defend Portal because “publication” means that information is “placed before the
public,” and the confidential medical records maintained by Portal were published when they simply were available to be viewed
online by anyone). Although this is not a Coverage B publication case, the core issue of “publication” is similar.
781 See Zurich v. Fieldstone, supra, 2007 WL 3268460 at *5; see also, e.g., Pietras v. Sentry Ins. Co., No. 06 C 3576, 2007 WL
715759 (N.D. Ill. Mar. 6, 2007) (holding that violation of a law prohibiting unsolicited pre-approved loan advertising mailings
violated the claimant’s right to have one’s private information maintained as private and that “publication” under Illinois law included
communication to as few as a single person).
-177-
distribution.”782 In contrast, courts in other jurisdictions analyzing the application of Coverage B to
a violation of FACTA reached a different conclusion with regard to “publication” on the grounds
that it is not publication where credit card information is improperly printed in full, but is provided
only to the cardholder and thus not “in any way made generally known, announced publicly,
disseminated to the public, or released for distribution.”783 However, in a case construing
“publication” in the context of an employer subjecting his employee to audio surveillance without
informing the employee in violation of the Wiretapping and Electronic Surveillance Act, that
surveillance was found to constitute “publication.”784
Overall, the limited case law and legal authorities on the issue indicate that “publication” within the
context of Coverage B requires that the insured have affirmatively disseminated the information in
issue to others, rather than have that information stolen from it, for there to be any potential for the
“publication” prong of Coverage B to apply. Thus, while the term “publication” has been found
satisfied in the Coverage B context in instances involving affirmative acts by the insured, so far
there is a dearth of authority indicating that the term “publication” may be satisfied on the basis of
passive, non-affirmative conduct by the insured in the data breach context. As a result, an entity
seeking coverage under Coverage B for a typical data breach involving third party theft of
information is likely to have an uphill battle triggering coverage obligations under Coverage B, as a
data breach does not generally involve any affirmative acts of dissemination on the part of the
insured, although that is an issue being litigated.785
Thus, in the event of a request for coverage under Coverage B of a third-party claim based upon
improper access to Personal Information due to a data breach, the focus is likely to be whether there
782 Id. See also LensCrafters, Inc. v. Liberty Mut. Fire Ins. Co., No. C-04-1001, 2005 WL 146896 (N.D. Cal. Jan. 20, 2005)
(involving alleged disclosure of private medical information); Moore v. Hudson Ins. Co., No. B189810, 2007 WL 172119, at *6 (Cal.
Ct. App. Jan. 24, 2007) (unpublished) (discussing scope of dissemination required).
783 Whole Enchilada, Inc. v. Travelers, 581 F. Supp. 2d 677, 698 (W.D. Pa. 2008); see also Ticknor v. Rouse’s Enters.,
LLC, 2014 WL 668930 (E.D. La. Feb. 20, 2014) (finding grocery store operator’s alleged failure to truncate expiration dates when
issuing receipts for credit card transactions in violation of FACTA did not amount to a “publication” within the meaning of the
store’s CGL’s personal and advertising injury coverage provision because the store’s actions did not involve mass distribution of
material to the general public or an intrusion into an individual’s right to be left alone as receipts were provided only to customers
who initiated credit card transactions); Creative Hospitality Ventures, Inc. v. U.S. Liab. Ins. Co., No. 08-cv-22302 (S.D. Fla. Mar. 23,
2011) (restaurant printed more than five digits of customers’ credit card numbers on printed receipts, along with expiration dates, in
alleged violation of FACTA; court found no “publication” for purposes of Coverage B had occurred because the underlying
complaint lacked allegations of any “dissemination of information to the public,” or even any “allegation that any FACTA-violation
receipt was provided to anyone other than the cardholder”), aff’d, 444 Fed. Appx. 370, 376 (11th Cir. 2011) (“In sum, providing a
customer a contemporaneous record of a retail transaction involves no dissemination of information to the general public and does
not constitute publication within the meaning of Essex’s Policy”).
In a case involving a different statute, a federal district court in Washington similarly found no “publication” when the
Video Privacy Protection Act, the statute at issue, required the collection of certain personal information. National Union Fire Ins.
Co. of Pittsburgh, Pa. v. Coinstar, Inc., 2014 WL 3891275 (W.D. Wash., Aug. 7, 2014).
784 Bowyer v. Hi-Lad Inc., 609 S.E.2d 895, 912 (W.Va. 2004) (insured argued that the term “publication” was ambiguous and
should be construed against the insurer to cover an employee’s underlying claim that the insured “used the surveillance system to
capture his oral communications, and then publish that audio material through speakers to the officers and employees” of the
insured’s business; the court held that there was “nothing in the policy indicating that the word publication necessarily means
transmitting the intercepted communications to a third party, as is required of material in the defamation context. And, even were we
to assume publication does require communicating to a third-party, the surveillance monitoring system apparently functioned in such
a way that anyone in the manager’s office or in [the hotel owner’s] home had the ability to listen in on employee conversations”).
785 Currently pending are several lawsuits concerning requests by policyholders for coverage, or at least a defense, under
Coverage B for claims arising from breach-related events. See, e.g., Travelers Indem. Co. of Conn. v. P.F. Chang’s China Bistro,
Inc., No. 2:14-cv-01458 (filed Oct. 2, 2014 in Connecticut federal district court). See also case discussions in the section below
about Privacy Litigation.
-178-
was a violation of the data owner’s “right of privacy,” whether there was “publication” by the
insured, whether covered “damages” are sought, and which jurisdiction’s law applies.
Variations in Coverage B policy wording can also affect whether a court is likely to find coverage
for a data breach under Coverage B. In a case involving claims brought under the Electronic
Communications Privacy Act and Computer Fraud and Abuse Act in connection with the collection
of information regarding the underlying plaintiffs’ online activity for eventual dissemination to
third-party advertisers, one court construed a policy that had Coverage B wording different from the
wording found in the ISO form. That policy defined “personal injury offense” to include “Making
known to any person or organization written or spoken material that violates a person’s right to
privacy.” This took the place of the phrase “oral or written publication, in any manner” found in the
ISO form.786 Under that non-ISO definition, the court found the defendant’s passage of information
to its parent company and the defendant’s employees’ sharing of the information among themselves
to constitute “making known to any person or organization.” (The holding was reversed on appeal
but not on this point.)787
Insureds seeking coverage under a CGL policy for claims arising from a data breach face further hurdles even if they
overcome the significant thresholds to coverage contained in the Coverage B insuring provisions: there are typically a
number of policy exclusions applicable to Coverage B that can operate to exclude coverage. For example, even apart from the new exclusions
introduced in 2013 and 2014 discussed above, the standard ISO form contains an exclusion for
“personal injury and advertising injury” arising out of violation of any “statute, ordinance or
regulation . . . that addresses, prohibits or limits the . . . sending, transmitting, communicating or
distribution of material or information.”788 Further, even if a Coverage B statutory violation
786 Some courts have distinguished between the terms “publication” and “making known” for purposes of Coverage B
coverage. Compare Motorists Mut. Ins. Co. v. Dandy-Jim, Inc., 182 Ohio App. 3d 311, 319, 912 N.E.2d 659, 655 (Ohio App. Ct.
2009) (distinguishing “publication” from “making known” for Coverage B purposes), and Zurich Am. Ins. Co. v. Fieldstone Mortg.
Co., No. CCB-06-2055, 2007 WL 3268460, at *5 (D. Md. Oct. 26, 2007) (same), with State Farm Gen. Ins. Co. v. JT’s Frames, Inc.,
181 Cal. App. 4th 429, 104 Cal. Rptr. 3d 573 (Cal. Ct. App. 2010) (equating the term “publication” to “making known to any person
or organization” for Coverage B purposes).
787 Netscape Commc’ns. Corp. v. Fed. Ins. Co., No. 06-C-00198, 2007 WL 2972924 (N.D. Cal.), rev’d, Netscape Commc’ns
Corp. v. Fed. Ins. Co., 343 Fed. Appx. 271 (9th Cir. 2009). The Ninth Circuit found the policy’s language regarding “any person or
organization” to be dispositive. However, the Ninth Circuit disagreed with the lower court regarding the applicability of an exclusion
to Coverage B. The policy excluded coverage for personal injury offenses relating to defined “online activities,” including the
provision of Internet access. While the lower court found that the exclusion barred coverage because the claims involved the use of
software to assist with downloading files, the Ninth Circuit, reading the exclusion narrowly, reasoned that the software itself does not
provide Internet access, and thus the exclusion did not apply.
788 This exclusion was slightly modified and expanded in ISO’s latest 2013 filing, and now, among other things, lists not only
the TCPA and the CAN-SPAM Act of 2003, but also the Fair Credit Reporting Act. A variation of this exclusion was construed in Creative
Hospitality Ventures, Inc. v. U.S. Liab. Ins. Co., 655 F. Supp. 2d 1316 (S.D. Fla. 2009) (Rosenbaum, U.S.M.J.), adopted in part,
ruling reserved in part, 655 F. Supp. 2d 1316 (S.D. Fla. 2009). There, certain underlying claims alleging FACTA credit card
violations against a restaurant were excluded from personal and advertising injury coverage under the policy’s “Distribution Of
Material In Violation of Statutes” exclusion (that exclusion excluded coverage for personal and advertising injury “arising directly or
indirectly out of any action or omission that violates or is alleged to violate . . .[a]ny statute, ordinance or regulation . . . that prohibits
or limits the sending, transmitting, communicating or distribution of material or information”). It was held that because FACTA is a
“statute that limits the information that . . . an electronically printed receipt . . . may include . . . FACTA qualifies as a statute that
‘prohibits and limits the ... communicating or distribution of material or information,’ within the ordinary meaning of the terms of
this exclusion.” It should be noted that the Court of Appeals for the Eleventh Circuit issued a related decision (as to another
restaurant), and held that a restaurant’s issuance of a credit card receipt to a customer does not constitute “publication” within the
meaning of the clause “publication . . . of material that violates a person’s right of privacy.” See Creative Hospitality Ventures, Inc.
v. U.S. Liab. Ins. Co., No. 11-11781, 2011 WL 4509919 (11th Cir. Sept. 30, 2011). The court reasoned that such a transaction
involves “no dissemination of information to the general public.” Id. at *5. As a result, the Eleventh Circuit did not need to reach
whether any exclusion was applicable because coverage was not triggered due to the absence of any “publication” by the insured.
-179-
exclusion does not include in its provisions that “alleged” violations are also precluded from
coverage, at least two district courts in the TCPA context have found that allegations alone in the
underlying complaint of such violations may be sufficient for coverage to be excluded (as opposed
to requiring an adjudication or admission of such violation for the exclusion to trigger).789 Other
Coverage B exclusions that can potentially come into play upon a data breach include ones for
“personal and advertising injury” arising out of the criminal act of the insured (which could be relevant when employee theft is in
issue); arising out of intellectual property rights; committed by
insureds in media and Internet type businesses; arising out of an electronic chat room or bulletin
board the insured hosts, owns or controls; arising out of breach of contract; and other exclusions
that may be more general in nature but apply to the specific claim in issue, or that may be
specifically manuscripted for the insured in issue.
iii. The “Damages” Hurdle
Yet another hurdle for attempts to obtain coverage of a third-party data breach claim under a CGL
Policy is the requirement under both Coverage A and Coverage B that the claim be for “sums that
the insured is legally obligated to pay as damages.” As discussed above, often consumers have not
sustained out-of-pocket losses, and issues include whether, where only statutory fines or penalties are involved, there are any
covered damages to which the insurance applies even if there is found to be a covered occurrence or offense. As “damages” is
not generally a defined term in CGL policies, the
issue of what constitutes covered damages can be a contested issue that can differ based on the law
of the applicable jurisdiction.
The issue of “damages” is often raised in types of privacy-related cases other than those arising
from traditional data breaches, particularly those involving statutory violations and monetary
assessments, such as the “ZIP code litigations” (putative class action lawsuits where customers
allege that retailers improperly and unnecessarily recorded customers’ ZIP codes during credit card
transactions in violation of applicable state statutes)790, or in cases alleging violations of statutes
such as the TCPA (Telephone Consumer Protection Act)791 for which coverage is sought under
789 See Collective Brands, Inc. v. Nat’l. Union Fire Ins. Co. of Pittsburgh, P.A., No. 11-4097-JTM, 2013 WL 66071 (D. Kan.
Jan. 4, 2013) (finding that nothing in the exclusion required a formal adjudication and that it was sufficient if the liability arose from
excluded statutory violations for the exclusion to apply); see also Interline Brands, Inc. v. Chartis Specialty Ins. Co., No. 3:11-cv-
731-J-25JRK (M.D. Fla. Nov. 21, 2012) (“The Court cannot find legal precedence to rewrite the insurance contract to necessitate
there being an ‘adjudged violation’ for the exclusion to apply”). Interline Brands, Inc. is currently on appeal in the Court of Appeals
for the Eleventh Circuit, and oral argument was held during the first week of March, 2014.
790 See Section II.1.a. above, on the Expanding Definitions of Personal Information, and Section VII.4.a below, discussing the
ZIP code litigation in California and Massachusetts under those states’ respective laws.
For example, Michaels Stores, Inc. was faced with such ZIP Code claims in California. Various customers filed six
putative class actions against the retailer, and ultimately the only remaining claim was a violation of the California Song-Beverly Act.
Michaels sought coverage under its CGL policy, and coverage litigation ensued. See Arch Ins. Co. v. Michaels Stores, Inc., 37-2011-
00097053-CU-IC-CTL (Cal. Super. Ct., San Diego County). The parties each cross-moved for summary judgment regarding the
CGL insurer’s duty to defend and indemnify. The court ultimately granted summary judgment in favor of the insurer, holding that
the underlying lawsuits did not seek “damages” within the meaning of the CGL policy. As the policy did not define “damages,” the
court applied the “common” definition; “damages” are “compensation recovered by a party for a loss or detriment it has suffered
through the acts of another.” The court disagreed that the Song-Beverly Act’s statutory penalties were compensatory in nature, and
found such damages were penalties designed to “deter misconduct and harm”. Since such penalties did not fall under the definition
of “damages” used by the court, it held that there was no coverage under the CGL policy. See Arch Ins. Co. v. Michaels Stores, Inc.,
37-2011-00097053-CU-IC-CTL (Cal. Super. Ct., San Diego County, Dec. 20, 2013).
791 See discussion of TCPA above in Section III, discussing the U.S. Regulatory and Statutory Landscape, and Section VII on
Privacy Practices Lawsuits.
-180-
policies that do not have a TCPA exclusion. Requests for coverage of statutory assessments are
often met with resistance by insurers who consider them to be tantamount to civil penalties, and the battleground generally
turns on whether the sums required to be paid for violations of the particular statute in issue are punitive or penalties, and
thus uninsurable under the applicable jurisdiction’s law, or remedial and compensatory, and thus insurable “damages.”
A related issue is whether the insured incurred payments voluntarily, as opposed to being legally
obligated to do so as may be required under a general liability policy’s insuring agreement. In
addition, some policies expressly require an insured to obtain the insurer’s consent before incurring
costs and expenses and failure to obtain the insurer’s consent can vitiate coverage for costs that may
otherwise have been subject to payment by the insurer under the policy.792
e. Professional Liability/E&O
Most professionals and entities engaged in providing services to others have errors and omissions
(E&O) liability policies in place that they look to for a defense and indemnity when a claim is
asserted against them by their clients. When a data breach at least arguably occurs within the scope
of covered services, particularly when it involves its client’s data, such an insured may look to its
professional liability/E&O insurer to at least provide a defense to any third-party claims arising
from the data breach. Thus, for example, a law firm, engineering firm or technology services firm
that improperly disposes of or loses client files or is otherwise subject to a data breach – or a firm
that helped plan, design or implement a client’s software program that is implicated in a breach –
and is thus subject to client claims, may try to seek coverage under its professional liability/E&O
policies.
Professional liability and other E&O policies, however, may contain electronic data or software
design exclusions, although some may have exceptions for such services that are incidental to the
Some courts have held that TCPA damages of $500 per violation constitute penal or punitive damages. See, e.g., U.S. Fax
Law Center, Inc. v. iHire, Inc., 362 F. Supp. 2d 1248, 1253 (D. Colo. 2005), aff’d, 476 F. 3d 1112 (10th Cir. 2007); Kruse v.
McKenna, 178 P. 3d 1198, 1201 (Colo. 2008) (en banc); Kaplan v. Democrat & Chronicle, 266 A.D. 2d 848, 698 N.Y.S. 2d 799, 800
(App. Div. 1999) (mem). Other courts have disagreed, some reasoning that the cost of an unsolicited fax (loss of paper and ink,
annoyance and inconvenience) is still a cost and thus a compensable harm, represented by the liquidated sum of $500 per violation,
and is an incentive for private enforcement rather than punitive. Standard Mut. Ins. Co. v. Lay, 989 N.E. 2d 591 (Ill. 2013).
TCPA claims raise other coverage issues as well, many of which are different than those arising from data breach claims.
One issue is whether the conduct at issue, such as sending faxes, is intentional, with knowledge that it would result in the use of the recipient’s
paper, toner and time, and thus is intentional and not an “occurrence” under Coverage A. See, e.g., Nationwide Mut. Ins. Co. v.
David Randall Assocs., Inc., 551 Fed. Appx. 638, 640 (3rd Cir. 2014); Melrose Hotel Co. v. St. Paul Fire & Marine Ins. Co., 432 F.
Supp. 2d 488, 507-09 (2006), aff’d sub nom. Subclass 2 of the Master Class of Plaintiffs Defined & Certified in the January 30, 2006
and July 28, 2006 Orders of the Circuit Court of Cook County, Illinois in the Litigation Captioned Travel 100 Group Inc. v. Melrose
Hotel Co., 503 F.3d 339, 340 (3rd Cir. 2007); see also G.M. Sign, Inc., 2014 IL App (2d) 121276 (Mar. 24, 2014). Courts have also
considered whether TCPA claims fall under Coverage B provisions for publications in violation of a right to privacy. See, e.g.,
Owners Ins. Co. v. European Auto Works, Inc., 695 F. 3d 814 (8th Cir. 2012), and case law cited therein.
792 See First Commonwealth Bank, et al. v. St. Paul Mercury Ins. Co., No. 14-19, 2014 WL 4978383 (W.D. Pa., Oct. 6, 2014).
In this case filed by a bank against one of its insurers, the insurer moved to dismiss the bank’s claim arising from the malware-facilitated
theft of $3.5 million from a customer account. The insurer contended that the bank had voluntarily made a payment to
make the customer whole and failed to get the insurer’s approval. The federal district court for the Western District of Pennsylvania
denied the insurer’s motion, holding that the payment was required by state statute and therefore was not voluntary. Although the issue of the
insurer’s consent was raised in the insurer’s motion, it was not explicitly discussed in the opinion. The court’s decision that the
insured’s payment was not in fact voluntary seems to indicate that the court believed no insurer consent was needed for a mandatory
payment by the insured.
“professional services” covered and thus trigger a duty to defend some data breach claims asserted
against an insured that arguably fall within the exception. On the other hand, in recent years, many
professional liability policies include (or offer as an optional purchase) add-on coverage by
way of endorsements or additional coverage parts directed at providing data breach or other
cyber risk coverage, including the first-party costs sustained by the insured in responding to a
breach.
Some E&O policies are expressly designed to provide coverage for cyber risk claims. For example,
many E&O policies issued to technology companies recognize that such insureds are engaged in
activities likely to make them more prone than companies in other industries to involvement in
electronic data breaches, either as direct targets or as vendors to others. Thus, policies available to
such technology companies may also expressly include coverages encompassing data breach
claims. Cyber risks are also increasing professional liability and other errors and omissions
exposures in other ways, particularly for insurance brokers and for entities involved in providing
network security or other network services: there will likely be an increasing number of claims that
professionals failed to adequately advise their clients about cyber risks. As cyber risks become
increasingly recognized as a significant risk to businesses that can result in substantial costs and
claims, entities sustaining a costly cyber attack or other privacy-related claim will be looking for
others to share those costs with them. If insurance for those types of costs and losses was available
in the market, but was not discussed with an entity as a potential part of its insurance program, that
may make the entity’s broker a target. When a vendor is involved, that entity and its indemnity
agreements and insurance will also be scrutinized as a source of recovery. Thus, regardless of the
applicability of policy limitations and exclusions of coverage, companies in the insurance industry
will face the increased cost of addressing a growing frequency of claims.
Often the coverage issues include whether the claim is within the scope of covered services,
whether the insured’s error that caused the alleged damage or financial injury in question falls under
the policy’s definition of “wrongful act,” whether there are alleged to be “damages” covered by the
policy, whether contractual liability exclusions apply to indemnity claims, and whether there is an
exclusion directed at data breach or other electronic claims.793
f. D&O
As large publicized data breaches and other cyber incidents involving publicly traded companies
often result in drops in companies’ stock prices or other large financial losses, companies and their
directors and officers who are faced with such a data breach or other type of cyber attack or incident
793 For example, the “wrongful act” coverage requirement has been found (under some states’ law) to include “intentional,
non-negligent acts but to exclude intentionally wrongful conduct.” See Eyeblaster, Inc. v. Fed. Ins. Co., 613 F.3d 797, 804 (8th Cir.
2010) (under Minnesota law). In Eyeblaster, Inc., a computer user sued Eyeblaster, Inc., alleging that Eyeblaster injured his
computer, software, and data after he visited an Eyeblaster website. The E&O policy at issue obligated Eyeblaster’s insurer “to pay
loss for financial injury caused by a wrongful act that results in the failure of Eyeblaster’s product to perform its intended function or
to serve its intended purposes.” The insurer conceded that the underlying claim sufficiently alleged “financial injury.” Nonetheless,
the insurer argued (and the district court agreed) that coverage was non-existent because Eyeblaster had acted intentionally, and thus
no “wrongful act” within the meaning of the policy had occurred (“wrongful act” was defined under the policy as “an error, an
unintentional omission, or a negligent act”). On appeal, the Court of Appeals for the Eighth Circuit reversed, finding that although
Eyeblaster had acted intentionally in placing its software in the underlying complainant’s computer, there was “no evidence that the
allegations . . . spoke of intentional acts that were either negligent or wrongful.” Thus, the court found that the underlying complaint
had sufficiently alleged a “wrongful act” on the part of Eyeblaster within the meaning of the policy, and consequently found a duty to
defend had been triggered.
may well also face the type of securities/D&O claims that frequently accompany a significant and
unexpected fall in stock prices and allegations of failure to disclose a material risk. For example,
following the Heartland data breach, shareholders pursued securities fraud litigation against
Heartland on the basis that it had misrepresented the state of its computer security. The Heartland
suit was ultimately dismissed. 794 So too was a shareholder action against Wyndham Hotels. 795
Shareholder litigation against Target remains pending. 796 Notwithstanding the dismissals, the
cases filed to date show the potential for shareholder litigation against companies that are victims of
data breaches, and demonstrate that such companies, their Boards and their D&O insurers may face
D&O and securities claims among the potential litigations that can arise, particularly if the cyber
attack at issue results in substantial costs to the company.797
Further, as state and federal agencies increasingly issue data security regulations requiring
companies to institute data security protocols, some of which expressly require board review of
data protection plans and procedures, there may be an increase in D&O claims for all types of
companies within those regulations’ purview. For example, in addition to the accountability placed
794 In re Heartland Payment Sys., Inc. Sec. Litig., Civ. No. 09-1043 (D.N.J. Dec. 7, 2009). The court found that the securities
fraud claims failed to meet the heightened pleading standards provided by the Private Securities Litigation Reform Act of 1995
(PSLRA). The court explained that the PSLRA requires fraud to be pleaded with particularity, and also requires plaintiffs to state
with particularity facts giving rise to a strong inference that the defendant acted with the required state of mind. Citing the Supreme
Court’s decision in Tellabs, Inc. v. Makor Issues & Rights Ltd., 551 U.S. 308 (2007), the court stated that a complaint will adequately
allege state of mind only if a reasonable person would deem the inference of scienter to be at least as strong as any inference of nonfraudulent
intent. The court found that the plaintiffs had failed to meet this heightened pleading requirement. In particular, the court
found that the defendant’s statements regarding its computer security, when examined in context, were not misleading. The court
also found that the plaintiffs had failed to allege that the defendant knew or should have known that its statements were false. Having
found that the complaint failed to adequately allege two of the elements of its fraud claims, the court dismissed the complaint with
prejudice.
795 See Dennis Palkon, Derivatively on Behalf of Wyndham World Wide Corporation v. Stephen P. Holmes, et al., Case 2:14-
cv-01234, 2014 WL 5341880 at *6 (D.N.J., Oct. 20, 2014). The court dismissed a derivative shareholder lawsuit alleging the
individual defendant directors and officers aggravated damage to the company from data breaches by, among other things, failing to
timely disclose the breaches in the company’s financial filings, and failing to implement appropriate controls to detect and prevent
repetitive data breaches; the court held that plaintiffs failed to overcome the presumption afforded the defendants under the Delaware
business judgment rule. In reviewing the actions of the Wyndham Board, the court laid out a blueprint for how a Board can conduct
itself to successfully defend against such claims arising from a data breach (the court noted, among other things, that Board members
had discussed the cyber attacks sustained by the company at fourteen meetings, and at every quarterly meeting the General Counsel
gave a presentation regarding the breaches and/or the company’s data security generally, and the Audit Committee discussed such
issues in at least sixteen committee meetings).
796 See, e.g., Robert Kulla, Derivatively on behalf of Target Corporation v. Gregg W. Steinhafel et al., Case 0:14-cv-00203-
SRN-JSM, United States District Court, District of Minnesota, filed January 21, 2014 (a shareholder derivative action against certain
Target officers and directors alleging they were responsible for the data breach sustained by Target because of, among other things,
alleged failure to take reasonable steps to maintain customer personal and financial information in a secure manner, and failure to
provide adequate and prompt notice to consumers); Maureen Collier, Derivatively and on behalf of Target Corporation v. Target
Corporation, United States District Court, District of Minnesota (a shareholder derivative action alleging false and misleading
statements about the data breach). The shareholder derivative actions filed against Target in Minnesota have been consolidated and
are pending as of May 2015.
797 However, in the current era of frequent news stories of data breaches, a report of even a large breach does not necessarily
result in a drop in stock prices, at least not until there are reports of substantial costs to the company that arguably affect earnings, and
often an initial drop is followed by a quick recovery. A recent article in the Harvard Business Review found that “even the most
significant recent breaches had very little impact on the company’s stock price,” available at
https://hbr.org/2015/03/why-data-breaches-dont-hurt-stock-prices. Similarly, “actual expenses … amount to less than 1% of each company’s annual revenues. After
reimbursement from insurance and minus tax deductions, the losses are even less,” according to an analysis from a fellow at the
Columbia School of International and Public Affairs, available at
http://fortune.com/2015/03/27/how-much-do-data-breaches-actually-cost-big-companies-shockingly-little/. There may also be potential longer-term economic effects to consider. See, e.g.,
Locke Lord LLP Privacy and Cybersecurity Newsletter - May 2015, “Economic Impact from a Company’s Data Breach – No Big
Deal? Not So Fast!”, available at http://www.lockelord.com/newsandevents/publications/2015/05/economic-impact.
on boards by the Sarbanes-Oxley Act of 2002, the federal Red Flags Rule discussed above
specifically requires that the board of directors, a board committee, or a designated employee at the
level of senior management be involved in the oversight, development and administration of the
required identity theft prevention program. In addition, as discussed above, in October 2011, the
SEC Division of Corporation Finance released a Disclosure Guidance stating that public companies
may need to disclose their exposure to cyber security risks and incidents as potential material
information subject to securities law disclosure requirements and accounting standards.798 This also
potentially provides grounds for claims alleging inadequate disclosure against directors and officers
as well as public entities.
If a data breach leads to a suit by the owners of the compromised data – or by shareholders if the
breach leads to a large loss to the insured company – against the allegedly responsible directors or
officers, those directors and officers may look to their D&O policies to see if there is coverage
(mindful, of course, of any exclusions that may apply). Similarly, in the event of a securities action,
the targeted company will likely look to any entity coverage provided by such policies.
An indication of some of the coverage issues that can arise is provided by coverage litigation
involving a drug testing company that faced substantial defense costs arising from a federal HIPAA
investigation and sought coverage for those defense costs under its D&O policy. Reportedly, a
primary issue was whether the defense costs for a Department of Justice investigation were subject
to a regulatory sublimit, or entitled to the full policy limits for third-party claims.799
As to exclusions, it is possible, for instance, that the D&O policy at issue may exclude claims
arising from violations of privacy rights or cyber events, thus potentially limiting the scope of
available coverage in the event of a data breach.800
Thus, D&O policies are potentially exposed to at least requests for coverage in the event of
large breaches, especially those involving publicly traded companies.
798 Securities and Exchange Commission, Division of Corporation Finance, CF Disclosure Guidance: Topic No. 2,
Cybersecurity, Oct. 13, 2011, available at http://www.sec.gov/divisions/corpfin/guidance/cfguidance-topic2.htm. See section above
on U.S. Regulatory and Statutory Landscape, and discussion on SEC Guidances.
799 Millennium Laboratories Inc. v. Allied World Assurance Company (U.S.) Inc., case no. 3:12-cv-02280 (United States
District Court for the Southern District of California); see also Millennium Laboratories, Inc. v. Allied World Assurance Company,
case no. 1:2014mc00009 (United States District Court for the Eastern District of California). See Linda Chiem, Millennium Says
Insurer Must Cover HIPAA Probe Defense, http://www.law360.com/articles/541526/print?section=privacy.
800 See, e.g., Resource Bank v. Progressive Cas. Ins. Co., 503 F. Supp. 2d 789, 795-97 (E.D. Va. 2007) (insured sought
coverage under its D&O policy for two class action lawsuits alleging that the insured violated the Telephone Consumer Protection
Act by sending unsolicited facsimile advertisements; court held coverage was excluded, in part, on the basis of the policy’s Bodily
Injury and Property Damage Exclusion that excluded coverage for claims of “invasion of privacy”). But see First Bank of Del., Inc.
v. Fidelity and Deposit Co. of Maryland, No. N11C-08-221, 2013 WL 5858794 (Del. Super. Ct., Oct. 30, 2013) (bank sought
coverage under Electronic Risk Liability portion of D&O policy for costs incurred from a web server hacking; court said a policy
exclusion regarding “unauthorized use of, or unauthorized access to electronic data” technically applied, but found for the insured
because the exclusion was so broad as to render coverage illusory).
g. Kidnap and Ransom/Cyber Extortion
Corporations and individuals operating in high-risk areas around the world often carry kidnap and
ransom coverage. The policies typically provide indemnity in connection with ransom payments
and personal accident losses caused by kidnapping incidents. Such policies may also cover
extortion, including extortion related to a threatened introduction or activation of a computer virus
to the insured’s computer system unless a ransom is paid. Depending on the policy’s scope of
coverage, including how the policy defines “virus,” such coverage may extend to a hacker’s
threatened use of software to capture private data.
With the increase in threats of cyber extortion in recent years, policies specifically directed at cyber
extortion are now available and often offered in conjunction with specialty policy products directed
at providing coverage for network security and related risks.
VII. Privacy Litigation in the U.S.: Current Issues
In the last few years there has been a dramatic increase in litigation alleging violations of data
protection, breach response and other statutory and common law rights and obligations concerning
the collection, usage, disclosure and protection of information about individuals. Most of the court
decisions to date focus on whether plaintiffs can survive an early motion to dismiss, and satisfy
threshold issues such as standing, causation, and demonstrating a legally cognizable injury under
applicable law. There have been mixed results (and significant legal expenses incurred by parties
on both sides of the issues). Jurisdiction counts on these issues.
1. Article III Standing
Consumer lawsuits based on data breach or allegations of improper data access, collection, use or
disclosure are typically pleaded as class actions and are therefore initiated in, or removed to, federal
court pursuant to the Class Action Fairness Act.801 Once in federal court, the lawsuit must comply
with the requirement of Article III of the Constitution that there be an actual “case or controversy”
between the parties. Among the requirements for a “case or controversy” is that the plaintiff has
suffered an injury in fact that is actual or imminent, not conjectural or hypothetical.802 In the
absence of such an injury, the case is subject to dismissal based on a lack of standing.
Consumer claims based on the exposure of Personal Information have met mixed success at
clearing the federal standing hurdle, and the number of decisions addressing the issue is growing
yearly. Many lower courts have dismissed consumer claims arising from data breaches for a lack of
standing, finding the alleged injury to be indefinite and speculative, although frequently allowing
repleading by plaintiffs to provide them with an opportunity to try to cure defects in pleading if they
801 28 U.S.C. § 1332(d). The Class Action Fairness Act (“CAFA”) grants federal courts jurisdiction over class action lawsuits
even in the absence of complete diversity between the parties, if certain other conditions are met. Defendants are not required to
proffer affirmative evidence with a removal petition under CAFA; rather, a removal notice need only plausibly allege the amount in
controversy. Dart Cherokee Basin Operating Co. LLC v. Owens, No. 13-719, 574 U.S. ___ (2014). A stipulation by a named plaintiff in a putative
class action, made prior to certification of the class, that the class will not seek damages that exceed the $5 million amount
in controversy requirement of CAFA does not prevent removal under CAFA. Standard Fire Ins. Co. v. Knowles, 133 S. Ct. 1345
(2013).
802 See, e.g., Friends of the Earth, Inc. v. Laidlaw Envtl. Servs., Inc., 528 U.S. 167, 180-81 (2000) (citing Lujan v. Defenders of
Wildlife, 504 U.S. 555, 560 (1992) (plaintiff must establish an injury-in-fact that is “concrete and particularized”)).
have a basis for doing so.803 A number of courts have found standing in consumer lawsuits at least
at the early stage of motions to dismiss complaints, although that can often be dependent on the
jurisdiction of the court hearing the case. 804 Some of the actions which initially survive a standing
challenge later fail to survive post-discovery motions for summary judgment.805
Some federal appellate courts have found the standing requirement to be satisfied by allegations of
an increased risk of future harm in the context of breaches of Personal Information.806 Other federal
appellate courts, however, have found that the “risk of future harm” presented by data breaches
involving exposure of Personal Information is too speculative, and have held that persons whose
information “may” have been accessed do not have standing, particularly in the absence of
evidence suggesting that the data has been, or will ever be, misused.807
803 See, e.g. Galaria v. Nationwide Mut. Ins. Co., 998 F. Supp. 2d 646, 655-60 (S.D. Ohio 2014); In re Sci. Applications Int’l
Corp. (SAIC) Backup Tape Data Theft Litig., 45 F. Supp. 3d 14, 26-28 (D.D.C. 2014); see also, Lewert v. P.F. Chang’s China
Bistro, Inc., No. 14-cv-4787, 2014 U.S. Dist. LEXIS 171142, at *7-8 (N.D. Ill. Dec. 10, 2014) (unauthorized credit charges flowing
from the breach where the credit card company declines the charges or reimburses the card holder); In Re Google Android Consumer
Privacy Litigation, 2013 WL 1283236 (N.D. Cal. Mar. 26, 2013) (holding allegations were insufficient, but allowing plaintiffs to
amend their complaint, which survived a motion to dismiss at least as to some claims), 2014 WL 988889 (N.D. Cal. March 10, 2014)
(granting in part and denying in part Google’s motion to dismiss the Second Amended Class Action Complaint, finding sufficient
facts alleged to show standing based on diminished battery life and fraudulent misrepresentations under California Unfair
Competition Law); Low v. LinkedIn Corp., No. 11-CV-01468, 2011 WL 5509848 (N.D. Cal. Nov. 11, 2011) (holding that plaintiff in
class action claiming that LinkedIn disclosed Personal Information to third parties and marketing companies alleged harm that was
too abstract and hypothetical to support Article III standing); Key v. DSW, Inc., 454 F. Supp. 2d 684 (S.D. Ohio 2006); Randolph v.
ING Life Ins. & Annuity Co., 486 F. Supp.2d 1 (D.D.C. 2007), aff’d, 973 A.2d 702, 708 (D.C. Cir. 2009); Giordano v. Wachovia Sec.
LLC, No. 06-476, 2006 WL 2177036, 2006 U.S. Dist. LEXIS 52266 (D.N.J. Jul. 31, 2006); Bell v. Acxiom, No. 06-485, 2006 WL
2850042, 2006 U.S. Dist. LEXIS 72477 (E.D. Ark., Oct. 3, 2006).
804 See, e.g., In re Target Corp. Customer Data Sec. Breach Litig., 2014 U.S. Dist. LEXIS 175768, at *6-7 (D. Minn. Dec. 18,
2014) (finding that plaintiffs had alleged a concrete and particularized injury, traceable to Target’s conduct, based on allegations of
“unlawful charges, restricted or blocked access to bank accounts, inability to pay other bills, and late payment or new card fees,” and
therefore had standing); In re Adobe Sys. Privacy Litig., 2014 U.S. Dist. LEXIS 124126, at *27-28 (N.D. Cal. Sept. 4, 2014); Moyer
v. Michaels Stores, Inc., 2014 U.S. Dist. LEXIS 96588, at *19 (N.D. Ill. July 14, 2014); In re Sony Gaming Networks & Customer
Data Sec. Breach Litig., 996 F. Supp. 2d 942, 962 (S.D. Cal. 2014); Claridge v. RockYou, 785 F. Supp. 2d 855 (N.D. Cal. 2011)
(declining to dismiss, for lack of standing, plaintiffs’ claim that they traded email and social media login credentials for access to
applications, and that they lost the value of those credentials when the data was stolen by a hacker). But see, e.g., cases cited in
footnotes 752, et seq., below.
805 In re iPhone Application Litig., 844 F. Supp. 2d 1040 (N.D. Cal. 2012) (finding that plaintiffs sufficiently alleged injury in
fact where plaintiffs experienced diminished storage and battery life, unexpected and unreasonable risk to the
security of sensitive personal information, and detrimental reliance on Apple’s representations regarding privacy protection
afforded to users of iDevice apps). The court found it compelling that plaintiffs identified specific types of personal information
collected, such as home and workplace locations, gender, and age when determining sufficient harm. However, in a subsequent
decision on summary judgment, the court found that the plaintiffs had failed to establish genuine issues of material fact concerning
their standing under Article III, including with regard to the claims of violation of the California Unfair Competition Law and the
California Consumers Legal Remedies Act, and granted Apple summary judgment. 2013 WL 6212591 (N.D. Cal. Nov. 25, 2013).
806 See Krottner v. Starbucks Corp., 628 F.3d 1139 (9th Cir. 2010) (finding that plaintiffs had pleaded a “credible threat” of
“real and immediate harm” stemming from the theft of a laptop containing their Personal Information); Pisciotta v. Old Nat’l
Bancorp, 499 F.3d 629 (7th Cir. 2007) (holding that plaintiffs’ allegation of increased risk of identity theft was sufficient to confer
constitutional standing, despite the plaintiffs’ failure to plead financial loss or actual incidents of identity theft). These decisions
were issued prior to the Supreme Court’s decision on standing in Clapper v. Amnesty International USA, 568 U.S. ___, 133 S. Ct.
1138, 185 L. Ed. 2d 264 (2013), discussed below. See also discussion in section on “Privacy Issues Arising Out of Behavioral
Advertising and Online Tracking,” above.
807 See, e.g., Katz v. Pershing, LLC, 672 F.3d 64, 79 (1st Cir. 2012) (plaintiff’s purchase of “identity theft insurance and credit
monitoring services to guard against a possibility, remote at best, that her nonpublic personal information might someday be pilfered”
was a “purely theoretical possibility” that did “not rise to the level of a reasonably impending threat.”); Reilly v. Ceridian Corp., 664
F.3d 38, 42 (3d Cir. 2011) (concluding that “allegations of hypothetical, future injury are insufficient to establish standing” and
affirming dismissal of a complaint in which the putative class members alleged that, as a result of a data breach involving personal
information, they had an increased risk of identity theft, incurred costs to monitor their credit activity, and suffered emotional
The 2013 United States Supreme Court decision in Clapper v. Amnesty International USA808 is
often the basis for motions to dismiss plaintiffs’ complaints in the privacy and data breach context.
It is often cited as a battle cry against plaintiffs attempting to establish claims arising from data
breaches and other consumer privacy violations, although time has shown it to be a less definitive
decision than initially acclaimed. In Clapper, the Supreme Court rejected a challenge to the
constitutionality of a federal electronic surveillance statute, and held that fears of government
interception of electronic communications were simply too speculative to confer legal standing on a
plaintiffs’ group to bring suit. According to the Court, standing exists only where an injury is
“concrete, particularized, and actual or imminent.” The Court noted that “our standing inquiry has
been especially rigorous” when the challenge is to an action of another branch of the federal
government, but did note that in other instances it has found standing based on a “substantial risk”
that harm will occur, which may prompt plaintiffs to reasonably incur costs to mitigate or avoid that
harm.809 Nevertheless, when the standing principles discussed in Clapper are applied to privacy
and data breach cases, the absence of concrete harm in those circumstances often becomes a
significant obstacle for plaintiffs bringing suit for such claims – although enterprising plaintiffs are
espousing new theories to demonstrate more immediate impacts flowing from data breaches and are
working hard to try to convince courts to use the prong of “imminent” harm to overcome the hurdle
presented by lack of “actual” harm. 810 811
distress), cert. denied, 132 S. Ct. 2395 (U.S. 2012). See also Lambert v. Hartman, 517 F.3d 433, 437 (6th Cir. 2008); In re LinkedIn
User Privacy Litig., No. 5:12-CV-03088 EJD, 2013 WL 844291 (N.D. Cal. Mar. 6, 2013) (allegations of economic harm were
insufficient to satisfy standing requirement); Amburgy v. Express Scripts, Inc., 671 F. Supp. 2d 1046, 1053 (E.D. Mo. 2009);
Hammond v. Bank of New York Mellon Corp., No. 08 Civ. 6060, 2010 WL 2643307, at *7 (S.D.N.Y. Jun. 25, 2010) (“The Court
concludes that Plaintiffs lack standing because their claims are future-oriented, hypothetical, and conjectural. There is no ‘case or
controversy.’”).
808 568 U.S. ___, 133 S. Ct. 1138, 185 L. Ed. 2d 264 (2013). See, US Supreme Court Limits Standing to Sue in a Case that
will Likely Impact Privacy and Data Breach Plaintiffs, Edwards Wildman Client Advisory,
www.edwardswildman.com/newsstand/detail.aspx?news=3595 (now Locke Lord).
809 Another recent case, Am. Civil Liberties Union v. Clapper, 959 F. Supp. 2d 724 (S.D.N.Y. 2013), may provide a
foreshadowing of how courts balance privacy concerns with those of national security. There, the ACLU alleged that the NSA’s
bulk metadata collection program violated the Fourth Amendment. The court concluded that the NSA’s collection of metadata
related to ACLU’s phone calls constituted actual injury to establish standing, but held that the collection of all phone metadata was
authorized by FISA, and that the metadata collection program did not violate the Fourth Amendment.
810 See, e.g., Strautins v. Trustwave Holdings, Inc., No. 12 C 09115, 2014 WL 960816 (N.D. Ill. Mar. 12, 2014)
(citing Clapper, plaintiffs lacked standing where allegations were “insufficient to show that she and others face a ‘certainly
impending’ risk of identity theft”). Lower courts have reached different results in applying Clapper to privacy and data breach cases
(supra, notes 3 and 4) and there is some contradictory law in other contexts. For example, in June 2012, the U.S. Supreme Court,
after hearing oral argument, elected not to consider the Ninth Circuit’s decision in First American Financial v. Edwards, in which the
Ninth Circuit had held that statutory damages could be sufficient to confer Article III standing (injury in fact) for plaintiffs. 610 F.3d
514 (9th Cir. 2010), cert. dismissed as “improvidently granted,” 132 S. Ct. 2536, 183 L. Ed. 2d 611 (2012). Similarly, the court in In
re LinkedIn User Privacy Litigation, No. 5:12-cv-03088-EJD, slip op. 100 (N.D. Cal. Mar. 31, 2014), refused to dismiss a putative
class action against LinkedIn in which the named plaintiff, a premium subscriber to the website, alleged the company made
misrepresentations about its privacy policy. She alleged that, had she known about LinkedIn’s “lax security practices”, she would
have either attempted to purchase the premium service at a lower price or not purchased it at all. Id. at 3. The court had previously
held that plaintiff lacked standing based on her allegations that “1) she did not receive the benefit of her bargain with LinkedIn, and
2) she now faces an increased risk of future harm as a result of the 2012 hacking incident.” Id. at 6. The court, however, held that
the allegations in the second amended complaint were sufficient to confer standing under California’s Unfair Competition Law
(“UCL”) because plaintiff alleged that she read and relied on LinkedIn’s privacy statement. Id. In doing so, the court held that
“the representation in LinkedIn’s Privacy Policy falls within the scope of the labeling/advertising cases” subject to the UCL. Id. at 9.
LinkedIn has agreed to a class settlement of the litigation. By contrast, in Galaria v. Nationwide Mut. Ins. Co., No. 2:13-CV-118, 2014
WL 689703 (S.D. Ohio Feb. 10, 2014), plaintiffs in a putative class-action suit sued after hackers stole personally identifiable
information from the defendants, but did not allege that the personally identifiable information was misused or that their identity was
stolen as a result of the hacking. The court held that neither the increased risk that plaintiffs would be victims of identity theft at
some indeterminate point in the future, nor the expenses to mitigate risk of identity theft, nor the loss of privacy, constituted injury
The Supreme Court recently granted certiorari in another important Article III standing case that
will likely affect the landscape of data breach cases filed in federal courts based on statutory claims.
In Robins v. Spokeo, Inc.,812 the Ninth Circuit had addressed the question of whether the alleged
violation of a statutory right confers standing, even though the plaintiff is not alleged to have
suffered any tangible harm. The appellate court answered that question in the affirmative for
purposes of the plaintiff’s FCRA claim, holding that the “alleged violations of Robins’s statutory
rights are sufficient to satisfy the injury-in-fact requirement of Article III.”813 On April 27, 2015, the
Supreme Court granted certiorari, suggesting that it intends to clarify the standard for Article III
standing in actions in which the only damages are statutory ones, which is often the situation in data
breach and other privacy class actions.814
2. Cognizable Injuries
Even if a consumer claim is deemed to satisfy the standing requirement, it may still be dismissed, or
subject to an unfavorable ruling on summary judgment, if it fails to allege a cognizable injury under
state law. In other words, the court may acknowledge that the plaintiff pleaded a sufficient injury to
satisfy Article III’s standing requirements, but conclude that applicable state law simply does not
provide a remedy for such an injury.815
An example of the battlefield over whether consumer class actions arising from a data breach can
sufficiently assert a valid cause of action for legally cognizable damages is the litigation arising
sufficient to confer standing. Similarly, in In re Google Inc. Cookie Placement Consumer Privacy Litig., CV 12-2358-SLR, 2013
WL 5582866 (D. Del. Oct. 9, 2013), the court dismissed claims against Google alleging it violated the Electronic Communications
Privacy Act (“ECPA”) and other federal and state statutes by tricking internet browsers into accepting “cookies” that could be used to track users’
activities in order to, inter alia, display targeted advertising. The court held that the putative class did not allege injury-in-fact
sufficient to establish Article III standing because, although the plaintiffs’ personal information has value, the plaintiffs failed to
allege how Google’s collection of that information affected its value. Going beyond this, the court said plaintiffs’ case would also
fail on the merits because Uniform-Resource Locators (“URLs”) are descriptors, rather than communication, within the meaning of
the ECPA. In other cases, however, courts have held that allegations of impact on battery consumption caused by data collection
and aggregation practices are sufficient to allege standing at the pleading stage. See, e.g., In re Google Android Consumer Privacy
Litig., No. 11-MD-02264, 2013 WL 1283236 (N.D. Cal. Mar. 26, 2013); Goodman v. HTC Am., Inc., No. C11-1793, 2012 WL
2412070 (W.D. Wash. June 26, 2012) (allegations that defendant’s location tracking practices shortened battery charges and battery
life were sufficient to state a claim under California’s Unfair Competition Law, Cal. Bus. & Prof. Code § 17200). The court in In re
Google Android Consumer Privacy Litig. later held that claims related to impacts on battery life were insufficient to allege economic
damages under the Computer Fraud and Abuse Act, but allowed other claims to proceed based on such allegations. See In re Google
Android Consumer Privacy Litig., No. 11-MD-02264, slip op. 78, at 7 (N.D. Cal. Mar. 10, 2014) (unpublished).
811 While this paper discusses developments as of June 2015, as we were about to finalize it in July, one of the first post-
Clapper federal appellate court decisions was issued on standing, and so we note it here. In Remijas v. Neiman Marcus Group, LLC,
No. 14-3122 (7th Cir. July 20, 2015), the Seventh Circuit Court of Appeals addressed the issue of standing in a case involving a
hack of customer credit card data, and reversed the district court’s dismissal of the consumer claims for lack of standing,
noting that, in circumstances of a criminal hacking in which there are known to be fraudulent charges on some of the affected customers’
accounts, the plaintiffs’ allegations of hackers deliberately targeting Neiman Marcus in order to obtain customer credit card
information were sufficient to establish standing, even though the fraudulent charges were reimbursed and identities had not (yet)
been reported stolen. The court found that “the injuries associated with resolving fraudulent charges and protecting oneself against
future identity theft” sufficed as injuries under Article III, and that it was “certainly plausible for pleading purposes” that
plaintiffs’ injuries were “fairly traceable” to the data breach at Neiman Marcus.
812 742 F.3d 409 (9th Cir. 2014), cert. granted, 135 S. Ct. 323 (U.S. Apr. 27, 2015) (No. 13-1339).
813 Id. at 413-14.
814 135 S. Ct. 323 (U.S. Apr. 27, 2015) (No. 13-1339).
815 See, e.g., In re IPhone Application Litigation, supra; Pisciotta v. Old Nat. Bancorp., 499 F.3d 629 (7th Cir. 2007); Ruiz v.
Gap, Inc., 622 F. Supp. 2d 908, 918 (N.D. Cal. 2009); McLoughlin v. People’s United Bank, Inc., No. 3:08-cv-00944, 2009 WL
2843269, at *8 (D.Conn. Aug. 31, 2009).
from the well-publicized Sony PlayStation breach. Sony faced consumer class action suits after a
criminal intrusion into its video game network.816 The court dismissed the original complaint
against Sony entities, although with leave to amend. In doing so, the court observed that “future
harm may be regarded as a cognizable loss sufficient to satisfy Article III’s injury-in-fact
requirement,” but plaintiffs still must allege sufficient cognizable injury, as “the mere ‘danger of
future harm, unaccompanied by present damage, will not support a negligence action’” and the
allegations as pled were also not sufficient for violation of California consumer protection laws
because an increased risk alone is not sufficient.817 Plaintiffs thereafter filed an amended
complaint. While the court granted Sony dismissal of most of the claims including those for
negligence and negligent misrepresentations, it did find that the allegations were sufficient for
standing with regard to claims under California consumer protection statutes based on allegations
that Sony had omitted material information regarding the security of its online services at the time
that consumers purchased their consoles, thus sufficiently alleging a loss of money or property as a
result of unfair business practices. The court also noted that plaintiffs had made sufficient
allegations of affirmative misrepresentations in the company’s user agreements and privacy policy
regarding “reasonable security” and “industry standard encryption,” as well as fraud-based omissions,
and thus denied dismissal of claims under California consumer protection statutes.818
Defendants have obtained favorable holdings in other recent data breach litigations as well,
although trends are difficult to predict and often depend on jurisdiction, as well as facts.819
An earlier federal data breach case illustrates the obstacles that both plaintiffs and defendants face,
and the potential impact of jurisdiction and the particular facts and circumstances of the breach,
although the result may have been different if considered by a federal court after the decision in
Clapper v. Amnesty International USA, supra, on the issue of when injury is too speculative. In
litigation arising from the Hannaford Brothers stores breach, a federal district court in Maine, in a
decision later reversed in part, initially dismissed a consumer class action due to lack of cognizable
injury, concluding that consumers who did not have a fraudulent charge actually posted to their
account cannot recover.820 When the district court certified questions to the state’s highest appellate
court as to what constitutes a cognizable injury under Maine common law, the Maine Supreme
816 In re Sony Gaming Networks & Customer Data Sec. Breach Litig., No. MDL-11MD2258 AJB-MDD, 2012 WL 4849054
(S.D. Cal. Oct. 11, 2012).
817 Id.
818 In re Sony Gaming Networks & Customer Data Sec. Breach Litig., No. MDL 11MD2258, 2014 WL 223677 (S.D. Cal. Jan.
21, 2014). See Stephen Prignano and Matthew Murphy, What is next in consumer data breach litigation?, Inside Counsel, May 8,
2014, http://www.insidecounsel.com/2014/08/01/whats-next-in-consumer-data-breach-litigation-mini.
819 See, e.g., Willingham v. Global Payments, Inc., No. 12-cv-01157, 2013 WL 440702 (N.D. Ga., Feb. 5, 2013) (Magistrate
Judge recommended dismissal with prejudice of consumer claims against breached entity; case subsequently discontinued); Galaria
v. Nationwide Mut. Ins. Co., 998 F. Supp. 2d 646 (S.D. Ohio, Feb. 10, 2014) (dismissing putative class action brought against an
insurance company stemming from theft of personal information from its network, and rejecting the argument that plaintiffs who did not
suffer actual identity theft were injured; court held increased risk of further injury was too speculative to confer standing, and discussed the
existing case law arising from data breaches).
820 In re Hannaford Bros. Co. Customer Data Security Breach Litigation, 613 F. Supp. 2d 108 (D. Me. 2009), rev’d, Anderson
v. Hannaford Bros. Co., 659 F.3d 151 (1st Cir. 2011). The court allowed the case to proceed as to a single named plaintiff who had
allegedly suffered a fraudulent charge that had allegedly not been removed from her account and which she had to pay. See also
Rowe v. Unicare Life and Health Ins. Co., No. 09-C-2286, 2010 U.S. Dist. LEXIS 1576 (N.D. Ill. Jan. 5, 2010), in which the court,
citing the liberal pleading requirements of Illinois law, declined to dismiss common law and statutory claims related to the
inadvertent disclosure of the plaintiffs’ Personal Information on the Internet although there were no allegations of theft of Personal
Information.
Court held that, under Maine law, in the absence of physical harm or economic loss or identity theft,
time and effort alone spent in a reasonable effort to avoid harm do not constitute a cognizable injury
for purposes of negligence or implied contract.821 The federal court accordingly entered judgment
in favor of the defendant. Plaintiffs appealed that decision to the United States Court of Appeals for
the First Circuit, which overturned the lower federal court decision as to certain categories of
alleged damages, at least insofar as holding allegations were sufficient to withstand a motion to
dismiss. The First Circuit held that consumer claims for reimbursement of the cost of identity theft
insurance and of fees for replacement of credit and debit cards following a breach of their personal
information can be a cognizable injury, under certain circumstances.822 The court determined that
certain categories of costs incurred by the plaintiffs were “reasonably foreseeable mitigation costs”
and thus constitute a cognizable harm (under Maine law). The court held, however, that not all
mitigation costs in all circumstances would be recoverable but, rather, that plaintiffs need to show
that the efforts to mitigate were reasonable, and that those efforts constitute a legal injury “such as
actual money lost, rather than time or effort expended.”823 The First Circuit distinguished
breaches involving inadvertently misplaced or lost data that has not been accessed or misused by
third parties from a large-scale criminal operation in which credit or debit card information was
deliberately taken by sophisticated thieves intending to use the information to their financial
advantage, and which had resulted in reports of actual fraudulent use of many (1,800) of the stolen card
accounts, and held that: “[i]t was foreseeable, on these facts, that a customer, knowing that her
credit or debit card data had been compromised and that thousands of fraudulent charges had
resulted from the same security breach, would replace the card to mitigate against misuse of the
card data. … Similarly, it was foreseeable that a customer who had experienced unauthorized
charges to her account … would reasonably purchase insurance to protect against the consequences
of data misuse.”824 The court also noted that “the principle of reasonableness” imposes a boundary
on recovery of costs by claimants and noted, by way of example, that where neither the plaintiff nor
those similarly situated have experienced fraudulent charges resulting from theft or loss of data, the
purchase of credit monitoring services may be unreasonable and not recoverable. It also affirmed
the lower court’s holding that there can be situations in which there is no foreseeable loss as a
matter of law. Therefore, the court upheld the district court’s finding that damages such as loss of
award points and change fees for pre-authorized credit transactions are not foreseeable and not
compensable.825
821 In re Hannaford Bros. Co. Customer Data Security Breach Litig., 2010 ME 93, 4 A.3d 492, 497 (Me. 2010) (“Unless the
plaintiffs’ loss of time reflects a corresponding loss of earnings or earning opportunities, it is not a cognizable injury under Maine law
of negligence”).
822 Anderson v. Hannaford Bros. Co., 659 F.3d 151 (1st Cir. 2011).
823 Id.
824 Anderson v. Hannaford Bros. Co., 659 F.3d at 164-65. For an analysis of the First Circuit’s decision in Hannaford Bros.
Co., see Edwards Wildman Palmer LLP Client Advisory, “Foreseeable and Reasonable Mitigation Costs Can Constitute Cognizable
Injury from a Data Breach – At Least Under Maine Law,” Oct. 2011,
http://www.edwardswildman.com/newsstand/detail.aspx?news=2659 (now at www.lockelord.com).
825 On remand, the district court denied plaintiffs’ motion for class certification, finding that plaintiffs had failed to meet the
predominance requirements of Fed. R. Civ. P. 23(b)(3). In re Hannaford Bros. Co. Customer Data Sec. Breach Litig., No. 2:08-MD-
1954, 2013 WL 1182733 (D. Me. Mar. 20, 2013). The court in its decision denying class certification noted the plaintiffs’ failure to
provide expert testimony supporting their theory of class-wide damages, which meant that common issues would not predominate with
regard to damages.
A similar result was reached in a federal decision construing the Federal Trade Commission Act
(“FTCA”), which found that lost time and expense constitute “substantial injury” under the FTCA.826
However, jurisdiction and factual allegations count, and other courts have held that a claim for
credit monitoring costs following a theft of a laptop or other computer hardware containing
Personal Information, without evidence that the information had been accessed or used, is alone not
sufficient to sustain a claim for negligence under applicable common law.827
For their lawsuits to survive, plaintiffs asserting claims of financial injury as a result of a data
breach must not only sufficiently allege cognizable legal injury, but also adequately allege facts
supporting causation between a breach due to improper conduct by a defendant (such as failure to
provide reasonable security) and the injury (identity theft and associated financial loss). Creative
theories include that a portion of the amounts paid for products or services included security, and
thus in the absence of providing security the defendant is unjustly enriched by that portion of the
amount paid. 828 While survival of a motion to dismiss does not mean that the plaintiffs would
826 FTC v. Neovi, Inc., 598 F. Supp. 2d 1104, 1105 (S.D. Cal. 2008) (finding that the affected consumers “often spent a
considerable amount of time and resources contesting the checks at their banks, protecting their accounts, and attempting to get their
money back” and that “the time consumers spent in these efforts was valuable”), aff’d, 604 F.3d 1150 (9th Cir. 2010).
827 See, e.g., In re Horizon Healthcare Services Inc. Data Breach Litigation, Civil Action No. 13-7418 (CCC), U.S. Dist.
Court, D.N.J. (March 31, 2015) (dismissing complaint where two password-protected laptops with PI were stolen, and plaintiffs
alleged only generalized imminent harm and no actual identity theft); Reilly v. Ceridian Corp., 664 F.3d 38, 46 (3d Cir. 2011), cert.
denied, 132 S. Ct. 2395 (2012) (where a computer system containing personal information was stolen, the “costs to protect against an
alleged increased risk of identity theft is not enough to demonstrate a ‘concrete and particularized’ or ‘actual or imminent’ injury”);
Hammond v. The Bank of N.Y. Mellon Corp., No. 1:08-CV-06060 (S.D.N.Y. Jun. 25, 2010) (granting defendant’s motion for
summary judgment on claims of negligence, breach of fiduciary duty, breach of implied contract, and state consumer protection law
concerning the theft from the defendant of computer backup tapes containing Personal Information of the plaintiffs; the court held
that the plaintiffs lacked Article III standing because their claims of increased risk of future harm were “future-oriented, hypothetical,
and conjectural,” and thus there was no case or controversy); Ruiz v. Gap, Inc. and Vangent Inc., 622 F. Supp. 2d 908 (N.D. Cal.
2009), aff’d, 380 Fed. App’x 689 (9th Cir. 2010) (granting defendants’ motion for summary judgment on claims for negligence and
breach of contract seeking compensation for credit monitoring services, which claims arose from theft of laptop computers from the
offices of a vendor of Gap that processed job applications, resulting in loss of Personal Information of plaintiffs; the court held that
under California law, the increased risk of future theft did not rise to the level of appreciable harm necessary to assert a negligence
claim, and plaintiff’s assertion that his credit monitoring costs were a compensable attempt to mitigate damages failed because he had
no damages to mitigate since he had never been a victim of identity theft); Caudle v. Towers, Perrin, Forster & Crosby, Inc., 580 F.
Supp. 2d 273 (S.D.N.Y. 2008) (dismissing the claim for negligence and breach of fiduciary duty brought by an employee against his
employer’s vendor who lost a laptop; however, the court did allow to go forward the claim for breach of contract to allow discovery
on the issue of whether the employee was a third-party beneficiary of the contract between his employer and the vendor; the plaintiff
had withdrawn his other claims for misrepresentation and breach of privacy); Shafran v. Harley Davidson, Inc., No. 07-CV-01365,
2008 WL 763177 (S.D.N.Y. Mar. 20, 2008) (granting motion to dismiss claims for future credit monitoring arising from loss of a
laptop containing Personal Information, and noting that “[c]ourts have uniformly ruled that the time and expense of credit monitoring
to combat an increased risk of future identity theft is not, in itself, an injury that the law is prepared to remedy”). These decisions
also identify case law in other jurisdictions addressing the issue of what is a legally cognizable injury of an individual whose Personal
Information was breached, but who has not sustained actual identity theft or financial loss. See Stollenwerk v. Tri-West Health Care
Alliance, 254 Fed. Appx. 664 (9th Cir. 2007) (upholding summary judgment against plaintiffs whose Personal Information was
contained on a stolen hard drive, and denying credit monitoring costs as damages, as the plaintiffs did not claim any actual misuse of
their Personal Information). See also Randolph v. ING Life Ins. and Annuity Co., 486 F. Supp. 2d 1 (D.D.C. 2007) (finding that the
plaintiffs’ increased risk of identity theft and the cost of protecting against identity theft, following the theft of a laptop containing
their private Personal Information, did not rise to the level of an “injury in fact” for constitutional standing purposes).
828 For example, in Resnick v. AvMed, Inc., No. 10-cv-24513 (S.D. Fla. filed Dec. 17, 2010), plaintiffs alleged that AvMed
failed to secure plaintiffs’ personal information, including protected health information, on company laptops, two of which, containing
personal information for over one million customers, were stolen from an AvMed office conference room. Although
the district court initially dismissed the case based on a lack of cognizable injury, the Eleventh Circuit reversed, holding that the plaintiffs
had sufficiently alleged a nexus between the data theft and their identity theft; the Eleventh Circuit also upheld a claim for
unjust enrichment, which did not have a causation requirement, based on allegations that the plaintiffs had conferred a monetary
benefit on AvMed in the form of monthly premiums that included an element for administrative costs of data management and
security that AvMed allegedly failed to implement. 693 F. 3d 1317 (11th Cir. 2012). But see In re Horizon Healthcare Services, Inc.
Data Breach Litigation, Civil Action No. 13-7418 (CCC), U.S. District Court, D.N.J. (March 31, 2015) (dismissing a complaint
ultimately prevail at the end of the case, the costs of litigation and associated risks can generate
substantial settlements.829
In the face of the challenges in establishing common law damages, consumer attorneys have been
pressing statutory claims as an avenue for recovery.830 Companies facing potential data breach
cases, therefore, must navigate a briar patch of federal and state data breach, privacy and consumer
protection laws, many of which include a private right of action.831 As the cases cited in this
paper illustrate, recent court decisions demonstrate a trend of plaintiffs alleging unjust enrichment
based on their financial payments to defendants for services that included data protection, and
alleging violations of state unfair trade practice or consumer protection statutes. 832 For example, one
decision found that class action plaintiffs’ claims survived at least this initial hurdle under
California’s Consumers Legal Remedies Act when they alleged that the defendant disclosed highly
sensitive Personal Information; the statute requires only that a consumer has suffered “any
damage,” defined by California decisions as a “low but nonetheless palpable threshold of
damage.”833 Another federal court decision interpreting California law, however, limited plaintiffs’
potential recovery by requiring that all members of a class action establish a pecuniary loss resulting
from a data breach or privacy violation.834 Moreover, given the differences in state statutes,
against a provider of health insurance products and services that sustained a breach when a thief stole two password-protected laptop
computers containing personal information of members; the court distinguished the holding in AvMed by noting that in the case before it the
plaintiffs’ generalized allegations of imminent harm were insufficient, and it rejected the argument that a portion of the insurance
premiums paid were for security and thus there were economic injuries, noting that unlike in AvMed the plaintiffs did not allege they
were careful in safeguarding their sensitive information or that they sustained identity theft or even phishing).
829 Following the Eleventh Circuit decision in Resnick v. AvMed, supra, allowing the case to proceed, the parties settled the
claim for $3 million. See Order Granting Motion for Final Approval of Class Action Settlement Agreement, No. 10-cv-24513-JLK
(S.D. Fla. Feb. 28, 2014).
830 In Jewel v. Nat’l Sec. Agency, 673 F.3d 902, 908 (9th Cir. 2011), the Ninth Circuit, quoting Lujan v. Defenders of Wildlife,
504 U.S. 555, 578 (1992), observed that “a concrete injury required by Art. III may exist solely by virtue of statutes creating legal
rights, the invasion of which creates standing.”
831 See U.S. CHAMBER INSTITUTE FOR LEGAL REFORM, THE NEW LAWSUIT ECOSYSTEM 102-03 (Oct. 2013) (citing DANIEL J.
SOLOVE & PAUL M. SCHWARTZ, PRIVACY FUNDAMENTALS 150, 172-74 (Int’l Ass’n of Privacy Prof’ls, 2nd ed. 2013)).
832 Some state statutes rely on interpretations of the federal FTC Act and apply its reasoning. See, e.g., Conn. Gen. Stat.
§ 42-110b; Me. Rev. Stat. Ann. tit. 5, § 207(1).
833 Doe 1 v. AOL LLC, 719 F. Supp. 2d 1102 (N.D. Cal. 2010); see also Pineda v. Williams-Sonoma Stores, Inc., 51 Cal. 4th
524 (Cal. 2011) (holding that requesting and recording a cardholder’s ZIP Code violates California’s Song-Beverly Credit Card Act);
Gaos v. Google Inc., No. 5:10-CV-4809, 2012 WL 1094646 (N.D. Cal. Mar. 29, 2012) (holding that plaintiff sufficiently alleged
injury in fact under the Stored Communications Act (part of the Electronic Communications Privacy Act) against Google Inc.);
Gaos was later consolidated with In re Google Referrer Header Privacy Litigation, No. 5:10-cv-4809-EJD (N.D. Cal.), in which the
court granted final approval of a class settlement on March 31, 2015.
834 In In re Google Inc. Street View Electronic Comm. Litig., 794 F. Supp. 2d 1067 (N.D. Cal. 2011), aff’d, Joffe v. Google,
Inc., 746 F.3d 920 (9th Cir. 2013) (as amended Dec. 27, 2013), plaintiffs argued that Google used sophisticated equipment not
available to the public when taking photographs to be incorporated in its Google Maps and Google Earth programs in order to
determine what websites were being visited by users whose data had been collected. They claimed that Google violated federal and
state wiretap statutes, and that Google’s actions constituted an unfair and deceptive trade practice in violation of California law. The court denied
Google’s motion to dismiss the federal wiretapping claim, granted the motion to dismiss the state wiretap claims and granted the
motion to dismiss the unfair and deceptive trade practice claim. Moreover, the recent decision of Comcast Corp. v. Behrend, 133
S. Ct. 1426, 185 L. Ed. 2d 515 (2013), reminds lower courts to employ “a rigorous analysis” of the commonality requirements of a
putative class, including that damages can be measured class-wide. In Harris v. comScore, Inc., 292 F.R.D. 579, 589 (N.D. Ill. 2013),
however, the court certified a class, rejecting comScore’s argument that the “issue of whether each individual plaintiff suffered
damage or loss from comScore's actions precludes certification.” The court held this argument had “no applicability to the ECPA or
SCA claims, both of which provide for statutory damages.” Id.
-192-
plaintiffs may recast certain claims under more favorable state statutes.835 As noted above, in April
2015, the U.S. Supreme Court agreed to hear a case presenting the question whether statutory
damages alone are sufficient to confer standing in federal court.836
3. Breach-Related Lawsuits
As demonstrated by the cases and studies identified above, large breaches of consumer Personal
Information are often followed by data breach litigation. Moreover, data breaches have not been
isolated to a particular sector of the economy; instead, they plague every industry, including the
banking, education, entertainment, health care, and retail industries, among others.837 Such suits
continue to be filed, despite the significant hurdles, discussed above, that plaintiffs face in
establishing standing and legally cognizable claims for recoverable damages.
Because a single consumer’s claim is generally not financially significant enough to support litigation,
whether consumer litigation in the U.S. is pursued often turns on whether a plaintiffs’ attorney will
be able to obtain class certification for all, or a large number of, the consumers affected by a breach.
Thus, in addition to the hurdles of standing and damages, another obstacle that plaintiff consumers
and their counsel face is class action certification. The hurdles to be overcome in certifying a class
in the data breach context, which include demonstrating that common issues predominate, are
discussed in the class certification decision arising out of the Hannaford breach.838 There, plaintiffs
moved to certify a class consisting of customers who incurred out-of-pocket costs in mitigation
efforts in response to the breach. The court reviewed the factors plaintiffs must demonstrate to
obtain class certification and denied certification on the grounds that common questions as to
damages did not predominate, particularly with regard to the impact of the breach at issue on
individual proposed members of the class and the costs they incurred. The court acknowledged
that the fact that damages may have to be ascertained on an individual basis is not alone sufficient
to defeat class certification, but found that, while plaintiffs contended they could demonstrate the
total damages sustained by the class through statistical proof, their lack of an expert opinion on their ability to
835 See, e.g., Grigsby v. Valve Corp., No. C12-0553JLR (W.D. Wash. Mar. 18, 2013). In Grigsby, the plaintiff had previously
alleged violations of California state statutes, including the California Consumer Legal Remedies Act, the California Unfair Business
Practices Act, and the California Song-Beverly Consumer Warranty Act. The court held that the plaintiff had failed to state a claim
upon which relief may be granted, but allowed the plaintiff 30 days to amend the complaint. In response, the plaintiff alleged
violations of the Washington Consumer Protection Act, alleging that he and other class members would not have purchased Valve’s
services, or would have done so at a different price, had they known that Valve was not reasonably protecting its customers’
Personal Information, as promised.
836 See Robins v. Spokeo, Inc., supra.
837 See, e.g., Bell v. Blizzard Entertainment, Inc., No. 12-09475 (C.D. Cal. 2012) (class action lawsuit against video game
company); In re Sony Gaming Networks & Customer Data Sec. Breach Litig., No. MDL-11MD2258, 2012 WL 4849054 (S.D. Cal.
Oct. 11, 2012) (class action against entertainment company); Faircloth v. Adventist Health System/Sunbelt, Inc., No. 6:13-cv-00572
(M.D. Fla. filed Apr. 9, 2013) (class action filed against hospital); In re Hannaford Bros. Co. Customer Data Security Breach
Litigation, 613 F. Supp. 2d 108 (D. Me. 2009) (grocery chain), rev’d, Anderson v. Hannaford Bros. Co., 659 F.3d 151 (1st Cir.
2011); In re Zappos.com, Inc. Customer Data Sec. Breach Litig., 893 F. Supp. 2d 1058 (D. Nev. 2012) (online retailer); In re
LinkedIn User Privacy Litig., No. 5:12-CV-03088 EJD, 2013 WL 844291 (N.D. Cal. Mar. 6, 2013) (online social networking site);
Resnick v. AvMed, Inc., 693 F.3d 1317, 1324 (11th Cir. 2012) (class action against company that delivers healthcare services through
health plans and government sponsored managed care plans). See also Section V.b. above, The Industries, Assets, and Types of Data
Most Frequently Compromised.
838 In re Hannaford Bros. Co. Customer Data Sec. Breach Litig., No. 2:08-MD-1954, 293 F.R.D. 21 (D. Me. Mar. 20, 2013).
Another decision initially denying class certification in the data breach context, under a state class action statute,
Tabata v. Charleston Area Medical Center, 2013 WL 8210917 (W. Va. Cir. Ct. June 24, 2013), was recently reversed and remanded
in a decision stating that it is to be narrowly construed and is not to be considered an indicator of the ultimate success of the claims.
Tabata v. Charleston Area Medical Center, Case No. 12-076 (W. Va. May 28, 2014).
-193-
prove total damages was fatal to class certification. The decision does not rule out the possibility of
class certification if the proper demonstration is made, but it does illustrate the uphill battle
plaintiffs face in obtaining class certification.839
That said, the spate of litigation seen over the past few years is not likely to end soon, and some
entities faced with the costs of breach litigation decide to settle with the putative class or otherwise
privately resolve the dispute, resulting in further costs to a breached entity.840
Consumer claims are not the only legal proceedings faced by a breached entity. Cases involving
breaches of tens of millions of payment cards often result in claims being asserted by a wide range
of entities affected by the breach apart from consumers. The number and variety of lawsuits that
can be faced by a breached entity was demonstrated by the TJX breach, one of the earlier and larger
retail security breaches, in which hackers stole data relating to over 45 million credit and debit cards
used at TJX stores,841 as well as by the Heartland Payment Systems breach discussed above, and
more recently by the late 2013 Target breach, also discussed above. These cases demonstrate that,
particularly in mega breaches, resulting proceedings will include not only consumer lawsuits, but
also potentially suits by shareholders and investors, claims by banks and card brands if payment
cards are involved, suits by and against vendors and other entities involved, and an array of regulatory
investigations by state attorneys general and any federal or other government entities with oversight
authority over the type of entity that sustained the breach.
As discussed above in the sections on third party losses arising from data breaches and the Payment
Card Industry, financial institutions issuing payment cards that are the subject of a breach have
asserted claims and commenced litigation for their losses arising from fraudulent charges on stolen
credit cards, although they also face a number of substantial hurdles in establishing legally
cognizable claims.
Plaintiffs in data breach cases continue to develop new strategies and theories of liability,
particularly in the face of the obstacles presented by standing and damages issues.842 Newer
theories include claims for misrepresentation based on inaccuracies in notice letters as to the breach
or any continuing risks it presents, in communications from call centers established by the breached
entity, and in statements regarding security practices in privacy policies.843 Taking a different
839 See the recent decision in Baum v. Keystone Mercy Health Plan, No. 3876, 1250 EDA 2015 (Phila. C.P. Mar. 25, 2015),
rejecting class certification in a data breach case.
840 For example, dozens of suits were filed against Target Corp. as a result of the data breach reported in late 2013 in which
data related to tens of millions of credit cards was potentially compromised by hackers. See, e.g., Ala. State Empl. Credit Union v.
Target Corp., No. 13-cv-952 (M.D. Ala. filed Dec. 30, 2013); First Choice Fed. Credit Union v. Target Corp., No. 14-146 (W.D. Pa.
filed Jan. 31, 2014); Council v. Target Corp., No. 13-CV-03479, 2014 WL 859326 (D. Colo. Mar. 5, 2014). Most of the lawsuits
against Target were consolidated in a Multi-District Litigation venued in St. Paul, Minnesota, Target’s home state, in In re Target
Corporation Customer Data Breach Litigation, Case No. 02522 (D. Minn.), although the suits brought by banking institutions were
separated into their own track, which also continued in Minnesota.
841 In re TJX Companies Retail Sec. Breach Litig., 584 F. Supp. 2d 395, 397-98 (D. Mass. 2008).
842 Plaintiffs have also had some success in resurrecting older theories. See, e.g., Lone Star Nat. Bank, N.A. v. Heartland
Payment Sys., Inc., 729 F.3d 421 (5th Cir. 2013) (economic loss doctrine did not bar banks’ negligence claim).
843 See, e.g., Steinberg v. CVS Caremark Corp., No. 11-2428, 2012 WL 507807 (E.D. Pa. Feb. 16, 2012) (in which plaintiff
sued under Pennsylvania’s Consumer Protection Act, claiming that defendant materially misrepresented its privacy policies on data
handling; the court dismissed the suit and held that, among other things, the plaintiff did not suffer a cognizable loss and did not allege
justifiable reliance on defendant’s representations). See also Worix v. Medassets, Inc., No. 11 C 8088, 2012 WL 1419257 (N.D. Ill.
Apr. 24, 2012) (dismissing plaintiff’s causes of action under the Stored Communications Act, HIPAA, and the Illinois Personal
-194-
approach, one academic has suggested the potential application of product liability law and product
defect theories to privacy-related claims, particularly in the area of social media.844 While many of
these theories have yet to be fully tried and tested, as breaches continue and this area of law
develops, plaintiffs’ lawyers will undoubtedly search for sympathetic jurisdictions and explore new
theories of recovery, and defendants’ lawyers in turn will push back and assert new defenses.
4. Privacy Practices Lawsuits
A number of cases have recently targeted the business practices of companies in collecting and
using information, and the companies’ disclosures (or lack of disclosures) of their collection and
usage of information about individuals, rather than the failure to protect that information from
breach. These cases involve various data collection practices implicating privacy concerns and
issues of compliance with state and federal statutes and regulations that are only now being tested in
the courts. For instance, plaintiffs are increasingly challenging the data collection practices of various
retailers at points of sale, the recording of telephone conversations between customers and service
personnel, and the collection of various types of information from smartphones and
other mobile computing devices by application developers. While many of these challenges have
been dismissed by the courts for various reasons, others are beginning to gain traction.
The types of claims discussed here are illustrative of the growing trend of privacy-related lawsuits
based on business practices and state statutes. Many of these claims are based on California
statutes, as that jurisdiction tends to be the precursor of new privacy claim trends.
a. Point of Sale Data Collection Practices
Two states have recently been at the forefront of challenges to the collection of data by retailers at
the point of sale – California and Massachusetts. The highest appellate courts of both states have
now weighed in on the fairly common practice by retailers of collecting and often recording ZIP
code information from customers in the process of making merchandise purchases using credit
cards at store premises, finding that certain aspects of ZIP code collection and recording practices
can violate some states’ statutes limiting retailers’ rights to request and record personal information
during a credit card transaction, with some exceptions. However, these states (and particularly California)
are also now the battleground for attempts to expand the application of the limitations on point of
sale collection of customer personal information to other practices and types of information.
In California, the state Supreme Court considered a challenge to a retailer’s practice of collecting
ZIP code information from customers at the point of sale.845 Lower courts that had considered the
Information Act, but allowing discovery as to allegations under the State Consumer Fraud Act in a situation in which a computer hard
drive containing personal information was stolen); In Re Michaels Pin Pad Litigation, 2011 WL 5878373 (N.D. Ill. Nov. 23, 2011)
(also dismissing statutory claims, as well as negligence claim, although allowing consumer claim of breach of implied contract to
proceed). One state court has held that HIPAA does not preempt state law negligence claims against a health care provider that
improperly disclosed a patient’s medical records, further opening the door for state law claims based on unauthorized use or
disclosure of health care records. Byrne v. Avery Ctr. for Obstetrics & Gynecology, P.C., 314 Conn. 433 (Conn. 2014).
844 James Grimmelmann, Privacy as Product Safety, 19 Widener L. Symp. J. 793 (2010).
845 Pineda v. Williams-Sonoma Stores, Inc., 51 Cal. 4th 524, 246 P.3d 612, 120 Cal. Rptr. 3d 531 (2011). See also Dardarian
v. Office Maxx North America, Inc., 875 F. Supp. 2d 1084 (2012) (holding that the Pineda opinion applied retrospectively). See
California Supreme Court’s Zip Code Decision Exposes Retailers to New Litigation Hazard, Statutory Fines, Edwards Wildman
Client Advisory, April 2011, www.edwardswildman.com/newsstand/detail.aspx?news+2302.
-195-
case had found that the collection of ZIP code information did not constitute “personal identification
information” prohibited from collection by the Song-Beverly Credit Card Act of 1971 (“Song-Beverly
Act”),846 which limits the right to request or require a customer to provide personal information,
defined as including addresses, as a condition to accepting a credit card as payment, if the
information is unnecessary to the credit card transaction. The California Supreme Court reversed,
holding that ZIP code information constitutes “personal identification information” under the
Song-Beverly Act. Based in part on the availability of software that allows a retailer to obtain a
customer’s full address by using the name and ZIP code, the court found that collection of ZIP codes
violates the statute if the information is unnecessary to complete the credit card transaction. The
decision thereby opened the door to a plethora of consumer class actions against retailers.847
On the other hand, a California district court refused to certify a class in a suit against Wal-Mart,
where the putative class alleged that Wal-Mart had collected phone numbers at the point of sale in
violation of the Song-Beverly Act.848 The court drew a distinction between cards issued for business
use and those issued for consumer use, and observed that the California Court of Appeal has held that the “purpose for which
the card was issued, rather than the way in which the card was used, was the relevant inquiry in
classifying” a credit card under the Song-Beverly Act, which only applies to natural persons and not
to businesses. Thus, before liability could be established with respect to each class member,
individualized proof regarding whether each class member’s credit card was issued as a consumer
or as a business card would have to be produced.
So far, attempts in California to expand the scope of prohibitions to the collection of such personal
information online have been unsuccessful,849 but the California Supreme Court will soon decide
whether the Song-Beverly Act prohibits collection of personal identification information even after
the credit card is returned to the customer and it would not be objectively reasonable for the
customer to consider the collection to be part of the card transaction.850
The Supreme Judicial Court of Massachusetts has also addressed the practice of retailers of
requesting ZIP code information at the point of sale.851 Although interpreting a Massachusetts
statutory scheme somewhat different from California’s Song-Beverly Act, the court reached a
similar result. In the Massachusetts case, the plaintiff challenged a retailer’s practice of obtaining
846 Cal. Civ. Code § 1747.08.
847 There are exceptions, however, such as where a company requests ZIP code information to prevent fraud, such as during
transactions at gas pumps. See, e.g., Flores v. Chevron, U.S.A., Inc., 217 Cal. App. 4th 337 (2013).
848 Leebove v. Wal-Mart Stores, Inc., No. 13-01024 R(SHx), slip op. 37 (C.D. Cal. Oct. 4, 2013) (Real, J.).
849 Ambers v. Beverages & More, Inc., No. B257487 (Cal. Ct. App., 2d Dist. May 4, 2015) (affirming dismissal of the complaint
and holding that the Song-Beverly Credit Card Act does not apply to an online purchase of merchandise, and that the pertinent
transaction in issue was online even though the customer later picked up his merchandise at a bricks-and-mortar store, although the
holding states that it is “under the circumstances presented”).
850 See Tammie Davis v. Devanlay Retail Group, No. 13-15063 (9th Cir. May 5, 2015) (certifying to the California Supreme
Court the question whether the California Song-Beverly Credit Card Act prohibits a retailer from requesting a customer’s personal
information at the point of sale even after the customer has paid with a credit card, such that the request could not reasonably be
construed as part of the credit card transaction); Capp v. Nordstrom, No. 2:13-cv-0660-MCE-AC (E.D. Cal. Oct. 22, 2013) (denying
defendant retailer’s motion to dismiss an action alleging wrongful collection of email addresses in connection with a credit card
transaction that were then purportedly used to send unsolicited marketing material).
851 Tyler v. Michaels Stores, Inc., 464 Mass. 492, 984 N.E.2d 737 (Mar. 11, 2013). See Massachusetts Supreme Judicial
Court Expands Zip Code Privacy Protection in Tyler v. Michaels Stores, Edwards Wildman Client Advisory, March 2013,
www.edwardswildman.com/newsstand/detail.aspx?news=3620.
-196-
ZIP code information at the point of sale. Under the Massachusetts Unfair and Deceptive Business
Practices Statute, a business entity that accepts a credit card for a transaction may not “write, cause
to be written or require that a credit card holder write personal identification information, not
required by the credit card issuer, on the credit card transaction form.”852 The court found that ZIP
code information is, in fact, personal identification information within the meaning of the statute.
According to the court, even though ZIP code information does not directly identify the consumer,
it is possible to combine ZIP code information with other sources to obtain the customer’s address
and telephone number. Since the court found that the purpose of the statute was not merely to
protect against identity fraud, but served the larger purpose of safeguarding consumer privacy, the
court concluded that ZIP code information fell within the meaning of personal identification
information that the statute was designed to protect. The court also pointed out, however, that the
mere collection of ZIP code information would not be enough to establish a claim under the statute.
Thus, if ZIP code information were merely collected, but not used for any purpose thereafter, a
cause of action for damages would not lie. Instead, the court required a plaintiff to show some harm
flowing from the data collection which is more than just a violation of the statute itself. This, the
court noted, could be shown in circumstances where a merchant uses personal identifying
information to send unwanted solicitations to a consumer, or where a merchant sells personal
identifying information to a third party. Finally, the Massachusetts Supreme Judicial Court made
clear that the Unfair and Deceptive Business Practices Statute would apply to prevent writing
personal identification information on a credit card transaction form whether the “writing” takes
place in a paper or electronic format. In this regard, the court noted that electronic transactions are
now pervasive and the legislature did not intend to limit the reach of the statute to antiquated forms
of business transactions. This decision resulted in dozens of cases being filed in Massachusetts. 853
Such data collection lawsuits have had varied outcomes, often dependent on the particular
practices of the retailer in issue. Some retailers have entered class settlements to resolve such claims,854 while
others have been able to obtain dismissals.855
The District Court for the District of Columbia reached a different result, dismissing a complaint
with prejudice after it found the retailers did not violate District of Columbia law856 by asking for
customers’ ZIP codes at the point of sale.857 The court observed, in interpreting the applicable
statute, that “a ZIP code cannot be considered the ‘address’ of the ‘cardholder’ since a ZIP code, at
best, merely indicates an area in which multiple addresses may be located.”858 Moreover, the
852 Mass. Gen. Laws c. 93, § 105(a).
853 See Recent Upsurge of Massachusetts Class Actions on Merchant Zip Code Collection, October 2, 2013,
http://www.lockelord.com/newsandevents/publications/2013/10/recent-upsurge-of-massachusetts-class-actions-on-merchant-zip-code
854 See, e.g., Tyler v. Michaels Stores, Inc., No. 1:11-cv-10920 (D. Mass. Jan. 17, 2014).
855 See, e.g., Alberts v. Payless, Civil Action No. 13-12262-GAO (Sept. 29, 2014) (dismissing the action for failure to meet the
requirement in Massachusetts General Laws ch. 93A, § 9(3) of a presuit written demand). See also Newsham, Jack,
Mass. retailers ask for ZIP code, and lawsuits follow, The Boston Globe, January 19, 2015,
https://www.bostonglobe.com/business/2015/01/19/asking-for-zip-code-getting-lawsuit… (stating that “In the past two years, at least
25 retailers have been sued for requesting ZIP code information from Massachusetts customers. Most of the lawsuits have been settled
or withdrawn, but the practice of asking customers for their postal codes…has cost retailers millions of dollars in settlements and
attorneys fees.”).
856 Plaintiffs alleged violations of the D.C. Use of Consumer Identification Information Act (“CII Act”), D.C. Code §§ 47-
3151, et seq., and the D.C. Consumer Protection Procedures Act (“DCCPPA”), D.C. Code §§ 28-3901 et seq.
857 Hancock v. Urban Outfitters, Inc., No. 13-939, slip op. at 13 (D.D.C. Mar. 14, 2014). On appeal.
858 Id. at 7.
-197-
defendants recorded ZIP codes in the point of sale register, rather than into the credit card swipe
machine. Because “the defendants took steps specially designed to adhere to the law by affirmatively
separating the ZIP code information from the credit card information,” the plaintiffs failed to plead a
requisite element of a violation of the statute.859 In reaching this decision, the court specifically
noted that similar cases in Massachusetts and California serve to illustrate the “restricted nature” of the
statute it was considering, as compared to the statutes in issue in those states. That holding was
appealed.860
As noted above, ZIP codes are not the only Personal Information whose collection at the point of
sale is under scrutiny these days. Given the potentially lucrative nature of such class actions should
they be successful, it remains to be seen whether the plaintiffs’ bar will continue its attempts to
expand the theory of alleged wrongful collection of personal information under the statutes in issue
in the California and Massachusetts decisions, both to requests for personal information at other
times in the point of sale interaction between retailer and customer and to types of information
other than ZIP codes, and whether those attempts will be successful. As demonstrated by the
decisions discussed above, results so far have been mixed, and the final outcome is not yet determined.
b. Call Recording Practices
The call recording practices of merchants have also been challenged by plaintiffs as improperly
collecting personal information, particularly under California law.861 In recent years, there has been
a rash of cases alleging that a company’s recording of calls with its customers, usually alleged to be
without notice or consent, violates the California Invasion of Privacy Act (“CIPA”),862 among other
causes of action. The inquiry generally focuses not only on the content of the information, but on
whether the parties had an “objectively reasonable expectation that the conversation is not being
overheard or recorded.”863
Typically, plaintiffs assert such claims and seek class certification, based on allegations that a
company secretly recorded or monitored conversations with customers transacting business by
telephone, and the contention that doing so without customer consent violates CIPA. Such claims
include allegations that, for example, in calls on “consumer-facing” toll-free lines consumers
859 Id. at 9.
860 Id. at 10.
861 See, e.g., Faulkner v. ADT Security Services, Inc., 706 F.3d 1017 (9th Cir. 2012).
862 California Penal Code Section 630, et seq.
863 See Faulkner, 706 F.3d at 1019 (affirming the dismissal of the complaint in an action removed to federal court, but also
remanding for the district court to consider allowing the plaintiff to amend his complaint to make the requisite allegations of
circumstances and particulars of conversation to support that an objectively reasonable expectation of confidentiality would have
attended a communication such as the one in issue). In Young v. Hilton Worldwide, Inc., No. 12-56189, 2014 WL 1087777 (9th Cir.
Mar. 20, 2014), a divided Ninth Circuit recently reinstated a class action against Hilton, in which plaintiffs allege the hotel chain
violated the California Invasion of Privacy Act when it allegedly recorded calls without consent. In doing so, the Court observed that
the order below dismissing the case “purported to do so on grounds that are applicable to § 632 only—namely, because the complaint
failed to allege that the recorded communications were confidential and subject to a reasonable expectation of privacy.” Id. at *1.
The court noted that “[t]he California Supreme Court has unequivocally held that no such requirement applies to § 632.7, and the
district court's failure to recognize this was reversible error.” Id. (citing Flanagan v. Flanagan, 27 Cal.4th 766, 776 (2002)
(explaining that § 632.7's “prohibition applies to all communications, not just confidential communications”)). On remand, however,
the district court granted Hilton’s motion for judgment on the pleadings based on grounds specifically applicable to § 632.7, i.e., that
§ 632.7 restricts “third party interception of cellular and cordless telephonic radio transmissions,” and does “not restrict the parties to
a call from recording those calls.” Young v. Hilton Worldwide, Inc., No. 2:12-cv-01788 (C.D. Cal. Jul. 11, 2014).
-198-
revealed personal identification information or confidential financial information, and that neither
the operator nor others on the line from the company informed customers that telephone calls were
being recorded by monitoring software. Such suits generally seek statutory damages of, for
example, $5,000 for each violation, plus costs and attorney’s fees, as well as an injunction against
further violations.864
Courts have recently denied class certification in two CIPA cases on grounds that each putative
class member’s expectations of confidentiality would depend on individualized inquiries, such as
that person’s experience with the defendant and whether they received any notice that the calls
would be monitored or recorded.865 These rulings cast doubt on the availability of CIPA as a basis
for successful class action lawsuits going forward, although individual claims for CIPA violations
may remain viable depending on the particular facts alleged.
c. Data Collection Practices by Application Developers
In another growing trend, consumers are challenging the practices of certain software applications
(“apps”) in collecting and recording information about mobile device users.
Often, the targets of such claims are large consumer electronics manufacturers, social media sites
and major online retailers that allegedly failed to prevent apps sold through their services
from uploading consumer information from plaintiffs’ mobile devices without their consent.866
However, app developers themselves are also becoming targets of these claims.867
As demonstrated by cases cited above in discussions of standing and damages, in such lawsuits,
plaintiffs generally seek to challenge apps that operate as “tracking software,” recording details
about a consumer’s use of their mobile devices. Plaintiffs have also alleged that certain apps access
Personal Information on a user’s mobile device, such as contact address books, and upload that
information to the developer without the user’s knowledge or consent. Some plaintiffs allege that
certain apps install software on their mobile devices that record a user’s interactions with social
864 See, e.g., Ades v. Omni Hotels Management Corp., No. 13-cv-02468 (C.D. Cal. filed Apr. 8, 2013) (class cert. granted Sept.
__, 2014).
865 Hataishi v. First Am. Home Buyers Protection Corp., 223 Cal. App. 4th 1454 (2014); Kight v. CashCall, Inc., 231 Cal.
App. 4th 112 (2014).
866 See, e.g., Pirozzi v. Apple, Inc., 2012 WL 6652453 (N.D. Cal. Dec. 20, 2012). In a win for Apple and other manufacturers,
a court held that mobile devices are not facilities through which electronic communication service is provided under the Stored
Communications Act (SCA) and that location data is not “electronic storage” under the SCA. See In re iPhone Application Litig., 844
F. Supp. 2d 1040 (N.D. Cal. 2012). There, a putative class of iPhone and iPad users brought an action against Apple, alleging Apple
and others unlawfully allowed third party apps to collect and use personal information without user consent or knowledge. In the
same decision, the court also dismissed the putative class’s claims for invasion of privacy and trespass, as well as statutory violations
of the Wiretap Act and Computer Fraud and Abuse Act (CFAA). It allowed the state claims under the Consumer Legal Remedies
Act (CLRA) and the Unfair Competition Law (UCL) to remain, but later dismissed those claims, finding plaintiffs had failed to
demonstrate they had relied on any alleged misrepresentations made by Apple. In re iPhone Application Litig., No. 11-md-02250-
LHK, slip op. No. 294, at 13 (N.D. Cal. Nov. 25, 2013). In doing so, the court found that a genuine issue of material fact existed as to
whether plaintiffs’ claims that they overpaid for their iDevices and that Apple’s alleged actions affected battery life, storage space and
bandwidth constituted an injury, but that as a matter of law, plaintiffs could not establish that these alleged injuries were causally
linked to Apple’s alleged misrepresentations. Id. at 11-13. Google faced similar claims in another case. In re Google Android
Consumer Privacy Litig., No. 11-md-02264, slip op. 78 (Mar. 10, 2014). There, Plaintiffs alleged that apps collected personal data
and shared the data with Google without their knowledge. The court reaffirmed its previous holding that plaintiffs’ allegations of
adverse effects on battery charge and phone performance were sufficient to establish Article III standing. Id. at 5. The Court
concluded, however, that these alleged injuries were insufficient to state a claim under the Computer Fraud and Abuse Act and
partially dismissed claims under the California Unfair Competition Law. Id. at 7-10.
867 See, e.g., Hernandez v. Path, Inc., No. 12-cv-01515, 2012 WL 5194120 (N.D. Cal. Oct. 19, 2012).
-199-
networking sites. Others allege surreptitious tracking of users by tagging digital images and video
with GPS location coordinates, uploading photographs taken by the user on his or her mobile
device, or using a mobile device to track a user’s location. This information, according to the
plaintiffs’ allegations, is then transmitted to the developer by the app and may be stored on the
developer’s servers without encryption, creating a further security risk.
While some of these cases have been dismissed on the ground that plaintiffs failed to show an
economic harm resulting from the practice,868 courts in others have permitted creative allegations of
unjust enrichment and similar claims to proceed, as demonstrated by one putative class
action.869 In that case, a plaintiff successfully argued that it would cost as much as $12,500 to
remove the tracking software code installed by the app from his mobile device. Accepting those
allegations as true for purposes of resolving the app developer’s motion to dismiss in that case, the
court ruled that such an economic harm, if ultimately proven to be true, would be sufficient to state
a privacy claim against the app developer. (See Section VII.1., Article III. Standing, and Section
VII.2. Cognizable Injuries, for additional case law).
Such lawsuits allege that the app developers violated a host of federal and state statutes,
including federal wiretap statutes, computer crime statutes and state privacy statutes, and they also
assert common law claims. While many causes of action are dismissed, and even those that survive
early dismissal may not survive later summary judgment, as with many of these types of suits they
present a serious financial cost to defend. Moreover, they can generate regulatory scrutiny, with
the risks and costs attendant to regulatory inquiries.
d. Suits Alleging Violations of California’s “Shine the Light” Law
As noted in Section II above, California’s Shine the Light Law870 requires businesses to disclose, at
the request of a customer, how the business has shared consumer information with third parties. To
comply with the law, a business must designate certain contact information to enable consumers to
make requests under the statute. Alternatively, businesses may comply with the law by providing
the consumer with the right to prevent disclosure of his or her personal information to third parties.
A business that provides this alternative need not disclose how it has shared information.
Plaintiffs in several class action lawsuits have recently attempted to establish claims against
businesses alleged to have violated the law, but so far without substantial success.871 In these cases, plaintiffs
principally allege that a business violated the law by failing to provide the required contact
information to enable consumers to make requests under the statute. In order to establish a
sufficient injury to satisfy standing requirements, plaintiffs have relied on two theories: first, that
they suffered an economic injury as a result of the violation because the sale of personal
information by the business to third parties reduces the market value of that information to the
868 See, e.g., Pirozzi v. Apple, Inc., 913 F. Sup. 2d 840 (N.D. Cal. Dec. 20, 2012).
869 Hernandez v. Path, Inc., No. 12-cv-01515, 2012 WL 5194120 (N.D. Cal. Oct. 19, 2012).
870 Section 1798.83 of the California Civil Code. See Oft-ignored California Law Spawns New Batch of Class Action –
Companies Dealing with California Consumer Data Need to Audit Practices and Policies, Edwards Wildman Client Advisory,
January 2012, www.edwardswildman.com/newsstand/detail.aspx?=2743.
871 See, e.g., King v. Conde Nast Publications, No. 12-cv-0719, 2012 WL 3186578 (C.D. Cal. Aug. 3, 2012), aff’d, No. 57209
(9th Cir. Feb. 18, 2014); Miller v. Hearst Communications, Inc., 2012 WL 3205241 (C.D. Cal. Aug. 3, 2012); aff’d, 12-57231 (9th
Cir. Feb. 18, 2014); Murray v. Time, Inc., No. C-12-00432, 2012 WL 3634387 (N.D. Cal. Aug. 24, 2012)
-200-
plaintiff;872 and second, that they suffered an “informational injury” because, by failing to provide
the necessary contact information, the business deprived the plaintiffs of information to which they
were statutorily entitled.873
These cases, like many in the privacy arena, present a challenge for plaintiffs in establishing
standing and damages, particularly as California’s Shine the Light Law does not prevent businesses
from selling or otherwise sharing customer information, but rather requires businesses to disclose
how information was shared with third parties. Plaintiffs also face an uphill challenge in their
“informational injury” theory. In those claims, plaintiffs must show that they actually made a
request, or would have made a request, for the information provided by the Shine the Light Law if
the business had provided the required contact information. Plaintiffs must allege more than a mere
procedural injury.874
e. Collection of Data Regarding Video Viewing Selections
Plaintiffs have also challenged the data collection practices of online providers of video content. One
suit, which survived a motion to dismiss but did not completely survive summary judgment, illustrates the
issues presented by both the claims and the defenses. Plaintiffs alleged that a video content
provider installed tracking software on the computers of users who visited the provider’s website.875
The software would then track the user’s video selections and transmit that information to third
parties without obtaining the user’s consent. The software would also track a user’s web-browsing
history, even when they were not logged into the provider’s website, and transmit that history to
third parties. According to the complaint, the third parties included social networking sites and
online advertisers. The allegations of the complaint were found to state a claim under the Video
Privacy Protection Act (“VPPA”), which protects the personal information of individuals who rent
video materials. Under the VPPA, a “video tape service provider” may not disclose personally
identifiable information to any third party. Personally identifiable information, for purposes of the
statute, includes the viewing history of those who request or obtain video materials or services.876
In allowing the suit to proceed, the court found that the online content provider qualified as a “video
tape service provider” under the VPPA, even though the content provider did not rent physical
video tapes. According to the court, the statute is not limited by the form in which the video content
is disseminated. Rather, the VPPA is designed to apply to future changes in technology, such as
streaming online video content. The court also ruled that plaintiffs qualified as “subscribers” under
the statute, even though the plaintiffs did not allege that they rented or purchased content from the
service provider. A magistrate judge later denied defendant Hulu LLC’s motion for summary
judgment based on lack of injury, holding that, under the plain language of the VPPA, plaintiffs
must only show wrongful disclosure, and not actual injury, to recover damages.877 More recently,
however, the court partially granted summary judgment to Hulu, finding that disclosures to
comScore, Inc., “a metrics company that analyzes Hulu’s viewing audience and provides reports
872 See, e.g., Boorstein v. Men’s Journal, LLC, No. 12-cv-771, 2012 WL 2152815 (C.D. Cal. Jun. 14, 2012).
873 Id.
874 Id. at 3.
875 In re Hulu Privacy Litig., No. C11-03764, 2012 U.S. Dist. LEXIS 112916, 2012 WL 3282960 (N.D. Cal. Aug. 10, 2012).
876 18 U.S.C. § 2710(b)(1).
877 2013 WL 6773794 (N.D. Cal. Dec. 20, 2013).
-201-
that Hulu uses to get media content and sell advertising,” were anonymous disclosures that “hypothetically
could have been linked to video watching,” which was “not enough to establish a VPPA
violation.”878 The court reached this conclusion even though comScore could have used the IDs
provided to access the user’s profile pages in an attempt to identify the user, because there was no
evidence that comScore attempted to do so. In contrast, the court denied Hulu’s motion with
respect to disclosure to Facebook, because there were genuine issues of fact as to whether the data,
including cookies and in some cases IP addresses and Facebook IDs, could tie a video to a user of
Facebook, which would be a prohibited disclosure under VPPA. Hulu’s competitor, Netflix, may
have spared itself a great headache when it opted to settle similar claims that it violated the VPPA
when it allegedly retained and disclosed its customers’ viewing habits.879
f. TCPA
Plaintiffs have also brought suits for violations of the Telephone Consumer Protection Act of 1991
(TCPA), which is designed to restrict unsolicited telephone, fax and text message solicitations.880
(See Section III, U.S. Regulatory and Statutory Framework, subsection on Telephone Consumer
Protection Act, above, for further details of the statute and case law).
There are some practical limits to what will be considered an actionable claim even in this heavily
litigated area known for its plethora of class actions.881 In the Emanuel case, for example,
the plaintiff had initiated contact with the defendant by requesting that a text message he had written
appear on a scoreboard in a basketball arena. The plaintiff then received a confirmatory text in
response, but alleged that he did not expressly consent to receive the confirmatory text message.
The court dismissed the case, observing that, although the owners of the arena “allegedly failed to
warn Plaintiff that he might receive a response, a ‘common sense’ reading of the TCPA indicates
that, by sending his original message, Plaintiff expressly consented to receiving a confirmatory
text….”882
Several other issues continue to provide fodder for TCPA litigants and disagreement among federal
courts. The first is the extent of a third party’s liability under the TCPA for calls made by another
party. Even where a defendant itself did not place a call, it may be vicariously liable for calls placed
878 In re Hulu Privacy Litig., No. 11-03764 LB, 2014 WL 1724344 (N.D. Cal. Apr. 28, 2014).
879 Settlement Agreement at p. 12, In re Netflix Privacy litigation, No. 5:11-cv-379, Dkt. No. 76-1 (N.D. Cal., May 25, 2012).
880 See, e.g., Sterling v. Mercantile Adjustment Bureau, LLC., 11-CV-639, 2014 WL 1224604 (W.D.N.Y. Mar. 25, 2014)
(adopting report and recommendation that calls made by automatic telephone dialing system were made in violation of TCPA);
Satterfield v. Simon & Schuster, Inc., 569 F.3d 946 (9th Cir. 2009) (text messaging was “call” covered under TCPA).
881 Emanuel v. Los Angeles Lakers, Inc., CV 12-9936, 2013 WL 1719035 (C.D. Cal. Apr. 18, 2013)
882 Id. at *3.
-202-
and faxes sent on its behalf, such as those made by third party marketers.883 Courts have held that
vicarious liability may also be shown under principles of apparent authority and ratification.884
A second emerging issue is whether multiple violations of the TCPA contained within a single
communication can provide for multiple recoveries of the statutory damage amount.885 In Lary, the
Eleventh Circuit held that the sender of a fax violated the Act twice: once by sending it to an
“emergency telephone line,” and again because the transmission was an unsolicited advertising fax.
Therefore, the plaintiff was entitled to the statutory damage amount of $500 for each violation,
even though both violations occurred in connection with the same fax.886
A third issue is what constitutes consent to receive a communication otherwise prohibited by the
TCPA, and what constitutes a revocation of that consent.887
Litigation involving these and other issues relating to the interpretation of the TCPA and the
regulations promulgated by the FCC to implement it continues to evolve in light of changing
communications technology and the increasingly aggressive interpretations urged by the TCPA
plaintiffs’ bar and the FCC.
g. Stored Communications Act
Unlike some other federal and state statutes, the federal Stored Communications Act888 (“SCA”)
does not require proof of actual damages in order to establish standing. Under the SCA, an Internet
Service Provider may be liable if it “knowingly” disclosed personal information to a third party.
In an example of how the SCA and its limitations can come into play in privacy-related actions, a
court held that Facebook posts were protected by the SCA in an action based on disclosure
through a Facebook account.889 There, a hospital employee set her Facebook account privacy
settings such that her Facebook “friends” could view her posts. The employee then posted a
statement on her “wall” criticizing first responders to a shooting in Washington, DC. Notably,
those first responders were not employees of the hospital at which she worked. Nevertheless, her
employer temporarily suspended her, claiming the post exhibited a “deliberate disregard for patient
safety,” after a co-worker, who was a “Facebook friend,” printed the page and showed it to
883 Gomez v. Campbell-Ewald Co., 768 F.3d 871, 877-78 (9th Cir. 2014) (TCPA liability extends to third parties under
common law agency principles, reversing summary judgment granted by district court in favor of defendant that engaged third party
marketing service); see In re Joint Petition Filed by Dish Network, LLC, 28 FCC Rcd. 6574 (2013); Palm Beach Golf Ctr. v.
Sarris, 781 F.3d 1245, 1256-58 (11th Cir. 2015).
884 Thomas v. Taco Bell Corp., 582 Fed.Appx. 678, 679-80 (9th Cir. 2014) (noting that the FCC ruled in Dish Network, supra,
n. 75, that “it is not appropriate to limit vicarious liability to the circumstances of classical agency (involving actual seller, or right to
control, of the telemarketing call) … Principles of apparent authority and ratification may also provide a basis for vicarious seller
liability for violations of section 227(b)” (citations omitted)).
885 Lary v. Trinity Physician Fin’l & Ins. Servs. 780 F.3d 1101, 1105-06 (11th Cir. 2015).
886 Id.
887 Osorio v. State Farm Bank, F.S.B., 746 F.3d 1242, 1254-56 (11th Cir. 2014) (recipients, “in the absence of any contractual
restriction to the contrary, were free to orally revoke any consent previously given to State Farm” to call their number); Gager v.
Dell Fin’l Servs. LLC, 727 F.3d 265, 268-72 (3d Cir. 2013) (“[T]he TCPA provides consumers with the right to revoke their prior
express consent to be contacted on cellular phones by autodialing systems.”).
888 18 U.S.C. §§ 2701–2712.
889 Ehling v. Monmouth-Ocean Hosp. Serv. Corp., 961 F. Supp. 2d 659 (D.N.J. 2013).
-203-
hospital managers. The employee sued for invasion of privacy under the SCA. The court dismissed the
claim, holding that, although the posts were covered by the SCA, the employee had voluntarily
given them to her co-worker Facebook friend, who in turn voluntarily gave the post to hospital
management. The court observed that “[t]his may have been a violation of trust, but it was not a
violation of privacy.”890
In another SCA class action, a court granted LinkedIn’s motion to dismiss claims based on
allegations that the networking site violated the SCA by collecting contacts from its users’ external
email accounts.891 Another group of plaintiffs has alleged that computer manufacturer Lenovo
violated the SCA by selling computers preinstalled with software produced by Superfish, Inc.,
which monitors user activity through image-based searches and other functions.892
VIII. Mitigation of Exposures
Much of the discussion in studies and among professionals and insurers addressing privacy-related
claims has turned to scrutinizing past claims for insights regarding practices and procedures that can
be used to help companies reduce the likelihood of incidents and claims, and the resultant costs and
damages.
1. Data Breach Exposures
a. Compliance with Applicable Data Security Requirements
Many of the state and federal data security statutes and regulations discussed above in Section III
are designed to reduce the occurrence of data breaches involving the Personal Information subject
to such restrictions. As such, ensuring compliance with applicable data security requirements
serves to significantly reduce a company’s data breach exposure, both by reducing the likelihood
that a breach will occur, and by limiting the potential consequences if the company’s security is
breached.893 Failure to comply with applicable state or federal data security statutes and
regulations, or with industry-established security requirements such as PCI-DSS, may be used by
consumers and other claimants to demonstrate that the entity whose data was breached is
responsible for the consequences of a breach. Compliance with applicable statutes, regulations, and
industry standards is one of the strongest defenses a breached entity has against claims based on
negligence.
b. Instituting Reasonable Security Procedures
One study of data breaches reported that 78% of breaches were low or very low in difficulty and
none were highly difficult; 76% of network intrusions exploited weak or stolen credentials.894 A
890 Id. at 674.
891 Perkins v. LinkedIn Corp., No. 13-CV 04303, 2014 WL 2751053 (N.D. Cal. Jun. 12, 2014) (granting motion to dismiss,
noting that LinkedIn users consented to the collection of email addresses, therefore collection was authorized).
892 Hunter v. Lenovo (United States) Inc., No. 5:15-cv-00819, (N.D. Cal., complaint filed Feb. 23, 2015), motion to
consolidate pending, In re Lenovo Adware Litigation, MDL No. 2624.
893 As discussed above, these potential consequences include regulatory and enforcement actions, and third party claims, as
well as loss of customer confidence and a resulting loss of business.
894 Verizon, 2013 Data Breach Investigations Report, supra.
-204-
recent study found that privilege abuse continues to be a top characteristic of internal actor
breaches.895 While data security regulations require companies to institute security procedures
designed to reduce the risk of data breach, security plans and procedures must be implemented to be
effective. As discussed below, training employees to adhere to privacy and data security policies
and procedures is critical to the implementation of such policies and procedures, and to the
reduction of data breach exposures.
c. Limiting Access to Personal Information
Studies show that the frequency, scope and cost of breaches can be reduced by limiting the
following: (i) access to Personal Information and other types of confidential information, to only
those with a need for that access; (ii) the amount of information collected and stored; and (iii) the
length of time Personal Information is retained, in each case to only what is necessary. These limitations
are the focus of both data security regulations and risk management protocols.
d. Training/Awareness
Human error (by employees, suppliers or other third parties) has been the reported cause of a large
number of breaches. Employee negligence or maliciousness persists as the root cause of many data
breaches, ranging from loss of laptops or other devices to mishandling of data. Human factors,
including insufficiently robust passwords, poor password management, and computers left
unattended or viewable in public venues, contribute to many breaches and can be mitigated
with training. Many breaches still are attributed to participation by insiders.896
Data breaches often occur when companies and their employees fail to consider the risk of data
breaches from routine conduct, or fail to comply with applicable data security requirements.
Resultant claims arise from lack of awareness by companies and their employees of applicable
governmental data security requirements, and their own non-compliance.
Relatively simple measures that can reduce the risk of data breach, many of which may be required
by applicable data security statutes and regulations, include the following:
Educating company executives as to applicable legal requirements governing data
security and the importance of establishing a team of appropriate internal personnel
and external resources to: (i) identify the type and location of protected Personal
Information collected, used, stored and transmitted by the company; (ii) assess the
risks related to such information; and (iii) draft and propose appropriate and
compliant procedures for security;
Ensuring that paper records with Personal Information and other confidential
information are properly disposed of in compliance with applicable requirements and
data security best practices;
895 Verizon, 2015 Data Breach Investigations Report, supra at p. 46. This report noted convenience was one of the top two
motivators; the second was financial gain.
896 Verizon, 2013 Data Breach Investigations Report, supra; Ponemon Institute, LLC, The Human Factor in Data Protection,
Jan. 2012.
-205-
Terminating an employee’s access to computer terminals and company databases
onsite and offsite immediately upon termination of the employee’s employment;
Instituting robust password requirements for access to databases with Personal
Information and other confidential information, and prohibiting password sharing;
Instituting robust password requirements for laptops and PDAs, which are
susceptible to being lost or stolen, and reminding employees not to store the
password with the laptop or PDA;
Encrypting portable devices, and encrypting electronic documents with sensitive
information before transmitting them (an illustrative sketch follows at the end of this
subsection);
Considering data security as an important factor in vendor selection and vendor
management, and requiring data security and privacy measures in vendor contracts.
Implementing the recommendations above, followed by regular updates, evaluation and employee
training, can dramatically reduce data breach exposures at relatively low cost to companies.
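As one illustration of the encryption recommendation above (and not a description of any particular product or required approach), the following Python sketch encrypts a document before transmission using the open-source "cryptography" package's Fernet recipe. The file names and the generation of the key within the script are assumptions made for the example; in practice, keys should be generated and stored through secure key management separate from the data.

from cryptography.fernet import Fernet

def encrypt_file(plaintext_path: str, ciphertext_path: str, key: bytes) -> None:
    # Encrypt the file's contents so that only holders of the key can read them.
    fernet = Fernet(key)
    with open(plaintext_path, "rb") as f:
        data = f.read()
    with open(ciphertext_path, "wb") as f:
        f.write(fernet.encrypt(data))

# Hypothetical usage: generate a key, store it separately from the data,
# and encrypt the document before it is emailed or uploaded.
key = Fernet.generate_key()
encrypt_file("customer_list.csv", "customer_list.csv.enc", key)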
2. Risks of Collecting/Using Personal Information Improperly
As noted in the discussions above, litigation and regulatory investigations increasingly focus on
the business practices of companies in collecting and using information about consumers, and on
contentions that such practices were inadequately disclosed to consumers. Information about
customers and prospective customers can be the most important asset of a company, but it presents
risks that must be taken into account as well. Risks to be considered and balanced include:
Compliance risks. Organizations are increasingly subject to statutes and regulations –
state, federal and international – regarding the use of information, and potentially face
litigation or regulatory sanctions and consent decrees when they are not in compliance.
Moreover, companies increasingly have contractual commitments that include privacy
obligations and compliance with industry standards.
Reputational risks. In addition to legal enforcement, organizations also face reputational
harm when they are subject to legal or regulatory proceedings alleging improper practices or
inadequate security regarding consumer information, or that they failed to comply with their
own announced privacy policies. An organization’s most important assets are usually its
brand and public trust.
Operational risks. While privacy programs are important to protect consumer information,
to be effective they need to be administratively efficient and cost-effective, incorporating the
needs of the business as well as the needs of the consumer. Otherwise, the organization may
be exposed to unwarranted risk, or the cost of operational inefficiency or dysfunction.
Investment risks. The organization must be able to receive an appropriate return on its
investments in information, information technology and information processing programs, in
light of evolving privacy regulations, enforcement and expectations.
Compliance programs need to incorporate these risks and balance the needs of a company with
those of its consumers and business partners. A growing number of companies have a Chief
Privacy Officer, and particularly large ones may have a data privacy and security committee to
oversee the increasingly complicated and burdensome challenges of compliance in this area,
including educating and training company employees and vendors on their obligations,
implementing privacy by design, and developing a culture that fosters awareness of and concern for
data privacy and security.897
Over time, Personal Information management has become vital to a wide range of organizations. It
is now increasingly common for companies to develop an information management program, in
pursuit of a holistic approach to the risks and benefits of processing Personal Information.
Common aspects of such programs include maintaining preference lists for direct marketing,
developing appropriate security for human resources data, executing proper contracts to authorize
data flows, particularly when data is transferred from one country to another, and publishing
online privacy notices.
In creating an information management program, each company should have an understanding of
what data it collects, stores, processes, uses and transfers, and why, as well as an understanding of
the risks associated with those practices. Executives overseeing data privacy and security can then
help their organizations develop data privacy policies and practices in an organized way that meets
company goals and preserves flexibility, while taking precautions against foreseeable risks. A
challenge in doing so is to understand and anticipate future changes both in the regulatory
environment and in the company’s business needs.
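As a simple, assumption-laden illustration (the field names and sample values below are hypothetical and are not drawn from any particular company's program), a data inventory entry of the kind described above might be recorded in a structure such as the following Python sketch.

from dataclasses import dataclass, field
from typing import List

@dataclass
class DataInventoryRecord:
    data_category: str                 # e.g., "customer contact details"
    source: str                        # where the data is collected
    storage_locations: List[str]       # systems or vendors holding the data
    purposes: List[str]                # business reasons for processing
    cross_border_transfers: List[str] = field(default_factory=list)
    retention_period_days: int = 365   # assumed period, for illustration only
    identified_risks: List[str] = field(default_factory=list)

# Hypothetical entry for one category of Personal Information.
example = DataInventoryRecord(
    data_category="customer contact details",
    source="online order form",
    storage_locations=["CRM system", "email marketing vendor"],
    purposes=["order fulfillment", "direct marketing (subject to preference lists)"],
    cross_border_transfers=["United States to United Kingdom support center"],
    retention_period_days=730,
    identified_risks=["vendor breach", "marketing without adequate disclosure"],
)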
3. Contract and Vendor Management
Many organizations elect to outsource information processing to an outside vendor or plan to sell
information collected by the company to a third party. As further outlined below, specific
precautions must be taken if a company plans to share personal data with a third-party data
processor.
a. Vendor Contracts
A company is responsible for the actions of vendors with which it contracts to collect, analyze,
catalog, or otherwise provide data management services on the company’s behalf. The commitments
made in a company’s privacy policy also extend to third parties when they are working with the
organization’s data. To help ensure the responsible handling and security of data once it is in the
hands of a contractor or vendor, precautions to consider incorporating in written contracts include
the following:
Confidentiality provisions.
No further use of shared information.
Identification of any use of subcontractors, and subcontract provisions addressing information
privacy and security.
Provisions for disclosure of a breach and notification obligations.
Information security provisions.
897 Privacy by design or “PbD,” a concept developed by Ontario Canada Information & Privacy Commissioner Dr. Ann
Cavoukian, calls for considering ways to protect consumer privacy during the product development process, rather than to address it
as an afterthought. For more information visit www.privacybydesign.ca. The FTC and regulators in the E.U. have approved of Dr.
Cavoukian’s PbD principles and recommended their adoption by industry.
b. Vendor Due Diligence
A procuring organization may have specific standards and processes for vendor selection. The
following factors should be among those considered when selecting vendors:
Reputation.
Financial condition and insurance.
Information security controls.
Point of transfer of information.
Disposal of information.
Vendor employee training and user awareness.
Vendor incident response.
Consideration of these factors in vendor selection is an important part of any company’s efforts to
reduce its exposures and mitigate its risk of loss from privacy and security risks involving Personal
Information.
Conclusion
These are difficult times for information management. Companies in all lines of business must
address information and systems security, and must evaluate and ensure their compliance with the
growing global network of regulatory and legal requirements governing the collection, usage,
disclosure and security of data.
Data breaches of all kinds, and resultant direct and indirect costs, continue to be a growing exposure
in our society. Concomitant with that exposure is the increase in state and federal laws and
regulations in the U.S., and the increase in regulation in other countries, imposing data security and
breach response requirements, particularly when that data includes information about individuals.
Moreover, confidential information of all kinds is increasingly subject to cyber attacks, with
resultant business losses to the targeted company and its clients. In addition, new technologies,
social media practices, and online behavior tracking practices are raising new privacy issues, with
increasing regulatory scrutiny, legislation, and litigation, and resulting exposures to assess and
manage.
Companies in all lines of business are subject to these exposures, and to the increasing regulatory
and other legal requirements designed to protect the privacy of individuals and the security of
information and critical infrastructure.
Acknowledgments:
The Locke Lord LLP Privacy & Cybersecurity Practice Group acknowledges with appreciation the
invaluable assistance provided by many of the firm’s Partners, Counsel, Associates and Trainees in
the preparation of this June 2015 edition of our White Paper. Our thanks to Natasha Ahmed, David
Anderson, Ted Augustinos, Karen Booth, Bart Huffman, Aaron Igdalsky, Laurie Kamaiko, Sean
Kilian, John Kloecker, Daryl Lapp, Molly McGinnis Stine, Robert Mecrate-Butcher, Alan
Meneghetti, Matthew Murphy, Charles Salmon, Tom Smedinghoff, Brittany Summers, Phillipa
Townley, Tammy Woffenden, Yasemin Yanar, and Vita Zeltser.