Developments in eCommerce, Privacy, Internet Advertising, Marketing, and Information Services Law and Policy
In this issue, we highlight inquiries sent to education technology companies from three senators on the collection of student data. We also detail settlements between the Federal Trade Commission (FTC) and multiple technology companies regarding alleged Children's Online Privacy Protection Act (COPPA) violations and false claims related to Privacy Shield participation. We look at the Ninth Circuit's holding regarding the Computer Fraud and Abuse Act (CFAA), and we discuss privacy legislation that passed in New York and Illinois. Lastly, we report on the passing of European Data Protection Supervisor (EDPS) Giovanni Buttarelli and cover former EDPS Assistant Supervisor Wojciech Wiewiórowski's new role as acting EDPS.
Heard on the Hill
Senators Write to EdTech and Data Collection Companies About Use of Student Data
On August 12, 2019, Democratic Senators Dick Durbin (D-IL), Ed Markey (D-MA), and Richard Blumenthal (D-CT) sent letters to dozens of education technology (EdTech) companies and data collection firms, asking questions about their practices and voicing concerns over the amount of student data that is being collected and how it is being used. The senators cited a Federal Bureau of Investigation Public Service Announcement issued last year and the hacking of Slate, a college admissions database, as major sources of concern. Last year's Public Service Announcement warned that the malicious use of student data could result in social engineering, bullying, and identity theft.
The senators sent different letters to EdTech companies and data collection companies. In the letter to EdTech companies, the senators requested that the companies explain how long student data is being held, how it is being deleted, whether students and parents can opt out of data collection, whether data is used or sold for advertising, and whether hackers have accessed any student data. In the letter to the data collection companies, the senators pointed to a Fordham University Law School report that found some firms were selling information that included grade point average, ethnicity, religion, and wealth. Both letters instructed the firms to respond within three weeks with details about how they collect information, the categories of data they gather, the disclosures made to third parties, and any known data breaches.
Around the Agencies and Executive Branch
Technology Companies Settle With FTC Regarding Alleged COPPA Violations
On September 4, 2019, the Federal Trade Commission (FTC) announced a settlement that included $170 million in penalties with a technology company and one of its subsidiaries (referred to collectively as the "companies") to resolve alleged violations of the Children's Online Privacy Protection Act (COPPA). In the complaint, the FTC and the New York Attorney General alleged that the companies had actual knowledge that they were collecting information from users of "child-directed channels" and that the defendants failed to provide parental notice or obtain verifiable parental consent as required under COPPA. The settlement represents the largest monetary penalty in the history of COPPA enforcement. The complaint also expressed the FTC's view that commercial operators of child-directed channels on YouTube are "operators" subject to COPPA in connection with this activity. Following the announcement of the settlement, the FTC warned that it would be conducting a sweep of child-directed channels.
As previously reported in the Download, the FTC published a Request for Public Comment in the Federal Register on July 25, 2019, to solicit public comment on potential revisions to the current iteration of the COPPA Rule, and will convene an event, "The Future of the COPPA Rule: An FTC Workshop," on October 7, 2019.
FTC Settles with Companies Over Privacy Shield Misrepresentations
The Federal Trade Commission (FTC) announced earlier this month that it settled with five companies over allegations of misrepresenting participation in the EU-U.S. Privacy Shield data transfer framework.
The Privacy Shield allows participants to transfer personal data from the European Union (EU) to the United States while complying with EU privacy and data protection laws. In order to participate, companies must annually self-certify compliance with seven Privacy Shield Principles (and sixteen Supplementary Principles) designed to ensure "adequate" protection for personal data.
The FTC alleged that four of the companies had falsely claimed on their websites that they were certified under the Privacy Shield, despite never having completed the certification process. The FTC also alleged that the fifth company allowed its Privacy Shield certification to lapse in 2018, but failed to remove the Privacy Shield certification from its website; failed to comply with the Privacy Shield Principles; and failed to affirm with the Department of Commerce that it would continue to apply the Privacy Shield protections to personal data collected during its participation in the program. The consent agreements prohibit the companies from misrepresenting their participation in government, self-regulatory, and similar privacy or security programs and impose FTC reporting requirements. Violations of the consent agreements can carry penalties of up to $42,350.
These settlements serve as a reminder that the FTC's mandate covers misrepresentations relating to the Privacy Shield. Participants in the Privacy Shield must complete the certification process and annually recertify with the Department of Commerce in order to take advantage of the Privacy Shield's protections. In addition, participants are subject to lasting obligations, as even former Privacy Shield participants must comply with the Privacy Shield Principles for all personal data that is transferred under the program, for as long as they process that data.
In the Courts
Ninth Circuit Rules That LinkedIn Can't Halt HiQ's "Web Scraping"
On September 9, 2019, in the case of HiQ Labs, Inc. v. LinkedIn Corp., the Ninth Circuit Court of Appeals affirmed a district court's preliminary injunction prohibiting Defendant LinkedIn from denying Plaintiff HiQ, a data analytics company, access to LinkedIn members' publicly available profiles.1 The Court addressed whether professional networking website LinkedIn can prevent competitor HiQ from collecting and using information that LinkedIn users have shared on their public profiles, and that is available for viewing by anyone with a web browser.2 Because the case came to the Ninth Circuit as an appeal of a preliminary injunction, the Court did not address or resolve all of the legal and factual disputes in the case; instead, it considered whether HiQ had raised "serious questions on the merits of the factual and legal issues."3
According to the Court, HiQ used automated bots to "scrape" information that LinkedIn members included on their public pages.4 "Scraping" is a term defined by the Court to describe extracting data from a website and copying it into a structured format, allowing for data manipulation or analysis.5 As the Court stated, LinkedIn sent HiQ a cease-and-desist letter, asking HiQ to stop accessing and copying data from LinkedIn's server. HiQ filed suit, seeking injunctive relief based on California law and a declaratory judgment that LinkedIn could not lawfully invoke the Computer Fraud and Abuse Act (CFAA), the Digital Millennium Copyright Act, California Penal Code § 502(c) (computer crimes), or the common law of trespass.
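To make the Court's definition concrete, the sketch below extracts fields from a (wholly invented) profile page and copies them into a structured record. The markup, class names, and fields are hypothetical, and real scraping tools operate against live websites at far larger scale; this is only an illustration of the extract-and-structure step the definition describes.

```python
from html.parser import HTMLParser

# Invented stand-in for a public profile page; real scrapers would
# fetch live HTML over the network.
PAGE = """
<html><body>
  <h1 class="name">Ada Lovelace</h1>
  <p class="headline">Analyst Engine Programmer</p>
  <p class="location">London, UK</p>
</body></html>
"""

class ProfileScraper(HTMLParser):
    """Copies selected page fields into a structured record (a dict)."""
    FIELDS = {"name", "headline", "location"}

    def __init__(self):
        super().__init__()
        self.record = {}      # the structured output
        self._current = None  # field whose text we are currently reading

    def handle_starttag(self, tag, attrs):
        cls = dict(attrs).get("class")
        if cls in self.FIELDS:
            self._current = cls

    def handle_data(self, data):
        if self._current and data.strip():
            self.record[self._current] = data.strip()
            self._current = None

scraper = ProfileScraper()
scraper.feed(PAGE)
print(scraper.record)
# → {'name': 'Ada Lovelace', 'headline': 'Analyst Engine Programmer', 'location': 'London, UK'}
```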
In affirming the preliminary injunction against LinkedIn, the Ninth Circuit found that the district court did not abuse its discretion and noted that HiQ was able to prove irreparable harm to its business by showing that the entire business depended upon the company's ability to access public LinkedIn member profiles. The Court also considered the balance of equities and found that LinkedIn's interest in preventing HiQ from scraping the profiles was not more significant than HiQ's interest in continuing its business derived from the public LinkedIn pages.6
The Court addressed HiQ's likelihood of success on the merits for some of HiQ's claims, notably tortious interference with contract and the CFAA claim. On both claims, the Court found that HiQ had raised serious questions going to the merits. With respect to the tortious interference with contract claim, the Court found that LinkedIn had not demonstrated a legitimate business purpose that could justify the intentional inducement of a contract breach. Regarding the CFAA claim, one of the most common claims in web scraping cases, the Court held that because the information accessed by HiQ was limited to publicly available information, HiQ was not accessing the information "without authorization," one of the required elements of a CFAA claim.7 The Court viewed "without authorization" to mean "circumvent[ing] a computer's generally applicable rules regarding access permissions, such as username and password requirements, to gain access to a computer."8
The case was remanded to the district court for further proceedings.
In the States
New York's Amended Breach Notification Law Goes into Effect in October
New York recently expanded existing cybersecurity and breach notification laws under the Stop Hacks and Improve Electronic Data Security Act (SHIELD Act). The SHIELD Act amends New York's current breach notification law and effectively expands the scope of breaches that will require notification. The breach notification amendments go into effect on October 23, 2019, and the effective date for the new cybersecurity requirements is March 21, 2020.
The most prominent amendment under the SHIELD Act is the broadened definition of "private information." Under the SHIELD Act, the definition now includes: (1) account numbers and credit or debit card numbers, even without additional identifying information, if the number can be used to access an individual's financial account; (2) biometric information; and (3) a user name or e-mail address in combination with a password or security question and answer.
The SHIELD Act also expands the breach notification statute to capture access to, as opposed to the acquisition of, private information. According to the statute, businesses will now be required to notify persons where a system containing unencrypted private information has been accessed, even where the data has not been copied, downloaded, or acquired by an unauthorized user. This amendment aligns New York with a small number of jurisdictions that require notification for access (as opposed to acquisition).9 To determine whether unauthorized access has occurred, a business may consider "indications that the information was viewed, communicated with, used or altered."
Importantly, the SHIELD Act expands the territorial scope of New York's current breach notification law. Whereas the law previously limited applicability to entities conducting business in New York, it now includes any entity that experiences a breach involving New York residents' information. New York is one of a few states that use residency to define the territorial scope of their breach notification laws.10
The SHIELD Act includes a number of exemptions to the breach notification provisions. For example, if notification is required under the Health Insurance Portability and Accountability Act of 1996 or the Gramm-Leach-Bliley Act, additional notification is not required under the SHIELD Act. Furthermore, notification is not required if the entity determines the breach "will not likely result in misuse of such information, or financial harm to the affected persons or emotional harm in the case of unknown disclosure of online credentials[.]" To invoke this exception, however, the determination "must be documented in writing and maintained for at least five years" and such a determination must be provided to the attorney general within ten days if the breach affects more than 500 New York residents.
Illinois Student Online Personal Protection Act Amended
On August 23, 2019, Illinois Governor J.B. Pritzker signed into law House Bill 3606, the Student Online Personal Protection Act (Act). The Act amends Illinois' existing online personal protection statute to include parental privacy rights over "student data" collected by schools,11 and requires schools and operators to further safeguard "student data." The law becomes effective on July 1, 2021.
The Act provides the parent (as defined in the Illinois School Student Records Act) of a student enrolled in a school the right to inspect and review the student's covered information,12 regardless of whether it is held by a school, the State Board of Education, or an operator;13 request a correction of the student's covered information; and request a paper or electronic copy of the student's covered information from the school.
The Act also expands the duties of operators, as defined by the statute. According to the text of the Act, operators must implement and maintain reasonable security procedures and practices that otherwise meet or exceed industry standards, designed to protect covered information from unauthorized access, destruction, use, modification, or disclosure. In addition, except for a nonpublic school, the Act's text requires that any operator who receives covered information from a school, school district, or the State Board of Education must enter into a written agreement with the school, school district, or the State Board of Education. The Act notes that these written agreements must be made available to the public and include information such as the categories of covered information to be provided to the operator, a statement by the operator that any collected information will be used only for authorized purposes and disclosed to third parties with consent from the school, and a description of actions that must be taken after a data breach.
Under the Act, schools must ensure that their practices address new prohibitions and duties regarding covered information. The Act's text prohibits schools from selling, leasing, or trading covered information. The Act also prohibits a school from sharing, transferring, disclosing, or providing access to a student's covered information to any entity or individual other than the student's parent, school personnel, or the State Board of Education without a written agreement in place between the school and the student's parent, unless required by law.14 The Act further requires that a school provide, via either its website or an alternative accessible format available upon request, an explanation of the categories, use, and purposes for which it collects, maintains, or discloses covered information. Other duties required of schools under the Act include adopting a policy for designating the school employees authorized to enter into written agreements with operators, disclosing a list of the operators with whom the school has written agreements, and, for each operator, listing any subcontractors to whom covered information may be disclosed. The Act's text mandates that schools update such information no later than 30 calendar days following the start of a fiscal year and no later than 30 days following the beginning of a calendar year.
The Act addresses the handling of data breaches by both schools and operators. A "breach" is defined by the Act as the unauthorized acquisition of computerized data that compromises the security, confidentiality, or integrity of covered information maintained by an operator or school. Schools are required by the Act to notify the parent of any student whose covered information is involved in a breach no later than 30 calendar days after the determination of the breach. The Act provides that the notice include the date of the breach, a description of the covered information compromised in the breach, and contact information for consumer reporting agencies and the Federal Trade Commission. The law notes that operators must disclose in an agreement with the school how, if the breach is attributed to the operator, any costs and expenses incurred by the school will be allocated between the operator and the school.
EDPS Giovanni Buttarelli Passes Away, and Former EDPS Assistant Supervisor Wojciech Wiewiórowski Assumes Role of Acting EDPS
European Data Protection Supervisor (EDPS) Giovanni Buttarelli passed away on August 20, 2019. Shortly after, on August 26, 2019, it was announced that, pursuant to Article 100(4) of Regulation (EU) 2018/1725, Mr. Buttarelli's former deputy, EDPS Assistant Supervisor Wojciech Wiewiórowski, had begun serving as acting EDPS.
The EDPS leads the European Union (EU) authority of the same name, which is responsible for ensuring that EU institutions and bodies respect individuals' privacy and data protection rights when processing personal data. EDPS terms last five years, and Acting EDPS Wiewiórowski will serve the remainder of Mr. Buttarelli's scheduled term, which expires on December 5, 2019. Mr. Buttarelli served as the second EDPS, following the tenure of the office's inaugural holder, Peter Hustinx, who served as EDPS from 2004 to 2014.
The UK ICO Issues Guidance on Data Minimization and Privacy Protection in Artificial Intelligence Systems
On August 21, 2019, the United Kingdom's (UK) Information Commissioner's Office (ICO) published a blog post discussing techniques and best practices for artificial intelligence (AI) developers to engage in privacy-protective practices, including data minimization. In the post, the ICO's Research Fellow in AI, Reuben Binns, and Technology Policy Advisor, Valeria Gallo, discussed the importance of data minimization in AI development and use within organizations. The ICO invited feedback as part of an ongoing call for input on the development of its framework for auditing AI.
The ICO noted that AI systems generally require large amounts of data, but companies must comply with the data minimization principle under UK and European data protection law. To meet the requirements of these laws, the ICO noted, companies should train the individuals responsible for AI development on data minimization techniques, develop a risk management system, perform due diligence on AI systems procured for the organization, and work to ensure that data minimization does not produce inaccurate or discriminatory AI models.
The ICO notes that the first step organizations should take is to map out AI systems and determine what personal data is used in those models. As part of this process, the ICO noted the following factors to assess when implementing data minimization in AI systems:
- Feature selection: Consider whether certain parts of a data set are required, such as limiting the use of financial information when it is not needed to produce a given result.
- Privacy-preserving methods: Consider modifying personal data to reduce the chance that it is traced back to a specific individual, such as by adding random "noise" into a data set.
- Converting personal data: When personal data is used to create inferences about individuals, consider creating datasets that are not human-readable by hashing or otherwise converting the data.
- Local operation of models: Consider running models on a device, rather than transferring the data to a centralized server.
- Privacy-preserving queries: If data cannot be processed by AI locally on a device, reduce the amount of personal data sent to a server to only that which is necessary to operate the AI model.
- Anonymization: Consider if data can be functionally anonymized or pseudonymized when processing an AI model.
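A few of the techniques above can be sketched in code. The sketch below is illustrative only, with invented field names and parameters: it drops features the model does not need (feature selection), replaces a direct identifier with a salted hash (converting personal data), and perturbs a numeric attribute with random noise (privacy-preserving methods). A production system would rely on vetted privacy libraries and formally analyzed noise mechanisms rather than these toy versions.

```python
import hashlib
import random

def select_features(record: dict, needed: set) -> dict:
    """Feature selection: keep only the attributes the model needs."""
    return {k: v for k, v in record.items() if k in needed}

def pseudonymize(identifier: str, salt: str) -> str:
    """Converting personal data: replace a direct identifier with a
    salted SHA-256 hash so the raw value never enters the data set."""
    return hashlib.sha256((salt + identifier).encode()).hexdigest()

def add_noise(value: float, scale: float = 0.1) -> float:
    """Privacy-preserving methods: perturb a numeric attribute with
    random (here Gaussian) noise so a single record is harder to
    trace back to a specific individual."""
    return value + random.gauss(0.0, scale)

# Hypothetical record; only the id and GPA are fed to the model.
record = {"student_id": "S-12345", "gpa": 3.6, "religion": "none", "zip": "60601"}
minimized = select_features(record, {"student_id", "gpa"})
minimized["student_id"] = pseudonymize(minimized["student_id"], salt="local-secret")
minimized["gpa"] = add_noise(minimized["gpa"])
```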
The ICO noted that it would "genuinely welcome" any feedback on the thinking laid out in the post and that the post was just one part of the continued development of the ICO's auditing framework for AI.