Introduction

In this issue of The Download, we examine congressional developments, including Federal Communications Commission (FCC) oversight hearings held by the House Committee on Energy and Commerce's Subcommittee on Communications and Technology and the Senate Commerce Committee, in addition to the Senate Banking Committee's hearing on "data brokers." We discuss two events covering ongoing privacy framework developments hosted by the National Institute of Standards and Technology, the Federal Trade Commission's final hearing of the series on "Competition and Consumer Protection in the 21st Century," and the FCC's ruling allowing service providers to block robocalls by default. Outside of Washington, we report on new data privacy laws in Nevada and Maine, and publications from the United Kingdom's Information Commissioner's Office (ICO) on such topics as biometric data, artificial intelligence (AI), and the European Union's General Data Protection Regulation.

Contents

Heard on the Hill

Around the Agencies

In the States

International

Heard on the Hill

House Committee on Energy and Commerce's Subcommittee on Communications and Technology and Senate Commerce Committee Hold FCC Oversight Hearings

The House Committee on Energy and Commerce's Subcommittee on Communications and Technology (Subcommittee) convened a hearing on Federal Communications Commission (FCC) oversight on May 15, 2019, and the Senate Committee on Commerce, Science, and Transportation (Committee) followed with another oversight hearing on June 12, 2019. Hearing witnesses included FCC Chairman Ajit Pai and FCC Commissioners Michael O'Rielly, Brendan Carr, Jessica Rosenworcel, and Geoffrey Starks.

Each hearing covered similar topics, including unwanted robocalls, the collection and disclosure of consumer location data by wireless carriers, the role of foreign communications companies in the wireless network, and net neutrality.

At the House Subcommittee hearing, Chairman Frank Pallone (D-NJ) expressed the view that the FCC is not focused on the consumer in its regulatory work and instead is favoring corporate interests, and that it is Congress's role to get the FCC "back on track." Other members focused their remarks on the reported proliferation of unwanted robocalls and alleged disclosures of consumer data by wireless carriers without consumer knowledge. With regard to robocalls, multiple members discussed the introduction of H.R. 946, the Stopping Bad Robocalls Act. Commissioner Rosenworcel also expressed support for a robocall division at the FCC.1 Subcommittee members additionally focused on allegations that wireless carriers and technology companies are collecting and selling consumer location data without consumers' knowledge. While the commissioners would not comment on ongoing investigations into such practices, Commissioner Starks stated that such allegations "must be prioritized" by the FCC.

At the Senate Committee hearing, the discussion focused primarily on robocalls. Chairman Pai discussed the recent declaratory ruling adopted by the FCC that allows telecommunications providers to turn robocall blocking on by default, instead of requiring consumers to opt in to that technology. He also stated that the passage of S. 151, the Telephone Robocall Abuse Criminal Enforcement and Deterrence [TRACED] Act, will complement the FCC's work to help stop unwanted robocalls.2 Several senators expressed support for blocking robocalls and stated that such a service should be free to consumers. The hearing also addressed net neutrality. Specifically, Democratic senators stated their concern that the FCC's repeal of net neutrality rules in 2018 has caused consumer harm. Chairman Pai noted that investment in and rollout of broadband services has increased since net neutrality rules were repealed. The hearing also touched on potential security risks related to foreign companies' involvement in U.S. infrastructure, with Commissioner Starks stating that partnering with foreign communication companies would present a serious risk, and that various stakeholders should address existing security risks that are "inherent in our infrastructure."

Senate Committee on Banking, Housing, and Urban Affairs Holds "Data Brokers" Hearing

On June 11, 2019, the Senate Committee on Banking, Housing, and Urban Affairs (Committee) convened a hearing, "Data Brokers and the Impact on Financial Data Privacy, Credit, Insurance, Employment and Housing." Witnesses included a representative from the Government Accountability Office (GAO) and a consumer privacy advocate. Members and witnesses addressed a variety of subjects, including federal privacy legislation and regulation of "consumer scores."

Opening statements by Committee Chairman Mike Crapo (R-ID) and Ranking Member Sherrod Brown (D-OH) suggested that most consumers have limited information about "data brokers" and their data practices. Chairman Crapo also mentioned the Fair Credit Reporting Act (FCRA) and stated that the FCRA's role in the digital economy should be reexamined. The GAO representative's opening statement focused on the U.S. privacy framework, explaining her view that the current system based on sectoral laws leaves "gaps," including those related to emerging technologies. The consumer privacy advocate urged Congress to expand the scope of the FCRA to address "consumer scores," which she described as any "score" assigned to a consumer not regulated by the FCRA.

During questioning, Committee members and witnesses expressed their concerns about the risks associated with inaccurate credit report data and the use of consumer data to make informal eligibility determinations, including via the use of biased algorithms. Witnesses addressed questions regarding the volume of consumer data held by "data brokers" and the relationship between credit bureaus and "data brokers." Both witnesses said that "data brokers" possess more consumer information than government entities and credit bureaus. The consumer privacy advocate stated that credit bureaus have integrated "data brokers" into their core businesses to obtain more consumer data. Hearing participants also discussed a number of privacy-related bills (both pending and not yet introduced), including bills related to establishing a duty of care for processing data, transparency regarding data collection and the value of consumer data, and data portability.

Around the Agencies

NIST Hosts Two Events Discussing the Ongoing Privacy Framework Development

In May 2019, the National Institute of Standards and Technology (NIST) hosted two events regarding the development of NIST's Privacy Framework. The effort to follow up on NIST's Cybersecurity Framework (released in 2014) and create a Privacy Framework began in October 2018, when NIST hosted the first Privacy Framework Workshop. The goal of creating a Privacy Framework, as described by Donna Dodson, Chief Cybersecurity Advisor at NIST, is to help organizations address how they can "consider the privacy impacts to individuals as they develop systems, products, and services."3

NIST hosted the second workshop in the Privacy Framework development series on May 13, 2019. This workshop was made up of three panels: the first with NIST employees; the second with industry voices; and the third with a more global focus, where representatives from companies in the space and the International Trade Administration spoke. During the workshop, Kevin Stine, Chief of the Applied Cybersecurity Division at NIST, commented that the Privacy Framework is meant to be voluntary, risk- and outcome-based, non-prescriptive, adaptable (including across different legal regimes), and written using "common and accessible language."4 Mr. Stine stated that the purpose of the workshop was to understand organizations' needs and challenges with respect to managing privacy risks and to use that feedback to identify potential improvements to the discussion draft. He also noted that a "preliminary draft" is anticipated in July/August 2019 and that "Version 1.0" is expected in October 2019.

The panelists at Workshop #2 offered a variety of feedback for the draft. Nick Oldham, Chief Privacy Officer and Data Governance Officer at Equifax, encouraged NIST to develop the Framework to be comprehensive, usable for business, and operable internationally. Annie Antón, Professor and Former Chair of the School of Interactive Computing at the Georgia Institute of Technology, cautioned NIST against correlating the Privacy Framework's structure too closely with that of the NIST Cybersecurity Framework and expressed concern that opportunities for guidance "missed by the Cybersecurity Framework" may be "overlooked" by the Privacy Framework.5

NIST's second May event, a live webinar held on May 28, 2019, "NIST Privacy Framework Discussion Draft," was led by NIST officials, who discussed desired attributes for the Framework and the planned Privacy Framework development schedule, and solicited stakeholder feedback. Adam Sedgewick, Senior IT Policy Advisor at NIST, noted that NIST intended the Framework to be "agnostic" and useful for organizations subject to various privacy laws and regimes, while Ellen Nadeau, Deputy Manager of the Privacy Framework at NIST, said that NIST plans for the Framework to be "tech neutral." Naomi Lefkovitz, Senior Privacy Policy Advisor at NIST, described the layout of the Discussion Draft, which includes: (1) a "core" comprising five guidance "functions" for organizations; (2) "profiles" that enable organizations to determine how they are currently equipped to manage privacy risks and what steps they should take to reach their privacy risk-management goals; (3) implementation tiers that describe different "levels" of privacy resources and processes for organizations; (4) references to existing laws and standards; and (5) a "roadmap" describing areas for which "there is not a lot of [privacy] guidance available," such as emerging technologies. Ms. Nadeau said that all contents of the Discussion Draft are "up for [stakeholder] debate."

Workshop #3, "Getting to V1.0 of the NIST Privacy Framework," will be held on July 8-9, 2019.

FTC Holds Final Hearing in Series on "Competition and Consumer Protection in the 21st Century"

On June 12, 2019, the Federal Trade Commission (FTC) held a hearing, "Roundtable with State Attorneys General." This hearing was the fourteenth and final hearing of the FTC's series on "Competition and Consumer Protection in the 21st Century." This hearing consisted of six panels that brought together state attorneys general (AGs), state AG staff, and academics, among others, to discuss a wide range of topics pertaining to consumer protection enforcement, policy solutions to complex consumer protection issues, and antitrust enforcement. Throughout the day, panelists covered such topics as: (1) possible regulation and enforcement of the tech and data industries; (2) digital advertising; (3) data breaches; (4) data privacy and security principles; (5) emerging technologies; and (6) current privacy laws and regulations.

A main focus of the first two panels was data privacy and consumer protection policy. Panelists discussed how heavily companies rely on data for their success. Tennessee AG Herbert Slatery III commented that Internet platforms' success is profoundly dependent on the data that they collect. Panelists also expressed concern that data use could lead to profiling. Panelists Benjamin Wiseman, Director of Consumer Protection at the District of Columbia AG's Office, and Kaitlin Caruso, Deputy Director, Division of Consumer Affairs at the New Jersey AG's Office, said that companies can use data to predict a variety of socioeconomic factors. In response to a question about existing anti-discrimination laws, Mr. Wiseman responded that there needs to be increased transparency to allow for the enforcement of existing laws. Panelists noted that a lack of transparency in digital advertising pertaining to the amount of data advertising companies receive is concerning from a consumer's standpoint.

When discussing possible policy solutions regarding data privacy, the panelists emphasized that any new federal rules or regulations should not backtrack on the current policies at the state and local levels. Kaitlin Caruso noted that the New Jersey AG's Office has an office devoted to cybersecurity and data privacy. Other officials from various state AG offices cited California, Maine, and the European Union as examples of places with strong consumer data privacy protection policies. The panelists recommended policy solutions that would encourage increased transparency from private companies. Panelists expressed agreement that data protection policies should be widespread and all-encompassing. The hearing also touched upon antitrust enforcement and policy. Bilal Sayyed of the FTC's Office of Policy Planning concluded the meeting by stating that a tradeoff exists between consumer privacy on the one hand and competition and innovation on the other, which adds to the complexity of these debates.

FCC Votes to Allow Service Providers to Block Robocalls

On June 6, 2019, the Federal Communications Commission (FCC) voted on a Declaratory Ruling that enables "voice service providers" to block unwanted robocalls by default.6 The ruling encouraged providers to offer consumers an "opt in" program enabling them to receive calls only from numbers in their contact lists, which the FCC's ruling referred to as "whitelists." The ruling stated that the FCC would require providers to implement the FCC's "SHAKEN/STIR caller authentication framework" (framework) if they do not do so voluntarily by the end of the year. The press release noted that the FCC is seeking comment on whether the FCC should create a safe harbor "for providers that block calls that are maliciously spoofed so that caller ID cannot be authenticated and that block calls that are 'unsigned.'"7 Unsigned calls, as described in the Declaratory Ruling, are calls that lack the signature or attestation – a header described in the SHAKEN/STIR standards – that otherwise would be inserted by the originating provider or gateway provider.
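For readers unfamiliar with the mechanics behind "signed" and "unsigned" calls, the sketch below illustrates the general shape of a SHAKEN/STIR-style signed token (a "PASSporT"): the originating provider signs the calling and called numbers plus an attestation level, and a terminating provider verifies that signature before trusting the caller ID. This is a simplified illustration only, not the actual standard: real SHAKEN uses ES256 signatures backed by STI certificates rather than the shared-key HMAC shortcut used here, and all field values shown (numbers, key, origination identifier) are hypothetical.

```python
import base64
import hashlib
import hmac
import json
import time


def b64url(data: bytes) -> str:
    """Base64url-encode without padding, as used in JWT-style tokens."""
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode()


def sign_passport(orig_tn: str, dest_tn: str, attest: str, key: bytes) -> str:
    """Build a simplified PASSporT-like token for a call.

    The header/payload fields loosely follow the PASSporT layout with the
    SHAKEN extension; `attest` is the attestation level (A, B, or C).
    HMAC-SHA256 stands in for the ES256 signature the real standard requires.
    """
    header = {"alg": "HS256", "typ": "passport", "ppt": "shaken"}
    payload = {
        "attest": attest,
        "dest": {"tn": [dest_tn]},
        "iat": int(time.time()),
        "orig": {"tn": orig_tn},
        "origid": "example-origination-id",  # hypothetical value
    }
    signing_input = (
        f"{b64url(json.dumps(header).encode())}."
        f"{b64url(json.dumps(payload).encode())}"
    )
    sig = hmac.new(key, signing_input.encode(), hashlib.sha256).digest()
    return f"{signing_input}.{b64url(sig)}"


def verify_passport(token: str, key: bytes):
    """Verify the signature; return the claims, or None if the call is
    effectively 'unsigned' (missing, tampered, or unverifiable signature)."""
    try:
        head, body, sig = token.split(".")
    except ValueError:
        return None  # no signature header at all
    expected = hmac.new(key, f"{head}.{body}".encode(), hashlib.sha256).digest()
    if not hmac.compare_digest(b64url(expected), sig):
        return None  # candidate for blocking under the proposed safe harbor
    padded = body + "=" * (-len(body) % 4)
    return json.loads(base64.urlsafe_b64decode(padded))
```

In this sketch, a call whose token fails verification is treated the same as one with no token, mirroring the ruling's framing of "unsigned" calls as those lacking a valid attestation from the originating or gateway provider.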

This ruling follows the House Committee on Energy and Commerce's Subcommittee on Communications and Technology (Subcommittee) hearing, "Accountability and Oversight of the Federal Communications Commission," which occurred on May 15, 2019. Throughout the hearing, Subcommittee members expressed frustration with robocalls and called on the FCC commissioners to protect consumers from them.

Following the FCC's ruling, the Senate Committee on Commerce, Science, and Transportation (Committee) convened a hearing on "Oversight of the Federal Communications Commission" on June 12, 2019. Members of the Committee expressed disappointment that the FCC's ruling did not require the default blocking framework to be offered for free by providers.

In the States

Nevada Passes Privacy Law to Take Effect Before the CCPA

On May 29, 2019, Nevada Governor Steve Sisolak signed Senate Bill 220 into law, amending Nevada's existing online privacy statute to include a requirement that online operators provide consumers with a means to opt out of the sale of specific personal information collected by an Internet website or online service.8 The law goes into effect on October 1, 2019, three months before the effective date of the California Consumer Privacy Act (CCPA).

While Nevada's law includes concepts similar to the CCPA's right to opt out of the sale of personal information, the scope and coverage of Nevada's law are narrower than the CCPA in several notable ways:

  • Scope of the law. Nevada's law requires operators of Internet websites and online services to honor a consumer's request not to sell his or her personal information. Unlike the CCPA, consumers are not provided additional rights, such as the rights of access or deletion of personal information.
  • Coverage. The Nevada law applies to online "operators" who: (1) own or operate a website or online service for commercial purposes; (2) collect and maintain covered information from consumers who reside in Nevada and use or visit the Internet website or online service; and (3) purposefully direct their activities toward Nevada, consummate some transaction with Nevada or a resident of Nevada, or purposefully avail themselves of the privilege of conducting activities in Nevada. This differs from the CCPA, which applies to both online and offline business operations. The Nevada law does not apply to entities that are regulated by the Gramm-Leach-Bliley Act or the Health Insurance Portability and Accountability Act; third parties that operate, host, manage, or process information for the owner of a website or online service (i.e., service providers); or certain persons who manufacture, service, or repair motor vehicles.
  • Scope of "personal information." Nevada's law applies to a narrower scope of personal information as compared to the CCPA. Under Nevada's law, "covered information" includes enumerated categories of personal information about a consumer collected by an operator through an Internet website or online service that is maintained by the operator in an accessible form. While the CCPA also enumerates certain categories of personal information, it also includes "information that identifies, relates to, describes, is capable of being associated with, or could reasonably be linked, directly or indirectly, with a particular consumer or household."9
  • Definition of "sale." The Nevada law defines "sale" to be "the exchange of covered information for monetary consideration by the operator to a person for the person to license or sell the covered information to additional persons" and excludes where the information is disclosed for purposes that are consistent with the consumer's reasonable expectations, considering the context in which the consumer provided the information. The CCPA's definition of "sale" is broader, in that it includes a business's disclosure of personal information to another business or third party for monetary or "other valuable consideration."10
  • Definition of "consumer." The Nevada law applies to consumers, which the statute defines as a person who seeks or acquires any good, service, money, or credit for personal, family, or household purposes from the website or online service of an operator. The CCPA's definition of a consumer is more expansive and includes any California resident.
  • Response time. Nevada's law requires operators to respond to verified consumer requests within 60 days of receiving the request and permits a business to extend its response up to 30 days. In contrast, the CCPA gives businesses 45 days to respond to verified consumer access requests, with a potential 90-day extension.

Maine Joins California and Nevada by Passing Its Own Data Privacy Law

On June 6, 2019, Maine Governor Janet Mills signed into law a new data privacy bill that is set to take effect on July 1, 2020.11 Subject to a few exceptions, the law prohibits broadband Internet access service providers from using, disclosing, selling, or permitting access to "customer personal information," as that term is defined in the statute. Maine's law, An Act to Protect the Privacy of Online Customer Information, puts the state among the ranks of California and Nevada, states that have recently passed their own data privacy legislation.

Maine's law is both similar to and different from the California Consumer Privacy Act (CCPA). One similarity between Maine's law and the CCPA is that both laws prohibit regulated entities from refusing to serve a consumer or from charging a consumer a different rate based on the consumer's decision to exercise his or her privacy rights. However, Maine's law applies only to "broadband Internet access service providers," commonly known as ISPs, while the CCPA applies to any "business" that meets certain revenue or data processing thresholds.12 Other notable features of Maine's new data privacy law are as follows:

  • Opt in consent. Unlike the CCPA, Maine requires all customers to provide "express, affirmative consent" for broadband Internet access service providers to use, disclose, sell, or permit access to customer personal information. Maine's law also explicitly states that a customer can revoke such consent at any time.
  • Exception for certain broadband Internet access service provider marketing activities. Maine's law contains an exception that allows broadband Internet access service providers to collect, retain, use, disclose, sell, and permit access to customer personal information without receiving opt in consent for the limited purpose of advertising or marketing the provider's communications-related services to the customer.
  • Substantive data security requirements. Maine's law requires broadband Internet access service providers to "take reasonable measures" to protect consumer personal information. To determine what measures are reasonable, a provider must take into account the nature and scope of its activities, the sensitivity of the data it collects, its size, and the technical feasibility of security measures.
  • Required notices. A broadband Internet access service provider must make a "clear, conspicuous, and nondeceptive" privacy notice available at the point of sale and on its website to apprise consumers of their rights under Maine's new law.

International

UK's ICO Issues Blog Posts on Biometric Data, AI, and the GDPR

From mid- to late May 2019, the United Kingdom's (UK) Information Commissioner's Office (ICO) published blog posts on such topics as biometric data, artificial intelligence (AI), and the European Union's (EU) General Data Protection Regulation (GDPR). First, on May 10, 2019, the ICO published a blog by ICO Deputy Commissioner for Policy, Steve Wood, "Using biometric data in a fair, transparent, and accountable manner."13 In his post, Deputy Commissioner Wood presented four "key [guidance] points" that he said entities should consider when processing biometric data, and personal information more broadly: (1) the GDPR mandates that "data controllers" conduct data protection impact assessments (DPIAs); (2) entities must address risks identified by their respective DPIAs; (3) the GDPR requires entities to install organization-wide measures to demonstrate accountability; and (4) entities must gain explicit consent before using biometric data under the GDPR.

On May 23, 2019, the ICO issued a blog post authored by a group of ICO staff, "Known security risks exacerbated by AI."14 The post noted that entities may experience more difficulty in complying with security requirements when using AI than when using "more established technologies." Also, according to the blog post, the ICO intends to enhance its general security guidance materials to address "additional requirements" under the GDPR that the authors stated "will not be AI-specific," and will still be applicable for entities that use AI. The post's authors added that the ICO invites comment on how to expand its AI Auditing Framework, which is currently under development.

On May 30, 2019, the ICO posted a blog, "GDPR – one year on," in which UK Information Commissioner Elizabeth Denham characterized the entry into force of the GDPR and the UK's Data Protection Act 2018 as constituting a "seismic shift in privacy and information rights."15 She noted that the ICO will prioritize assisting the business community in 2019 and 2020, adding that companies should emphasize organizational accountability rather than baseline compliance. Information Commissioner Denham also stated that a number of ICO investigations are "now nearing completion and we expect outcomes soon." Though the conditions surrounding the UK's planned departure from the EU (Brexit) remain unclear, ICO officials have previously affirmed that they plan on adding the GDPR and its principles to existing UK law upon exit from the EU.

UK's ICO Issues "Interim Report" on Ongoing AI Research

On June 3, 2019, the UK Information Commissioner's Office (ICO) released an interim report on Project ExplAIn, a collaboration between the ICO and the Alan Turing Institute (Turing Institute) to create practical guidance to assist organizations with explaining artificial intelligence (AI) decisions. The report defines AI decisions as being based on the outputs of machine learning models, trained on data to generate predictions, recommendations, or classifications. The report states that AI can be used to inform the thinking of human decision-makers, or to automate the generation and delivery of decisions without any human involvement. After being tasked with the project in 2018, the ICO and the Turing Institute conducted research via citizens' juries and industry roundtables to gather views from a variety of stakeholders on the subject.

According to the report, the purpose of Project ExplAIn is to produce guidance to assist organizations with meeting individuals' expectations when explaining AI decisions about them. Individuals who participated in the research stated that explanations of AI decisions were important to them in areas where there are concerns about fairness, such as the areas of recruitment and criminal justice. When discussing AI's use in healthcare, participants were more interested in knowing that decisions were accurate than in why they were made. Participants also expressed interest in explanations of AI decisions where they would also expect an explanation of a human's decision.

Three key themes emerged from the research: the importance of context in explaining AI decisions; the need for improved education and awareness around the use of AI in decision-making (though the report did not clearly state who should be responsible for leading that effort); and the various challenges to providing explanations, chief among them the lack of a standard approach to establishing internal accountability for explainable AI decision systems. The interim report concluded with three implications for the guidance: (1) there is no one-size-fits-all approach for explaining AI decisions; (2) board-level buy-in on explaining AI decisions is needed; and (3) a standardized approach to internal accountability would help assign responsibility for explainable AI decision systems.

The ICO plans to publish a first draft of its guidance over the summer. The first draft will be open to public consultation and will be followed by the final guidance in the fall. The ICO states that its guidance will help organizations comply with data protection law, promote best practices, and foster individuals' trust and confidence in AI decisions.