In 2015, the Federal Communications Commission joined the chorus of federal agencies asserting enforcement authority over data breaches. In April, the agency made its first foray into the field by way of a consent decree with a major communications provider. In July, the FCC inked another consent decree, this time with TerraCom, Inc., and YourTel America, Inc., based on alleged failures to protect personally identifiable information. Both of these actions resulted in stiff monetary penalties, the imposition of stringent compliance programs, and years of audits.

And the FCC’s policing efforts in regard to consumer privacy did not stop there. Last month, the agency resolved another enforcement investigation by entering into a consent decree with Cox Communications, Inc., a cable operator, in connection with an August 2014 data breach by a hacker group known as the “Lizard Squad.” The hacker group employed a social engineering method known as “pretexting” to gain access to Cox’s systems containing private customer information. Pretexting is one of the oldest ploys in the book of hacking—the hacker pretended to be an authorized employee from Cox’s information technology department and convinced Cox’s contractor to enter access credentials into a fake, or “phishing,” website controlled by the hacker group. As a result, according to the consent decree, the hackers gained access to Cox’s systems, allowing them to view the PII of some of Cox’s current and former customers, as well as some Customer Proprietary Network Information (CPNI), which is information that telecommunications service providers acquire about their subscribers. Subsequently, the hackers posted some of this personal information (names, dates of birth, Social Security numbers, credit card information, email addresses, etc.) on social media websites and shared it with other hackers. At the time of the breach, multi-factor authentication was not required for all Cox employees and third-party contractors accessing Cox’s systems. Moreover, according to the FCC, Cox did not disclose the CPNI breach to the FCC, as required by law.

The Cox action marks the FCC’s first application of Section 631 of the 1934 Communications Act in the data breach context. Under Section 631(c), with exceptions, a cable operator “shall not disclose personally identifiable information concerning any subscriber without the prior written or electronic consent of the subscriber concerned and shall take such actions as are necessary to prevent unauthorized access to such information by a person other than the subscriber or cable operator.” Congress and the FCC have made it clear that cable operators and telecommunications carriers, such as Cox, must take “every reasonable precaution” to protect their customers’ data. Furthermore, the law requires cable operators to promptly disclose CPNI breaches via the FCC’s reporting portal, within seven business days of reasonably determining that a breach has occurred.

Moreover, although the FCC’s consent decrees in its previous enforcement actions impose a “reasonableness” standard on companies’ compliance and risk assessment plans, the Cox consent decree requires Cox to develop and implement specific technology solutions in its future compliance plan, which the FCC apparently believes will prevent similar data breaches from recurring. Specifically, the consent decree requires Cox to do the following:

  1. conduct a risk assessment with reference to the NIST Cybersecurity Framework;
  2. implement technological safeguards such as multi-factor authentication or an equivalent access control for Cox data systems;
  3. implement policies and procedures to identify the nature and extent of CPNI and PII collected or maintained by Cox and third parties, and minimize the number of employees who have access to customer information on a strict need-to-know basis;
  4. implement a third-party oversight program requiring that all off-network access by third-party contractors with access to Cox’s systems be authenticated through an approved site-to-site virtual private network;
  5. develop an Incident Response Plan (IRP) and perform annual test exercises of the IRP;
  6. provide privacy awareness training to employees and third parties; and
  7. notify all affected customers of the breach and offer them free credit monitoring.

This is a good “checklist” against which other companies can measure their own information security programs.

In addition to requiring this robust compliance plan, the FCC fined Cox $595,000. Considering that the Cox security breach affected only 61 customers, this amounts to almost $10,000 per customer in agency penalties alone, suggesting that the FCC’s approach to data breaches is more punitive than proportional. The regulator’s enforcement approach stands in stark contrast to what is happening in federal courts, where putative class actions over data breaches are routinely tossed due to plaintiffs’ inability to show actual harm that would establish Article III standing, or to meet the “predominance” test that is necessary to certify a class. In most cases that have faced these issues, “the anxiety of waiting and wondering” about the potential that your data may be misused at some point in the future has been found not to be a significant enough injury, and the individual factual issues regarding mitigation and harm are seen to predominate over the issues common to the class.

It’s possible that, as more and more government agencies exercise this kind of enforcement power, future courts struggling with the difficult issues of “actual harm” and “predominance” in class action litigation may be tempted to cede their authority to regulators, perhaps judging that it would be more appropriate for these matters to be dealt with through regulatory action as opposed to litigation.