How much cybersecurity is enough? This question is as legal as it is technical. In legal terms, it is answered by the applicable standard of care. The standard of care draws the line between conduct that renders a company liable and conduct that does not. Where a company meets or exceeds the standard of care, it cannot be held liable in law for damages related to that conduct. In the context of cybersecurity, the standard of care may be established by a regulator, by the legislature, by contract or, retrospectively, by a court in the context of a lawsuit. It is rarely, if ever, established explicitly. Standards of care are typically framed in “should” rather than “must” language. They are often technologically neutral, in the sense that they do not require a specific solution to a specific problem.
By way of example, most regulators prefer persuasive as opposed to mandatory regulation, and hence prefer to issue “guidelines” or “advisories” to establish standards of care. Thus, for example, CSA Staff Notice 11-326 Cyber Security is, as its name states, a notice, rather than an order or regulation. As a notice, it is not enforceable at the instance of the regulator, nor is there a penalty regime in place for failure to abide by it. That said, failure to comply would be a strike against an issuer, registrant or regulated entity in any proceeding that arises as a result of a cybersecurity breach.
Similarly, the Office of the Superintendent of Financial Institutions of Canada (OSFI) issued its Cyber Security Self-Assessment Guidance on October 28, 2013. While noting that many federally regulated financial institutions were already conducting assessments of their level of preparedness, OSFI suggested those institutions “could benefit from guidance related to such self-assessment activities.” While the guidance is neither a regulation nor an order, per se, no one doubts that OSFI expects federally regulated institutions to abide by it, and that a failure to do so would have consequences in proceedings in other forums related to cybersecurity breaches.
In the United States, Executive Order 13636, Improving Critical Infrastructure Cybersecurity, issued on February 12, 2013, called for the development of a voluntary, risk-based cybersecurity framework: a set of industry standards and best practices to help organizations manage cybersecurity risks. In response, the National Institute of Standards and Technology (NIST) published its Framework for Improving Critical Infrastructure Cybersecurity exactly one year later. The Framework “uses a common language to address and manage cybersecurity risk in a cost-effective way based on business needs without placing additional regulatory requirements on businesses”.
The NIST Framework is of particular interest when one is concerned with critical infrastructure businesses, such as public utilities, communication systems, electrical grids, pipelines and the like. It is premised on a “core” of five functions: Identify, Protect, Detect, Respond and Recover.
“Identify” means to identify those systems which are critical to the business, and those which are less critical, so that they can be prioritized. The control of private and confidential information may be critical to some businesses (for example, law firms), whereas the control of SCADA systems may be much more relevant to others (for example, pipelines). “Protect” means to develop and implement the activities necessary to ensure delivery of the critical services. “Detect” is the ability to know if and when a cybersecurity event occurs. This is no minor matter. Not all cyberattacks are easily detected. Some lie in wait for a period of days, months or even years. “Respond” is the ability to take action to terminate or mitigate a threat. “Recover” is the ability to develop resilience and restore capabilities or services that were impaired as a result of the event.
“Categories” divide functions into groups of outcomes, such as “access control” or “intrusion detection”. “Subcategories” further divide each category into more specific outcomes, for example “access control by double identification” or “notifications from intrusion detection system are investigated”. “Informative references” include specific standards, guidelines and practices common in infrastructure industries, if and to the extent they exist.
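For readers who work with the Framework programmatically, its hierarchy of functions, categories, subcategories and informative references can be pictured as a simple nested data structure. The sketch below, in Python, uses illustrative entries drawn from the examples in the text; the entries and the reference to ISO/IEC 27001 Annex A.9 (access control) are illustrative assumptions, not the Framework’s official identifiers.

```python
# Hypothetical, partial representation of the NIST Framework core:
# function -> categories -> subcategories -> informative references.
framework_core = {
    "Protect": {
        "access control": {
            "access control by double identification": ["ISO/IEC 27001 A.9"],
        },
    },
    "Detect": {
        "intrusion detection": {
            "notifications from intrusion detection system are investigated": [],
        },
    },
}


def subcategories(core, function):
    """List every subcategory outcome under a given function."""
    return [
        sub
        for category in core.get(function, {}).values()
        for sub in category
    ]


print(subcategories(framework_core, "Detect"))
```

An organization assessing its posture might walk such a structure function by function, recording for each subcategory whether an appropriate control exists.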
Some standards are industry-specific. The North American Electric Reliability Corporation (NERC) has issued standards for the North American electrical distribution grid in the form of CIP (critical infrastructure protection) Version 4, and is beginning to transition to CIP Version 5. The NERC standards were tested in the GridEx II physical security and cybersecurity exercise in November 2013.
The ISO/IEC 27000 family of standards comprises information security standards published jointly by the ISO and the International Electrotechnical Commission. The series is meant to provide an overall information security system, within which cybersecurity risks are addressed. Whereas NIST is designed to apply in particular to infrastructure systems, the ISO/IEC 27000 family establishes information security management standards applicable generically; they include, for example, standards in respect of leadership, planning, support, operation, performance evaluation and improvement.
It is important to understand that NIST and ISO/IEC 27000 are systems for identifying specific objectives in “best of class” security systems. They may specify, for example, that there be “information transfer policies and procedures”, but neither tells you what those policies or procedures should be. These must be determined in the context of each case. NIST and ISO/IEC 27000 are technologically neutral. So too are COBIT and PCI DSS.
COBIT, or Control Objectives for Information and Related Technology, is a framework created by the Information Systems Audit and Control Association (ISACA). It is now in its fifth version. Its purpose is to link business goals to IT goals. COBIT operates at a high level as a process model, dividing the subject into four domains: Plan and Organize; Acquire and Implement; Deliver and Support; and Monitor and Evaluate. These domains are further divided and can be linked to more particularized or detailed standards, such as PCI DSS.
PCI DSS is the Payment Card Industry Data Security Standard. It is a proprietary standard for organizations that handle credit cards, and is administered by the Payment Card Industry Security Standards Council. Now in its third version, the standard specifies 12 requirements for compliance. These include, inter alia, the installation and maintenance of a firewall to protect cardholder data, the encryption of cardholder data transmitted across open, public networks, and the restriction of access to cardholder data on a business need-to-know basis.
None of NIST, ISO/IEC 27000, COBIT or PCI DSS constitutes a legal standard. The legal standard of care is the standard that a court considers that the defendant should meet, due regard being had not only to relevant technical or process standards, but also to the conduct of the prototypical “reasonable man” or “reasonable company” in like circumstances. In any given case, regimes and frameworks, such as NIST, ISO/IEC 27000, COBIT or PCI DSS, may or may not constitute part of the standard of care. Much depends on what others in a given industry, or in like industries, consider to be appropriate security processes, methods and regimes.
Legal standards of care can be established by analysis of the activities of industry participants. A report of the SEC’s Office of Compliance Inspections and Examinations issued February 3, 2015 in respect of registered broker dealers and investment advisers is an example. The National Exam Risk Alert, Cybersecurity Examination Sweep Summary, found that:
- 93 per cent of broker-dealers and 83 per cent of advisers had adopted written information security policies;
- 88 per cent of broker-dealers and 53 per cent of advisers referenced published cybersecurity risk prevention standards, such as the NIST standards or ISO standards;
- the vast majority of examined firms conducted periodic, firm-wide risk assessments, but few applied the same requirements to their vendors;
- 88 per cent of broker-dealers and 74 per cent of advisers stated that they had experienced cyberattacks directly or through one or more of their vendors;
- almost all of the examined broker-dealers and advisers made use of encryption in some form;
- 68 per cent of brokers and 30 per cent of advisers had a designated chief information security officer (CISO); and
- 58 per cent of brokers and 21 per cent of advisers maintained cybersecurity insurance.
Standards of care at law can also be legislated. No Canadian federal or provincial legislation establishes cybersecurity standards of care per se. There are, however, legislated standards of care with respect to personal information and health-care information. As cybersecurity breaches often result in the disclosure of personal information, these standards of care are especially important.
Federally regulated workplaces in the private sector are regulated under the Personal Information Protection and Electronic Documents Act (PIPEDA). PIPEDA applies to banks listed in Schedules I and II of the Bank Act and to personal information that flows across provincial or national borders. Privacy rights in respect of information collected by federal government institutions (including the Canada Revenue Agency, the Canadian Space Agency, the National Research Council of Canada, Statistics Canada, and the Treasury Board of Canada) are governed by the federal Privacy Act.
Privacy rights of private-sector organizations are protected under statutes “substantially similar” to PIPEDA in the provinces of Alberta, British Columbia, and Quebec. PIPEDA applies in those provinces that do not have their own legislation. Each province and territory has its own public sector privacy legislation. In British Columbia, for example, the Freedom of Information and Protection of Privacy Act (FIPPA) sets out access and privacy rights of individuals as they relate to the public sector.
A review of each of these pieces of legislation is beyond our present scope, but a concise review of one is instructive. Under PIPEDA, personal information includes “information about an identifiable individual, but does not include the name, title or business address or telephone number of an employee of an organization”. The combined effect of PIPEDA s. 5 and s. 4.7.1 of Schedule 1 is to require that personal information be protected “by security safeguards appropriate to the sensitivity of the information,” including technological measures.
Section 34 of British Columbia’s Personal Information Protection Act (BC PIPA) states that “an organization must protect personal information in its custody or under its control by making reasonable security arrangements to prevent unauthorized access, collection, use, disclosure, copying, modification or disposal or similar risks”.
Broad legislative standards such as “reasonable” and “appropriate” arguably do nothing more than invoke the common-law test, where the issue is whether conduct is reasonable, having regard to that which would have been undertaken by a reasonably minded person operating in the same circumstances.
Though not the decision of a court, the Privacy Commissioner’s PIPEDA Report on Findings #2014-004 is revealing. In this matter, an individual received a breach notification letter from a third-party provider of ticketing, marketing and fundraising services based in the United States. The letter indicated that her personal information (including name, contact information, and credit card number) had potentially been accessed through a cyberattack. While the individual had no direct relationship with the organization, she had made a purchase from a merchant that used its services.
The letter was part of a broader breach notification effort that included notifying (i) United States law enforcement, (ii) Canadian data protection authorities, including the Privacy Commissioner, and (iii) the organization’s clients. Some of the organization’s Canadian clients were small businesses, so the organization also opted to contact these clients’ customers directly, where this course of action would be the most expedient means of notification. After receiving a notification letter, the individual filed a complaint against the organization under the PIPEDA.
In keeping with requirements of Section 5 and Schedule 1 of the Act, the investigation focused on whether the organization had safeguards in place that were appropriate to the sensitivity of the information at the time of the breach. It noted that the fact that a breach had occurred was not necessarily indicative of a contravention of the Act, as “an organization may have appropriate safeguards in place and still fall victim to a determined, clever and/or innovative attacker”.
In this instance, the commissioner found that the organization had numerous technical safeguards in place at the time of the incident that were aimed at preventing and detecting breaches. These included: (i) the use of firewalls, (ii) the hashing and encryption of sensitive information, (iii) separate storage and obfuscation of encryption keys, and (iv) multiple intrusion detection systems (through which the breach was detected). The effectiveness of these safeguards was independently evaluated on a regular basis through external vulnerability scans and an audit of its “at-rest” data protection practices against industry standards.
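Two of the safeguards the commissioner credited, the hashing of sensitive information and the separate storage of key material, can be illustrated in outline. The following is a minimal Python sketch using only the standard library; the function name, the salted PBKDF2 approach, and the sample card number are illustrative assumptions, not the organization’s actual implementation.

```python
import hashlib
import os


def hash_sensitive_value(value: str, salt: bytes = b"") -> tuple[bytes, bytes]:
    """Salt and hash a sensitive value so the stored form is not reversible."""
    salt = salt or os.urandom(16)  # fresh random salt unless one is supplied
    digest = hashlib.pbkdf2_hmac("sha256", value.encode(), salt, 100_000)
    return salt, digest


# Mirroring the "separate storage" practice: the salt/key material would be
# kept apart from the hashed data, so a breach of one store does not
# immediately expose the other.
salt, digest = hash_sensitive_value("4111 1111 1111 1111")
_, again = hash_sensitive_value("4111 1111 1111 1111", salt)
assert digest == again  # the same input and salt reproduce the same digest
```

The point, for legal purposes, is that safeguards of this kind convert stored sensitive data into a form that a determined attacker cannot simply read, which is part of what made the organization’s arrangements “appropriate”.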
The commissioner also accepted that “the organization had a vulnerability prevention program in place at the time of the breach; however, the vulnerability that led to the incident was a ‘zero-day exploit’, meaning it was not publicly known prior to the attack, and as such, the organization could not have had foreknowledge of it”.
Given the above, the commissioner found that the organization did have appropriate safeguards in place at the time of the breach. As such, the commissioner rejected the complaint.
While the criterion applied by the commissioner was “appropriateness”, it seems clear from the reasons given that “appropriateness” was judged in the context of what was reasonable and what was not. Certainly, a finding that the technologies employed were appropriate necessarily leads to a conclusion that they were reasonable.
A recent decision of the United States District Court in New Jersey is also instructive. In this case an action was brought by a shareholder against the directors and officers of Wyndham Worldwide Corporation for their refusal to bring suit in respect of data breaches of the company’s online networks, during which hackers accessed the personal and financial information of a large number of customers. The court found that the directors had made appropriate inquiries, obtained appropriate advice, and had enough information to make their decision. Their conduct was therefore reasonable in the circumstances, and the action was dismissed.
Wyndham Worldwide is, however, not yet out of the woods. The U.S. Federal Trade Commission commenced injunction proceedings in 2012, alleging that Wyndham’s failure to maintain reasonable security allowed intruders to obtain unauthorized access to its computer networks, as well as those of its franchisees, resulting in fraudulent charges on customers’ accounts, more than $10.6 million in fraud loss, and the export of hundreds of thousands of consumers’ payment card account information to a domain registered in Russia. Proceedings are ongoing.
Therefore, the answer to the question “How much cybersecurity is enough?” depends on the organization, the industry and the threats to which the organization is exposed. An equally important question is, “When do we have enough?” The frank answer to this question is, “Never.” Cybersecurity is a process, not a state. As cyber technologies and threats develop, so too do standards of care.