Emerging technologies, such as cloud computing and the “smart city,” have the potential to greatly advance our quality of life. The use, retention, and storage of data that go along with them, however, have raised citizen concerns about privacy risks. The National Institute of Standards and Technology (“NIST”) addresses these concerns in a new draft report titled Privacy Risk Management for Federal Information Systems (NISTIR 8062), which was released on May 29, 2015. The report introduces NIST’s Privacy Risk Management Framework (“PRMF”), which anticipates and addresses privacy risk resulting from the processing of personal information. NIST intends that the framework will lay the foundation for establishing a common vocabulary that facilitates better understanding of (and communication about) privacy risks and how to effectively implement privacy principles. Although the report is directed at federal systems, the principles outlined may be useful for any business that processes personal information. The NIST report focuses on the development of two key pillars of the PRMF: privacy engineering objectives and a Privacy Risk Model.
Privacy Engineering Objectives
Three privacy engineering objectives enable system designers and engineers to build information systems that are capable of implementing an agency’s privacy goals. They also are intended to support privacy risk management by facilitating consistent, actionable, and measurable design decisions.
- Predictability enables individuals to make reliable assumptions about the use of personal information. Predictability is core to building trust and enabling self-determination: the goal is to design systems so that stakeholders are not surprised by the way in which personal information is handled. The concept enables operators to assess the impact of any changes in an information system and implement appropriate controls. Even in a system that may create unpredictable or previously unknown results—such as a large data analysis or research effort—predictability can provide a valuable set of insights about how to control privacy risks that may arise. For example, if the results of a study are inherently unpredictable, operators can implement controls to restrict access to or use of those results.
- Manageability provides the capability for granular administration of personal information, including alteration, deletion, and selective disclosure. Manageability is not an absolute right, but rather a system property that allows individuals to control their information while minimizing potential conflicts in system functionality. Consider a system in which fraud detection is a concern. In such a system, manageability may limit individuals' ability to edit or delete information themselves, while enabling an appropriately privileged actor to administer changes to maintain accuracy and fair treatment of individuals.
- Disassociability enables a data system to process personal information or events without associating the information with individuals or devices beyond the system’s operational requirements. Some interactions—such as providing health care services or processing credit card transactions—rely on privacy but also require identification. Unlike confidentiality, which is focused on preventing unauthorized access to information, disassociability recognizes that privacy risks can result from exposures even when access is authorized or as a byproduct of a transaction. The principle allows system designers to deliberately weigh identification needs against privacy risks at each stage of design.
These three privacy engineering objectives are intended to provide a degree of precision and measurability so that system designers and engineers, working with policy teams, can use them to bridge the gap between high-level principles and practical implementation within a functional system.
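Of the three objectives, disassociability lends itself most directly to a concrete technical illustration. The NIST report does not prescribe any particular mechanism, but one common technique consistent with the objective is keyed pseudonymization: records about the same individual stay linkable within the system, while the direct identifier is never stored and re-identification requires a key that can be held separately. The sketch below is purely illustrative; the identifier, key, and record fields are hypothetical.

```python
import hashlib
import hmac


def pseudonymize(identifier: str, key: bytes) -> str:
    """Replace a direct identifier with a stable pseudonym.

    Records about the same person remain linkable inside the system,
    but the identifier itself is never stored; re-identification
    requires the key, which can be held by a separate custodian.
    """
    return hmac.new(key, identifier.encode(), hashlib.sha256).hexdigest()[:16]


# Hypothetical health-care record: the service processes the visit
# without retaining the patient's direct identifier.
key = b"key-held-by-a-separate-custodian"  # illustrative only
record = {
    "patient": pseudonymize("jane.doe@example.com", key),
    "diagnosis_code": "E11.9",
}
print(record)
```

Because the pseudonym is deterministic for a given key, the system can still correlate a patient's visits over time—an example of processing information "beyond the system's operational requirements" only when the key holder chooses to re-identify.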
Privacy Risk Model
The Privacy Risk Model aims to provide a repeatable and measurable method for addressing privacy risk in federal information systems. The model distinguishes between privacy risk, in which adverse outcomes can arise from the operations of the system itself, and security risk, in which a damaging external event creates the risk.
The Model defines “privacy risk” as a function of the likelihood that a data action causes problems for individuals, such as loss of trust or economic loss, and the impact of the problematic data action. “Data action” refers to an information system operation that processes personal information. Problematic data actions include appropriation, in which personal information is used in ways that exceed an individual’s expectation or authorization; and surveillance, which is tracking or monitoring of personal information that is disproportionate to the purpose or outcome of the service. The function can be expressed as:
Privacy Risk = Likelihood of a problematic data action × Impact of a problematic data action
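The equation can be sketched in a few lines of code. The structure below is a minimal illustration of the risk function, not the PRAM worksheets themselves; the scoring scales and the likelihood and impact values assigned to each data action are hypothetical.

```python
# Minimal sketch of the NIST privacy risk function:
#   privacy risk = likelihood of a problematic data action x its impact.
# Scales (likelihood 0.0-1.0, impact 1-10) and values are illustrative.
from dataclasses import dataclass


@dataclass
class DataAction:
    name: str
    likelihood: float  # probability the data action becomes problematic
    impact: float      # severity of the resulting problem for individuals

    @property
    def privacy_risk(self) -> float:
        return self.likelihood * self.impact


actions = [
    DataAction("appropriation", likelihood=0.3, impact=8.0),
    DataAction("surveillance", likelihood=0.6, impact=5.0),
]

# Rank data actions so the highest-risk ones are mitigated first.
for action in sorted(actions, key=lambda a: a.privacy_risk, reverse=True):
    print(f"{action.name}: {action.privacy_risk:.1f}")
```

Expressing the model this way highlights why NIST treats likelihood and impact as separate inputs: a frequent but minor data action and a rare but severe one can carry comparable risk, and controls can target either factor.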
NIST has developed a set of worksheets collectively called the Privacy Risk Assessment Methodology (“PRAM”) to help stakeholders apply the Privacy Risk Model.
Future areas of work in privacy risk management will focus on improving the application of controls (policy, operational, and technical) to mitigate risks identified by the PRMF. Improving the application of controls will require research to identify the breadth of controls available, the kinds of privacy risks that controls can address, how controls can be effectively applied, and what kind of ancillary effects their application may create.
NIST requests feedback to refine the privacy engineering objectives and the privacy risk equation, and to develop additional guidance to assist agencies in determining the likelihood and impact of privacy risks. NIST invites the public to send comments on its draft report to firstname.lastname@example.org through July 31, 2015, at 5:00 p.m. EDT.
Brian Kennedy, Associate in our Washington, D.C. office, contributed to this post.