Most in the IT space won’t know this, but NIST has one of the world’s best concrete engineering programs. Maybe we just have concrete on the mind since a couple of us in the office are doing house renovations, but with today’s publication of NIST Internal Report 8062, An Introduction to Privacy Engineering and Risk Management in Federal Systems (NISTIR 8062), we are taking a page from the concrete folks’ book: a document that we believe hardens the way we treat privacy, moving us one step closer to making privacy more science than art. NISTIR 8062 introduces the concept of applying systems engineering practices to privacy and provides a new model for conducting privacy risk assessments on federal systems.
There were several reasons for venturing into this territory. Certainly the Office of Management and Budget’s July 2016 update to Circular A-130 gave us a strong impetus, but our ongoing trusted identities pilot program was also a significant earlier driver. The pilots need to demonstrate their alignment with the NSTIC Guiding Principles, but in the first couple of years of the program, grant recipients often had difficulty expressing to us how their solutions aligned with the Privacy Guiding Principle. Even agreeing on the kinds of privacy risks of greatest concern in federated identity solutions could drag out over multiple rounds of discussion.
NIST has produced a wealth of guidance on information security risk management (the foundation of which is NIST’s Risk Management Framework), but there is no comparable body of work for privacy. International privacy framework standards recognize the need to identify privacy risk, but there are no widely accepted models for actually conducting the assessment.
We learned from stakeholders that part of the problem is the absence of a universal vocabulary for talking about the privacy outcomes that organizations want to see in their systems. In information security, organizations understand that they are trying to avoid losses of confidentiality, integrity and availability in their systems. The privacy field has the Fair Information Practice Principles, but as high-level principles they aren’t written in terms that system engineers can easily understand and apply. Oftentimes, privacy policy teams must make ad hoc translations to implement them in specific systems.
To try to bridge this communication gap and produce repeatable processes that could lead to measurable results, we began by considering how privacy and information security are related and how they are distinct. The Venn diagram below illustrates how information security operates in the space of unauthorized behavior within the system, whereas privacy is better described as dealing with the system’s processing of personally identifiable information (PII) that is permissible, or authorized. The two fields overlap around the security of PII.
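To make that boundary a little more tangible, here is a toy sketch, our own illustration rather than anything from NISTIR 8062, of how a single data action might land in one discipline, the other, or the overlap. The action names and fields are hypothetical.

```python
# Toy illustration of the security/privacy overlap described above.
# The actions and fields are hypothetical, not from NISTIR 8062.
from dataclasses import dataclass

@dataclass
class DataAction:
    name: str
    involves_pii: bool   # does the action touch personally identifiable information?
    authorized: bool     # is the action permitted by the system's design and policy?

def concern_areas(action: DataAction) -> set[str]:
    """Map a data action to the discipline(s) that would analyze it."""
    areas = set()
    if not action.authorized:
        # Unauthorized system behavior is the domain of information security.
        areas.add("information security")
    if action.involves_pii:
        if action.authorized:
            # Authorized processing of PII can still create problems for
            # individuals; that is the distinct concern of privacy.
            areas.add("privacy")
        else:
            # Unauthorized behavior involving PII sits in the overlap: security of PII.
            areas.add("privacy (security of PII)")
    return areas

print(concern_areas(DataAction("share email address with ad partner", True, True)))
# {'privacy'}
print(concern_areas(DataAction("attacker reads the user database", True, False)))
# {'information security', 'privacy (security of PII)'}
```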
We also reflected on whether having privacy engineering objectives that had some functional equivalency to confidentiality, integrity, and availability could help bridge the gap between privacy principles and their implementation in systems. Here’s what we came up with.
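What we came up with were three privacy engineering objectives meant to play a role analogous to confidentiality, integrity, and availability: predictability, manageability, and disassociability. The one-line descriptions in the sketch below are our condensed paraphrases of the report’s definitions, not its exact text.

```python
# Paraphrased summary of the three privacy engineering objectives defined in
# NISTIR 8062; the one-line descriptions are condensed restatements, not the
# report's exact wording.
PRIVACY_ENGINEERING_OBJECTIVES = {
    "predictability": (
        "Enable reliable assumptions by individuals, owners, and operators "
        "about PII and its processing by a system."
    ),
    "manageability": (
        "Provide the capability for granular administration of PII, including "
        "alteration, deletion, and selective disclosure."
    ),
    "disassociability": (
        "Enable processing of PII or events without association to individuals "
        "or devices beyond the operational requirements of the system."
    ),
}

for objective, description in PRIVACY_ENGINEERING_OBJECTIVES.items():
    print(f"{objective}: {description}")
```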
Lastly, we developed, and confirmed with stakeholders, a privacy risk model to use in conducting privacy risk assessments. We needed a frame of reference for analysis, a clear adverse outcome that organizations could understand and identify. In information security, the risk model is based on the likelihood that a threat could exploit a system vulnerability, and the impact if that occurs. But what is the adverse event when systems are processing data about people in an authorized manner, meaning any life cycle action the system takes with data, from collection through disposal? We know that people can experience a variety of problems as a result of data processing, from psychological problems like embarrassment to more quantifiable problems like identity theft. We think that if organizations focus on the likelihood that any given action the system takes with data could create a problem for individuals, and on the impact if it did, they will have a clearer frame of reference for analyzing their systems and addressing any concerns they discover.
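To show the kind of analysis this implies, here is a minimal sketch of a privacy risk calculation: estimate, for each data action, the likelihood that it creates a problem for individuals and the impact if it does, then rank the results. The data actions, problems, and 1-to-10 scales are made-up examples for illustration, not values or a worksheet from NISTIR 8062.

```python
# Minimal sketch of a privacy risk calculation in the spirit of the model
# described above. The data actions, problems, and 1-10 scales are invented
# examples, not content from NISTIR 8062.
from dataclasses import dataclass

@dataclass
class PrivacyRisk:
    data_action: str   # a life cycle action the system takes with data
    problem: str       # the problem individuals could experience as a result
    likelihood: int    # 1 (rare) .. 10 (nearly certain)
    impact: int        # 1 (negligible) .. 10 (severe)

    @property
    def score(self) -> int:
        # Simple likelihood-times-impact scoring, analogous to common
        # information security risk scoring.
        return self.likelihood * self.impact

risks = [
    PrivacyRisk("retain attributes after authentication", "loss of trust", 6, 4),
    PrivacyRisk("log identity provider transactions", "unwanted tracking", 4, 7),
    PrivacyRisk("share attributes with relying parties", "identity theft", 2, 9),
]

# Rank the risks so the most serious ones can be communicated to management
# and matched with policy-based and/or technical controls.
for risk in sorted(risks, key=lambda r: r.score, reverse=True):
    print(f"{risk.score:>3}  {risk.data_action} -> {risk.problem}")
```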
How did this work out for our pilots? Frankly, it exceeded our expectations. Using this privacy risk model, they could identify new privacy risks, prioritize the risks, communicate them to senior management, and implement controls as appropriate (usually some combination of policy-based and technical controls). Shoutout to the pilots—we greatly appreciate your insights!
NISTIR 8062 is only an introduction to privacy engineering and risk management concepts. In the coming months and years, we will continue our engagement with stakeholders to refine these ideas and develop guidance on how to apply them. One of the properties of concrete that makes it so useful is that you can mold it into just about any shape, but once it sets you know exactly what to expect of its performance. This sort of flexible but consistent performance has long eluded those who care about systems-implementable privacy protections.