Differential Privacy for Privacy-Preserving Data Analysis

New blog series from NIST seeks to fill gaps in its Privacy Framework

Published: Thursday, March 25, 2021 - 12:03

Does your organization want to aggregate and analyze data to learn trends, but in a way that protects privacy? Or perhaps you are already using differential privacy tools, but want to expand (or share) your knowledge? In either case, NIST’s blog series on differential privacy is for you.

Why are we doing this series? Last year, NIST launched a Privacy Engineering Collaboration Space to aggregate open source tools, solutions, and processes that support privacy engineering and risk management. As moderators for the Collaboration Space, we’ve helped NIST gather differential privacy tools under the topic area of de-identification. NIST has also published the “Privacy Framework: A Tool for Improving Privacy through Enterprise Risk Management” and a companion road map that identified a number of privacy challenge areas, including de-identification.

Now we’d like to leverage the Collaboration Space to help close the road map’s gap on de-identification. Our end goal is to support NIST in turning this series into more in-depth guidelines on differential privacy.

Each post in the series will begin with conceptual basics and practical use cases, aimed at helping professionals such as business process owners or privacy program personnel learn just enough to be dangerous (just kidding). After covering the basics, we’ll look at available tools and their technical approaches for privacy engineers or IT professionals interested in implementation details. To get everyone up to speed, this first post will provide background on differential privacy and describe some key concepts that we’ll use in the rest of the series.

The challenge

How can we use data to learn about a population, without learning about specific individuals within the population? Consider these two questions:
1. “How many people live in Vermont?”
2. “How many people named Joe Near live in Vermont?”

The first reveals a property of the whole population, while the second reveals information about one person. We need to be able to learn about trends in the population while preventing the ability to learn anything new about a particular individual. This is the goal of many statistical analyses of data, such as the statistics published by the U.S. Census Bureau, and machine learning more broadly. In each of these settings, models are intended to reveal trends in populations, not reflect information about any single individual.

But how can we answer the first question, “How many people live in Vermont?” (which we’ll refer to as a query), while preventing the second question, “How many people named Joe Near live in Vermont?” from being answered? The most widely used solution is called “de-identification” (or “anonymization”), which removes identifying information from the data set. (We’ll generally assume a data set contains information collected from many individuals.)

Another option is to allow only aggregate queries, such as an average over the data. Unfortunately, we now understand that neither approach actually provides strong privacy protection. De-identified data sets are subject to database-linkage attacks. Aggregation only protects privacy if the groups being aggregated are sufficiently large, and even then, privacy attacks are still possible [1, 2, 3, 4].

Differential privacy

Differential privacy [5, 6] is a mathematical definition of what it means to have privacy. It is not a specific process like de-identification, but a property that a process can have. For example, it is possible to prove that a specific algorithm “satisfies” differential privacy.

Informally, differential privacy guarantees the following for each individual who contributes data for analysis: The output of a differentially private analysis will be roughly the same, whether or not you contribute your data. A differentially private analysis is often called a “mechanism,” and we denote it ℳ.


Figure 1: Informal definition of differential privacy

Figure 1 illustrates this principle. Answer “A” is computed without Joe’s data, while answer “B” is computed with Joe’s data. Differential privacy says that the two answers should be indistinguishable. This implies that whoever sees the output won’t be able to tell whether Joe’s data were used, or what Joe’s data contained.

We control the strength of the privacy guarantee by tuning the privacy parameter, ε, also called a “privacy loss” or “privacy budget.” The lower the value of the ε parameter, the more indistinguishable the results, and therefore the more each individual’s data are protected.


Figure 2: Formal definition of differential privacy
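For reference, the formal statement (essentially what Figure 2 depicts) is the standard definition of pure ε-differential privacy from Dwork et al. [5]: a mechanism ℳ satisfies ε-differential privacy if, for any two data sets D and D′ that differ in the data of a single individual, and for any set S of possible outputs,

Pr[ℳ(D) ∈ S] ≤ e^ε · Pr[ℳ(D′) ∈ S]

Smaller values of ε force these two probabilities closer together, which is why a lower ε means stronger protection for each individual.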

We can often answer a query with differential privacy by adding some random noise to the query’s answer. The challenge lies in determining where to add the noise and how much to add. One of the most commonly used mechanisms for adding noise is the Laplace mechanism [5, 7].

Queries with higher sensitivity require more noise in order to satisfy a given privacy budget ε, and this extra noise can make results less useful. We will describe sensitivity, and the tradeoff between privacy and usefulness, in more detail in future blog posts.
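As a concrete illustration (not from the original post), the sketch below applies the Laplace mechanism to the counting query from earlier. The data set, variable names, and parameter choices are hypothetical; the noise is drawn with NumPy’s Laplace sampler at scale sensitivity/ε.

import numpy as np

def laplace_mechanism(true_answer, sensitivity, epsilon):
    """Return a differentially private answer by adding Laplace noise.

    The noise scale is sensitivity / epsilon: a more sensitive query,
    or a smaller privacy budget epsilon, means more noise.
    """
    noise = np.random.laplace(loc=0.0, scale=sensitivity / epsilon)
    return true_answer + noise

# Counting query: "How many people in the data set live in Vermont?"
# A counting query has sensitivity 1, because adding or removing one
# person's record changes the count by at most 1.
states = ["VT", "NY", "VT", "CA", "VT"]           # toy data set
true_count = sum(1 for s in states if s == "VT")  # = 3

private_count = laplace_mechanism(true_count, sensitivity=1, epsilon=0.5)
print(private_count)  # e.g., 3.7: close to the true count, but randomized

Running the query repeatedly gives different answers that cluster around the true count; lowering ε widens the spread, trading usefulness for privacy.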

Benefits of differential privacy

Differential privacy has several important advantages over previous privacy techniques:
• It assumes all information is identifying information, eliminating the challenging (and sometimes impossible) task of accounting for all identifying elements of the data.
• It is resistant to privacy attacks based on auxiliary information, so it can effectively prevent the linking attacks that are possible on de-identified data.
• It is compositional. We can determine the privacy loss of running two differentially private analyses on the same data by simply adding up the individual privacy losses for the two analyses. Compositionality means that we can make meaningful guarantees about privacy even when releasing multiple analysis results from the same data. Techniques like de-identification are not compositional, and multiple releases under these techniques can result in a catastrophic loss of privacy.
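To make compositionality concrete, here is a small continuation of the sketch above (again hypothetical, not code from the original post): two differentially private queries are released from the same data set, and their privacy losses simply add.

# Two analyses on the same toy data set, each with its own privacy budget.
eps_1, eps_2 = 0.5, 0.5

vermont_count = laplace_mechanism(true_count, sensitivity=1, epsilon=eps_1)
total_count = laplace_mechanism(len(states), sensitivity=1, epsilon=eps_2)

# Sequential composition: the total privacy loss for both releases
# is the sum of the individual budgets.
total_epsilon = eps_1 + eps_2  # = 1.0

By contrast, there is no comparable way to add up the privacy cost of multiple de-identified releases of the same data.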

These advantages are the primary reasons a practitioner might choose differential privacy over some other data privacy technique. A current drawback is that differential privacy is still relatively new, and robust tools, standards, and best practices are not easily accessible outside of academic research communities. However, we expect this limitation to be overcome in the near future, driven by increasing demand for robust and easy-to-use data privacy solutions.

References
1. Garfinkel, Simson, John M. Abowd, and Christian Martindale. “Understanding database reconstruction attacks on public data.” Communications of the ACM 62, no. 3 (2019): 46–53.
2. Gadotti, Andrea, et al. “When the signal is in the noise: Exploiting Diffix’s sticky noise.” 28th USENIX Security Symposium (USENIX Security 19). 2019.
3. Dinur, Irit, and Kobbi Nissim. “Revealing information while preserving privacy.” Proceedings of the 22nd ACM SIGMOD-SIGACT-SIGART Symposium on Principles of Database Systems. 2003.
4. Sweeney, Latanya. “Simple demographics often identify people uniquely.” Health (San Francisco) 671 (2000): 1–34.
5. Dwork, Cynthia, et al. “Calibrating noise to sensitivity in private data analysis.” Theory of Cryptography Conference. Springer, Berlin, Heidelberg, 2006.
6. Wood, Alexandra, Micah Altman, Aaron Bembenek, Mark Bun, Marco Gaboardi, James Honaker, Kobbi Nissim, David R. O’Brien, Thomas Steinke, and Salil Vadhan. “Differential privacy: A primer for a non-technical audience.” Vand. J. Ent. & Tech. L. 21 (2018): 209.
7. Dwork, Cynthia, and Aaron Roth. “The algorithmic foundations of differential privacy.” Foundations and Trends in Theoretical Computer Science 9, no. 3–4 (2014): 211–407.

First published July 27, 2020, on NIST’s Cybersecurity Insights blog.

About The Authors

Joseph Near

Joseph Near is an assistant professor of computer science at the University of Vermont who supports NIST as a moderator for the Privacy Engineering Collaboration Space.

David Darais

David Darais is a principal scientist at Galois and supports NIST as a moderator for the Privacy Engineering Collaboration Space.

Kaitlin Boeckl

Katie Boeckl is a privacy risk strategist with the Privacy Engineering Program at the National Institute of Standards and Technology (NIST). In this role, she works to advance international privacy standards, develops privacy risk management guidance, and manages the Privacy Engineering Collaboration Space.