Papers

Why data anonymization has not taken off

MJ Schneider, JB, D Iacobucci. Consumer Needs and Solutions, 2025.

Abstract

Companies are looking to data anonymization research – including differentially private and synthetic data methods – for simple and straightforward compliance solutions. But data anonymization has not taken off in practice because it is anything but simple to implement. For one, it requires making complex choices that are case-dependent, such as the domain of the dataset to anonymize; the units to protect; the scope to which the data protection should extend; and the standard of protection. Each variation of these choices changes the very meaning, as well as the practical implications, of differential privacy (or of any other measure of data anonymization). Yet differential privacy is frequently branded as the same privacy guarantee regardless of variations in these choices. Some data anonymization methods can be effective, but only when the insights required are much larger than the unit of protection. Given that businesses care about profitability, any solution must preserve the patterns between a firm’s data and that profitability. As a result, data anonymization solutions usually need to be bespoke and case-specific, which reduces their scalability. Companies should not expect easy wins, but rather recognize that anonymization is just one approach to data privacy with its own particular advantages and drawbacks, while the best strategies leverage the full range of approaches to data privacy and security in combination.

The Five Safes as a privacy context

JB, R Gong. Preprint, 2025.

Abstract

The Five Safes is a framework used by national statistical offices (NSOs) for assessing and managing the disclosure risk of data sharing. This paper makes two points: Firstly, the Five Safes can be understood as a specialization of a broader concept – contextual integrity – to the situation of statistical dissemination by an NSO. We demonstrate this by mapping the five parameters of contextual integrity onto the five dimensions of the Five Safes. Secondly, the Five Safes contextualizes narrow, technical notions of privacy within a holistic risk assessment. We demonstrate this with the example of differential privacy (DP). This contextualization allows NSOs to place DP within their Five Safes toolkit while also guiding the design of DP implementations within the broader privacy context, as delineated by both their regulation and the relevant social norms.

Property elicitation on imprecise probabilities

JB\(^\dagger\), R Derr\(^\dagger\). Working Paper, 2025.

Abstract

Property elicitation studies which attributes of a probability distribution can be determined by minimising a risk. We investigate a generalisation of property elicitation to imprecise probabilities (IP). This investigation is motivated by multi-distribution learning, which takes the classical machine learning paradigm of minimising a single risk over a (precise) probability and replaces it with \(\Gamma\)-maximin risk minimisation over an IP. We provide necessary conditions for elicitability of an IP-property. Furthermore, we explain what an elicitable IP-property actually elicits through Bayes pairs – the elicited IP-property is the corresponding standard property of the maximum Bayes risk distribution.
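
For orientation, here is a textbook instance of property elicitation together with its \(\Gamma\)-maximin analogue; this is a generic illustration, not a result from the paper. For a precise \(P\), the mean is elicited by the squared loss, while under \(\Gamma\)-maximin one minimises the worst-case risk over a credal set \(\mathcal{P}\):

\[
\mathbb{E}_P[Y] \;=\; \arg\min_{r \in \mathbb{R}} \, \mathbb{E}_P\big[(Y - r)^2\big],
\qquad
r^{*}_{\Gamma} \;=\; \arg\min_{r \in \mathbb{R}} \, \sup_{P \in \mathcal{P}} \mathbb{E}_P\big[(Y - r)^2\big].
\]

The question the paper addresses is which attribute of the imprecise probability \(\mathcal{P}\) a minimiser such as \(r^{*}_{\Gamma}\) actually pins down, and under what conditions such an attribute is elicitable at all.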

Generalization bounds and stopping rules for learning with self-selected data

J Rodemann, JB. Preprint, 2025.

Abstract

Many learning paradigms self-select training data in light of previously learned parameters. Examples include active learning, semi-supervised learning, bandits, and boosting. Rodemann et al. (2024) unify them under the framework of 'reciprocal learning'. In this article, we address the question of how well these methods can generalize from their self-selected samples. In particular, we prove universal generalization bounds for reciprocal learning using covering numbers and Wasserstein ambiguity sets. Our results require no assumptions on the distribution of self-selected data, only verifiable conditions on the algorithms. We prove results for both convergent and finite-iteration solutions. The latter are anytime valid, thereby giving rise to stopping rules for a practitioner seeking to guarantee the out-of-sample performance of their reciprocal learning algorithm. Finally, we illustrate our bounds and stopping rules for reciprocal learning's special case of semi-supervised learning.
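
For orientation, the shape of a covering-number generalization bound in the classical i.i.d. setting is the following textbook result; it is not the reciprocal-learning bound of the paper, which must additionally control the dependence created by self-selected samples. For a loss taking values in \([0,1]\), an i.i.d. sample of size \(n\), and an \(\epsilon\)-cover of the induced loss class of size \(N(\epsilon)\) in the sup norm, with probability at least \(1-\delta\),

\[
\sup_{f \in \mathcal{F}} \big| R(f) - \widehat{R}_n(f) \big| \;\le\; 2\epsilon + \sqrt{\frac{\log\big(2\,N(\epsilon)/\delta\big)}{2n}} .
\]

The paper's bounds use covering numbers together with Wasserstein ambiguity sets to handle the self-selected data; their anytime-valid, finite-iteration versions are what yield the stopping rules.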

Topics in privacy, data privacy and differential privacy

JB. PhD Thesis, Harvard University, 2025.

Abstract

In an era of unprecedented data availability and analytic capacity, the protection of individuals’ privacy in statistical data releases is becoming an increasingly difficult problem. This dissertation contributes to the theoretical and methodological foundations of statistical data privacy, largely focusing on differential privacy (DP). We begin with a multifaceted investigation into privacy from legal, economic, social, and philosophical standpoints, before turning to a formal system of DP specifications built around five core building blocks found throughout the literature: the domain, multiverse, input premetric, output premetric, and protection loss budget. This system is applied to statistical disclosure control (SDC) mechanisms used in the US Decennial Census, analyzing both the traditional method of data swapping and the contemporary TopDown Algorithm. Beyond these case studies, this dissertation explores the inferential limitations posed by DP and Pufferfish privacy in both frequentist and Bayesian settings, establishing general bounds under mild assumptions. It further addresses the challenges of applying DP to complex survey pipelines, incorporating issues such as sampling, weighting, and imputation. Finally, it contextualizes DP within broader frameworks of data privacy, namely the Five Safes and contextual integrity, advocating for a more integrated approach to privacy that respects statistical utility, transparency, and societal norms.

A refreshment stirred, not shaken (III): Can swapping be differentially private?

JB, R Gong, XL Meng. To appear in Data Privacy Protection and the Conduct of Applied Research: Methods, Approaches and Their Consequences, 2025.

Abstract

The quest for a precise and contextually grounded answer to the question in the present paper's title resulted in this stirred-not-shaken triptych, a phrase that reflects our desire to deepen the theoretical basis, broaden the practical applicability, and reduce the misperception of differential privacy (DP)—all without shaking its core foundations. Indeed, given the existence of more than 200 formulations of DP (and counting), before even attempting to answer the titular question one must first precisely specify what it actually means to be DP. Motivated by this observation, a theoretical investigation into DP's fundamental essence resulted in Part I of this trio, which introduces a five-building-block system explicating the who, where, what, how and how much aspects of DP. Instantiating this system in the context of the United States Decennial Census, Part II then demonstrates the broader applicability and relevance of DP by comparing a swapping strategy like that used in 2010 with the TopDown Algorithm—a DP method adopted in the 2020 Census. This paper provides nontechnical summaries of the preceding two parts as well as new discussion—for example, on how greater awareness of the five building blocks can thwart privacy theatrics; how our results bridging traditional SDC and DP allow a data custodian to reap the benefits of both these fields; how invariants impact disclosure risk; and how removing the implicit reliance on aleatoric uncertainty could lead to new generalizations of DP.

A refreshment stirred, not shaken (II): Invariant-preserving deployments of differential privacy for the US Decennial Census

JB, R Gong, XL Meng. Preprint, 2025.

Abstract

Through the lens of the system of differential privacy specifications developed in Part I of a trio of articles, this second paper examines two statistical disclosure control (SDC) methods for the United States Decennial Census: the Permutation Swapping Algorithm (PSA), which is similar to the 2010 Census's disclosure avoidance system (DAS), and the TopDown Algorithm (TDA), which was used in the 2020 DAS. To varying degrees, both methods leave unaltered some statistics of the confidential data – which are called the method's invariants – and hence neither can be readily reconciled with differential privacy (DP), at least as it was originally conceived. Nevertheless, we establish that the PSA satisfies \(\varepsilon\)-DP subject to the invariants it necessarily induces, thereby showing that this traditional SDC method can in fact still be understood within our more-general system of DP specifications. By a similar modification to \(\rho\)-zero-concentrated DP, we also provide a DP specification for the TDA. Finally, as a point of comparison, we consider the counterfactual scenario in which the PSA was adopted for the 2020 Census, resulting in a reduction in the nominal privacy loss, but at the cost of releasing many more invariants. Therefore, while our results explicate the mathematical guarantees of SDC provided by the PSA, the TDA and the 2020 DAS in general, care must be taken in their translation to actual privacy protection – just as is the case for any DP deployment.
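
Schematically, "satisfies \(\varepsilon\)-DP subject to the invariants" restricts the usual DP inequality to dataset pairs that agree on the invariant statistics; this is a simplified rendering, with the formal treatment relying on the system of DP specifications from Part I. Writing \(C(\cdot)\) for the invariants (for instance, totals that the DAS must reproduce exactly), the requirement on a mechanism \(M\) reads

\[
\Pr\big[M(x) \in S\big] \;\le\; e^{\varepsilon}\, \Pr\big[M(x') \in S\big]
\quad \text{for all measurable } S \text{ and all neighbouring } x, x' \text{ with } C(x) = C(x').
\]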

Mapping Africa settlements: High resolution urban and rural map by deep learning and satellite imagery

M Kakooei et al. Preprint, 2024.

Abstract

Accurate Land Use and Land Cover (LULC) maps are essential for understanding the drivers of sustainable development, in terms of the complex interrelationships between human activities and natural resources. However, existing LULC maps often lack precise urban and rural classifications, particularly in diverse regions like Africa. This study presents a novel construction of a high-resolution rural-urban map using deep learning techniques and satellite imagery. We developed a deep learning model based on the DeepLabV3 architecture, which was trained on satellite imagery from Landsat-8 and the ESRI LULC dataset, augmented with human settlement data from the GHS-SMOD. The model utilizes semantic segmentation to classify land into detailed categories, including urban and rural areas, at a 10-meter resolution. Our findings demonstrate that incorporating LULC along with urban and rural classifications significantly enhances the model's ability to accurately distinguish between urban, rural, and non-human settlement areas. Therefore, our maps can support more informed decision-making for policymakers, researchers, and stakeholders. We release a continent-wide urban-rural map covering the years 2016 and 2022.
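
As a minimal sketch of the segmentation setup described above, under stated assumptions – a torchvision DeepLabV3 model with a ResNet-50 backbone, three input bands, and a three-class label set (non-settlement / rural / urban) – the following illustrates the per-pixel classification step; the paper's actual backbone, band selection, label set and training procedure may differ.

    import torch
    from torchvision.models.segmentation import deeplabv3_resnet50

    # Hypothetical label set for illustration: 0 = non-settlement, 1 = rural, 2 = urban.
    NUM_CLASSES = 3

    # DeepLabV3 semantic-segmentation model; the ResNet-50 backbone is an assumption,
    # the abstract only specifies the DeepLabV3 architecture.
    model = deeplabv3_resnet50(weights=None, num_classes=NUM_CLASSES)
    model.eval()

    # Dummy 3-band tile standing in for a Landsat-8 composite.
    tile = torch.randn(1, 3, 512, 512)
    with torch.no_grad():
        logits = model(tile)["out"]    # per-pixel class scores, shape (1, 3, 512, 512)
    prediction = logits.argmax(dim=1)  # per-pixel class map, shape (1, 512, 512)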

General inferential limits under differential and Pufferfish privacy

JB, R Gong. International Journal of Approximate Reasoning, 2024.

Abstract

Differential privacy (DP) is a class of mathematical standards for assessing the privacy provided by a data-release mechanism. This work concerns two important flavors of DP that are related yet conceptually distinct: pure \(\varepsilon\)-differential privacy (\(\varepsilon\)-DP) and Pufferfish privacy. We restate \(\varepsilon\)-DP and Pufferfish privacy as Lipschitz continuity conditions and provide their formulations in terms of an object from the imprecise probability literature: the interval of measures. We use these formulations to derive limits on key quantities in frequentist hypothesis testing and in Bayesian inference using data that are sanitised according to either of these two privacy standards. Under very mild conditions, the results in this work are valid for arbitrary parameters, priors and data generating models. These bounds are weaker than those attainable when analysing specific data generating models or data-release mechanisms. However, they provide generally applicable limits on the ability to learn from differentially private data – even when the analyst's knowledge of the model or mechanism is limited. They also shed light on the semantic interpretations of the two DP flavors under examination, a subject of contention in the current literature.
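
In its simplest form, the interval-of-measures reading of pure \(\varepsilon\)-DP referred to above can be stated as follows (the paper develops this formally and extends it to Pufferfish privacy): for every pair of neighbouring databases \(x\) and \(x'\), the distribution \(P_x\) of the sanitised output satisfies

\[
e^{-\varepsilon}\, P_{x'}(S) \;\le\; P_x(S) \;\le\; e^{\varepsilon}\, P_{x'}(S)
\qquad \text{for all measurable } S,
\]

that is, \(P_x\) lies in the interval of measures \([\,e^{-\varepsilon} P_{x'},\, e^{\varepsilon} P_{x'}\,]\). One immediate Bayesian consequence is that the posterior odds between \(x\) and \(x'\), after observing a sanitised output, can differ from the prior odds by a factor of at most \(e^{\varepsilon}\).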

The complexities of differential privacy for survey data

J Drechsler, JB. To appear in Data Privacy Protection and the Conduct of Applied Research: Methods, Approaches and Their Consequences, 2024.

Abstract

The concept of differential privacy (DP) has gained substantial attention in recent years, most notably since the U.S. Census Bureau announced the adoption of the concept for its 2020 Decennial Census. However, despite its attractive theoretical properties, implementing DP in practice remains challenging, especially when it comes to survey data. In this paper we present some results from an ongoing project funded by the U.S. Census Bureau that is exploring the possibilities and limitations of DP for survey data. Specifically, we identify five aspects that need to be considered when adopting DP in the survey context: the multi-staged nature of data production; the limited privacy amplification from complex sampling designs; the implications of survey-weighted estimates; the weighting adjustments for nonresponse and other data deficiencies; and the imputation of missing values. We summarize the project's key findings with respect to each of these aspects and also discuss some of the challenges that still need to be addressed before DP could become the new data protection standard at statistical agencies.
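
For context on the sampling point, the textbook amplification-by-subsampling result is the following; it applies to Poisson (independent Bernoulli) sampling, whereas the project finds the amplification available under realistic complex survey designs to be more limited. If each record enters the sample independently with probability \(q\) and the mechanism applied to the sampled data is \(\varepsilon\)-DP, then the overall release satisfies \(\varepsilon'\)-DP with

\[
\varepsilon' \;=\; \log\!\big(1 + q\,(e^{\varepsilon} - 1)\big) \;\le\; \varepsilon .
\]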

Differential privacy: General inferential limits via intervals of measures

JB, R Gong. Thirteenth International Symposium on Imprecise Probability: Theories and Applications, 2023.

Abstract

Differential privacy (DP) is a mathematical standard for assessing the privacy provided by a data-release mechanism. We provide formulations of pure \(\varepsilon\)-differential privacy first as a Lipschitz continuity condition and then using an object from the imprecise probability literature: the interval of measures. We utilise this second formulation to establish bounds on the appropriate likelihood function for \(\varepsilon\)-DP data – and in turn derive limits on key quantities in both frequentist hypothesis testing and Bayesian inference. Under very mild conditions, these results are valid for arbitrary parameters, priors and data generating models. These bounds are weaker than those attainable when analysing specific data generating models or data-release mechanisms. However, they provide generally applicable limits on the ability to learn from differentially private data – even when the analyst’s knowledge of the model or mechanism is limited. They also shed light on the semantic interpretation of differential privacy, a subject of contention in the current literature.
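
A concrete frequentist instance of the kind of limit established here, stated in its simplest textbook form rather than in the paper's generality: any test of the neighbouring databases \(x\) against \(x'\) that is based only on \(\varepsilon\)-DP output and has significance level \(\alpha\) has power at most

\[
\Pr_{x'}[\text{reject}] \;\le\; e^{\varepsilon}\, \Pr_{x}[\text{reject}] \;\le\; e^{\varepsilon}\alpha ,
\]

so a small \(\varepsilon\) makes the two databases nearly indistinguishable no matter how the analysis is carried out.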

Can swapping be differentially private? A refreshment stirred, not shaken

JB, R Gong, XL Meng. Working Paper, 2023.

Abstract

This paper presents a formal privacy analysis of data swapping, a family of statistical disclosure control (SDC) methods which were used in the 1990, 2000 and 2010 US Decennial Census disclosure avoidance systems (DAS). Like all swapping algorithms, the method we examine has invariants – statistics calculated from the confidential database which remain unchanged. We prove that our swapping method satisfies the classic notion of pure differential privacy (\(\varepsilon\)-DP) when conditioning on these invariants. To support this privacy analysis, we provide a framework which unifies many different types of DP while simultaneously explicating the nuances that differentiate these types. This framework additionally supplies a DP definition for the TopDown algorithm (TDA) which also has invariants and was used as the SDC method for the 2020 Census Redistricting Data (P.L. 94-171) Summary and the Demographic and Housing Characteristics Files. To form a comparison with the privacy of the TDA, we compute the budget (along with the other DP components) in the counterfactual scenario that our swapping method was used for the 2020 Decennial Census. By examining swapping in the light of formal privacy, this paper aims to reap the benefits of DP – formal privacy guarantees and algorithmic transparency – without sacrificing the advantages of traditional SDC. This examination also reveals an array of subtleties and traps in using DP for theoretically benchmarking privacy protection methods in general. Using swapping as a demonstration, our optimistic hope is to inspire formal and rigorous framing and analysis of other SDC techniques in the future, as well as to promote nuanced assessments of DP implementations which go beyond discussion of the privacy loss budget \(\varepsilon\).

Big data, differential privacy and national statistical organisations

JB. Statistical Journal of the IAOS, 2020.

Abstract

Differential privacy (DP) has emerged in the computer science literature as a measure of the impact on an individual’s privacy resulting from the publication of a statistical output such as a frequency table. This paper provides an introduction to DP for official statisticians and discusses its relevance, benefits and challenges from a National Statistical Organisation (NSO) perspective. We motivate our study by examining how privacy is evolving in the era of big data and how this might prompt a shift from traditional statistical disclosure techniques used in official statistics – which are generally applied on a cell-by-cell or table-by-table basis – to formal privacy methods, like DP, which are applied from a perspective encompassing the totality of the outputs generated from a given dataset. We identify an important interplay between DP’s holistic privacy risk measure and the difficulty for NSOs in implementing DP, showing that DP’s major advantage is also DP’s major challenge. This paper provides new work addressing two key DP research areas for NSOs: DP’s application to survey data and its incorporation within the Five Safes framework.
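
The "totality of the outputs" perspective rests on DP's basic sequential composition property, a standard fact worth recalling here: if outputs \(M_1, \dots, M_k\) computed from the same dataset satisfy \(\varepsilon_1\)-DP, ..., \(\varepsilon_k\)-DP respectively, then releasing all of them together satisfies

\[
\Big(\textstyle\sum_{i=1}^{k} \varepsilon_i\Big)\text{-DP},
\]

so the privacy-loss budget accumulates over every output derived from the dataset, rather than being assessed cell by cell or table by table.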

ABS perturbation methodology through the lens of differential privacy

JB, C-H Chien. Work Session on Statistical Data Confidentiality, UN Economic Commission for Europe, 2019.

Abstract

The Australian Bureau of Statistics (ABS), like other national statistical offices, is considering the opportunities of differential privacy (DP). This research considers the ABS TableBuilder perturbation methodology in a DP framework. DP and the ABS perturbation methodology apply the same idea – infusing noise into the underlying microdata – to protect aggregate statistical outputs. This research describes some differences between these approaches. Our findings show that noise infusion protects against disclosure risks in the aggregate Census tables. We highlight areas of future ABS research on this topic.
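
As a toy illustration of the shared idea of infusing noise to protect aggregate counts – a minimal sketch only; it is not the ABS TableBuilder methodology, which differs in detail from the Laplace mechanism shown here:

    import numpy as np

    rng = np.random.default_rng(seed=0)

    def laplace_count(true_count: int, epsilon: float) -> float:
        """Release a count under epsilon-DP via the Laplace mechanism.

        A counting query has sensitivity 1 (adding or removing one record
        changes it by at most 1), so Laplace noise with scale 1/epsilon
        gives epsilon-DP for this single output.
        """
        return true_count + rng.laplace(loc=0.0, scale=1.0 / epsilon)

    # Protect one cell of an aggregate table, e.g. a small-area count.
    noisy_cell = laplace_count(true_count=120, epsilon=1.0)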


\(^\dagger\) indicates alphabetical ordering of authors.