
Unexpected Information Leakage of Differential Privacy Due to the Linear Property of Queries



Abstract:

Differential privacy is a widely accepted concept of privacy preservation, and the Laplace mechanism is a famous instance of differentially private mechanisms used to deal with numerical data. In this paper, we find that differential privacy does not take the linear property of queries into account, resulting in unexpected information leakage. Specifically, the linear property makes it possible to divide one query into two queries, such as q(D) = q(D1) + q(D2) if D = D1 ∪ D2 and D1 ∩ D2 = ∅. If attackers want an answer to q(D), they can not only issue the query q(D) directly but also issue q(D1) to the mechanism and compute q(D2) by themselves, as long as they know D2. Through different divisions of the same query, attackers can obtain multiple distinct answers to that query from differentially private mechanisms. However, when the divisions are delicately designed, the total consumed privacy budget differs between the attackers' perspective and the mechanisms' perspective. This difference leads to unexpected information leakage because the privacy budget is the key parameter controlling the amount of information legally released by differentially private mechanisms. To demonstrate this leakage, we present a membership inference attack against the Laplace mechanism. Specifically, under the constraints of differential privacy, we propose a method for obtaining multiple independent and identically distributed (i.i.d.) samples of answers to queries that satisfy the linear property. The method relies on the linear property and some background knowledge of the attackers. When the background knowledge is sufficient, the method can obtain enough samples from differentially private mechanisms that the total consumed privacy budget becomes unreasonably large. Based on the obtained samples, a hypothesis testing method is used to determine whether a target record is in the target dataset.
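A minimal sketch may help illustrate the division idea described above. The counting query, toy dataset, and ε value below are illustrative assumptions, and the Laplace mechanism is simulated locally rather than taken from the paper's implementation:

```python
import numpy as np

rng = np.random.default_rng(0)

def laplace_mechanism(true_answer, sensitivity=1.0, epsilon=0.1):
    # Standard Laplace mechanism: release q(D) + Lap(sensitivity / epsilon).
    return true_answer + rng.laplace(scale=sensitivity / epsilon)

# Toy dataset and a counting query, which is linear:
# q(D) = q(D1) + q(D2) whenever D = D1 ∪ D2 and D1 ∩ D2 = ∅.
D = np.array([25, 37, 41, 29, 52, 33, 48, 60])   # e.g. ages
q = lambda records: float(np.sum(records > 30))  # count of ages over 30

# The attacker knows part of the dataset, D2 (background knowledge).
# For each division D = D1 ∪ D2, they pay ε for a noisy answer to q(D1)
# and add the exactly known q(D2), reconstructing an answer to q(D).
samples = []
for k in range(1, len(D)):
    D1, D2 = D[:k], D[k:]                   # attacker knows D2
    noisy_q_D1 = laplace_mechanism(q(D1))   # one ε-DP query per division
    samples.append(noisy_q_D1 + q(D2))      # = q(D) + fresh Laplace noise

# Every sample equals q(D) plus i.i.d. Laplace noise of the same scale,
# so averaging suppresses the noise, even though from the mechanism's
# side each query touched a different sub-dataset.
print("true q(D):", q(D), "| averaged estimate:", np.mean(samples))
```

Each reconstructed sample is an independent draw of q(D) plus Laplace noise, which is what enables the hypothesis-testing step of the membership inference attack.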
Page(s): 3123–3137
Date of Publication: 26 April 2021


I. Introduction

Differential privacy is the state-of-the-art concept for privacy preservation because it formalizes a strong privacy guarantee on a solid mathematical foundation: even if two datasets differ in only one record, it is difficult to distinguish one from the other. In differential privacy, this guarantee is quantified by the probability with which attackers can distinguish one dataset from another, and this probability is controlled by a parameter called the privacy budget.
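For reference, the guarantee described above is the standard ε-differential privacy condition, and the Laplace mechanism attains it by calibrating noise to the query's sensitivity (these are the standard textbook definitions, stated here for concreteness):

```latex
% A mechanism M is \varepsilon-differentially private if, for all
% neighboring datasets D, D' (differing in exactly one record) and
% all measurable output sets S,
\Pr[\mathcal{M}(D) \in S] \;\le\; e^{\varepsilon}\,\Pr[\mathcal{M}(D') \in S].
% The Laplace mechanism answers a numerical query q by releasing
\mathcal{M}(D) = q(D) + \mathrm{Lap}\!\left(\Delta q / \varepsilon\right),
% where \Delta q = \max_{D \sim D'} |q(D) - q(D')| is the sensitivity of q.
```

A smaller privacy budget ε forces larger noise, making the two neighboring datasets harder to distinguish.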

