
A White-Box Generator Membership Inference Attack Against Generative Models



Abstract:

Using generative models to generate an unlimited number of synthetic samples is a popular alternative to sharing a database directly. When these models are built from sensitive data, the developers should ensure that the training dataset is appropriately protected; quantifying the privacy risk of these models is therefore important. In this paper, we focus on evaluating the privacy risk of publishing the generator of a generative adversarial network (GAN). Specifically, we conduct a white-box membership inference attack against GAN models. The proposed attack is applicable to various kinds of GANs. We evaluate the attack's accuracy across various model types and training configurations. The results demonstrate superior performance of the proposed attack compared to previous attacks under white-box generator access.
Date of Conference: 01-02 September 2021
Date Added to IEEE Xplore: 01 March 2022
Conference Location: Isfahan, Iran, Islamic Republic of

I. Introduction

Nowadays, machine learning models are used in a wide range of applications, and the availability of large datasets is one of the key factors in their success. These datasets are often crowdsourced and may contain sensitive data, so their confidentiality and privacy are important. However, machine learning models are known to implicitly memorize inappropriate details of sensitive data during training. Therefore, assessing the privacy risks of machine learning models is necessary. To this end, many attacks have been devised against these models that infer information about their training datasets. One such attack is the membership inference attack [1]: given a data record and access to the learned model, the attacker determines whether the record was in the model's training dataset.
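As a rough illustration of the general idea behind white-box generator membership inference (not the specific attack proposed in this paper), the sketch below uses latent-space reconstruction: with white-box access to a generator G, the attacker searches for a latent code z whose output G(z) best reconstructs the target record, and infers membership when the reconstruction distance is small. The linear "generator" W, the restart/step counts, and the decision threshold are all hypothetical, chosen only to make the toy runnable.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for a trained GAN generator G: a fixed linear map from a
# 2-D latent space to an 8-D data space. Purely illustrative -- a real
# attack would use the published generator network.
W = rng.normal(size=(8, 2))

def generator(z):
    return W @ z

# Step size from the Lipschitz constant of the squared-error gradient,
# so gradient descent on ||G(z) - x||^2 is guaranteed to converge.
LR = 1.0 / (2.0 * np.linalg.norm(W, 2) ** 2)

def reconstruction_distance(x, n_restarts=8, steps=500):
    """Search the latent space (using white-box gradients of G) for the
    code z whose output best reconstructs the target record x."""
    best = np.inf
    for _ in range(n_restarts):
        z = rng.normal(size=2)
        for _ in range(steps):
            z -= LR * 2.0 * W.T @ (W @ z - x)  # gradient of ||Wz - x||^2
        best = min(best, np.linalg.norm(W @ z - x))
    return best

# A record the generator can reproduce plays the role of a "member"...
member = generator(rng.normal(size=2))
# ...while a random record almost surely lies off the output manifold.
non_member = rng.normal(size=8)

d_in = reconstruction_distance(member)
d_out = reconstruction_distance(non_member)
threshold = 0.5  # hypothetical decision threshold
print(d_in < threshold, d_out < threshold)
```

In the toy setting the "member" record is reconstructed almost exactly while the random record retains a large residual, so thresholding the distance separates the two. Real attacks must contend with members that the generator only approximately reproduces, which is where calibrated thresholds and stronger distance measures come in.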

Cites in Papers - IEEE (2)

1. Minxing Zhang, Ning Yu, Rui Wen, Michael Backes, Yang Zhang, "Generated Distributions Are All You Need for Membership Inference Attacks Against Generative Models", 2024 IEEE/CVF Winter Conference on Applications of Computer Vision (WACV), pp. 4827-4837, 2024.
2. Mohammadhadi Shateri, Francisco Messina, Fabrice Labeau, Pablo Piantanida, "Preserving Privacy in GANs Against Membership Inference Attack", IEEE Transactions on Information Forensics and Security, vol. 19, pp. 1728-1743, 2024.

Cites in Papers - Other Publishers (1)

1. Kayode S. Adewole, Vicenç Torra, "Energy Disaggregation Risk Resilience through Microaggregation and Discrete Fourier Transform", Information Sciences, pp. 120211, 2024.
