
A White-Box Generator Membership Inference Attack Against Generative Models



Abstract:

Using generative models to generate an unlimited number of synthetic samples is a popular alternative to sharing databases directly. When these models are built using sensitive data, the developers should ensure that the training dataset is appropriately protected; hence, quantifying the privacy risk of these models is important. In this paper, we focus on evaluating the privacy risk of publishing the generator of a generative adversarial network (GAN). Specifically, we conduct a white-box membership inference attack against GAN models. The proposed attack is applicable to various kinds of GANs. We evaluate our attack's accuracy with respect to various model types and training configurations. The results demonstrate superior performance of the proposed attack compared to previous attacks under white-box generator access.
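
For concreteness, the underlying decision problem can be stated as the standard membership inference experiment (a generic formulation in the spirit of [1] and [6]; the notation below is ours, not the paper's): given a record $x$ and white-box access to the trained generator $G$, the adversary $\mathcal{A}$ outputs a membership bit, and its advantage over random guessing is

$$\mathrm{Adv}(\mathcal{A}) = \Pr\left[\mathcal{A}(x, G) = 1 \mid x \in D_{\mathrm{train}}\right] - \Pr\left[\mathcal{A}(x, G) = 1 \mid x \notin D_{\mathrm{train}}\right].$$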
Date of Conference: 01-02 September 2021
Date Added to IEEE Xplore: 01 March 2022
Conference Location: Isfahan, Iran, Islamic Republic of

I. Introduction

Nowadays, machine learning models are used in a wide range of applications, and the availability of large datasets is one of the key factors in their success. These datasets are often crowdsourced and may contain sensitive data, so their confidentiality and privacy are important. However, machine learning models are known to implicitly memorize inappropriate details of the sensitive data during training. Therefore, assessing the privacy risks of machine learning models is necessary. For this purpose, many attacks have been proposed that infer information about training datasets. One such attack is the membership inference attack [1]: given a data record and access to the learned model, the attacker determines whether or not the record was in the model's training dataset.
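To make the white-box setting concrete: a common recipe against a published generator, following the reconstruction-style attacks of [3] and [11] (not necessarily the exact method proposed in this paper), is to search the latent space for the code whose output best matches the query record and then threshold the resulting reconstruction error, since training members tend to be reconstructed more accurately than non-members. Below is a minimal PyTorch sketch; the `generator` interface, `latent_dim`, and all hyperparameters are illustrative assumptions.

```python
# A minimal sketch of a reconstruction-style membership test against a
# white-box GAN generator, in the spirit of the attacks in [3] and [11].
# The generator interface and all hyperparameters here are illustrative
# assumptions, not this paper's reported configuration.
import torch

def reconstruction_error(generator, x, latent_dim=100, steps=200, lr=0.05):
    """Search the latent space for the code whose output best matches x.

    White-box access lets gradients flow through the generator, so the
    latent code can be optimized directly. A lower final error means x is
    easier to synthesize, which is evidence of training-set membership.
    """
    z = torch.randn(1, latent_dim, requires_grad=True)
    opt = torch.optim.Adam([z], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        loss = ((generator(z) - x) ** 2).mean()  # L2 reconstruction error
        loss.backward()  # gradients flow through G: the white-box advantage
        opt.step()
    with torch.no_grad():
        return ((generator(z) - x) ** 2).mean().item()

def infer_membership(generator, x, threshold):
    # Guess "member" when the record is unusually easy to reconstruct.
    return reconstruction_error(generator, x) < threshold
```

In practice the threshold would be calibrated on reference records known to be outside the training set, and multiple random restarts of the latent search are typically averaged to reduce the variance of the score.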

References
1.
R. Shokri, M. Stronati, C. Song and V. Shmatikov, "Membership Inference Attacks Against Machine Learning Models", IEEE Symposium on Security and Privacy (SP), pp. 1-16, 2017.
2.
I. Goodfellow, J. Pouget-Abadie, M. Mirza, B. Xu, D. Warde-Farley, S. Ozair, et al., "Generative Adversarial Nets", 27th International Conference on Neural Information Processing Systems, pp. 2672-2680, 2014.
3.
D. Chen, N. Yu, Y. Zhang and M. Fritz, "GAN-Leaks: A Taxonomy of Membership Inference Attacks Against Generative Models", Proceedings of the 2020 ACM SIGSAC Conference on Computer and Communications Security, pp. 343-362, 2020.
4.
A. Salem, Y. Zhang, M. Humbert, M. Fritz and M. Backes, "ML-Leaks: Model and Data Independent Membership Inference Attacks and Defenses on Machine Learning Models", Proceedings of the 26th Annual Network and Distributed System Security Symposium, pp. 1-16, 2019.
5.
Y. Long, V. Bindschaedler, L. Wang, D. Bu, X. Wang, H. Tang, et al., "Understanding Membership Inference in Well-Generalized Learning Models", arXiv, 2018.
6.
S. Yeom, I. Giacomelli, M. Fredrikson and S. Jha, "Privacy Risk in Machine Learning: Analyzing the Connection to Overfitting", 2018 IEEE 31st Computer Security Foundations Symposium, pp. 268-282, 2018.
7.
A. Sablayrolles, M. Douze, C. Schmid, Y. Ollivier and H. Jegou, "White-box vs Black-box: Bayes Optimal Strategies for Membership Inference", Proceedings of the 36th International Conference on Machine Learning, pp. 1-11, 2019.
8.
M. Nasr, R. Shokri and A. Houmansadr, "Comprehensive Privacy Analysis of Deep Learning: Stand-alone and Federated Learning under Passive and Active White-box Inference Attacks", IEEE Symposium on Security and Privacy, pp. 739-753, 2019.
9.
S. K. Murakonda and R. Shokri, "ML Privacy Meter: Aiding Regulatory Compliance by Quantifying the Privacy Risks of Machine Learning", 2020, [online] Available: https://arxiv.org/abs/2007.09339.
10.
J. Hayes, L. Melis, G. Danezis and E. De Cristofaro, "LOGAN: Membership Inference Attacks Against Generative Models", Proceedings on Privacy Enhancing Technologies, vol. 2019, no. 1, pp. 133-152, 2019.
11.
B. Hilprecht, M. Härterich and D. Bernau, "Monte Carlo and Reconstruction Membership Inference Attacks Against Generative Models", Proceedings on Privacy Enhancing Technologies, vol. 2019, no. 4, pp. 232-249, 2019.
12.
M. Arjovsky, S. Chintala and L. Bottou, "Wasserstein Generative Adversarial Networks", International Conference on Machine Learning, pp. 214-223, 2017.
13.
I. Gulrajani, F. Ahmed, M. Arjovsky, V. Dumoulin and A. C. Courville, "Improved Training of Wasserstein GANs", Annual Conference on Neural Information Processing Systems (NIPS), pp. 5767-5777, 2017.
14.
N. Kodali, J. Hays, J. Abernethy and Z. Kira, "On Convergence and Stability of GANs", ICLR 2018 Conference Blind Submission, pp. 1-18, 2018.
15.
X. Mao, Q. Li, H. Xie, R. Y. K. Lau, Z. Wang and S. P. Smolley, "Least Squares Generative Adversarial Networks", 2017 IEEE International Conference on Computer Vision, pp. 1-17, 2017.
16.
A. Radford, L. Metz and S. Chintala, "Unsupervised Representation Learning with Deep Convolutional Generative Adversarial Networks", arXiv preprint, 2015.
17.
M. Heusel, H. Ramsauer, T. Unterthiner, B. Nessler and S. Hochreiter, "GANs Trained by a Two Time-Scale Update Rule Converge to a Local Nash Equilibrium", Annual Conference on Neural Information Processing Systems (NIPS), pp. 6626-6637, 2017.
