
Adversarial for Good – Defending Training Data Privacy with Adversarial Attack Wisdom



Abstract:

Machine learning models dazzle us with their incredible performance but may irritate us with data privacy issues. Various attacks have been proposed to pry into the sensitive training data of machine learning models, the mainstream ones being membership inference attacks and model inversion attacks. As a countermeasure, defense strategies have been devised. Nonetheless, a unified theoretical framework and evaluation testbed for training data privacy analysis is lacking. In this paper, we taxonomize representative attack methods with respect to the attack objective, attack knowledge, and attack capability. As for the defense, we focus on the novel idea of turning adversarial attacks into privacy protection tools, hence the title "adversarial for good." We also provide an open-source integrated platform to evaluate different attacks and defenses. Our experimental results show that adopting adversarial example attacks and adversarial training for data privacy protection is compelling, which may motivate more efforts to turn adversarial techniques to good use in the future.
Date of Conference: 12-14 August 2024
Date Added to IEEE Xplore: 06 November 2024
Conference Location: Hong Kong, China

I. Introduction

Deep learning has penetrated the metaverse industry with its powerful modeling capability, for example in creating virtual avatars. However, making virtual avatars more lifelike requires training deep learning models on massive amounts of real-world human behavioral data, such as appearance, movements, expressions, and speech, all of which are highly sensitive to the people they describe. With regulations such as the General Data Protection Regulation (GDPR) being introduced to protect personal training data, the metaverse industry should pay particular attention to training data privacy.
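The abstract describes turning adversarial example techniques into a privacy defense, i.e., perturbing what the model releases so that a membership inference attacker is misled. As a rough illustration of that idea only, not the authors' released implementation, the PyTorch sketch below applies an FGSM-style perturbation to a model's confidence vector against a hypothetical surrogate attacker; the names defend_confidences, attack_model, and epsilon are assumptions introduced for this example.

import torch
import torch.nn as nn
import torch.nn.functional as F

def defend_confidences(confidences: torch.Tensor,
                       attack_model: nn.Module,
                       epsilon: float = 0.05) -> torch.Tensor:
    """Perturb released confidence vectors to mislead a membership-inference attacker.

    confidences: (batch, num_classes) softmax outputs of the target model.
    attack_model: a surrogate attacker mapping a confidence vector to a
    membership score; assumed to be differentiable (an assumption of this sketch).
    """
    confidences = confidences.detach()

    # Work in log space so the perturbed vector stays a valid distribution after softmax.
    logits = torch.log(confidences.clamp_min(1e-12)).clone().requires_grad_(True)

    # Gradient of the attacker's "member" score with respect to the released logits.
    with torch.enable_grad():
        member_score = attack_model(F.softmax(logits, dim=-1)).sum()
        member_score.backward()

    # FGSM-style step in the direction that lowers the attacker's member score.
    perturbed = F.softmax(logits - epsilon * logits.grad.sign(), dim=-1).detach()

    # Utility constraint: keep the original vector wherever the perturbation
    # would flip the model's top-1 prediction.
    flipped = perturbed.argmax(dim=-1) != confidences.argmax(dim=-1)
    perturbed[flipped] = confidences[flipped]
    return perturbed

# Hypothetical usage (names are illustrative, not from the paper's platform):
# released = defend_confidences(target_model(x).softmax(dim=-1), surrogate_attacker)

The design choice in this sketch is to spend the adversarial perturbation budget only on the confidence scores while preserving the top-1 label, so classification utility is unchanged; the paper's platform may trade off utility and privacy differently.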

