Label-Only Model Inversion Attacks via Boundary Repulsion


Abstract:

Recent studies show that state-of-the-art deep neural networks are vulnerable to model inversion attacks, in which access to a model is abused to reconstruct private training data of any given target class. Existing attacks rely on having access to either the complete target model (white-box) or the model's soft labels (black-box). However, no prior work has addressed the harder but more practical scenario in which the attacker only has access to the model's predicted label, without a confidence measure. In this paper, we introduce an algorithm, Boundary-Repelling Model Inversion (BREP-MI), to invert private training data using only the target model's predicted labels. The key idea of our algorithm is to evaluate the model's predicted labels over a sphere and then estimate the direction to reach the target class's centroid. Using the example of face recognition, we show that the images reconstructed by BREP-MI successfully reproduce the semantics of the private training data for various datasets and target model architectures. We compare BREP-MI with the state-of-the-art white-box and black-box model inversion attacks, and the results show that, despite assuming less knowledge about the target model, BREP-MI outperforms the black-box attack and achieves results comparable to the white-box attack. Our code is available online.1

1 https://github.com/m-kahla/Label-Only-Model-Inversion-Attacks-via-Boundary-Repulsion
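The sketch below illustrates the sphere-sampling idea described in the abstract: the label-only model is queried on points sampled on a sphere around the current latent point, and the sampled directions whose predicted label leaves the target class are negated and averaged to estimate an update direction that "repels" the point away from the decision boundary, roughly toward the class centroid. This is a minimal illustration under assumed names and values (the callable predict_label, the radius and step size, the simple update rule); it is not the authors' exact implementation.

```python
import numpy as np

def estimate_update_direction(z, predict_label, target_class, radius=2.0, n_points=32):
    """Estimate a direction that moves z deeper into the target class.

    Queries the label-only model on points sampled on a sphere of the given
    radius around z. Directions whose predicted label is NOT the target class
    point toward the decision boundary, so their negated average points away
    from the boundary (roughly toward the class centroid).
    """
    dim = z.shape[0]
    # Sample unit directions uniformly on the sphere.
    dirs = np.random.randn(n_points, dim)
    dirs /= np.linalg.norm(dirs, axis=1, keepdims=True)

    repel = np.zeros(dim)
    for u in dirs:
        if predict_label(z + radius * u) != target_class:
            repel -= u  # move away from directions that exit the target class
    norm = np.linalg.norm(repel)
    return repel / norm if norm > 0 else None  # None: whole sphere stayed inside the class

def brep_mi_step(z, predict_label, target_class, step_size=0.5, radius=2.0):
    """One boundary-repulsion update on the latent vector z (illustrative)."""
    direction = estimate_update_direction(z, predict_label, target_class, radius)
    if direction is None:
        return z, False  # no boundary found at this radius; caller may enlarge it
    return z + step_size * direction, True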
Date of Conference: 18-24 June 2022
Date Added to IEEE Xplore: 27 September 2022
Conference Location: New Orleans, LA, USA

1. Introduction

Machine learning (ML) algorithms are often trained on private or sensitive data, such as face images, medical records, and financial information. Unfortunately, since ML models tend to memorize information about their training data, private information can still be exposed through access to the models even when the data itself is stored and processed securely [20]. Indeed, prior studies of privacy attacks have demonstrated the possibility of exposing training data at different granularities, ranging from “coarse-grained” information, such as determining whether a certain data point participated in training [10], [14], [16], [21] or whether a training dataset satisfies certain properties [9], [15], to more “fine-grained” information, such as reconstructing the raw data [2], [3], [7], [24].

