Stealthy Inference Attack on DNN via Cache-based Side-Channel Attacks


Abstract:

The advancement of deep neural networks (DNNs) motivates their deployment in various domains, including image classification, disease diagnosis, voice recognition, etc. Since some of the tasks DNNs undertake are highly sensitive, the label information is confidential and carries commercial value or critical privacy. This paper demonstrates that DNNs also introduce a new security threat: the leakage of the label information of input instances to DNN models. In particular, we leverage a cache-based side-channel attack (SCA), i.e., Flush+Reload, on the DNN (victim) models to observe the execution of their computation graphs, and we create a database of these observations to build a classifier with which the attacker can infer the label information of (unknown) input instances to the victim models. We then deploy the cache-based SCA on the same host machine as the victim models and deduce the labels with the attacker's classification model, compromising the privacy and confidentiality of the victim models. We explore different settings and classification techniques to achieve a high success rate in stealing label information from the victim models. Additionally, we consider two attack scenarios: binary attacks, which distinguish specific sensitive labels from all others, and multi-class attacks, which aim to recognize all classes the victim DNN provides. Finally, we implement the attack on both static DNN models, which use an identical architecture for all inputs, and dynamic DNN models, which adapt their architecture to different inputs, to demonstrate the broad applicability of the proposed attack; the evaluated models include DenseNet 121, DenseNet 169, VGG 16, VGG 19, MobileNet v1, and MobileNet v2. Our experiments show that MobileNet v1 is the most vulnerable, with 99% and 75.6% attack success rates in the binary and multi-class scenarios, respectively.
Date of Conference: 14-23 March 2022
Date Added to IEEE Xplore: 19 May 2022
Conference Location: Antwerp, Belgium

I. Introduction

Deep neural networks (DNNs) have made significant progress in the past decade and gained increasing popularity in undertaking different tasks, including image classification [4], [9], [25], language processing [2], [5], security enhancement [23], etc. Several successful DNN models have been proposed and achieved notable success, including but not limited to VGG [25] and DenseNet [14]. The advancement of DNNs amplifies their deployment on both servers and edge devices with different computation resources and energy restrictions [12], [14]. Consequently, a number of DNN-enabled applications have been deployed across various critical domains, such as disease diagnosis [8], [21], intelligent surveillance [16], [32], financial decision-making [15], and so on. However, DNN models also bring new security risks: the leakage of label information can cause financial loss and privacy compromise, since the label information of such DNN-enabled applications is directly linked to users' crucial decisions and sensitive information. Taking investment decision applications [15] as an example, the leakage of label information can expose major financial decisions to attackers, who can take advantage of them and make an illicit profit.
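To make the threat model concrete, the sketch below illustrates the Flush+Reload primitive that this class of attack builds on: the attacker repeatedly flushes a cache line shared with the victim (e.g., a function in a shared DNN library) and times a later reload; a fast reload reveals that the victim executed the monitored code since the last flush. This is a minimal illustration under stated assumptions, not the authors' implementation; the probed address, the 150-cycle hit/miss threshold, and the sampling loop are hypothetical choices made for the example.

/*
 * Minimal Flush+Reload probe sketch (x86-64, GCC/Clang).
 * In a real attack the monitored address would lie in a code page
 * shared with the victim process (e.g., a convolution kernel in a
 * shared DNN library); here a local buffer stands in for it, so
 * hits would only occur if another thread touched the line.
 */
#include <stdint.h>
#include <stdio.h>
#include <x86intrin.h>   /* _mm_clflush, __rdtscp */

/* Time one access to `addr`, then evict the line for the next round.
 * A short reload time means the line was cached, i.e., the victim
 * executed the monitored code since the previous flush. */
static inline uint64_t reload_time(const volatile void *addr)
{
    unsigned int aux;
    uint64_t start = __rdtscp(&aux);
    (void)*(const volatile uint8_t *)addr;  /* reload the line */
    uint64_t end = __rdtscp(&aux);
    _mm_clflush((const void *)addr);        /* flush for the next probe */
    return end - start;
}

int main(void)
{
    static uint8_t target[64];       /* stand-in for a shared code line */
    const uint64_t THRESHOLD = 150;  /* assumed hit/miss cutoff, in cycles */

    _mm_clflush(target);
    for (int i = 0; i < 10000; i++) {
        uint64_t t = reload_time(target);
        /* A reload below THRESHOLD marks a cache hit: the monitored
         * code ran between this probe and the previous flush. */
        if (t < THRESHOLD)
            printf("sample %d: cache hit (%llu cycles)\n",
                   i, (unsigned long long)t);
    }
    return 0;
}

Collected over time, such hit/miss samples form a trace of which layers or kernels executed and when; it is traces of this kind that an attacker could assemble into the database used to train the label-inference classifier described in the abstract.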
