
NOSnoop: An Effective Collaborative Meta-Learning Scheme Against Property Inference Attack



Abstract:

Collaborative learning has been used to train a joint model on geographically diverse data through periodically sharing knowledge. Although participants keep their data locally in collaborative learning, an adversary can still launch inference attacks through participants’ shared information. In this article, we focus on the property inference attack during model training and design a novel defense mechanism, namely, NOSnoop, to defend against such attacks. We propose a collaborative meta-learning architecture to learn the common knowledge over all participants and utilize the natural advantage of meta-learning to hide the sensitive property data. We consider both irrelevant property and relevant property preservation in NOSnoop. For irrelevant property preservation, we utilize the inherent advantage of meta-learning to hide the sensitive property data in the meta-training support data set. Thus, the adversary cannot capture the key information related to the sensitive properties and cannot successfully infer the victim’s private property. For relevant property preservation, an adversarial game is further proposed to reduce the inference success rate of the adversary. We conduct comprehensive experiments to evaluate the effectiveness of NOSnoop. When hiding the sensitive property data in the meta-training support data set, NOSnoop achieves an inference AUC score as low as 0.4984 for irrelevant property preservation, meaning the adversary cannot distinguish whether a training batch contains the sensitive property data or not. When preserving the relevant property, NOSnoop is able to achieve an inference AUC score of 0.5091 without compromising model utility.
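To make the "hide sensitive data in the support set" idea concrete, the following is a minimal, illustrative sketch (not NOSnoop's actual implementation) of a first-order MAML-style meta-gradient for a linear model. The participant adapts on its support set and shares only the gradient computed on the query set, so samples placed exclusively in the support set never directly produce the shared gradient; the function and variable names are ours, chosen for illustration.

```python
import numpy as np

def loss_grad(w, X, y):
    """MSE loss and its gradient for a linear model (stand-in for a real network)."""
    r = X @ w - y
    return (r @ r) / len(y), 2 * X.T @ r / len(y)

def shared_meta_gradient(w, support, query, inner_lr=0.1):
    """First-order MAML-style meta-gradient: adapt on the support set,
    then take the gradient of the loss on the query set. Only this
    query-side gradient would leave the device, so sensitive samples
    kept in the support set never directly generate shared gradients."""
    Xs, ys = support
    _, g_inner = loss_grad(w, Xs, ys)
    w_adapted = w - inner_lr * g_inner          # inner adaptation step
    Xq, yq = query
    _, g_outer = loss_grad(w_adapted, Xq, yq)   # first-order approximation
    return g_outer
```

The first-order approximation drops the second-derivative term of full MAML; it keeps the sketch short while preserving the key point that the shared signal is a function of the query data, evaluated at the adapted parameters.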
Published in: IEEE Internet of Things Journal ( Volume: 9, Issue: 9, 01 May 2022)
Page(s): 6778 - 6789
Date of Publication: 15 September 2021


I. Introduction

With the development of the Internet of Things (IoT), smart IoT devices have been integrated into all aspects of our lives (e.g., smart cameras). To make better use of collected data and provide more intelligent decision making, AI technology has been applied to IoT applications to achieve real-time edge intelligence. However, the increasing awareness of data privacy (e.g., faces, identities, and behavioral habits) motivates users to keep data locally on their own devices. Privacy concerns over large-scale data aggregation have also led to administrative policies, such as the General Data Protection Regulation (GDPR) in the European Union and the California Consumer Privacy Act (CCPA) in the USA. Together with the greatly enhanced computation capability of end devices, collaborative learning (also termed federated learning) has emerged as a widely adopted learning scheme in real-world IoT applications. Collaborative learning requires only gradients rather than raw data from participants for model training; hence, data privacy protection comes naturally at little cost. Unfortunately, researchers [1]–[3] found that an attacker can still infer private information about participants in collaborative learning merely from shared knowledge, such as gradients, empirical loss, or model parameters.
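The gradient-sharing scheme described above can be sketched in a few lines. The example below is a minimal, illustrative federated-averaging round for linear regression (the function names and learning rate are our own choices, not from the paper): each participant computes a gradient on its local data, and only those gradients reach the server, which averages them into a joint update.

```python
import numpy as np

def local_gradient(w, X, y):
    """Gradient of the mean squared error for a linear model y ≈ X @ w."""
    return 2 * X.T @ (X @ w - y) / len(y)

def federated_round(w, clients):
    """One round: each client shares only its gradient, never its raw data;
    the server averages the gradients and updates the joint model."""
    grads = [local_gradient(w, X, y) for X, y in clients]
    return w - 0.1 * np.mean(grads, axis=0)

# Three participants, each holding private local data.
rng = np.random.default_rng(0)
w_true = np.array([1.0, -2.0])
clients = []
for _ in range(3):
    X = rng.normal(size=(50, 2))
    clients.append((X, X @ w_true))

w = np.zeros(2)
for _ in range(200):
    w = federated_round(w, clients)
# w converges toward w_true even though no client revealed its data.
```

The inference attacks cited above exploit exactly the quantities this sketch exposes: the per-round gradients carry statistical fingerprints of the local data, which is what NOSnoop aims to suppress.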
