
Breaching FedMD: Image Recovery via Paired-Logits Inversion Attack


Abstract:

Federated Learning with Model Distillation (FedMD) is a nascent collaborative learning paradigm, where only output logits of public datasets are transmitted as distilled knowledge, instead of passing on private model parameters that are susceptible to gradient inversion attacks, a known privacy risk in federated learning. In this paper, we find that even though sharing output logits of public datasets is safer than directly sharing gradients, there still exists a substantial risk of data exposure caused by carefully designed malicious attacks. Our study shows that a malicious server can launch a Paired-Logits Inversion (PLI) attack against FedMD and its variants by training an inversion neural network that exploits the confidence gap between the server and client models. Experiments on multiple facial recognition datasets validate that under FedMD-like schemes, by using paired server-client logits of public datasets only, the malicious server is able to reconstruct private images on all tested benchmarks with a high success rate.
Date of Conference: 17-24 June 2023
Date Added to IEEE Xplore: 22 August 2023
Conference Location: Vancouver, BC, Canada


1 Introduction

Federated Learning (FL) [28] is a distributed learning paradigm, where each party sends the gradients or parameters of its locally trained model to a centralized server that learns a global model from the aggregated gradients/parameters. While this process allows clients to keep their private datasets local, a malicious server can still manage to reconstruct private data (visible only to the individual client) from the shared gradients/parameters, exposing serious privacy risks [30], [43], [49].
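The aggregation step described above can be sketched as a weighted parameter average in the style of FedAvg [28]. This is a minimal illustration, not the paper's method; the function name `fedavg_aggregate` and the toy single-layer model are hypothetical.

```python
import numpy as np

def fedavg_aggregate(client_params, client_weights):
    """Weighted average of client model parameters (FedAvg-style sketch).

    client_params: list of dicts mapping layer name -> np.ndarray
    client_weights: relative weight per client (e.g. local dataset size)
    """
    total = float(sum(client_weights))
    aggregated = {}
    for name in client_params[0]:
        # Each layer of the global model is the weighted mean of the
        # corresponding client layers.
        aggregated[name] = sum(
            (w / total) * params[name]
            for params, w in zip(client_params, client_weights)
        )
    return aggregated

# Two toy clients with a single-layer model, equal weighting.
clients = [{"w": np.array([1.0, 2.0])}, {"w": np.array([3.0, 4.0])}]
global_params = fedavg_aggregate(clients, client_weights=[1, 1])
print(global_params["w"])  # equal-weight average: [2. 3.]
```

Because the server sees these raw parameters (or gradients) every round, it has exactly the information that gradient inversion attacks exploit; FedMD avoids this channel by sharing only public-dataset logits instead.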

