
Mixup Based Privacy Preserving Mixed Collaboration Learning


Abstract:

The amount of high-quality data determines the performance of a deep learning model. In reality, data are often physically distributed across different organizations, and model averaging can train a deep model on such distributed data while providing performance competitive with training on centralized data. However, it cannot prevent inversion attacks, since intermediate parameters are transmitted during training. Data augmentation methods such as mixup can effectively enhance data privacy. In this paper, we propose a novel model averaging method combined with mixup, which provides protection against inversion attacks. In addition, we conduct experiments using state-of-the-art deep network architectures on multiple types of datasets to show that our method improves the classification accuracy of the models.
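
The abstract itself contains no code; as a rough, illustrative sketch of the standard mixup augmentation it builds on (the function name mixup_batch, the Beta parameter alpha, and the NumPy-based implementation below are assumptions for illustration, not taken from the paper), mixed training samples could be formed along these lines:

```python
import numpy as np

def mixup_batch(x, y, alpha=1.0, rng=None):
    """Illustrative mixup: convex combinations of paired inputs and labels.

    x: array of shape (batch, ...) holding input samples.
    y: array of shape (batch, num_classes) holding one-hot labels.
    alpha: Beta-distribution parameter controlling interpolation strength.
    """
    rng = rng or np.random.default_rng()
    lam = rng.beta(alpha, alpha)        # mixing coefficient lambda ~ Beta(alpha, alpha)
    perm = rng.permutation(len(x))      # random pairing of samples within the batch
    x_mix = lam * x + (1.0 - lam) * x[perm]   # interpolate inputs
    y_mix = lam * y + (1.0 - lam) * y[perm]   # interpolate labels the same way
    return x_mix, y_mix
```

Because the model is trained on convex combinations rather than on the raw samples themselves, the transmitted parameters reveal less about any individual record, which is the intuition the abstract points to for mixup's protection against inversion attacks.
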
Date of Conference: 04-09 April 2019
Date Added to IEEE Xplore: 06 May 2019

Conference Location: San Francisco, CA, USA

