
Training Robust Deep Neural Networks via Adversarial Noise Propagation



Abstract:

In practice, deep neural networks have been found to be vulnerable to various types of noise, such as adversarial examples and corruption. Various adversarial defense methods have accordingly been developed to improve the adversarial robustness of deep models. However, most of these methods simply train on data mixed with adversarial examples and still fail to defend against more general types of noise. Motivated by the fact that hidden layers play a highly important role in maintaining a robust model, this paper proposes a simple yet powerful training algorithm, named Adversarial Noise Propagation (ANP), which injects noise into the hidden layers in a layer-wise manner. ANP can be implemented efficiently by exploiting the nature of the backward-forward training style. Through thorough investigation, we find that different hidden layers make different contributions to model robustness and clean accuracy, and that shallow layers are more critical than deep layers. Moreover, our framework can easily be combined with other adversarial training methods to further improve model robustness by exploiting the potential of hidden layers. Extensive experiments on MNIST, CIFAR-10, CIFAR-10-C, CIFAR-10-P, and ImageNet demonstrate that ANP endows deep models with strong robustness against both adversarial examples and corruption, and significantly outperforms various adversarial defense methods.
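
To make the layer-wise noise injection described above concrete, the following is a minimal PyTorch sketch, not the authors' implementation. It adds adversarial noise to the outputs of selected hidden layers, refines that noise with a few sign-gradient ascent steps on the training loss, and then updates the parameters on the perturbed forward pass. The abstract suggests ANP obtains these hidden-layer gradients essentially for free from the standard backward-forward training passes; this sketch instead uses a small explicit inner loop for clarity. The function anp_training_step, the layers argument, and the hyper-parameters eta and inner_steps are illustrative assumptions, not values from the paper.

    import torch

    def anp_training_step(model, x, y, criterion, optimizer,
                          layers, eta=1e-2, inner_steps=1):
        """One training step with layer-wise adversarial noise injection.

        Sketch only: `layers` lists the sub-modules whose outputs receive
        additive adversarial noise; `eta` and `inner_steps` are illustrative
        hyper-parameters, not values from the paper.
        """
        noises = {}    # module -> its current hidden-layer noise tensor
        handles = []

        def make_hook():
            def hook(module, inputs, output):
                noise = noises.get(module)
                if noise is None or noise.shape != output.shape:
                    noise = torch.zeros_like(output, requires_grad=True)
                    noises[module] = noise
                return output + noise      # perturb the hidden activation
            return hook

        for m in layers:
            handles.append(m.register_forward_hook(make_hook()))

        try:
            # Inner loop: push the hidden-layer noise along the loss gradient
            # (sign ascent), analogous to crafting adversarial perturbations
            # on activations rather than on the input.
            for _ in range(inner_steps):
                loss = criterion(model(x), y)
                grads = torch.autograd.grad(loss, list(noises.values()),
                                            allow_unused=True)
                with torch.no_grad():
                    for noise, g in zip(noises.values(), grads):
                        if g is not None:
                            noise.add_(eta * g.sign())

            # Outer step: update the parameters on the perturbed forward pass.
            optimizer.zero_grad()
            loss = criterion(model(x), y)
            loss.backward()
            optimizer.step()
        finally:
            for h in handles:
                h.remove()

        return loss.item()

In use, layers might be, for example, the first few convolutional blocks of a ResNet, which would be consistent with the abstract's finding that shallow layers are more critical to robustness than deep ones.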
Published in: IEEE Transactions on Image Processing ( Volume: 30)
Page(s): 5769 - 5781
Date of Publication: 23 June 2021

PubMed ID: 34161231

I. Introduction

Recent advances in deep learning have achieved remarkable success in various challenging tasks, including computer vision [1]–[3], natural language processing [4], [5], and speech [6], [7]. In practice, deep learning is routinely applied to large-scale datasets collected from daily life, which inevitably contain large amounts of noise, including adversarial examples and corruption [8], [9]. Unfortunately, while such noise is imperceptible to human beings, it is highly misleading to deep neural networks, which poses potential security threats to practical machine learning applications in both the digital and physical worlds [10]–[14].

