Relationship between Model Compression and Adversarial Robustness: A Review of Current Evidence


Abstract:

Increasing model capacity is a known approach to enhancing the adversarial robustness of deep learning networks. On the other hand, various model compression techniques, including pruning and quantization, can reduce the size of a network while preserving its accuracy. Several recent studies have addressed the relationship between model compression and adversarial robustness, but some of their experiments have reported contradictory results. This work summarizes the available evidence and discusses possible explanations for the observed effects.
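
To make the two compression techniques named in the abstract concrete, the sketch below shows global magnitude pruning and uniform symmetric weight quantization in PyTorch. It is a minimal illustration, not the specific methods evaluated in the reviewed studies; `model`, the 50% sparsity target, and the 8-bit width are assumed for the example.

import torch

def magnitude_prune(model, sparsity=0.5):
    # Zero out the smallest-magnitude weights across all layers.
    weights = torch.cat([p.abs().flatten() for p in model.parameters()])
    threshold = torch.quantile(weights, sparsity)
    with torch.no_grad():
        for p in model.parameters():
            p.mul_((p.abs() > threshold).float())

def quantize_weights(model, num_bits=8):
    # Symmetric uniform quantization: round each weight to one of
    # 2^(num_bits-1) - 1 evenly spaced levels per sign, then map back
    # to floats so the model can still run in floating point.
    levels = 2 ** (num_bits - 1) - 1
    with torch.no_grad():
        for p in model.parameters():
            scale = p.abs().max().clamp(min=1e-8) / levels
            p.copy_(torch.round(p / scale) * scale)

Both operations shrink the information content of the weights while typically leaving clean accuracy nearly intact, which is what makes their interaction with adversarial robustness non-obvious.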
Date of Conference: 05-08 December 2023
Date Added to IEEE Xplore: 01 January 2024
Conference Location: Mexico City, Mexico

I. Introduction and Related Work

Goodfellow et al. [1] and Szegedy et al. [2] first brought up the risk of adversarial attacks: carefully crafted small perturbations (often imperceptible to humans) that are added to the input of state-of-the-art (SOTA) deep neural networks (DNNs). Without specific DNN training or mitigation measures, these attacks lead to high-confidence wrong outputs of SOTA DNNs and convolutional neural networks (CNNs). This inherent vulnerability of DNNs poses an especially high risk when applying them in autonomous driving, facial recognition, or medical domains.
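
As a concrete illustration of how such a perturbation can be crafted, the sketch below implements the fast gradient sign method (FGSM) introduced in [1]. This is a minimal PyTorch sketch rather than code from the paper; the classifier `model`, inputs `x` normalized to [0, 1], labels `y`, and the perturbation budget `epsilon` are assumed placeholders.

import torch
import torch.nn.functional as F

def fgsm_attack(model, x, y, epsilon=8 / 255):
    # FGSM from [1]: x_adv = x + epsilon * sign(grad_x loss(model(x), y)).
    x_adv = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x_adv), y)
    loss.backward()
    # Take one step in the direction that increases the loss the most
    # (per-pixel sign of the gradient), then clip back to valid inputs.
    x_adv = x_adv + epsilon * x_adv.grad.sign()
    return x_adv.clamp(0.0, 1.0).detach()

A single such gradient step is often enough to flip the predictions of an undefended classifier with high confidence, which is exactly the failure mode described above.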

References
1. I. J. Goodfellow, J. Shlens and C. Szegedy, "Explaining and harnessing adversarial examples", International Conference on Learning Representations (ICLR), 2015.
2. C. Szegedy, W. Zaremba, I. Sutskever, J. Bruna, D. Erhan, I. Goodfellow, et al., "Intriguing properties of neural networks", International Conference on Learning Representations (ICLR), 2014.
3. A. Shafahi, M. Najibi, M. A. Ghiasi, Z. Xu, J. Dickerson, C. Studer, et al., "Adversarial training for free!", Advances in Neural Information Processing Systems (NIPS), 2019.
4. T. Pang, X. Yang, Y. Dong, H. Su and J. Zhu, "Bag of tricks for adversarial training", International Conference on Learning Representations (ICLR), 2021.
5. P. Maini, E. Wong and J. Z. Kolter, "Adversarial robustness against the union of multiple perturbation models", International Conference on Machine Learning (ICML), 2020.
6. L. Schott, J. Rauber, M. Bethge and W. Brendel, "Towards the first adversarially robust neural network model on MNIST", International Conference on Learning Representations (ICLR), 2018.
7. A. Athalye, L. Engstrom, A. Ilyas and K. Kwok, "Synthesizing robust adversarial examples", International Conference on Machine Learning (ICML), 2018.
8. O. Russakovsky, J. Deng, H. Su, J. Krause, S. Satheesh, S. Ma, et al., "ImageNet large scale visual recognition challenge", Int. J. Comput. Vis., 2015.
9. H. Salman, A. Ilyas, L. Engstrom, A. Kapoor and A. Madry, "Do adversarially robust ImageNet models transfer better?", Advances in Neural Information Processing Systems (NIPS), 2020.
10. C. Xie, M. Tan, B. Gong, J. Wang, A. L. Yuille and Q. V. Le, "Adversarial examples improve image recognition", Conference on Computer Vision and Pattern Recognition (CVPR), 2020.
11. M. Andriushchenko and N. Flammarion, "Understanding and improving fast adversarial training", Advances in Neural Information Processing Systems (NIPS), 2020.
12. A. Madry, A. Makelov, L. Schmidt, D. Tsipras and A. Vladu, "Towards deep learning models resistant to adversarial attacks", International Conference on Learning Representations (ICLR), 2018.
13. L. Rice, E. Wong and J. Z. Kolter, "Overfitting in adversarially robust deep learning", International Conference on Machine Learning (ICML), 2020.
14. H. Li, A. Kadav, I. Durdanovic, H. Samet and H. P. Graf, "Pruning filters for efficient ConvNets", 2017.
15. S. Han, H. Mao and W. J. Dally, "Deep compression: Compressing deep neural networks with pruning, trained quantization and Huffman coding", 2016.
16. P. Stock, A. Joulin, R. Gribonval, B. Graham and H. Jegou, "And the bit goes down: Revisiting the quantization of neural networks", International Conference on Learning Representations (ICLR), 2020.
17. A. Galloway, G. W. Taylor and M. Moussa, "Attacking binarized neural networks", International Conference on Learning Representations (ICLR), 2018.
18. A. S. Rakin, J. Yi, B. Gong and D. Fan, "Defend deep neural networks against adversarial examples via fixed and dynamic quantized activation functions", arXiv preprint, 2018.
19. A. W. Wijayanto, J. J. Choong, K. Madhawa and T. Murata, "Towards robust compressed convolutional neural networks", IEEE International Conference on Big Data and Smart Computing (BigComp), 2019.
20. J. Lin, C. Gan and S. Han, "Defensive quantization: When efficiency meets robustness", International Conference on Learning Representations (ICLR), 2019.
21. Y. LeCun, L. Bottou, Y. Bengio and P. Haffner, "Gradient-based learning applied to document recognition", Proc. IEEE, 1998.
22. K. He, X. Zhang, S. Ren and J. Sun, "Deep residual learning for image recognition", Conference on Computer Vision and Pattern Recognition (CVPR), 2016.
23. N. Carlini and D. Wagner, "Towards evaluating the robustness of neural networks", IEEE Symposium on Security and Privacy, 2017.
24. S. Zagoruyko and N. Komodakis, "Wide residual networks", 2016.
25. A. Krizhevsky, G. Hinton et al., "Learning multiple layers of features from tiny images", 2009.
26. N. Papernot, P. McDaniel, S. Jha, M. Fredrikson, Z. B. Celik and A. Swami, "The limitations of deep learning in adversarial settings", IEEE European Symposium on Security and Privacy (EuroS&P), 2016.
27. P. Chen, H. Zhang, Y. Sharma, J. Yi and C. Hsieh, "ZOO: Zeroth order optimization based black-box attacks to deep neural networks without training substitute models", ACM Workshop on Artificial Intelligence and Security, 2017.
28. A. G. Howard, M. Zhu, B. Chen, D. Kalenichenko, W. Wang, T. Weyand, et al., "MobileNets: Efficient convolutional neural networks for mobile vision applications", CoRR, vol. abs/1704.04861, 2017.
29. A. Zhou, A. Yao, Y. Guo, L. Xu and Y. Chen, "Incremental network quantization: Towards lossless CNNs with low-precision weights", International Conference on Learning Representations (ICLR), 2017.
30. Y. Guo, A. Yao and Y. Chen, "Dynamic network surgery for efficient DNNs", Advances in Neural Information Processing Systems (NIPS), 2016.
