
Toward Compact and Robust Model Learning Under Dynamically Perturbed Environments



Abstract:

Network pruning has been widely studied to reduce the complexity of deep neural networks (DNNs) and hence speed up their inference. Unfortunately, most existing pruning methods ignore the changes in a model's robustness before and after pruning, which leaves pruned models vulnerable in dynamically perturbed environments (e.g., autonomous driving). Only a few works have explored the robustness of pruned models, and they focus on adversarial attacks, which differ significantly from the perturbations encountered in real-world scenarios. To bridge the gap between real-world applications and existing studies, in this work we propose an adversarial pruning scheme that automatically identifies and preserves robust channels, yielding robust pruned models suitable for practical deployment in dynamically perturbed environments. Specifically, to simulate real-world perturbations, we first employ multi-type adversarial attack samples and adversarial perturbation samples generated by an adversarial perturbation generator to create mixed noise samples. Then, we propose a plug-and-play feature scoring module and a novel contribution difference loss to dynamically evaluate the robustness of intermediate features. Next, to leverage robust intermediate features for identifying robust channels, we develop a simple but effective gating mechanism that evaluates the robustness of channels and preserves robust channels during training. Lastly, we compress the model in a layer-wise or block-wise manner. Compared to existing methods, our scheme enhances the robustness of the pruned model in a broader sense, making it better able to withstand dynamic perturbations in the real world. Extensive experimental results on well-known benchmark datasets and popular network architectures demonstrate the effectiveness of our method.
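The gating idea can be made concrete with a short sketch. The following PyTorch snippet is a minimal illustration, not the paper's exact formulation: the gate parameterization, the sigmoid relaxation, the keep threshold, and the clean-vs.-noisy activation discrepancy used as a robustness signal are all assumptions introduced here for exposition.

```python
import torch
import torch.nn as nn

class ChannelGate(nn.Module):
    """Illustrative per-channel gate: one learnable score per channel is
    relaxed to (0, 1) with a sigmoid; after training, channels whose gate
    falls below a threshold would be pruned."""
    def __init__(self, num_channels: int):
        super().__init__()
        self.scores = nn.Parameter(torch.zeros(num_channels))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, channels, height, width)
        gate = torch.sigmoid(self.scores).view(1, -1, 1, 1)
        return x * gate

    def keep_mask(self, threshold: float = 0.5) -> torch.Tensor:
        # Boolean mask of channels whose gate exceeds the threshold.
        return torch.sigmoid(self.scores) > threshold

def channel_robustness(feat_clean: torch.Tensor,
                       feat_noisy: torch.Tensor) -> torch.Tensor:
    # Hypothetical robustness signal: channels whose activations change
    # least between clean and perturbed inputs score higher.
    diff = (feat_clean - feat_noisy).abs().mean(dim=(0, 2, 3))
    return -diff  # higher is more robust

if __name__ == "__main__":
    gate = ChannelGate(num_channels=8)
    x_clean = torch.randn(4, 8, 16, 16)
    x_noisy = x_clean + 0.1 * torch.randn_like(x_clean)
    score = channel_robustness(gate(x_clean), gate(x_noisy))
    print(score.shape, gate.keep_mask(threshold=0.4).sum().item(), "channels kept")
```

In this sketch, channels whose gated activations change little under perturbation receive higher robustness scores, mirroring the intuition that robust channels should be preserved during pruning.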
Pages: 4857–4873
Date of Publication: 28 November 2023



I. Introduction

In recent years, researchers have extensively explored the application of DNNs to numerous computer vision tasks, such as image classification [3], [4], object detection [5], [6], and video analysis [7], [8], [9]. However, deep models are often difficult to deploy on resource-constrained devices (e.g., smart bracelets, mobile phones, sensors) due to their enormous computational cost and memory footprint. To address this issue, various model compression methods have been proposed to compress and accelerate deep models, including network pruning [10], quantization [11], [12], knowledge distillation [13], and tensor factorization [14].
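As a concrete and deliberately simple illustration of network pruning, the sketch below removes a convolution layer's output channels by L1 weight magnitude, a standard baseline criterion; it is not the method of [10] or of this paper, and a full pipeline would also rewire the following layer's input channels and any associated BatchNorm.

```python
import torch
import torch.nn as nn

def l1_channel_scores(conv: nn.Conv2d) -> torch.Tensor:
    # Score each output channel by the L1 norm of its filter weights.
    return conv.weight.detach().abs().sum(dim=(1, 2, 3))

def prune_conv(conv: nn.Conv2d, keep_ratio: float = 0.5) -> nn.Conv2d:
    # Keep the top-k output channels by L1 score.
    scores = l1_channel_scores(conv)
    k = max(1, int(conv.out_channels * keep_ratio))
    keep = torch.topk(scores, k).indices.sort().values
    pruned = nn.Conv2d(conv.in_channels, k, conv.kernel_size,
                       stride=conv.stride, padding=conv.padding,
                       bias=conv.bias is not None)
    with torch.no_grad():
        pruned.weight.copy_(conv.weight[keep])
        if conv.bias is not None:
            pruned.bias.copy_(conv.bias[keep])
    return pruned

if __name__ == "__main__":
    conv = nn.Conv2d(3, 16, 3, padding=1)
    small = prune_conv(conv, keep_ratio=0.25)
    print(small)  # Conv2d(3, 4, kernel_size=(3, 3), ...)
```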
