
Exploring Sparsity in Image Super-Resolution for Efficient Inference


Abstract:

Current CNN-based super-resolution (SR) methods process all locations equally with computational resources being uniformly assigned in space. However, since missing details in low-resolution (LR) images mainly exist in regions of edges and textures, less computational resources are required for those flat regions. Therefore, existing CNN-based methods involve redundant computation in flat regions, which increases their computational cost and limits their applications on mobile devices. In this paper, we explore the sparsity in image SR to improve inference efficiency of SR networks. Specifically, we develop a Sparse Mask SR (SMSR) network to learn sparse masks to prune redundant computation. Within our SMSR, spatial masks learn to identify "important" regions while channel masks learn to mark redundant channels in those "unimportant" regions. Consequently, redundant computation can be accurately localized and skipped while maintaining comparable performance. It is demonstrated that our SMSR achieves state-of-the-art performance with 41%/33%/27% FLOPs being reduced for ×2/3/4 SR. Code is available at: https://github.com/LongguangWang/SMSR.
Date of Conference: 20-25 June 2021
Date Added to IEEE Xplore: 02 November 2021
Conference Location: Nashville, TN, USA

1. Introduction

The goal of single image super-resolution (SR) is to recover a high-resolution (HR) image from a single low-resolution (LR) observation. Due to the powerful feature representation and model fitting capabilities of deep neural networks, CNN-based SR methods have achieved significant performance improvements over traditional ones. Recently, many efforts have been made towards real-world applications, including few-shot SR [38], [39], blind SR [12], [49], [42], and scale-arbitrary SR [15], [43]. With the growing popularity of intelligent edge devices (such as smartphones and VR glasses), there is a strong demand for performing SR directly on these devices. Since edge devices have limited computational resources, efficient SR is crucial for such applications.
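The spatial/channel masking scheme described in the abstract can be illustrated with a small dense reference in NumPy. This is only a sketch under stated assumptions, not the paper's implementation: the function and argument names are illustrative, the masks are given rather than learned (SMSR learns them end-to-end, e.g. via Gumbel softmax [18]), and a real speedup requires sparse convolution kernels rather than zeroing out a dense result.

```python
import numpy as np

def spatial_channel_masked_conv(x, weight, spatial_mask, channel_mask):
    """Dense reference for sparse-mask 1x1 convolution (illustrative names).

    x:            (C_in, H, W) input feature map
    weight:       (C_out, C_in) 1x1 convolution weights
    spatial_mask: (H, W) binary mask; 1 marks "important" locations
    channel_mask: (C_out,) binary mask; 1 marks channels that are also
                  computed in "unimportant" (flat) regions

    Returns the masked output and the fraction of multiply-accumulates
    (MACs) that a sparse kernel could skip under these masks.
    """
    c_out, c_in = weight.shape
    h, w = x.shape[1:]
    dense = np.tensordot(weight, x, axes=([1], [0]))  # (C_out, H, W) full conv

    out = np.zeros_like(dense)
    important = spatial_mask.astype(bool)
    out[:, important] = dense[:, important]            # all channels kept here
    for c in np.flatnonzero(channel_mask):             # kept channels elsewhere
        out[c, ~important] = dense[c, ~important]

    n_imp = int(important.sum())
    kept = int(channel_mask.sum())
    macs = c_in * (c_out * n_imp + kept * (h * w - n_imp))
    saved = 1.0 - macs / (c_in * c_out * h * w)        # fraction of MACs skipped
    return out, saved
```

For instance, if 25% of locations are marked important and half of the channels are kept in the remaining flat regions, this bookkeeping gives a 37.5% MAC saving, which is in the same ballpark as the 27-41% FLOP reductions reported in the abstract.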

References
1. Eirikur Agustsson and Radu Timofte, "NTIRE 2017 challenge on single image super-resolution: Dataset and study", CVPRW, pp. 1122-1131, 2017.
2. Namhyuk Ahn, Byungkon Kang and Kyung-Ah Sohn, "Fast, accurate, and lightweight super-resolution with cascading residual network", ECCV, pp. 252-268, 2018.
3. Sefi Bell-Kligler, Assaf Shocher and Michal Irani, "Blind super-resolution kernel estimation using an internal-GAN", NeurIPS, pp. 284-293, 2019.
4. Marco Bevilacqua, Aline Roumy, Christine Guillemot and Marie-Line Alberi-Morel, "Low-complexity single-image super-resolution based on nonnegative neighbor embedding", BMVC, pp. 1-10, 2012.
5. Kumar Chellapilla, Sidd Puri and Patrice Simard, "High performance convolutional neural networks for document processing", IWFHR, 2006.
6. Xiangxiang Chu, Bo Zhang, Hailong Ma, Ruijun Xu, Jixiang Li and Qingyuan Li, "Fast, accurate, and lightweight super-resolution with neural architecture search", ICPR, 2020.
7. Tao Dai, Jianrui Cai, Yongbing Zhang, Shu-Tao Xia and Lei Zhang, "Second-order attention network for single image super-resolution", CVPR, 2019.
8. Chao Dong, Chen Change Loy, Kaiming He and Xiaoou Tang, "Learning a deep convolutional network for image super-resolution", ECCV, pp. 184-199, 2014.
9. Michael Figurnov, Maxwell D. Collins, Yukun Zhu, Li Zhang, Jonathan Huang, Dmitry P. Vetrov, et al., "Spatially adaptive computation time for residual networks", CVPR, pp. 1790-1799, 2017.
10. Xitong Gao, Yiren Zhao, Lukasz Dudziak, Robert D. Mullins and Cheng-Zhong Xu, "Dynamic channel pruning: Feature boosting and suppression", ICLR, 2019.
11. Benjamin Graham, Martin Engelcke and Laurens van der Maaten, "3D semantic segmentation with submanifold sparse convolutional networks", CVPR, pp. 9224-9232, 2018.
12. Jinjin Gu, Hannan Lu, Wangmeng Zuo and Chao Dong, "Blind super-resolution with iterative kernel correction", CVPR, 2019.
13. Song Han, Jeff Pool, John Tran and William Dally, "Learning both weights and connections for efficient neural network", NeurIPS, pp. 1135-1143, 2015.
14. Yang He, Ping Liu, Ziwei Wang, Zhilan Hu and Yi Yang, "Filter pruning via geometric median for deep convolutional neural networks acceleration", CVPR, pp. 4340-4349, 2019.
15. Xuecai Hu, Haoyuan Mu, Xiangyu Zhang, Zilei Wang, Jian Sun and Tieniu Tan, "Meta-SR: A magnification-arbitrary network for super-resolution", CVPR, 2019.
16. Jia-Bin Huang, Abhishek Singh and Narendra Ahuja, "Single image super-resolution from transformed self-exemplars", CVPR, pp. 5197-5206, 2015.
17. Zheng Hui, Xiumei Wang and Xinbo Gao, "Fast and accurate single image super-resolution via information distillation network", CVPR, 2018.
18. Eric Jang, Shixiang Gu and Ben Poole, "Categorical reparameterization with Gumbel-softmax", ICLR, 2017.
19. Jiwon Kim, Jung Kwon Lee and Kyoung Mu Lee, "Accurate image super-resolution using very deep convolutional networks", CVPR, pp. 1646-1654, 2016.
20. Jiwon Kim, Jung Kwon Lee and Kyoung Mu Lee, "Deeply-recursive convolutional network for image super-resolution", CVPR, pp. 1637-1645, 2016.
21. Diederik P. Kingma and Jimmy Ba, "Adam: A method for stochastic optimization", ICLR, 2015.
22. Wei-Sheng Lai, Jia-Bin Huang, Narendra Ahuja and Ming-Hsuan Yang, "Deep Laplacian pyramid networks for fast and accurate super-resolution", CVPR, pp. 5835-5843, 2017.
23. Andrew Lavin and Scott Gray, "Fast algorithms for convolutional neural networks", CVPR, pp. 4013-4021, 2016.
24. Wonkyung Lee, Junghyup Lee, Dohyung Kim and Bumsub Ham, "Learning with privileged information for efficient image super-resolution", ECCV, 2020.
25. Hao Li, Asim Kadav, Igor Durdanovic, Hanan Samet and Hans Peter Graf, "Pruning filters for efficient convnets", ICLR, 2017.
26. Hao Li, Hong Zhang, Xiaojuan Qi, Ruigang Yang and Gao Huang, "Improved techniques for training adaptive deep networks", ICCV, pp. 1891-1900, 2019.
27. Xiaoxiao Li, Ziwei Liu, Ping Luo, Chen Change Loy and Xiaoou Tang, "Not all pixels are equal: Difficulty-aware semantic segmentation via deep layer cascade", CVPR, pp. 6459-6468, 2017.
28. Zhen Li, Jinglei Yang, Zheng Liu, Xiaomin Yang, Gwanggil Jeon and Wei Wu, "Feedback network for image super-resolution", CVPR, pp. 3867-3876, 2018.
29. Bee Lim, Sanghyun Son, Heewon Kim, Seungjun Nah and Kyoung Mu Lee, "Enhanced deep residual networks for single image super-resolution", CVPR, 2017.
30. Ji Lin, Yongming Rao, Jiwen Lu and Jie Zhou, "Runtime neural pruning", NeurIPS, pp. 2181-2191, 2017.

