Deep Laplacian Pyramid Networks for Fast and Accurate Super-Resolution


Abstract:

Convolutional neural networks have recently demonstrated high-quality reconstruction for single-image super-resolution. In this paper, we propose the Laplacian Pyramid Super-Resolution Network (LapSRN) to progressively reconstruct the sub-band residuals of high-resolution images. At each pyramid level, our model takes coarse-resolution feature maps as input, predicts the high-frequency residuals, and uses transposed convolutions for upsampling to the finer level. Our method does not require bicubic interpolation as a pre-processing step and thus dramatically reduces the computational complexity. We train the proposed LapSRN with deep supervision using a robust Charbonnier loss function and achieve high-quality reconstruction. Furthermore, our network generates multi-scale predictions in one feed-forward pass through the progressive reconstruction, thereby facilitating resource-aware applications. Extensive quantitative and qualitative evaluations on benchmark datasets show that the proposed algorithm performs favorably against the state-of-the-art methods in terms of speed and accuracy.
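For reference, below is a minimal PyTorch sketch of the architecture the abstract describes: per-level feature refinement, transposed-convolution upsampling, residual prediction at each scale, and a Charbonnier loss. It is not the authors' released implementation; the names (PyramidLevel, LapSRNSketch, charbonnier_loss), the layer depth, channel width, and kernel sizes are illustrative assumptions.

import torch
import torch.nn as nn


def charbonnier_loss(pred, target, eps=1e-3):
    # Robust Charbonnier penalty: sqrt((x - y)^2 + eps^2), averaged over pixels.
    return torch.sqrt((pred - target) ** 2 + eps * eps).mean()


class PyramidLevel(nn.Module):
    # One pyramid level: refine features, upsample 2x with transposed
    # convolutions, and predict the high-frequency residual for that scale.
    def __init__(self, channels=64, depth=5):
        super().__init__()
        layers = []
        for _ in range(depth):
            layers += [nn.Conv2d(channels, channels, 3, padding=1),
                       nn.LeakyReLU(0.2, inplace=True)]
        self.feature = nn.Sequential(*layers)
        self.up_feature = nn.ConvTranspose2d(channels, channels, 4, stride=2, padding=1)
        self.up_image = nn.ConvTranspose2d(1, 1, 4, stride=2, padding=1)
        self.to_residual = nn.Conv2d(channels, 1, 3, padding=1)

    def forward(self, feat, img):
        feat = self.up_feature(self.feature(feat))
        # Coarsely upsampled image plus predicted sub-band residual.
        img = self.up_image(img) + self.to_residual(feat)
        return feat, img


class LapSRNSketch(nn.Module):
    # Two levels yield 2x and 4x predictions in a single forward pass.
    def __init__(self, channels=64):
        super().__init__()
        self.entry = nn.Conv2d(1, channels, 3, padding=1)
        self.levels = nn.ModuleList([PyramidLevel(channels), PyramidLevel(channels)])

    def forward(self, lr):
        # lr: (N, 1, H, W) luminance input; no bicubic pre-upsampling is needed.
        feat, img, outputs = self.entry(lr), lr, []
        for level in self.levels:
            feat, img = level(feat, img)
            outputs.append(img)
        return outputs  # multi-scale predictions: [2x, 4x]

Under the deep supervision the abstract mentions, charbonnier_loss would be applied to each entry of the returned multi-scale list against a correspondingly downscaled ground-truth image.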
Date of Conference: 21-26 July 2017
Date Added to IEEE Xplore: 09 November 2017
Print ISSN: 1063-6919
Conference Location: Honolulu, HI, USA

1. Introduction

Single-image super-resolution (SR) aims to reconstruct a high-resolution (HR) image from a single low-resolution (LR) input image. In recent years, example-based SR methods have demonstrated state-of-the-art performance by learning a mapping from LR to HR image patches using large image databases. Numerous learning algorithms have been applied to learn such a mapping, including dictionary learning [37], [38], local linear regression [30], [36], and random forest [26].
