A Deep Convolutional Neural Network with Selection Units for Super-Resolution


Abstract:

Rectified linear units (ReLU) are known to be effective in many deep learning methods. Inspired by the linear-mapping technique used in other super-resolution (SR) methods, we reinterpret ReLU as a point-wise multiplication of an identity mapping and a switch, and present a novel nonlinear unit, called a selection unit (SU). While a conventional ReLU offers no direct control over which data is passed, the proposed SU optimizes this on-off switching control and is therefore capable of handling nonlinearity more flexibly than ReLU. Our proposed deep network with SUs, called SelNet, ranked fifth in the NTIRE2017 Challenge while having much lower computational complexity than the top four entries. Further experimental results show that SelNet outperforms both a ReLU-only baseline (without SUs) and other state-of-the-art deep-learning-based SR methods.
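
The idea of replacing ReLU's implicit hard switch with a learned one can be illustrated with a short sketch. The following PyTorch snippet is a minimal illustration, not the authors' exact SelNet layer: the class name SelectionUnit, the channel count, and the assumption that the switch is produced by a ReLU, a 1x1 convolution, and a sigmoid are all placeholders chosen for clarity.

import torch
import torch.nn as nn


class SelectionUnit(nn.Module):
    """Point-wise multiplication of an identity mapping and a learned switch.

    A plain ReLU can be viewed as x * (x > 0); here the hard 0/1 switch is
    replaced by a trainable gate in [0, 1], so the network learns which
    activations to pass.
    """

    def __init__(self, channels: int):
        super().__init__()
        # Gate branch (assumed composition for this sketch): ReLU, then a
        # 1x1 convolution, then a sigmoid squashing the switch into [0, 1].
        self.gate = nn.Sequential(
            nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, kernel_size=1),
            nn.Sigmoid(),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Identity mapping multiplied element-wise by the learned switch.
        return x * self.gate(x)


if __name__ == "__main__":
    # Quick shape check on a random feature map.
    su = SelectionUnit(channels=64)
    features = torch.randn(1, 64, 32, 32)
    print(su(features).shape)  # torch.Size([1, 64, 32, 32])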
Date of Conference: 21-26 July 2017
Date Added to IEEE Xplore: 24 August 2017
Electronic ISSN: 2160-7516
Conference Location: Honolulu, HI, USA

1. Introduction

With the advent of 4K displays, super-resolution (SR) techniques have become increasingly important due to the lack of available 4K content. Specifically, single-image SR reconstructs high-quality high-resolution (HR) images from their low-resolution (LR) counterparts.
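
The single-image SR setting can be made concrete with a small sketch. The snippet below is only an illustration of the task, not part of SelNet: the helper names bicubic_upscale and psnr, the scale factor, and the image sizes are assumptions, and bicubic interpolation serves here merely as the naive baseline that learned SR networks aim to improve on.

import torch
import torch.nn.functional as F


def bicubic_upscale(lr: torch.Tensor, scale: int = 2) -> torch.Tensor:
    """Upscale an (N, C, H, W) LR batch by `scale` with bicubic interpolation."""
    return F.interpolate(lr, scale_factor=scale, mode="bicubic", align_corners=False)


def psnr(pred: torch.Tensor, target: torch.Tensor) -> float:
    """Peak signal-to-noise ratio in dB, assuming pixel values in [0, 1]."""
    mse = torch.mean((pred - target) ** 2)
    return float(10 * torch.log10(1.0 / mse))


if __name__ == "__main__":
    hr = torch.rand(1, 3, 64, 64)  # stand-in for a ground-truth HR image
    lr = F.interpolate(hr, scale_factor=0.5, mode="bicubic", align_corners=False)
    sr = bicubic_upscale(lr, scale=2)  # naive SR estimate
    print(f"Bicubic baseline PSNR: {psnr(sr.clamp(0, 1), hr):.2f} dB")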
