
Zero-Shot Super-Resolution Using Deep Internal Learning


Abstract:

Deep Learning has led to a dramatic leap in Super-Resolution (SR) performance in the past few years. However, being supervised, these SR methods are restricted to specific training data, where the acquisition of the low-resolution (LR) images from their high-resolution (HR) counterparts is predetermined (e.g., bicubic downscaling), without any distracting artifacts (e.g., sensor noise, image compression, non-ideal PSF, etc.). Real LR images, however, rarely obey these restrictions, resulting in poor SR results by SotA (State of the Art) methods. In this paper we introduce "Zero-Shot" SR, which exploits the power of Deep Learning, but does not rely on prior training. We exploit the internal recurrence of information inside a single image, and train a small image-specific CNN at test time, on examples extracted solely from the input image itself. As such, it can adapt itself to different settings per image. This allows us to perform SR of real old photos, noisy images, biological data, and other images where the acquisition process is unknown or non-ideal. On such images, our method outperforms SotA CNN-based SR methods, as well as previous unsupervised SR methods. To the best of our knowledge, this is the first unsupervised CNN-based SR method.
Date of Conference: 18-23 June 2018
Date Added to IEEE Xplore: 16 December 2018
Conference Location: Salt Lake City, UT, USA

1. Introduction

Super-Resolution (SR) from a single image has recently received a huge boost in performance using Deep-Learning based methods [4], [10], [9], [12], [13]. The recent SotA (State of the Art) method [13] exceeds previous non-Deep SR methods (supervised [22] or unsupervised [5]–[7]) by a few dBs - a huge margin! This boost in performance was obtained with very deep and well-engineered CNNs, which were trained exhaustively on external databases, for lengthy periods of time (days or weeks). However, while these externally supervised methods perform extremely well on data satisfying the conditions they were trained on, their performance deteriorates significantly once these conditions are not satisfied.

(We use the term "supervised" for any method that trains on externally supplied examples, even if their generation does not require manual labelling.)
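The core idea the abstract describes is to build the training set from the test image itself: each downscaled copy of the image serves as an HR target, and its further-downscaled version (upsampled back to the same size) serves as the LR input, so the image-specific CNN learns the LR-to-HR mapping from internal examples alone. The following is a minimal sketch of that pair-construction step, not the paper's full training pipeline; it assumes `scipy` is available and uses cubic interpolation in place of whatever (possibly unknown) downscaling kernel a real image underwent.

```python
import numpy as np
from scipy.ndimage import zoom


def make_internal_pairs(img, scale=2.0, n_scales=3):
    """Build (LR, HR) training pairs from the input image alone.

    Each round treats the current image as the HR target, downscales it
    by `scale` to get an LR "child", upsamples the child back to the HR
    size, and records the (upsampled-LR, HR) pair. Recursing to coarser
    scales yields more examples, exploiting the internal recurrence of
    patches across scales.
    """
    pairs = []
    hr = img.astype(np.float64)
    for _ in range(n_scales):
        lr = zoom(hr, 1.0 / scale, order=3)  # downscaled "child" of hr
        # upsample back to hr's exact shape (per-axis zoom factors)
        factors = [h / l for h, l in zip(hr.shape, lr.shape)]
        lr_up = zoom(lr, factors, order=3)
        pairs.append((lr_up, hr))  # network would learn lr_up -> hr
        hr = lr  # recurse to the next, coarser scale
    return pairs


# usage: a toy 32x32 grayscale "image"
img = np.random.rand(32, 32)
pairs = make_internal_pairs(img, scale=2.0)
```

At test time, the small CNN trained on these pairs would then be applied to the original input image (upsampled by `scale`) to produce the super-resolved output.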


References

References are not available for this document.