Direct Unsupervised Super-Resolution Using Generative Adversarial Network (DUS-GAN) for Real-World Data


Abstract:

Deep learning models for the Single Image Super-Resolution (SISR) task have found considerable success in recent years. However, one of the prime limitations of existing deep learning-based SISR approaches is their need for supervised training: the Low-Resolution (LR) images are obtained from the High-Resolution (HR) images through a known degradation (for instance, bicubic downsampling) to form supervised LR-HR pairs. Such training results in a domain shift when the learnt models are applied to real-world data whose degradation factors are not present in the training set. To address this challenge, we propose an unsupervised approach for the SISR task using a Generative Adversarial Network (GAN), hereafter referred to as DUS-GAN. The proposed design accomplishes the SR task without estimating the degradation of the real-world LR data. In addition, a new human perception-based quality assessment loss built on the Mean Opinion Score (MOS) is introduced to boost the perceptual quality of the SR results. The merit of the proposed method is validated with extensive experiments on reference-based (the NTIRE Real-world SR Challenge validation dataset) and no-reference (NTIRE Real-world SR Challenge Track-1 and Track-2) testing datasets. The experimental analysis demonstrates consistent improvement of the proposed method over other state-of-the-art unsupervised SR approaches, both in subjective comparisons and in quantitative evaluations on reference metrics (LPIPS, PI-RMSE graph) and no-reference quality measures such as NIQE, BRISQUE and PIQE. We also provide the implementation of the proposed approach (https://github.com/kalpeshjp89/DUSGAN) to support reproducible research.
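
The abstract cites LPIPS as the reference-based perceptual metric and NIQE, BRISQUE and PIQE as no-reference measures. The following is a minimal sketch, not the authors' released evaluation code, of how an LPIPS comparison between a super-resolved output and its HR reference is typically computed with the public lpips Python package; the file names are placeholders:

import lpips
import numpy as np
import torch
from PIL import Image

def to_tensor(path):
    # Load an RGB image and rescale it to the [-1, 1] range expected by LPIPS.
    img = np.asarray(Image.open(path).convert("RGB"), dtype=np.float32) / 127.5 - 1.0
    return torch.from_numpy(img).permute(2, 0, 1).unsqueeze(0)  # 1 x 3 x H x W

loss_fn = lpips.LPIPS(net="alex")       # AlexNet-backed LPIPS, the common default
sr = to_tensor("sr_result.png")         # super-resolved output (placeholder path)
hr = to_tensor("hr_reference.png")      # ground-truth HR reference (placeholder path)
with torch.no_grad():
    score = loss_fn(sr, hr).item()      # lower LPIPS means a closer perceptual match
print(f"LPIPS: {score:.4f}")

The no-reference scores (NIQE, BRISQUE, PIQE) need only the SR output and are commonly computed with MATLAB's niqe/brisque/piqe functions or equivalent re-implementations.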
Published in: IEEE Transactions on Image Processing (Volume: 30)
Page(s): 8251 - 8264
Date of Publication: 24 September 2021


PubMed ID: 34559651


I. Introduction

High-Resolution (HR) images provide richer details of the observed objects and are preferred both for human perception and for various computer vision tasks such as detection and feature extraction. The spatial resolution of the imaging sensor plays a crucial role in acquiring such images. While sensors with HR capability are preferred in most applications, factors such as production cost, the physical space the sensor requires, and manufacturing complexity hinder their broader adoption. Software-based solutions, collectively known as image Super-Resolution (SR), have been proposed to overcome this limitation to a certain extent. SR is an economical and effective alternative to deploying or replacing HR sensors. The goal of the SR problem is to estimate an HR image from a given Low-Resolution (LR) image or a set of LR images. Despite the extensive SR work presented in the literature, the inherent ill-posed nature of the problem, its complexity, and the lack of practical quantitative quality measures keep it an open research problem in the community [1].
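
The SR problem described above is commonly formalized with the standard degradation model from the SR literature (a generic formulation, not one taken from this paper):

$$\mathbf{y} = (\mathbf{x} \otimes k)\downarrow_{s} + n,$$

where $\mathbf{y}$ is the observed LR image, $\mathbf{x}$ the latent HR image, $k$ a blur kernel, $\otimes$ convolution, $\downarrow_{s}$ decimation by the scale factor $s$, and $n$ additive noise. Supervised SISR pipelines fix this model (typically to noise-free bicubic downsampling), whereas real-world LR images arise from unknown combinations of $k$, $s$ and $n$; this mismatch is the domain shift that unsupervised approaches such as the one studied here aim to overcome.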

