
Rethinking Deep Image Prior for Denoising



Abstract:

Deep image prior (DIP) serves as a good inductive bias for diverse inverse problems. Among them, denoising is known to be particularly challenging for the DIP due to noise fitting with the requirement of an early stopping. To address the issue, we first analyze the DIP by the notion of effective degrees of freedom (DF) to monitor the optimization progress and propose a principled stopping criterion before fitting to noise, without access to a paired ground-truth image, for Gaussian noise. We also propose the ‘stochastic temporal ensemble (STE)’ method for incorporating techniques to further improve the DIP’s denoising performance. We additionally extend our method to Poisson noise. Our empirical validations show that given a single noisy image, our method denoises the image while preserving rich textural details. Further, our approach outperforms prior art in LPIPS by large margins with comparable PSNR and SSIM on seven different datasets.
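The effective degrees of freedom mentioned in the abstract can be estimated without a ground-truth image via a standard Monte Carlo divergence estimate, df ≈ E_b[bᵀ(f(y + εb) − f(y))]/ε with b ~ N(0, I). The sketch below is illustrative, not the paper's actual estimator: it applies the Monte Carlo estimate to a simple moving-average smoother (a hypothetical stand-in for the DIP network output), where the exact DF is known to be n/k, the trace of the smoothing matrix.

```python
import numpy as np

def smooth(y, k=5):
    # Simple moving-average denoiser: a hypothetical stand-in for the
    # mapping from noisy input y to the DIP network's current output.
    kernel = np.ones(k) / k
    return np.convolve(y, kernel, mode="same")

def mc_degrees_of_freedom(denoiser, y, eps=1e-3, n_probes=64, seed=None):
    """Monte Carlo divergence estimate of effective degrees of freedom:
    df ~= E_b[ b^T (f(y + eps*b) - f(y)) ] / eps,  b ~ N(0, I)."""
    rng = np.random.default_rng(seed)
    base = denoiser(y)
    est = 0.0
    for _ in range(n_probes):
        b = rng.standard_normal(y.shape)
        est += b @ (denoiser(y + eps * b) - base) / eps
    return est / n_probes

# Noisy 1-D signal of length n = 200; for the k = 5 moving average the
# exact DF is trace(S) = n/k = 40, so the estimate should land near 40.
rng = np.random.default_rng(0)
y = np.sin(np.linspace(0, 4 * np.pi, 200)) + 0.1 * rng.standard_normal(200)
df = mc_degrees_of_freedom(smooth, y, seed=1)
```

In a DIP setting one would track such an estimate over optimization iterations and stop once it grows toward the number of pixels, i.e., once the network begins fitting the noise rather than the signal.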
Date of Conference: 10-17 October 2021
Date Added to IEEE Xplore: 28 February 2022

Conference Location: Montreal, QC, Canada


1. Introduction

Deep neural networks have been widely used in many computer vision tasks, yielding significant improvements over conventional approaches since AlexNet [18]. However, image denoising has been one of the tasks in which conventional methods such as BM3D [7] outperformed many early deep learning based ones [5], [47], [48], until DnCNN [51] surpassed it for synthetic Gaussian noise at the expense of a massive amount of noiseless and noisy image pairs.

