
On the Contractivity of Plug-and-Play Operators



Abstract:

In plug-and-play (PnP) regularization, the proximal operator in algorithms such as ISTA and ADMM is replaced by a powerful denoiser. This formal substitution works surprisingly well in practice. In fact, PnP has been shown to give state-of-the-art results for various imaging applications. The empirical success of PnP has motivated researchers to understand its theoretical underpinnings and, in particular, its convergence. It was shown in prior work that for kernel denoisers such as the nonlocal means, PnP-ISTA provably converges under some strong assumptions on the forward model. The present work is motivated by the following questions: Can we relax the assumptions on the forward model? Can the convergence analysis be extended to PnP-ADMM? Can we estimate the convergence rate? In this letter, we resolve these questions using the contraction mapping theorem: i) for symmetric denoisers, we show that (under mild conditions) PnP-ISTA and PnP-ADMM exhibit linear convergence; and ii) for kernel denoisers, we show that PnP-ISTA and PnP-ADMM converge linearly for image inpainting. We validate our theoretical findings using reconstruction experiments.
Published in: IEEE Signal Processing Letters ( Volume: 30)
Page(s): 1447 - 1451
Date of Publication: 09 October 2023


I. Introduction

Image reconstruction tasks such as denoising, inpainting, deblurring, and superresolution can be modeled as a linear inverse problem: we wish to recover an image $\boldsymbol{x}\in \mathbb{R}^{n}$ from noisy linear measurements $\boldsymbol{b}=\mathbf{A}\boldsymbol{x}+\boldsymbol{w}$, where $\mathbf{A}$ is the forward model and $\boldsymbol{w}$ is white Gaussian noise. A standard approach is to solve the optimization problem \begin{equation*} \mathop{\text{minimize}}\limits_{\boldsymbol{x}\in \mathbb {R}^{n}} \, f(\boldsymbol{x}) + g(\boldsymbol{x}), \quad f(\boldsymbol{x})= \frac{1}{2} \Vert \mathbf {A}\boldsymbol{x}- \boldsymbol{b}\Vert _{2}^{2}, \tag{1} \end{equation*} where the loss function $f$ is derived from the forward model and $g$ is an image regularizer [1], [2]. The choice of regularizer has evolved from simple Tikhonov and Laplacian regularizers [3] to wavelet, total-variation, and dictionary models [3], [4], [5], and to more recent learning-based models [6], [7], [8].
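To make the setup concrete, here is a minimal sketch of the PnP-ISTA iteration on a toy inpainting problem: a gradient step on the data-fidelity term $f$ in (1), followed by a denoiser in place of the proximal operator. The moving-average denoiser and the 1-D signal are illustrative stand-ins chosen for brevity, not the kernel denoisers analyzed in the letter.

```python
import numpy as np

def pnp_ista(A, b, denoiser, gamma, x0, iters=200):
    """PnP-ISTA: gradient step on f(x) = 0.5*||Ax - b||^2,
    then a denoiser replacing the proximal operator of g."""
    x = x0.copy()
    for _ in range(iters):
        grad = A.T @ (A @ x - b)        # gradient of the quadratic loss
        x = denoiser(x - gamma * grad)  # denoiser in place of prox
    return x

# Toy inpainting: A is a diagonal 0/1 sampling mask.
rng = np.random.default_rng(0)
n = 64
mask = rng.random(n) < 0.7                      # observe ~70% of samples
A = np.diag(mask.astype(float))
x_true = np.sin(np.linspace(0, 4 * np.pi, n))   # ground-truth signal
b = A @ x_true                                  # masked measurements

def denoiser(x, width=3):
    # Crude symmetric smoother standing in for a kernel denoiser.
    kernel = np.ones(width) / width
    return np.convolve(x, kernel, mode="same")

x_hat = pnp_ista(A, b, denoiser, gamma=1.0, x0=b)
```

Under the contractivity results of the letter, iterations of this form converge linearly for inpainting-type forward models; in this sketch the reconstruction error should simply shrink relative to the masked input.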


Cites in Papers - IEEE (2)

1. Arghya Sinha, Kunal N. Chaudhury, "On the Strong Convexity of PnP Regularization Using Linear Denoisers", IEEE Signal Processing Letters, vol. 31, pp. 2790-2794, 2024.
2. Chirayu D. Athalye, Kunal N. Chaudhury, "Corrections to “On the Contractivity of Plug-and-Play Operators”", IEEE Signal Processing Letters, vol. 30, pp. 1817-1817, 2023.

Cites in Papers - Other Publishers (1)

1. Pravin Nair, Kunal N. Chaudhury, "Averaged Deep Denoisers for Image Regularization", Journal of Mathematical Imaging and Vision, 2024.
