Generalizing to Out-of-Sample Degradations via Model Reprogramming


Abstract:

Existing image restoration models are typically designed for specific tasks and struggle to generalize to out-of-sample degradations not encountered during training. While zero-shot methods can address this limitation by fine-tuning model parameters on test samples, their effectiveness relies on predefined natural priors and physical models of specific degradations. However, identifying in advance the out-of-sample degradations that arise in real-world scenarios is impractical. It is therefore more desirable to train restoration models with inherent generalization ability. To this end, this work introduces the Out-of-Sample Restoration (OSR) task, which aims to develop restoration models capable of handling out-of-sample degradations. An intuitive solution is to pre-translate out-of-sample degradations into the known degradations of restoration models. However, translating them directly in the image space leads to complex image-to-image translation problems. To address this issue, we propose a model reprogramming framework that translates out-of-sample degradations using wave functions inspired by quantum mechanics. Specifically, input images are decoupled into wave functions with amplitude and phase terms. The translation of out-of-sample degradations is performed by adapting the phase term, while the image content is maintained and enhanced in the amplitude term. Taking these two terms as inputs, restoration models can handle out-of-sample degradations without fine-tuning. Through extensive experiments across multiple evaluation cases, we demonstrate the effectiveness and flexibility of the proposed framework. Our code is available at https://github.com/ddghjikle/Out-of-sample-restoration.
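The amplitude-phase decoupling described in the abstract can be sketched in a few lines. This is a minimal illustrative sketch, not the paper's actual network: the function name `wave_reprogram`, the toy linear projections `w_amp`/`w_phase`, and the tanh squashing of the phase are all assumptions made here for clarity. Each feature is represented as a wave z = A * exp(i*theta), where the amplitude A carries image content and the phase theta is the term that would be adapted to translate an out-of-sample degradation.

```python
import numpy as np

def wave_reprogram(x, w_amp, w_phase):
    """Toy sketch of wave-function decoupling (hypothetical, not the paper's model).

    x        : (n, d) feature matrix of a degraded image patch
    w_amp    : (d, d) projection producing the content-carrying amplitude term
    w_phase  : (d, d) projection producing the degradation-dependent phase term
    """
    amp = np.abs(x @ w_amp)               # amplitude term: image content (non-negative)
    theta = np.tanh(x @ w_phase) * np.pi  # phase term in (-pi, pi): adapted per degradation
    z = amp * np.exp(1j * theta)          # wave representation z = A * exp(i*theta)
    # Real and imaginary parts are the two inputs handed to a frozen restoration model.
    return z.real, z.imag

rng = np.random.default_rng(0)
x = rng.standard_normal((4, 8))           # toy "features" of a degraded patch
w_amp = rng.standard_normal((8, 8))
w_phase = rng.standard_normal((8, 8))
re, im = wave_reprogram(x, w_amp, w_phase)
```

In this sketch, changing `w_phase` re-steers the phase while leaving the amplitude magnitude untouched (`re**2 + im**2` stays equal to `amp**2`), mirroring the idea that degradation translation happens in the phase term without disturbing content.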
Published in: IEEE Transactions on Image Processing ( Volume: 33)
Page(s): 2783 - 2794
Date of Publication: 05 April 2024

PubMed ID: 38578860

I. Introduction

Image restoration has achieved remarkable success with the rapid development of deep neural networks. Previous studies [1], [2], [3] typically characterize specific types of degradation as individual problems and propose dedicated solutions. While this methodology effectively addresses some real-world scenarios, it falls short in more complex situations such as autonomous driving on rainy days. In such cases, the perceived images can be degraded by a combination of rain, haze, blur, and noise, making it difficult to attribute the degradation to a single form. Moreover, since real-world images can exhibit diverse and unpredictable degradation patterns, enumerating all possible degradations to train restoration networks is practically infeasible [4]. It is therefore imperative to develop restoration models that can effectively restore images degraded by various factors, including those not encountered during training.

References
1.
J. Liang, J. Cao, G. Sun, K. Zhang, L. Van Gool and R. Timofte, "SwinIR: Image restoration using Swin transformer", Proc. IEEE/CVF Int. Conf. Comput. Vis. (ICCV) Workshops, pp. 1833-1844, Oct. 2021.
2.
S. W. Zamir, A. Arora, S. Khan, M. Hayat, F. S. Khan and M.-H. Yang, "Restormer: Efficient transformer for high-resolution image restoration", Proc. IEEE/CVF Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5728-5739, Jun. 2022.
3.
X. Zhang, R. Jiang, T. Wang and W. Luo, "Single image dehazing via dual-path recurrent network", IEEE Trans. Image Process., vol. 30, pp. 5211-5222, 2021.
4.
L. Liu et al., "TAPE: Task-agnostic prior embedding for image restoration", arXiv:2203.06074, 2022.
5.
S. Zhao, L. Zhang, Y. Shen and Y. Zhou, "RefineDNet: A weakly supervised refinement framework for single image dehazing", IEEE Trans. Image Process., vol. 30, pp. 3391-3404, 2021.
6.
B. Li, Y. Gou, J. Z. Liu, H. Zhu and J. T. Zhou, "Zero-shot image dehazing", IEEE Trans. Image Process., vol. 29, pp. 8457-8466, 2020.
7.
B. Li, Y. Gou, S. Gu, J. Z. Liu, J. T. Zhou and X. Peng, "You only look yourself: Unsupervised and untrained single image dehazing neural network", Int. J. Comput. Vis., vol. 129, no. 5, pp. 1754-1767, 2021.
8.
B. Li, X. Peng, Z. Wang, J. Xu and D. Feng, "AOD-Net: All-in-one dehazing network", Proc. IEEE Int. Conf. Comput. Vis., pp. 4770-4778, Jun. 2017.
9.
M. Long, Y. Cao, J. Wang and M. Jordan, "Learning transferable features with deep adaptation networks", Proc. 32nd Int. Conf. Mach. Learn., vol. 37, pp. 97-105, Jul. 2015.
10.
K. Saito, K. Watanabe, Y. Ushiku and T. Harada, "Maximum classifier discrepancy for unsupervised domain adaptation", Proc. IEEE/CVF Conf. Comput. Vis. Pattern Recognit., pp. 3723-3732, Jun. 2018.
11.
A. Lengyel, S. Garg, M. Milford and J. C. van Gemert, "Zero-shot day-night domain adaptation with a physics prior", Proc. IEEE/CVF Int. Conf. Comput. Vis. (ICCV), pp. 4399-4409, Oct. 2021.
12.
S. Li et al., "Semantic concentration for domain adaptation", Proc. IEEE/CVF Int. Conf. Comput. Vis. (ICCV), pp. 9102-9111, Oct. 2021.
13.
Z. Han, H. Sun and Y. Yin, "Learning transferable parameters for unsupervised domain adaptation", IEEE Trans. Image Process., vol. 31, pp. 6424-6439, 2022.
14.
Y. Tang et al., "An image patch is a wave: Phase-aware vision MLP", Proc. IEEE/CVF Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 10935-10944, Jun. 2022.
15.
J.-J. Huang and P. L. Dragotti, "WINNet: Wavelet-inspired invertible network for image denoising", IEEE Trans. Image Process., vol. 31, pp. 4377-4392, 2022.
16.
Y. Qu, Y. Chen, J. Huang and Y. Xie, "Enhanced pix2pix dehazing network", Proc. IEEE/CVF Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 8160-8168, Jun. 2019.
17.
C. Chen and H. Li, "Robust representation learning with feedback for single image deraining", Proc. IEEE/CVF Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 7742-7751, Jun. 2021.
18.
B. Li et al., "Benchmarking single-image dehazing and beyond", IEEE Trans. Image Process., vol. 28, pp. 492-505, 2018.
19.
X. Liu, Y. Ma, Z. Shi and J. Chen, "GridDehazeNet: Attention-based multi-scale network for image dehazing", Proc. IEEE/CVF Int. Conf. Comput. Vis. (ICCV), pp. 7314-7323, Oct. 2019.
20.
S. Zhuo, Z. Jin, W. Zou and X. Li, "RIDNet: Recursive information distillation network for color image denoising", Proc. IEEE/CVF Int. Conf. Comput. Vis. Workshop (ICCVW), pp. 3896-3903, Oct. 2019.
21.
R. Neshatavar, M. Yavartanoo, S. Son and K. M. Lee, "CVF-SID: Cyclic multi-variate function for self-supervised image denoising by disentangling noise from image", Proc. IEEE/CVF Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 17562-17570, Jun. 2022.
22.
Y. Wang, J. Yu and J. Zhang, "Zero-shot image restoration using denoising diffusion null-space model", arXiv:2212.00490, 2022.
23.
L. Van der Maaten and G. Hinton, "Visualizing data using t-SNE", J. Mach. Learn. Res., vol. 9, no. 11, pp. 2579-2605, 2008.
24.
J. Johnson, A. Alahi and L. Fei-Fei, "Perceptual losses for real-time style transfer and super-resolution", Proc. Eur. Conf. Comput. Vis., pp. 694-711, 2016.
25.
X. Chen, S. Wang, M. Long and J. Wang, "Transferability vs. discriminability: Batch spectral penalization for adversarial domain adaptation", Proc. Int. Conf. Mach. Learn., pp. 1081-1090, 2019.
26.
P.-Y. Chen, "Model reprogramming: Resource-efficient cross-domain machine learning", arXiv:2202.10629, 2022.
27.
G. F. Elsayed, I. Goodfellow and J. Sohl-Dickstein, "Adversarial reprogramming of neural networks", arXiv:1806.11146, 2018.
28.
Y. Tsai, P.-Y. Chen and T.-Y. Ho, "Transfer learning without knowing: Reprogramming black-box machine learning models with scarce data and limited resources", Proc. Int. Conf. Mach. Learn., pp. 9614-9624, 2020.
29.
I. Melnyk et al., "Reprogramming pretrained language models for antibody sequence infilling", arXiv:2210.07144, 2022.
30.
P. Neekhara, S. Hussain, J. Du, S. Dubnov, F. Koushanfar and J. McAuley, "Cross-modal adversarial reprogramming", Proc. IEEE/CVF Winter Conf. Appl. Comput. Vis. (WACV), pp. 2898-2906, Jan. 2022.
