
Semi-Supervised Image Deraining Using Gaussian Processes


Abstract:

Recent CNN-based methods for image deraining have achieved excellent performance in terms of reconstruction error as well as visual quality. However, these methods are limited in the sense that they can be trained only on fully labeled data. Due to the various challenges in obtaining real-world fully-labeled image deraining datasets, existing methods are trained only on synthetically generated data and hence generalize poorly to real-world images. The use of real-world data in training image deraining networks is relatively less explored in the literature. We propose a Gaussian Process-based semi-supervised learning framework which enables the network to learn to derain using a synthetic dataset while generalizing better using unlabeled real-world images. More specifically, we model the latent space vectors of unlabeled data using Gaussian Processes, which are then used to compute pseudo-ground-truth for supervising the network on unlabeled data. The pseudo-ground-truth is further used to supervise the network at the intermediate level for the unlabeled data. Through extensive experiments and ablations on several challenging datasets (such as Rain800, Rain200L and DDN-SIRR), we show that the proposed method is able to effectively leverage unlabeled data, thereby resulting in significantly better performance as compared to labeled-only training. Additionally, we demonstrate that using unlabeled real-world images in the proposed GP-based framework results in superior performance as compared to the existing methods. Code is available at: https://github.com/rajeevyasarla/Syn2Real.
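The abstract's core mechanism, modeling the latent vector of an unlabeled image as a Gaussian Process posterior over the latent vectors of labeled (synthetic) images, can be sketched as GP regression in latent space: the pseudo latent is a kernel-weighted combination of labeled latents. The following minimal NumPy sketch illustrates this idea only; the function names, the RBF kernel choice, and the hyperparameters are illustrative assumptions and not the paper's exact formulation.

```python
import numpy as np

def rbf_kernel(A, B, gamma=1.0):
    # Pairwise RBF kernel between row vectors of A (m, d) and B (n, d).
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def gp_pseudo_latent(Z_labeled, z_unlabeled, gamma=1.0, noise=1e-3):
    # GP posterior mean for an unlabeled latent vector: a kernel-weighted
    # combination of the labeled latent vectors Z_labeled (n, d).
    n = len(Z_labeled)
    K = rbf_kernel(Z_labeled, Z_labeled, gamma) + noise * np.eye(n)
    k_star = rbf_kernel(z_unlabeled[None, :], Z_labeled, gamma)  # (1, n)
    weights = k_star @ np.linalg.inv(K)                          # (1, n)
    return (weights @ Z_labeled)[0]  # pseudo latent used as supervision
```

In the paper's framework such a pseudo latent would serve as a pseudo-ground-truth target at the intermediate (latent) level when training on unlabeled real images; the sketch above omits the encoder/decoder and the variance term of the GP posterior.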
Published in: IEEE Transactions on Image Processing ( Volume: 30)
Page(s): 6570 - 6582
Date of Publication: 16 July 2021

PubMed ID: 34270423


I. Introduction

Images captured under rainy conditions are often of poor quality. The artifacts introduced by rain streaks adversely affect the performance of subsequent computer vision algorithms such as object detection and recognition [1]–[4]. With such algorithms becoming vital components in several applications such as autonomous navigation and video surveillance [5]–[7], it is increasingly important to develop algorithms for rain removal.

References

[1] R. Girshick, "Fast R-CNN", Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 1440-1448, Dec. 2015.
[2] W. Liu et al., "SSD: Single shot multibox detector", Proc. Eur. Conf. Comput. Vis. (ECCV), pp. 21-37, Oct. 2016.
[3] S. Ren, K. He, R. Girshick and J. Sun, "Faster R-CNN: Towards real-time object detection with region proposal networks", Proc. Adv. Neural Inf. Process. Syst., pp. 91-99, 2015.
[4] Y. Chen, W. Li, C. Sakaridis, D. Dai and L. Van Gool, "Domain adaptive faster R-CNN for object detection in the wild", Proc. IEEE/CVF Conf. Comput. Vis. Pattern Recognit., pp. 3339-3348, Jun. 2018.
[5] C. R. Qi, W. Liu, C. Wu, H. Su and L. J. Guibas, "Frustum PointNets for 3D object detection from RGB-D data", Proc. IEEE/CVF Conf. Comput. Vis. Pattern Recognit., pp. 918-927, Jun. 2018.
[6] M. Liang, B. Yang, S. Wang and R. Urtasun, "Deep continuous fusion for multi-sensor 3D object detection", Proc. Eur. Conf. Comput. Vis. (ECCV), pp. 641-656, 2018.
[7] A. G. Perera, Y. W. Law and J. Chahl, "UAV-GESTURE: A dataset for UAV control and gesture recognition", Proc. Eur. Conf. Comput. Vis. (ECCV) Workshops, 2018.
[8] H. Zhang and V. M. Patel, "Density-aware single image de-raining using a multi-stream dense network", Proc. IEEE/CVF Conf. Comput. Vis. Pattern Recognit., pp. 695-704, Jun. 2018.
[9] X. Fu, J. Huang, D. Zeng, Y. Huang, X. Ding and J. Paisley, "Removing rain from single images via a deep detail network", Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 1715-1723, Jul. 2017.
[10] Y. Li, R. T. Tan, X. Guo, J. Lu and M. S. Brown, "Rain streak removal using layer priors", Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 2736-2744, Jun. 2016.
[11] W. Yang, R. T. Tan, J. Feng, J. Liu, Z. Guo and S. Yan, "Deep joint rain detection and removal from a single image", Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 1357-1366, Jul. 2017.
[12] L. Zhu, C.-W. Fu, D. Lischinski and P.-A. Heng, "Joint bi-layer optimization for single-image rain streak removal", Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 2534-2536, Oct. 2017.
[13] Z. Fan, H. Wu, X. Fu, Y. Huang and X. Ding, "Residual-guide feature fusion network for single image deraining", arXiv:1804.07493, 2018. [Online]. Available: http://arxiv.org/abs/1804.07493
[14] W. Yang, R. T. Tan, J. Feng, Z. Guo, S. Yan and J. Liu, "Joint rain detection and removal from a single image with contextualized deep networks", IEEE Trans. Pattern Anal. Mach. Intell., vol. 42, no. 6, pp. 1377-1393, Jun. 2020.
[15] S. Li, W. Ren, J. Zhang, J. Yu and X. Guo, "Single image rain removal via a deep decomposition–composition network", Comput. Vis. Image Understand., vol. 186, pp. 48-57, Sep. 2019.
[16] G. Wang, C. Sun and A. Sowmya, "Cascaded attention guidance network for single rainy image restoration", IEEE Trans. Image Process., vol. 29, pp. 9190-9203, 2020.
[17] R. Yasarla, J. M. J. Valanarasu and V. M. Patel, "Exploring overcomplete representations for single image deraining using CNNs", IEEE J. Sel. Topics Signal Process., vol. 15, no. 2, pp. 229-239, Feb. 2021.
[18] X. Li, J. Wu, Z. Lin, H. Liu and H. Zha, "Recurrent squeeze-and-excitation context aggregation net for single image deraining", Proc. Eur. Conf. Comput. Vis. (ECCV), pp. 262-277, 2018.
[19] D. Ren, W. Zuo, Q. Hu, P. Zhu and D. Meng, "Progressive image deraining networks: A better and simpler baseline", Proc. IEEE Conf. Comput. Vis. Pattern Recognit., pp. 3937-3946, 2019.
[20] H. Zhang, V. Sindagi and V. M. Patel, "Image de-raining using a conditional generative adversarial network", arXiv:1701.05957, 2017. [Online]. Available: http://arxiv.org/abs/1701.05957
[21] W. Wei, D. Meng, Q. Zhao, Z. Xu and Y. Wu, "Semi-supervised transfer learning for image rain removal", Proc. IEEE Conf. Comput. Vis. Pattern Recognit., pp. 3877-3886, Jun. 2019.
[22] R. Yasarla, V. A. Sindagi and V. M. Patel, "Syn2Real transfer learning for image deraining using Gaussian processes", Proc. IEEE/CVF Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 2723-2733, Jun. 2020.
[23] R. Yasarla and V. M. Patel, "Confidence measure guided single image de-raining", IEEE Trans. Image Process., vol. 29, pp. 4544-4555, 2020.
[24] X. Zhang, H. Li, Y. Qi, W. K. Leow and T. K. Ng, "Rain removal in video by combining temporal and chromatic properties", Proc. IEEE Int. Conf. Multimedia Expo, pp. 461-464, Jul. 2006.
[25] K. Garg and S. K. Nayar, "Vision and rain", Int. J. Comput. Vis., vol. 75, no. 1, pp. 3-27, 2007.
[26] V. Santhaseelan and V. Asari, "Utilizing local phase information to remove rain from video", Int. J. Comput. Vis., vol. 112, pp. 71-89, Mar. 2015.
[27] J. Liu, W. Yang, S. Yang and Z. Guo, "Erase or fill? Deep joint recurrent rain removal and reconstruction in videos", Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 3233-3242, Jun. 2018.
[28] M. Li et al., "Video rain streak removal by multiscale convolutional sparse coding", Proc. IEEE/CVF Conf. Comput. Vis. Pattern Recognit., pp. 6644-6653, Jun. 2018.
[29] J. Liu, W. Yang, S. Yang and Z. Guo, "D3R-Net: Dynamic routing residue recurrent network for video rain removal", IEEE Trans. Image Process., vol. 28, no. 2, pp. 699-712, Feb. 2019.
[30] M. Tremblay, S. S. Halder, R. de Charette and J.-F. Lalonde, "Rain rendering for evaluating and improving robustness to bad weather", Int. J. Comput. Vis., vol. 129, pp. 1-20, Feb. 2020.