
Symmetric Parallax Attention for Stereo Image Super-Resolution


Abstract:

Although recent years have witnessed great advances in stereo image super-resolution (SR), the beneficial information provided by binocular systems has not been fully used. Since stereo images are highly symmetric under the epipolar constraint, in this paper we improve the performance of stereo image SR by exploiting symmetry cues in stereo image pairs. Specifically, we propose a symmetric bi-directional parallax attention module (biPAM) and an inline occlusion handling scheme to effectively exchange cross-view information. Then, we design a Siamese network equipped with a biPAM to super-resolve both views in a highly symmetric manner. Finally, we design several illuminance-robust losses to enhance stereo consistency. Experiments on four public datasets demonstrate the superior performance of our method. Source code is available at https://github.com/YingqianWang/iPASSR.
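The abstract compresses the design into a few sentences; the following minimal PyTorch sketch shows what such a weight-shared (Siamese) two-branch layout with a single cross-view module looks like. All module names and internals here are illustrative placeholders (CrossViewMixer merely averages features), not the authors' biPAM or reconstruction network:

import torch
import torch.nn as nn

class CrossViewMixer(nn.Module):
    """Placeholder for the paper's biPAM: it simply averages the two
    views' features so that the sketch runs end to end."""
    def forward(self, f_left, f_right):
        fused = 0.5 * (f_left + f_right)
        return fused, fused  # (message to left branch, message to right branch)

class SiameseStereoSR(nn.Module):
    """Minimal sketch of a symmetric (Siamese) stereo SR layout: one
    weight-shared branch per view around a single cross-view module."""
    def __init__(self, channels=64, scale=4):
        super().__init__()
        self.extract = nn.Sequential(  # shared feature extractor
            nn.Conv2d(3, channels, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, 3, padding=1), nn.ReLU(inplace=True))
        self.mixer = CrossViewMixer()
        self.head = nn.Sequential(     # shared reconstruction head
            nn.Conv2d(channels, 3 * scale ** 2, 3, padding=1),
            nn.PixelShuffle(scale))

    def forward(self, lr_left, lr_right):
        f_l, f_r = self.extract(lr_left), self.extract(lr_right)
        m_l, m_r = self.mixer(f_l, f_r)  # cross-view information exchange
        return self.head(f_l + m_l), self.head(f_r + m_r)

# Smoke test on a 30x90 low-resolution stereo pair:
# sr_l, sr_r = SiameseStereoSR()(torch.randn(1, 3, 30, 90), torch.randn(1, 3, 30, 90))

Because every learnable module is shared between the two branches, swapping the left and right inputs simply swaps the outputs, which is the symmetry property the paper exploits.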
Date of Conference: 19-25 June 2021
Date Added to IEEE Xplore: 01 September 2021
Conference Location: Nashville, TN, USA


1. Introduction

With recent advances in stereo vision, dual cameras are now commonly adopted in mobile phones and autonomous vehicles. Using the complementary information (i.e., cross-view information) provided by binocular systems, the resolution of image pairs can be enhanced. However, achieving good performance in stereo image super-resolution (SR) is difficult for three reasons (a code sketch illustrating the first and third follows this list):

1) Varying parallax. Objects at different depths have different disparity values and therefore appear at different positions along the horizontal epipolar line. Capturing reliable stereo correspondence and effectively integrating cross-view information is consequently challenging.

2) Information incorporation. Context information within a single view (i.e., intra-view information) is also crucial and contributes to stereo image SR in a different manner, so fully incorporating both intra-view and cross-view information is important but challenging.

3) Occlusions & boundaries. In occluded and boundary regions, pixels in one view have no correspondence in the other view, leaving only intra-view information available for SR. The challenge is to fully exploit cross-view information in non-occluded regions while maintaining good performance in occluded ones.
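To make challenges 1) and 3) concrete, the sketch below implements row-wise parallax attention in PyTorch, in the spirit of parallax attention mechanisms for stereo correspondence: restricting attention to the horizontal epipolar line copes with varying disparities without an explicit disparity estimate, and the attention map itself yields a crude occlusion mask. The 1x1 projections, softmax scaling, and validity threshold are simplifying assumptions of this sketch, not the paper's biPAM:

import torch
import torch.nn as nn

class ParallaxAttention(nn.Module):
    """Row-wise cross-view attention: under the epipolar constraint, a
    left-view pixel can only match right-view pixels on the same image
    row, so attention is computed independently per row. This handles
    arbitrary (varying) disparities without estimating disparity."""

    def __init__(self, channels: int):
        super().__init__()
        # 1x1 projections; the q/k/v naming is illustrative, not the paper's.
        self.q = nn.Conv2d(channels, channels, 1)
        self.k = nn.Conv2d(channels, channels, 1)
        self.v = nn.Conv2d(channels, channels, 1)

    def forward(self, feat_left, feat_right):
        b, c, h, w = feat_left.shape
        # Fold rows into the batch: each row is its own attention problem.
        q = self.q(feat_left).permute(0, 2, 3, 1).reshape(b * h, w, c)
        k = self.k(feat_right).permute(0, 2, 3, 1).reshape(b * h, w, c)
        v = self.v(feat_right).permute(0, 2, 3, 1).reshape(b * h, w, c)
        # att[i, j]: how well left pixel i matches right pixel j (same row).
        att = torch.softmax(q @ k.transpose(1, 2) / c ** 0.5, dim=-1)
        # Right-view features warped (re-weighted) into left-view positions.
        warped = (att @ v).reshape(b, h, w, c).permute(0, 3, 1, 2)
        # Crude occlusion cue: a left pixel whose attention stays nearly
        # uniform (no clear match on the row) is treated as occluded.
        valid = (att.max(dim=-1).values > 2.0 / w).float()
        return warped, valid.reshape(b, h, w).unsqueeze(1)

Calling the module with its arguments swapped gives the right-to-left direction, which is how a bi-directional module such as biPAM would apply the same mechanism twice; in occluded regions, the validity mask lets a network fall back to intra-view features alone.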
