
SMAE: Few-shot Learning for HDR Deghosting with Saturation-Aware Masked Autoencoders



Abstract:

Generating a high-quality High Dynamic Range (HDR) image from dynamic scenes has recently been extensively studied by exploiting Deep Neural Networks (DNNs). Most DNN-based methods require a large amount of training data with ground truth, which is tedious and time-consuming to collect. Few-shot HDR imaging aims to generate satisfactory images with limited data. However, it is difficult for modern DNNs to avoid overfitting when trained on only a few images. In this work, we propose a novel semi-supervised approach, called SSHDR, that realizes few-shot HDR imaging via two stages of training. Unlike previous methods, which directly recover content and remove ghosts simultaneously and thus struggle to reach an optimum, we first generate the content of saturated regions with a self-supervised mechanism and then address ghosts via an iterative semi-supervised learning framework. Concretely, considering that saturated regions can be regarded as masked regions of the Low Dynamic Range (LDR) input, we design a Saturated Mask AutoEncoder (SMAE) to learn a robust feature representation and reconstruct a non-saturated HDR image. We also propose an adaptive pseudo-label selection strategy that picks high-quality HDR pseudo-labels in the second stage to avoid the effect of mislabeled samples. Experiments demonstrate that SSHDR outperforms state-of-the-art methods quantitatively and qualitatively within and across different datasets, achieving appealing HDR visualization with few labeled samples.
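
The saturation-aware masking described above can be pictured with a minimal sketch, assuming LDR inputs normalized to [0, 1] and MAE-style patch tokens; the patch size, threshold, and the function name saturation_mask are illustrative assumptions, not the authors' implementation.

import torch
import torch.nn.functional as F

def saturation_mask(ldr, patch=16, sat_thresh=0.95, frac=0.5):
    # Illustrative sketch (not the paper's code): treat a patch as
    # "masked" when enough of its pixels sit near the saturation level,
    # so saturated regions play the role of MAE masks on the LDR input.
    # ldr: (B, 3, H, W) tensor with values in [0, 1].
    # Returns a (B, H//patch, W//patch) boolean mask of saturated patches.
    sat = (ldr.max(dim=1).values >= sat_thresh).float()          # (B, H, W)
    sat_frac = F.avg_pool2d(sat.unsqueeze(1), kernel_size=patch,
                            stride=patch).squeeze(1)             # per-patch saturated fraction
    return sat_frac >= frac

Pretraining would then proceed as in a standard masked autoencoder: the encoder sees only the well-exposed patches, and the decoder is trained to reconstruct plausible content under the saturated ones, before the second, semi-supervised stage addresses ghosting.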
Date of Conference: 17-24 June 2023
Date Added to IEEE Xplore: 22 August 2023
Conference Location: Vancouver, BC, Canada


1. Introduction

Standard digital photography sensors are unable to capture the wide range of illumination present in natural scenes, resulting in Low Dynamic Range (LDR) images that often suffer from over- or underexposed regions and lost scene detail. High Dynamic Range (HDR) imaging has been developed to address this limitation: several LDR images captured with different exposures are merged to generate a single HDR image. While HDR imaging can effectively recover details in static scenes, it may produce ghosting artifacts in dynamic scenes or hand-held camera scenarios.
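
As a point of reference for the merging step, a minimal static-scene sketch in the linear domain is shown below, assuming pre-aligned, linearized LDR frames with known exposure times; the triangle weighting and the function name merge_exposures are illustrative choices, not the method proposed in this paper.

import numpy as np

def merge_exposures(ldrs, exposure_times, eps=1e-6):
    # Illustrative static-scene HDR merge: weighted average of the
    # per-exposure radiance estimates (pixel value divided by exposure time).
    # ldrs: list of (H, W, 3) float arrays in [0, 1], already linearized
    # and aligned; exposure_times: matching list of exposure times.
    num = np.zeros_like(ldrs[0], dtype=np.float64)
    den = np.zeros_like(ldrs[0], dtype=np.float64)
    for img, t in zip(ldrs, exposure_times):
        # Triangle ("hat") weight: trust mid-tones, distrust extremes.
        w = 1.0 - np.abs(2.0 * img - 1.0)
        num += w * img / t        # radiance estimate from this exposure
        den += w
    return num / (den + eps)

Because under- and overexposed pixels receive low weight, the merge must borrow content from other exposures; once objects or the camera move between shots, that borrowed content is misaligned, which is exactly where ghosting artifacts arise.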

The proposed method generates high-quality HDR images with only a few labeled samples, compared with several existing methods.


Cites in Papers - IEEE (6)

1. Kadir Cenk Alpay, Ahmet Oğuz Akyüz, Nicola Brandonisio, Joseph Meehan, Alan Chalmers, "DeepDuoHDR: A Low Complexity Two Exposure Algorithm for HDR Deghosting on Mobile Devices", IEEE Transactions on Image Processing, vol. 33, pp. 6592-6606, 2024.
2. Sixian Zhang, Xinyao Yu, Xinhang Song, Xiaohan Wang, Shuqiang Jiang, "Imagine Before Go: Self-Supervised Generative Map for Object Goal Navigation", 2024 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pp. 16414-16425, 2024.
3. Kangzhen Yang, Tao Hu, Kexin Dai, Genggeng Chen, Yu Cao, Wei Dong, Peng Wu, Yanning Zhang, Qingsen Yan, "CRNet: A Detail-Preserving Network for Unified Image Restoration and Enhancement Task", 2024 IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops (CVPRW), pp. 6086-6096, 2024.
4. Genggeng Chen, Kexin Dai, Kangzhen Yang, Tao Hu, Xiangyu Chen, Yongqing Yang, Wei Dong, Peng Wu, Yanning Zhang, Qingsen Yan, "Bracketing Image Restoration and Enhancement with High-Low Frequency Decomposition", 2024 IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops (CVPRW), pp. 6097-6107, 2024.
5. Huafeng Li, Zhenmei Yang, Yafei Zhang, Dapeng Tao, Zhengtao Yu, "Single-Image HDR Reconstruction Assisted Ghost Suppression and Detail Preservation Network for Multi-Exposure HDR Imaging", IEEE Transactions on Computational Imaging, vol. 10, pp. 429-445, 2024.
6. Zexian Zhou, Xiaojing Liu, "Masked Autoencoders in Computer Vision: A Comprehensive Survey", IEEE Access, vol. 11, pp. 113560-113579, 2023.

Cites in Papers - Other Publishers (1)

1. Chao Wang, Krzysztof Wolski, Bernhard Kerbl, Ana Serrano, Mojtaba Bemana, Hans-Peter Seidel, Karol Myszkowski, Thomas Leimkühler, "Cinematic Gaussians: Real-Time HDR Radiance Fields with Depth of Field", Computer Graphics Forum, 2024.
