
A Unified HDR Imaging Method with Pixel and Patch Level



Abstract:

Mapping Low Dynamic Range (LDR) images with different exposures to High Dynamic Range (HDR) remains nontrivial and challenging for dynamic scenes due to ghosting caused by object motion or camera jitter. With the success of Deep Neural Networks (DNNs), several DNN-based methods have been proposed to alleviate ghosting, but they cannot produce satisfactory results when motion and saturation occur. To generate visually pleasing HDR images in various cases, we propose a hybrid HDR deghosting network, called HyHDRNet, to learn the complicated relationship between reference and non-reference images. The proposed HyHDRNet consists of a content alignment subnetwork and a Transformer-based fusion subnetwork. Specifically, to effectively suppress ghosting at its source, the content alignment subnetwork uses patch aggregation and ghost attention to integrate similar content from non-reference images at the patch level and to suppress undesired components at the pixel level. To achieve mutual guidance between the patch level and the pixel level, we leverage a gating module to exchange useful information in both ghosted and saturated regions. Furthermore, to obtain a high-quality HDR image, the Transformer-based fusion subnetwork uses a Residual Deformable Transformer Block (RDTB) to adaptively merge information from differently exposed regions. We evaluate the proposed method on four widely used public HDR image deghosting datasets. Experiments demonstrate that HyHDRNet outperforms state-of-the-art methods both quantitatively and qualitatively, producing visually appealing HDR results with consistent textures and colors.
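The abstract names the main building blocks (a content alignment subnetwork with patch aggregation, ghost attention, and a gating module, followed by a Transformer-based fusion subnetwork built from RDTBs) but not their internals. Purely as an illustration of how such components could be wired together, the PyTorch sketch below uses simple convolutional stand-ins and assumed input shapes and module names; it is not the authors' implementation.

```python
import torch
import torch.nn as nn


class HyHDRNetSketch(nn.Module):
    """Illustrative wiring of the components named in the abstract.
    All submodules here are convolutional stand-ins, not the authors' designs."""

    def __init__(self, in_ch=6, feat=64):
        super().__init__()
        # Assumption: each input concatenates an LDR frame with its
        # gamma-mapped HDR-domain version, giving 6 channels per exposure.
        self.encode = nn.Conv2d(in_ch, feat, 3, padding=1)
        # Stand-in for the content alignment subnetwork (patch aggregation,
        # ghost attention, and the gating module would live here).
        self.align = nn.Conv2d(feat * 2, feat, 3, padding=1)
        # Stand-in for the Transformer-based fusion subnetwork (RDTBs).
        self.fuse = nn.Sequential(
            nn.Conv2d(feat * 3, feat, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(feat, 3, 3, padding=1), nn.Sigmoid())

    def forward(self, ldr_low, ldr_ref, ldr_high):
        f_low, f_ref, f_high = map(self.encode, (ldr_low, ldr_ref, ldr_high))
        a_low = self.align(torch.cat([f_ref, f_low], dim=1))    # align low exposure to the reference
        a_high = self.align(torch.cat([f_ref, f_high], dim=1))  # align high exposure to the reference
        return self.fuse(torch.cat([a_low, f_ref, a_high], dim=1))
```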
Date of Conference: 17-24 June 2023
Date Added to IEEE Xplore: 22 August 2023
Conference Location: Vancouver, BC, Canada

1. Introduction

Natural scenes cover a very broad range of illumination, but standard digital camera sensors can measure only a limited dynamic range. Images captured by such cameras often contain saturated or under-exposed regions, which severely degrade visual quality because details are lost. High Dynamic Range (HDR) imaging has been developed to address these limitations and can reproduce much richer detail. A common approach to HDR imaging is to fuse a series of differently exposed Low Dynamic Range (LDR) images. This recovers a high-quality HDR image when both the scene and the camera are static; however, it suffers from ghosting artifacts when objects move or the camera is hand-held.
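As a concrete illustration of the static-scene merge described above, the sketch below blends linearized LDR exposures into a radiance map with a simple triangle weighting. The function name, the weighting choice, and the assumptions of known exposure times and a linear camera response are ours; the ghosting this paper targets arises precisely when such a per-pixel merge is applied to moving content.

```python
import numpy as np


def merge_ldr_to_hdr(ldr_images, exposure_times):
    """Merge linearized LDR exposures (values in [0, 1]) into an HDR radiance map.

    Each frame is divided by its exposure time to estimate radiance, and the
    estimates are blended with a triangle weight that trusts mid-tone pixels
    most; under- and over-exposed pixels contribute little.
    """
    acc = np.zeros(ldr_images[0].shape, dtype=np.float64)
    weight_sum = np.zeros_like(acc)
    for img, t in zip(ldr_images, exposure_times):
        img = img.astype(np.float64)
        w = 1.0 - np.abs(2.0 * img - 1.0)   # peaks at 0.5, zero at 0 and 1
        acc += w * (img / t)                # per-frame radiance estimate
        weight_sum += w
    return acc / np.maximum(weight_sum, 1e-8)
```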

Our approach produces high-quality HDR images by leveraging both patch-wise aggregation and pixel-wise ghost attention. The two modules provide complementary visual information: patch aggregation recovers patch-level content in complex distorted regions, while ghost attention provides pixel-level alignment.
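To make the pixel-level side concrete, the sketch below shows one common way such ghost attention can be realized: an attention map predicted from the reference/non-reference feature pair downweights misaligned or saturated non-reference content before fusion. The layer sizes and structure are illustrative assumptions, not the paper's exact design.

```python
import torch
import torch.nn as nn


class GhostAttentionSketch(nn.Module):
    """Pixel-level attention sketch: predict a per-pixel map in [0, 1] from the
    reference/non-reference feature pair and use it to suppress non-reference
    features that disagree with the reference (e.g., moving objects)."""

    def __init__(self, feat=64):
        super().__init__()
        self.attn = nn.Sequential(
            nn.Conv2d(feat * 2, feat, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(feat, feat, 3, padding=1), nn.Sigmoid())

    def forward(self, ref_feat, non_ref_feat):
        a = self.attn(torch.cat([ref_feat, non_ref_feat], dim=1))
        return non_ref_feat * a   # attenuated non-reference features
```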
