1. Introduction
Natural scenes span a very broad range of illumination, but standard digital camera sensors can measure only a limited dynamic range. Images captured by such cameras therefore often contain saturated or under-exposed regions, where details are severely lost and visual quality degrades. High Dynamic Range (HDR) imaging has been developed to address these limitations and can reproduce much richer detail. A common approach to HDR imaging is to fuse a series of differently exposed Low Dynamic Range (LDR) images. This approach recovers a high-quality HDR image when both the scene and the camera are static; however, it suffers from ghosting artifacts when objects move or the camera is hand-held.
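To make this multi-exposure fusion baseline concrete, the sketch below merges differently exposed LDR frames into an HDR radiance map using OpenCV's Debevec calibration and merge operators, followed by tone mapping for display. The file names and exposure times are illustrative placeholders, and the code assumes a static scene and camera (the setting in which this baseline works well).

```python
# Minimal multi-exposure fusion sketch (static scene and camera assumed).
# File names and exposure times are illustrative placeholders.
import cv2
import numpy as np

paths = ["ldr_short.jpg", "ldr_mid.jpg", "ldr_long.jpg"]
exposure_times = np.array([1 / 250.0, 1 / 30.0, 1 / 4.0], dtype=np.float32)

ldr_images = [cv2.imread(p) for p in paths]

# Recover the camera response curve, then merge into an HDR radiance map.
calibrate = cv2.createCalibrateDebevec()
response = calibrate.process(ldr_images, exposure_times)
merge = cv2.createMergeDebevec()
hdr = merge.process(ldr_images, exposure_times, response)

# Tone-map the radiance map so it can be shown on a standard LDR display.
tonemap = cv2.createTonemapReinhard(gamma=2.2)
ldr_display = np.clip(tonemap.process(hdr) * 255, 0, 255).astype(np.uint8)
cv2.imwrite("hdr_result.png", ldr_display)
```

With dynamic objects or a hand-held camera, the per-pixel averaging in such a merge blends misaligned content and produces the ghosting artifacts that motivate the method described next.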
Our approach produces high-quality HDR images by leveraging both patch-wise aggregation and pixel-wise ghost attention. The two modules provide complementary visual information: patch aggregation recovers the patch-level content of complex distorted regions, while ghost attention provides pixel-level alignment.
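As a rough illustration of what pixel-wise ghost attention can look like, the sketch below predicts a spatial attention map from a non-reference feature and the reference feature and uses it to suppress misaligned content. The module name, channel sizes, and two-convolution design are our own assumptions in the spirit of attention-based deghosting, not the exact architecture of this paper.

```python
# Sketch of pixel-wise ghost attention; module name and channel sizes are
# illustrative assumptions, not the exact design described in this paper.
import torch
import torch.nn as nn

class GhostAttention(nn.Module):
    def __init__(self, channels: int = 64):
        super().__init__()
        # The attention map is predicted from the concatenated reference and
        # non-reference features, with one weight per pixel and channel.
        self.attn = nn.Sequential(
            nn.Conv2d(2 * channels, channels, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, kernel_size=3, padding=1),
            nn.Sigmoid(),
        )

    def forward(self, feat_non_ref: torch.Tensor, feat_ref: torch.Tensor) -> torch.Tensor:
        # Attention values in [0, 1]; low values suppress misaligned (ghosting) pixels.
        a = self.attn(torch.cat([feat_non_ref, feat_ref], dim=1))
        return feat_non_ref * a

# Example: suppress ghosting in a non-reference exposure's features.
ref = torch.randn(1, 64, 128, 128)      # features of the reference exposure
non_ref = torch.randn(1, 64, 128, 128)  # features of a non-reference exposure
aligned = GhostAttention(64)(non_ref, ref)
print(aligned.shape)  # torch.Size([1, 64, 128, 128])
```

Such pixel-level gating addresses local misalignment; recovering the content of heavily saturated or distorted regions is left to the patch-wise aggregation described later in the paper.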