Learning a Practical SDR-to-HDRTV Up-conversion using New Dataset and Degradation Models


Abstract:

In the media industry, the demand for SDR-to-HDRTV up-conversion arises when users own HDR-WCG (high dynamic range, wide color gamut) TVs while most off-the-shelf footage is still in SDR (standard dynamic range). The research community has begun tackling this low-level vision task with learning-based approaches. Yet, when applied to real SDR, current methods tend to produce dim and desaturated results, offering almost no improvement in viewing experience. Unlike other network-oriented methods, we attribute this deficiency to the training set (HDR-SDR pairs). Consequently, we propose a new HDRTV dataset (dubbed HDRTV4K) and new HDR-to-SDR degradation models, which are then used to train a luminance-segmented network (LSN) consisting of a global mapping trunk and two Transformer branches covering the bright and dark luminance ranges. We also update the assessment criteria with tailored metrics and a subjective experiment. Finally, ablation studies are conducted to prove the effectiveness of our approach. Our work is available at: https://github.com/AndreGuo/HDRTVDM
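
To make the luminance-segmented design concrete, the following is a minimal PyTorch sketch of such a network: a global per-pixel mapping trunk plus two extra branches applied only to the dark and bright luminance ranges. The module names, thresholds, channel counts, and the use of plain convolutions in place of the Transformer branches are illustrative assumptions, not the paper's actual implementation (see the linked repository for that).

# Hypothetical sketch of a luminance-segmented network: a global mapping
# trunk plus two branches restricted to the dark and bright ranges.
# All names and hyper-parameters below are illustrative assumptions.
import torch
import torch.nn as nn

class LuminanceSegmentedNet(nn.Module):
    def __init__(self, feat=32, dark_thr=0.15, bright_thr=0.85):
        super().__init__()
        self.dark_thr, self.bright_thr = dark_thr, bright_thr
        # Global mapping trunk: 1x1 convolutions act as a learned
        # per-pixel color/tone mapping shared by the whole luminance range.
        self.trunk = nn.Sequential(
            nn.Conv2d(3, feat, 1), nn.ReLU(inplace=True),
            nn.Conv2d(feat, feat, 1), nn.ReLU(inplace=True),
            nn.Conv2d(feat, 3, 1),
        )
        # Local branches for the dark and bright ranges (plain convolutions
        # here, standing in for the Transformer branches in the abstract).
        self.dark_branch = nn.Sequential(
            nn.Conv2d(3, feat, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(feat, 3, 3, padding=1),
        )
        self.bright_branch = nn.Sequential(
            nn.Conv2d(3, feat, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(feat, 3, 3, padding=1),
        )

    def forward(self, sdr):                       # sdr: (N, 3, H, W) in [0, 1]
        luma = sdr.mean(dim=1, keepdim=True)      # crude luminance estimate
        dark_mask = (luma < self.dark_thr).float()
        bright_mask = (luma > self.bright_thr).float()
        out = self.trunk(sdr)                     # global SDR-to-HDR mapping
        out = out + dark_mask * self.dark_branch(sdr)      # refine shadows
        out = out + bright_mask * self.bright_branch(sdr)  # refine highlights
        return out

# Example: up-convert a dummy SDR patch.
if __name__ == "__main__":
    net = LuminanceSegmentedNet()
    hdr = net(torch.rand(1, 3, 256, 256))
    print(hdr.shape)  # torch.Size([1, 3, 256, 256])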
Date of Conference: 17-24 June 2023
Date Added to IEEE Xplore: 22 August 2023
Conference Location: Vancouver, BC, Canada

1. Introduction

The dynamic range of an image is defined as the ratio of the maximum recorded luminance to the minimum. A larger luminance container endows high dynamic range (HDR) with better expressiveness of the scene. In the media and film industry, the superiority of HDR is further boosted by advanced electro-optical transfer functions (EOTFs), e.g. PQ/HLG [2], and wide-color-gamut (WCG) RGB primaries, e.g. BT.2020 [3].
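
For concreteness, the PQ EOTF (SMPTE ST 2084) referenced above maps a non-linear code value in [0, 1] to an absolute display luminance of up to 10,000 cd/m². A small NumPy sketch is given below; the constants follow the standard, while the function name and the printed example are our own.

# PQ (SMPTE ST 2084) EOTF: non-linear code value in [0, 1] -> luminance in cd/m^2.
# Constants are taken from the standard; the vectorized form is illustrative.
import numpy as np

def pq_eotf(signal):
    m1, m2 = 0.1593017578125, 78.84375
    c1, c2, c3 = 0.8359375, 18.8515625, 18.6875
    e = np.power(np.clip(signal, 0.0, 1.0), 1.0 / m2)
    y = np.power(np.maximum(e - c1, 0.0) / (c2 - c3 * e), 1.0 / m1)
    return 10000.0 * y

# SDR reference white (about 100 cd/m^2) sits near code value 0.508 in PQ,
# leaving most of the signal range for highlights that SDR cannot represent.
print(pq_eotf(np.array([0.0, 0.508, 1.0])))  # approx. [0, 100, 10000]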
