
Low-Light Enhancement and Global-Local Feature Interaction for RGB-T Semantic Segmentation



Abstract:

The performance of RGB-T semantic segmentation tasks is affected by the quality of visible (VIS) and infrared (IR) images captured by sensor instruments. In low-light environments, various degradation factors lead to the poor quality of captured VIS and IR images, ultimately reducing the performance of subsequent semantic segmentation tasks. To address this issue, we propose a novel RGB-T semantic segmentation framework, which contains a low-light enhancement network and a segmentation network. The low-light enhancement network is designed to improve the quality of low-light images by learning the mapping from low-quality (LQ) low-light to high-quality (HQ) normal-light fused images. To obtain training data for the low-light enhancement network, we design a low-light degradation model (LDM) to simulate degradation factors in low-light environments and generate synthesized low-light images. Then, the trained low-light enhancement network generates HQ normal-light fused images as enhanced inputs for the subsequent semantic segmentation network, improving the segmentation performance. Subsequently, a global-local feature interaction module (GLFIM) is designed within the segmentation network to facilitate the interaction between global and local features from the enhanced inputs, thus further enhancing the semantic segmentation performance. Experimental results on the multi-spectral fusion network (MFNet) and PST900 datasets demonstrate that our proposed segmentation framework achieves state-of-the-art segmentation performance. The training code and pretrained models will be made publicly available at: https://github.com/Yuyu-1015/LLE-Seg.
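The paper does not specify the degradation factors modeled by the LDM in this excerpt. The following is a minimal, hypothetical sketch of how such a low-light degradation model could synthesize paired training data: a normal-light image is darkened with a gamma curve and corrupted with additive Gaussian sensor noise. The function name and all parameter values (`gamma`, `noise_sigma`) are illustrative assumptions, not the authors' actual formulation.

```python
import numpy as np

def degrade_low_light(img, gamma=2.5, noise_sigma=0.03, seed=0):
    """Simulate low-light degradation on a normal-light image in [0, 1].

    Hypothetical pipeline: gamma darkening to mimic illumination loss,
    then additive Gaussian noise to mimic sensor noise at low exposure.
    """
    rng = np.random.default_rng(seed)
    dark = np.power(np.clip(img, 0.0, 1.0), gamma)          # illumination reduction
    noisy = dark + rng.normal(0.0, noise_sigma, img.shape)  # sensor noise
    return np.clip(noisy, 0.0, 1.0)

# Paired sample: synthesized LQ low-light input and the original HQ target,
# which is how the enhancement network's supervised mapping could be trained.
hq = np.full((32, 32, 3), 0.8)   # toy normal-light image
lq = degrade_low_light(hq)
```

With such pairs, the enhancement network can be trained to invert the degradation, i.e., to map synthesized LQ low-light inputs back to their HQ normal-light counterparts.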
Article Sequence Number: 5012513
Date of Publication: 25 February 2025


I. Introduction

Semantic segmentation is a crucial task in computer vision that aims to assign a predefined semantic label to each pixel in an image, providing a higher-level representation of the scene [1]. Traditional segmentation methods based only on RGB images are sensitive to lighting conditions and lack robustness in challenging low-light environments. Consequently, researchers have proposed methods that fuse thermal (T) and RGB images, exploiting their complementary information to improve segmentation performance in such low-light scenarios. The emergence of the RGB-T semantic segmentation task has also spurred further development in instrumentation and measurement applications.
