
Lightweight Infrared and Visible Image Fusion Technique: Guided Gradient Optimization Driven



Abstract:

Infrared and visible image fusion technology aims to combine data from several spectral bands to improve target identification, processing capability, and image quality. With the rapid development of consumer electronic imaging products, there is an urgent need for a lightweight, efficient fusion technology that ensures effective information extraction and fusion while maintaining image quality. Existing algorithms that pursue accurate information extraction, noise reduction, artifact suppression, and edge preservation are too complex, and therefore struggle to meet the requirements of lightweight imaging consumer electronics. We propose a lightweight method for the fusion of infrared and visible images that exploits the properties of the Anisotropic Guided Filter and the Gradientlet Filter. The method extracts salient feature textures, effectively reduces gradient texture and noise, minimizes halo artifacts, and enhances edge contours while preserving overall image brightness and edge gradients. Furthermore, its explicit stage-wise processing and concise algorithmic structure give it excellent time efficiency. Experimental results demonstrate its superiority in both subjective visual quality and objective metrics over nine other existing image fusion methods.
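The paper's exact pipeline is not reproduced on this page, but the general base/detail pattern behind guided-filter fusion can be sketched. The Python sketch below is a minimal illustration only: it substitutes the classic isotropic guided filter of He et al. [30] for the anisotropic variant, omits the gradientlet stage, and uses illustrative radius/eps values; `fuse_guided` and its parameters are hypothetical names, not the authors' implementation.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def guided_filter(guide, src, radius, eps):
    """Classic guided filter (He et al. [30]): edge-preserving smoothing of
    `src` steered by the structure of `guide`. `radius` is the box-window
    radius; `eps` regularizes flat regions."""
    size = 2 * radius + 1
    mean_g = uniform_filter(guide, size)
    mean_s = uniform_filter(src, size)
    var_g = uniform_filter(guide * guide, size) - mean_g ** 2
    cov_gs = uniform_filter(guide * src, size) - mean_g * mean_s
    a = cov_gs / (var_g + eps)            # per-pixel linear coefficients
    b = mean_s - a * mean_g
    return uniform_filter(a, size) * guide + uniform_filter(b, size)

def fuse_guided(ir, vis, radius=8, eps=1e-2):
    """Two-scale fusion sketch: split each source into a smooth base layer
    and a detail layer, average the bases to preserve overall brightness,
    and keep the larger-magnitude detail at each pixel to preserve edges."""
    base_ir = guided_filter(ir, ir, radius, eps)
    base_vis = guided_filter(vis, vis, radius, eps)
    detail_ir, detail_vis = ir - base_ir, vis - base_vis
    base = 0.5 * (base_ir + base_vis)
    detail = np.where(np.abs(detail_ir) >= np.abs(detail_vis),
                      detail_ir, detail_vis)
    return np.clip(base + detail, 0.0, 1.0)
```

With `ir` and `vis` as float grayscale arrays in [0, 1], `fuse_guided(ir, vis)` returns a fused image of the same shape; the average-base/max-abs-detail rule is a common baseline choice, not the fusion rule of this paper.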
Published in: IEEE Transactions on Consumer Electronics (Volume 70, Issue 4, November 2024)
Page(s): 7233-7243
Date of Publication: 20 June 2024


I. Introduction

Extracting important feature data from different spectral bands and creating a comprehensive fused image is the main objective of image fusion. This technology is widely applied across a spectrum of fields, including biomedicine, modern agriculture, digital photography, environmental monitoring, military reconnaissance, and meteorological forecasting [1], [2]. In particular, infrared and visible images play a crucial role as fusion data sources, leading to technological breakthroughs and innovative applications in these domains. Fig. 1 illustrates the application of infrared and visible image fusion in security and surveillance. The information captured in (a) and (b) differs significantly, while the fusion result (c) combines the information from both sources and provides an enhanced visual effect. The aim of fusing infrared and visible images is to achieve mutual complementarity and maximize the use of the available information. Infrared sensors transform objects' thermal properties into high-contrast grayscale images, enhancing target visibility against diverse backgrounds; however, these images can suffer from poor resolution and noise. By contrast, images captured with visible light offer high resolution and detail but are sensitive to environmental conditions. Thus, by merging the advantages of both modalities, infrared and visible light, a more thorough and accurate depiction of the scene can be achieved.

Fig. 1. (a), (b) and (c) show a pair of infrared and visible images and the fusion result.
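The complementarity just described can be made concrete with a simple per-pixel weighting: regions where one source carries more local gradient energy contribute more to the result. The sketch below is purely illustrative and is not the method proposed in this paper; `gradient_weight_fusion` and its parameters are hypothetical names, and inputs are assumed to be float grayscale arrays in [0, 1].

```python
import numpy as np
from scipy.ndimage import sobel, gaussian_filter

def gradient_weight_fusion(ir, vis, sigma=2.0):
    """Illustrative fusion: weight each source by its smoothed local
    gradient magnitude, so detail-rich visible regions and high-contrast
    infrared targets each dominate where they carry more information."""
    def grad_energy(img):
        gx, gy = sobel(img, axis=1), sobel(img, axis=0)
        return gaussian_filter(np.hypot(gx, gy), sigma)
    e_ir, e_vis = grad_energy(ir), grad_energy(vis)
    w_ir = e_ir / (e_ir + e_vis + 1e-8)   # normalized weight map in [0, 1]
    return w_ir * ir + (1.0 - w_ir) * vis
```

Smoothing the gradient energy before normalizing keeps the weight map spatially coherent, which is one common way to avoid blocky transitions between the two sources.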

References

[1] L. Zhang, X. Yang, Z. Wan, D. Cao, and Y. Lin, "A real-time FPGA implementation of infrared and visible image fusion using guided filter and saliency detection," Sensors, vol. 22, no. 21, Art. no. 8487, 2022.
[2] G. Terren-Serrano and M. Martinez-Ramon, "Deep learning for intra-hour solar forecasting with fusion of features extracted from infrared sky images," Inf. Fusion, vol. 95, pp. 42-61, Jul. 2023.
[3] N. Singh and A. K. Bhandari, "Noise aware L-LP decomposition-based enhancement in extremely low light conditions with web application," IEEE Trans. Consum. Electron., vol. 68, no. 2, pp. 161-169, May 2022. [Online]. Available: https://api.semanticscholar.org/CorpusID:248879305
[4] Y. He, X. Jin, Q. Jiang, Z. Cheng, P. Wang, and W. Zhou, "LKAT-GAN: A GAN for thermal infrared image colorization based on large kernel and AttentionUNet-transformer," IEEE Trans. Consum. Electron., vol. 69, no. 3, pp. 478-489, Aug. 2023. [Online]. Available: https://api.semanticscholar.org/CorpusID:259461207
[5] W. Kong, Y. Lei, and H. Zhao, "Adaptive fusion method of visible light and infrared images based on non-subsampled shearlet transform and fast non-negative matrix factorization," Infrared Phys. Technol., vol. 67, pp. 161-172, Nov. 2014.
[6] Y. Zhou, A. Mayyas, and M. A. Omar, "Principal component analysis-based image fusion routine with application to automotive stamping split detection," Res. Nondestruct. Eval., vol. 22, pp. 76-91, Mar. 2011.
[7] D. P. Bavirisetti and R. Dhuli, "Two-scale image fusion of visible and infrared images using saliency detection," Infrared Phys. Technol., vol. 76, pp. 52-64, May 2016.
[8] B. Yang, C. Yang, and G. Huang, "Efficient image fusion with approximate sparse representation," Int. J. Wavelets Multiresolution Inf. Process., vol. 14, no. 4, 2016.
[9] Y. Liu, S. Liu, and Z. Wang, "A general framework for image fusion based on multi-scale transform and sparse representation," Inf. Fusion, vol. 24, pp. 147-164, Jul. 2015.
[10] S. Li, X. Kang, and J. Hu, "Image fusion with guided filtering," IEEE Trans. Image Process., vol. 22, pp. 2864-2875, 2013.
[11] S. Li, B. Yang, and J. Hu, "Performance comparison of different multi-resolution transforms for image fusion," Inf. Fusion, vol. 12, no. 2, pp. 74-84, 2011.
[12] B. K. S. Kumar, "Multifocus and multispectral image fusion based on pixel significance using discrete cosine harmonic wavelet transform," Signal Image Video Process., vol. 7, no. 6, pp. 1125-1143, 2013.
[13] J. Ma, W. Yu, P. Liang, C. Li, and J. Jiang, "FusionGAN: A generative adversarial network for infrared and visible image fusion," Inf. Fusion, vol. 48, pp. 11-26, Aug. 2019.
[14] H. Li and X.-J. Wu, "DenseFuse: A fusion approach to infrared and visible images," IEEE Trans. Image Process., vol. 28, pp. 2614-2623, 2019.
[15] Y. Liu, X. Chen, R. K. Ward, and Z. J. Wang, "Image fusion with convolutional sparse representation," IEEE Signal Process. Lett., vol. 23, no. 12, pp. 1882-1886, Dec. 2016.
[16] C. Yu and L. Z. Hou, "Realization of a real-time image denoising system for dashboard camera applications," IEEE Trans. Consum. Electron., vol. 68, no. 2, pp. 181-190, May 2022. [Online]. Available: https://api.semanticscholar.org/CorpusID:248879599
[17] H. Xu, M. Gong, X. Tian, J. Huang, and J. Ma, "CUFD: An encoder-decoder network for visible and infrared image fusion based on common and unique feature decomposition," Comput. Vis. Image Underst., vol. 218, Apr. 2022.
[18] Y. Zhang, Y. Liu, P. Sun, H. Yan, X. Zhao, and L. Zhang, "IFCNN: A general image fusion framework based on convolutional neural network," Inf. Fusion, vol. 54, pp. 99-118, Feb. 2020.
[19] Y. Gao, S. Ma, and J. Liu, "DCDR-GAN: A densely connected disentangled representation generative adversarial network for infrared and visible image fusion," IEEE Trans. Circuits Syst. Video Technol., vol. 33, no. 2, pp. 549-561, Feb. 2023.
[20] J. Ma, H. Xu, J. Jiang, X. Mei, and X.-P. Zhang, "DDcGAN: A dual-discriminator conditional generative adversarial network for multi-resolution image fusion," IEEE Trans. Image Process., vol. 29, pp. 4980-4995, 2020.
[21] H. Zhang, Z. Le, Z. Shao, H. Xu, and J. Ma, "MFF-GAN: An unsupervised generative adversarial network with adaptive and gradient joint constraints for multi-focus image fusion," Inf. Fusion, vol. 66, pp. 40-53, Feb. 2021.
[22] Y. Liu, X. Chen, J. Cheng, H. Peng, and Z. Wang, "Infrared and visible image fusion with convolutional neural networks," Int. J. Wavelets Multiresolution Inf. Process., vol. 16, no. 3, 2018.
[23] W. Jia, Z. Song, and Z. Li, "Multi-scale exposure fusion via content adaptive edge-preserving smoothing pyramids," IEEE Trans. Consum. Electron., vol. 68, no. 4, pp. 317-326, Nov. 2022. [Online]. Available: https://api.semanticscholar.org/CorpusID:251776964
[24] C. Zheng, W. Jia, S. Wu, and Z. Li, "Neural augmented exposure interpolation for two large-exposure-ratio images," IEEE Trans. Consum. Electron., vol. 69, no. 1, pp. 87-97, Feb. 2023. [Online]. Available: https://api.semanticscholar.org/CorpusID:256392368
[25] S. S. Gupta, S. Hossain, and K.-D. Kim, "HDR-like image from pseudo-exposure image fusion: A genetic algorithm approach," IEEE Trans. Consum. Electron., vol. 67, no. 2, pp. 119-128, May 2021. [Online]. Available: https://api.semanticscholar.org/CorpusID:233650036
[26] C. Jun, C. Lei, L. Wei, and Y. Yang, "Infrared and visible image fusion via gradientlet filter and salience-combined map," Multimedia Tools Appl., vol. 83, pp. 57223-57241, Dec. 2024.
[27] Y. Zhang and H. J. Lee, "Multisensor infrared and visible image fusion via double joint edge preservation filter and nonglobally saliency gradient operator," IEEE Sensors J., vol. 23, no. 9, pp. 10252-10267, May 2023.
[28] L. Wang, B. Li, and L. F. Tian, "EGGDD: An explicit dependency model for multi-modal medical image fusion in shift-invariant shearlet transform domain," Inf. Fusion, vol. 19, pp. 29-37, Sep. 2014.
[29] J. Ma and Y. Zhou, "Infrared and visible image fusion via gradientlet filter," Comput. Vis. Image Underst., vol. 197, Aug. 2020.
[30] K. He, J. Sun, and X. Tang, "Guided image filtering," IEEE Trans. Pattern Anal. Mach. Intell., vol. 35, no. 6, pp. 1397-1409, Jun. 2013.