A Semantically Nonredundant Continuous-Scale Feature Network for Panchromatic and Multispectral Classification

IEEE Journals & Magazine | IEEE Xplore


Abstract:

In recent years, panchromatic (PAN) and multispectral (MS) images, as a type of multimodal remote sensing data, have attracted increasing attention for their classification problems. However, effectively representing the size variations of targets in remote sensing images, and reducing redundant representations of the deep semantic features of different modalities, remain challenges for improving classification accuracy. In this article, we propose a semantically nonredundant continuous-scale feature network (SNCF-Net) for PAN and MS classification, consisting of two modules: the texture-enhanced continuous-scale input generation (TCIG) module and the cross-modal feature kernel interaction (CMKI) module. By simulating how the human eye adjusts viewing distance to observe objects of different sizes, we employ 3-D convolution to extract features from the continuous-scale images generated by the TCIG module, enabling optimal feature representation of objects in remote sensing images. Additionally, the texture enhancement (TE) strategy in the TCIG module alleviates texture diffusion in scale space, strengthening the network's ability to represent texture features. Subsequently, the CMKI module exploits the response differences between features to generate convolution kernels from deep feature maps, enabling feature interaction between the PAN and MS modalities. This reduces redundant representations of essential image content in the deep features of the two modalities, facilitating a better mapping between dual-modal features and categories. Our results achieve state-of-the-art performance on multiple datasets. The code is available at https://github.com/Xidian-AIGroup190726/SNCFNet.
Article Sequence Number: 5407815
Date of Publication: 12 August 2024


I. Introduction

In recent years, there has been increasing interest in fusing different remote sensing data sources to improve image classification accuracy [1], [2], [3]. In particular, the fusion of panchromatic (PAN) and multispectral (MS) data has received considerable attention due to the complementary information provided by these two sources [4].
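The two mechanisms described in the abstract can be illustrated with a minimal PyTorch sketch. This is an assumption-laden toy, not the authors' implementation: the function names (`build_scale_stack`, `kernel_interaction`), the choice of scales, and the use of average pooling to derive a depthwise kernel are all illustrative stand-ins for the TCIG scale-stack idea and the CMKI dynamic-kernel interaction.

```python
# Illustrative sketch only; names and hyperparameters are assumptions,
# not taken from the SNCF-Net paper or repository.
import torch
import torch.nn.functional as F

def build_scale_stack(img, scales=(1.0, 0.75, 0.5)):
    """TCIG-style input: resample the image at several scales, restore
    each level to the original size, and stack along a new depth axis
    so a 3-D convolution can slide across scale."""
    h, w = img.shape[-2:]
    levels = []
    for s in scales:
        down = F.interpolate(img, scale_factor=s, mode="bilinear",
                             align_corners=False)
        levels.append(F.interpolate(down, size=(h, w), mode="bilinear",
                                    align_corners=False))
    return torch.stack(levels, dim=2)        # (B, C, S, H, W)

def kernel_interaction(feat_src, feat_dst, k=3):
    """CMKI-style interaction: derive a dynamic depthwise kernel from
    one modality's deep features and convolve the other modality's
    features with it, so shared content is exchanged rather than
    represented redundantly in both branches."""
    b, c, _, _ = feat_src.shape
    kern = F.adaptive_avg_pool2d(feat_src, k)      # (B, C, k, k)
    kern = kern.reshape(b * c, 1, k, k)            # one filter per channel
    x = feat_dst.reshape(1, b * c, *feat_dst.shape[-2:])
    out = F.conv2d(x, kern, padding=k // 2, groups=b * c)
    return out.reshape(b, c, *feat_dst.shape[-2:])

pan = torch.randn(2, 1, 64, 64)                    # toy PAN patch
stack = build_scale_stack(pan)                     # (2, 1, 3, 64, 64)
conv3d = torch.nn.Conv3d(1, 8, kernel_size=3, padding=1)
scale_feat = conv3d(stack)                         # (2, 8, 3, 64, 64)

ms_feat = torch.randn(2, 16, 16, 16)               # toy deep MS features
pan_feat = torch.randn(2, 16, 16, 16)              # toy deep PAN features
mixed = kernel_interaction(ms_feat, pan_feat)      # (2, 16, 16, 16)
```

Stacking scales along a depth axis lets one 3-D kernel respond jointly to a target's appearance at several resolutions, while the grouped convolution applies a per-sample, per-channel kernel derived from the opposite modality.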
