Abstract:
Combining convolutional neural networks (CNNs) and transformers is a crucial direction in remote sensing image semantic segmentation. However, because the two architectures differ in their spatial information focus and feature extraction mechanisms, existing feature transfer and fusion strategies do not effectively integrate the advantages of both. To address these issues, we propose a CNN-transformer hybrid network for precise remote sensing image semantic segmentation. We design a novel Swin Transformer block that optimizes feature extraction and enables the model to handle remote sensing images of arbitrary sizes. Additionally, we introduce an Edge Spatial Attention module that focuses attention on local edge structures, effectively integrating global features and local details and facilitating efficient information flow between the transformer encoder and the CNN decoder. Finally, a multi-scale convolutional decoder fully leverages both the global information from the transformer and the local features from the CNN, leading to accurate segmentation results. Our network achieves state-of-the-art performance on the Vaihingen and Potsdam datasets, reaching mIoU and F1 scores of 67.37% and 79.82%, and 72.39% and 83.68%, respectively. Our code is publicly available at: https://github.com/TarsDolores/LZ
Published in: IEEE Signal Processing Letters (Early Access)
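
The abstract does not detail how the Edge Spatial Attention module bridges the transformer encoder and the CNN decoder, so the following is a minimal, hypothetical PyTorch sketch of one way such an edge-aware spatial attention bridge could be wired. The class name EdgeSpatialAttention, the fixed Sobel edge extraction, the 7x7 attention convolution, and the residual projection are all illustrative assumptions, not the authors' published implementation.

# Illustrative sketch only: every design choice below (Sobel-based edge maps,
# pooled descriptors, 7x7 spatial-attention convolution, residual projection)
# is assumed for demonstration and is not taken from the paper.
import torch
import torch.nn as nn
import torch.nn.functional as F


class EdgeSpatialAttention(nn.Module):
    """Hypothetical bridge between a transformer encoder and a CNN decoder:
    re-weights encoder features with a spatial map biased toward edge structures."""

    def __init__(self, channels: int):
        super().__init__()
        # Fixed Sobel kernels approximate local edge responses (assumed design).
        sobel_x = torch.tensor([[-1., 0., 1.], [-2., 0., 2.], [-1., 0., 1.]])
        sobel_y = sobel_x.t()
        self.register_buffer("sobel", torch.stack([sobel_x, sobel_y]).unsqueeze(1))
        # 2 edge maps + mean- and max-pooled channel descriptors -> 1 attention map.
        self.attn = nn.Conv2d(4, 1, kernel_size=7, padding=3)
        self.proj = nn.Conv2d(channels, channels, kernel_size=1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (B, C, H, W) feature map from a transformer encoder stage.
        mean_pool = x.mean(dim=1, keepdim=True)          # channel-averaged map
        max_pool = x.amax(dim=1, keepdim=True)           # channel-max map
        edges = F.conv2d(mean_pool, self.sobel, padding=1)  # (B, 2, H, W) edge cues
        attn_in = torch.cat([edges, mean_pool, max_pool], dim=1)
        attn = torch.sigmoid(self.attn(attn_in))         # (B, 1, H, W) spatial weights
        return self.proj(x * attn) + x                   # residual edge-aware re-weighting


if __name__ == "__main__":
    feats = torch.randn(2, 96, 64, 64)   # e.g. a Swin stage output reshaped to NCHW
    esa = EdgeSpatialAttention(channels=96)
    print(esa(feats).shape)              # torch.Size([2, 96, 64, 64])

In a hybrid encoder-decoder of the kind the abstract describes, one such module could be inserted per skip connection, so that the CNN decoder receives transformer features already re-weighted toward local edge structure before multi-scale fusion.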