
MAENet: Multiple Attention Encoder–Decoder Network for Farmland Segmentation of Remote Sensing Images



Abstract:

With the rapid development of computer vision, semantic segmentation, as an important part of the field, has achieved notable results in different applications. However, in the farmland segmentation scenario of remote sensing images, the ability of common semantic segmentation methods to restore farmland edges and identify narrow farmland ridges still needs improvement. Therefore, in this letter, a semantic segmentation method for farmland segmentation, the multiple attention encoder–decoder network (MAENet), is proposed. First, a dual-pooling efficient channel attention (DPECA) module is designed and embedded in the backbone to improve the efficiency of feature extraction; second, a dual-feature attention (DFA) module is proposed to extract contextual information from high-level features; finally, a global-guidance information upsample (GIU) module is added to the decoder to reduce the influence of redundant information on feature fusion. We use three self-made farmland image datasets of UAV data to train MAENet and compare it with other methods. The results show that MAENet improves both segmentation and generalization performance over the compared methods, reaching an MIoU of 93.74% and a Kappa coefficient of 96.74% on the farmland multi-classification test set.
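As a concrete illustration only: the abstract names the DPECA module but this page does not describe its internals, so the PyTorch sketch below assumes a plausible design that combines the lightweight 1-D channel convolution of ECA-Net [11] with the average/max dual pooling used in CBAM [10]. The class name DPECA, the kernel size, and the summation of the two pooled branches are illustrative assumptions, not the authors' published implementation.

# Hypothetical dual-pooling efficient channel attention (DPECA) sketch.
# Assumption: ECA-style 1-D convolution [11] applied to both a global
# average-pooled and a global max-pooled channel descriptor, as in CBAM [10].
import torch
import torch.nn as nn


class DPECA(nn.Module):
    """Channel attention: shared 1-D conv over avg- and max-pooled descriptors."""

    def __init__(self, channels: int, k_size: int = 3):
        super().__init__()
        self.avg_pool = nn.AdaptiveAvgPool2d(1)   # global average pooling
        self.max_pool = nn.AdaptiveMaxPool2d(1)   # global max pooling
        # ECA-style cross-channel interaction via a cheap 1-D convolution
        self.conv = nn.Conv1d(1, 1, kernel_size=k_size,
                              padding=(k_size - 1) // 2, bias=False)
        self.sigmoid = nn.Sigmoid()

    def _attend(self, pooled: torch.Tensor) -> torch.Tensor:
        # pooled: (B, C, 1, 1) -> (B, 1, C) so the conv slides across channels
        y = pooled.squeeze(-1).transpose(-1, -2)
        y = self.conv(y)
        return y.transpose(-1, -2).unsqueeze(-1)  # back to (B, C, 1, 1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # combine both pooled branches, then reweight the input channels
        att = self._attend(self.avg_pool(x)) + self._attend(self.max_pool(x))
        return x * self.sigmoid(att)


if __name__ == "__main__":
    feats = torch.randn(2, 64, 32, 32)            # dummy backbone feature map
    print(DPECA(64)(feats).shape)                 # torch.Size([2, 64, 32, 32])

Such a block is intended to be dropped after convolutional stages of the backbone so that channel reweighting adds only a handful of parameters per stage; how MAENet actually places and parameterizes DPECA is described in the full letter.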
Published in: IEEE Geoscience and Remote Sensing Letters ( Volume: 19)
Article Sequence Number: 2503005
Date of Publication: 22 December 2021


I. Introduction

In recent years, with the wide application of artificial intelligence in agriculture, smart agriculture has become a major direction for modern and future agricultural development, and the emergence of UAV remote sensing is further advancing it [1]. Farmland segmentation based on remote sensing images is an important research direction for smart agriculture and a key foundation of smart farmland management. Therefore, studying an accurate farmland segmentation method that can be applied to high-spatial-resolution remote sensing images is of great significance for the development of smart agriculture.

References
[1] P. Tripicchio, M. Satler, G. Dabisias, E. Ruffaldi, and C. A. Avizzano, "Towards smart farming and sustainable agriculture with drones," Proc. Int. Conf. Intell. Environ., pp. 140-143, Jul. 2015.
[2] Z. Li, W. Shi, H. Zhang, and M. Hao, "Change detection based on Gabor wavelet features for very high resolution remote sensing images," IEEE Geosci. Remote Sens. Lett., vol. 14, no. 5, pp. 783-787, May 2017.
[3] A. Krizhevsky, I. Sutskever, and G. E. Hinton, "ImageNet classification with deep convolutional neural networks," Commun. ACM, vol. 60, no. 6, pp. 84-90, May 2017.
[4] I. Demir et al., "DeepGlobe 2018: A challenge to parse the Earth through satellite images," Proc. IEEE/CVF Conf. Comput. Vis. Pattern Recognit. Workshops (CVPRW), pp. 172-181, Jun. 2018.
[5] X.-Y. Tong, Q. Lu, G.-S. Xia, and L. Zhang, "Large-scale land cover classification in Gaofen-2 satellite imagery," Proc. IEEE Int. Geosci. Remote Sens. Symp. (IGARSS), pp. 3599-3602, Jul. 2018.
[6] K. He, X. Zhang, S. Ren, and J. Sun, "Deep residual learning for image recognition," Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 770-778, Jun. 2016.
[7] J. Hu, L. Shen, and G. Sun, "Squeeze-and-excitation networks," Proc. IEEE/CVF Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 7132-7141, Jun. 2018.
[8] M. Liu, Q. Shi, A. Marinoni, D. He, X. Liu, and L. Zhang, "Super-resolution-based change detection network with stacked attention module for images with different resolutions," IEEE Trans. Geosci. Remote Sens., Jul. 2021.
[9] Q. Shi, M. Liu, S. Li, X. Liu, F. Wang, and L. Zhang, "A deeply supervised attention metric-based network and an open aerial image dataset for remote sensing change detection," IEEE Trans. Geosci. Remote Sens., Jun. 2021.
[10] S. Woo, J. Park, J.-Y. Lee, and I. S. Kweon, "CBAM: Convolutional block attention module," Proc. Eur. Conf. Comput. Vis. (ECCV), pp. 3-19, Sep. 2018.
[11] Q. Wang, B. Wu, P. Zhu, P. Li, W. Zuo, and Q. Hu, "ECA-Net: Efficient channel attention for deep convolutional neural networks," Proc. IEEE/CVF Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 11531-11539, Jun. 2020.
[12] J. Long, E. Shelhamer, and T. Darrell, "Fully convolutional networks for semantic segmentation," Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 3431-3440, Jun. 2015.
[13] O. Ronneberger, P. Fischer, and T. Brox, "U-Net: Convolutional networks for biomedical image segmentation," Proc. Int. Conf. Med. Image Comput. Comput.-Assist. Intervent., pp. 234-241, Oct. 2015.
[14] V. Badrinarayanan, A. Kendall, and R. Cipolla, "SegNet: A deep convolutional encoder-decoder architecture for image segmentation," IEEE Trans. Pattern Anal. Mach. Intell., vol. 39, no. 12, pp. 2481-2495, Dec. 2017.
[15] H. Zhao, J. Shi, X. Qi, X. Wang, and J. Jia, "Pyramid scene parsing network," Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 6230-6239, Jul. 2017.
[16] L.-C. Chen, Y. Zhu, G. Papandreou, F. Schroff, and H. Adam, "Encoder-decoder with atrous separable convolution for semantic image segmentation," Proc. Eur. Conf. Comput. Vis. (ECCV), pp. 801-818, 2018.
[17] J. Fu et al., "Dual attention network for scene segmentation," Proc. IEEE/CVF Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 3141-3149, Jun. 2019.
[18] H. Li, P. Xiong, J. An, and L. Wang, "Pyramid attention network for semantic segmentation," arXiv:1805.10180, 2018.
[19] H. Zhang et al., "Context encoding for semantic segmentation," Proc. IEEE/CVF Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 7151-7160, Jun. 2018.
[20] Y. Kim and Y. Kim, "Improved classification accuracy based on the output-level fusion of high-resolution satellite images and airborne LiDAR data in urban area," IEEE Geosci. Remote Sens. Lett., vol. 11, no. 3, pp. 636-640, Mar. 2014.
[21] M. Yang, K. Yu, C. Zhang, Z. Li, and K. Yang, "DenseASPP for semantic segmentation in street scenes," Proc. IEEE/CVF Conf. Comput. Vis. Pattern Recognit., pp. 3684-3692, Jun. 2018.
