
An Empirical Study of Remote Sensing Pretraining


Abstract:

Deep learning has largely reshaped remote sensing (RS) research for aerial image understanding and achieved great success. Nevertheless, most of the existing deep models are initialized with ImageNet pretrained weights, where natural images inevitably present a large domain gap relative to aerial images, probably limiting the fine-tuning performance on downstream aerial scene tasks. This issue motivates us to conduct an empirical study of RS pretraining (RSP) on aerial images. To this end, we train different networks from scratch with the help of the largest RS scene recognition dataset to date, MillionAID, to obtain a series of RS pretrained backbones, including both convolutional neural networks (CNNs) and vision transformers, such as Swin and ViTAE, which have shown promising performance on computer vision tasks. Then, we investigate the impact of RSP on representative downstream tasks, including scene recognition, semantic segmentation, object detection, and change detection, using these CNN and vision transformer backbones. The empirical study shows that RSP can help deliver distinctive performance on scene recognition tasks and in perceiving RS-related semantics, such as "Bridge" and "Airplane." We also find that, although RSP mitigates the data discrepancy of traditional ImageNet pretraining on RS images, it may still suffer from task discrepancy, where downstream tasks require representations different from those learned for scene recognition. These findings call for further research efforts on both large-scale pretraining datasets and effective pretraining methods. The codes and pretrained models will be released at https://github.com/ViTAE-Transformer/ViTAE-Transformer-Remote-Sensing.
Article Sequence Number: 5608020
Date of Publication: 25 May 2022


I. Introduction

With the development of geoinformatics technology, the field of Earth observation has witnessed significant progress, and various remote sensing (RS) sensors and devices have come into wide use. Among the resulting data sources, the aerial image, with the advantages of real-time acquisition, abundance, and easy access, has become one of the most important in Earth vision, serving a series of practical tasks such as precision agriculture [1], [2] and environmental monitoring [3]. In these applications, aerial scene recognition has been a fundamental and active research topic over the past years. However, because of the distinctive characteristics of aerial images, efficiently understanding aerial scenes remains challenging.
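To make the RSP paradigm described in the abstract concrete, the following PyTorch sketch shows one plausible way to fine-tune an RS-pretrained backbone on a downstream aerial scene recognition dataset. It is a minimal illustration under stated assumptions, not the authors' released code: the checkpoint filename, the "model" key, and the 45-class downstream head are hypothetical, while the actual pretrained weights are distributed via the repository linked in the abstract.

# Minimal sketch (assumed, not the authors' released code) of RSP-style
# fine-tuning: initialize a backbone from an RS-pretrained checkpoint,
# swap the classifier head, and fine-tune on a downstream scene dataset.
import torch
import torch.nn as nn
from torchvision.models import resnet50

NUM_DOWNSTREAM_CLASSES = 45  # hypothetical downstream label count

# Build the backbone with random initialization; weights come from the
# RS-pretrained checkpoint rather than ImageNet.
model = resnet50(weights=None)

# Hypothetical checkpoint path; real files are released in the repo.
ckpt = torch.load("rsp_resnet50_millionaid.pth", map_location="cpu")
state = ckpt.get("model", ckpt)  # unwrap if stored under a "model" key

# Discard the pretraining classifier ("fc.*"): its output size matches
# the MillionAID label set, not the downstream task.
state = {k: v for k, v in state.items() if not k.startswith("fc.")}
model.load_state_dict(state, strict=False)

# New head for the downstream task; the whole network is fine-tuned.
model.fc = nn.Linear(model.fc.in_features, NUM_DOWNSTREAM_CLASSES)

optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4, weight_decay=0.05)
criterion = nn.CrossEntropyLoss()

def train_step(images: torch.Tensor, labels: torch.Tensor) -> float:
    """One fine-tuning step on a mini-batch of aerial images."""
    optimizer.zero_grad()
    loss = criterion(model(images), labels)
    loss.backward()
    optimizer.step()
    return loss.item()

The same load-then-fine-tune flow applies to the transformer backbones (Swin, ViTAE) studied in the paper, although their heads, schedules, and per-task decoders (e.g., for segmentation or detection) differ from this classification-only sketch.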

References
[1] X. Zhang, Y. Sun, K. Shang, L. Zhang and S. Wang, "Crop classification based on feature band set construction and object-oriented approach using hyperspectral images", IEEE J. Sel. Topics Appl. Earth Observ. Remote Sens., vol. 9, no. 9, pp. 4117-4128, Sep. 2016.
[2] J. Zhang and D. Tao, "Empowering things with intelligence: A survey of the progress, challenges, and opportunities in artificial intelligence of things", IEEE Internet Things J., vol. 8, no. 10, pp. 7789-7817, May 2021.
[3] X. Yang and Y. Yu, "Estimating soil salinity under various moisture conditions: An experimental study", IEEE Trans. Geosci. Remote Sens., vol. 55, no. 5, pp. 2525-2533, May 2017.
[4] M. J. Swain and D. H. Ballard, "Color indexing", Int. J. Comput. Vis., vol. 7, no. 1, pp. 11-32, Nov. 1991.
[5] R. M. Haralick, K. Shanmugam and I. Dinstein, "Textural features for image classification", IEEE Trans. Syst. Man Cybern., vol. SMC-3, no. 6, pp. 610-621, Nov. 1973.
[6] A. Oliva and A. Torralba, "Modeling the shape of the scene: A holistic representation of the spatial envelope", Int. J. Comput. Vis., vol. 42, no. 3, pp. 145-175, 2001.
[7] O. A. B. Penatti, K. Nogueira and J. A. D. Santos, "Do deep features generalize from everyday objects to remote sensing and aerial scenes domains?", Proc. IEEE Conf. Comput. Vis. Pattern Recognit. Workshops (CVPRW), pp. 44-51, Jun. 2015.
[8] Y. Bengio, A. Courville and P. Vincent, "Representation learning: A review and new perspectives", IEEE Trans. Pattern Anal. Mach. Intell., vol. 35, no. 8, pp. 1798-1828, Aug. 2013.
[9] T. Hofmann, "Unsupervised learning by probabilistic latent semantic analysis", Mach. Learn., vol. 42, no. 1, pp. 177-196, Jan. 2001.
[10] J. Philbin, O. Chum, M. Isard, J. Sivic and A. Zisserman, "Object retrieval with large vocabularies and fast spatial matching", Proc. IEEE Conf. Comput. Vis. Pattern Recognit., pp. 1-8, Jun. 2007.
[11] K. Simonyan and A. Zisserman, "Very deep convolutional networks for large-scale image recognition", Proc. ICLR, pp. 1-14, May 2015.
[12] K. He, X. Zhang, S. Ren and J. Sun, "Deep residual learning for image recognition", Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 770-778, Jun. 2016.
[13] Z. Liu et al., "Swin transformer: Hierarchical vision transformer using shifted windows", Proc. IEEE/CVF Int. Conf. Comput. Vis. (ICCV), pp. 10012-10022, Oct. 2021.
[14] Y. Xu, Q. Zhang, J. Zhang and D. Tao, "ViTAE: Vision transformer advanced by exploring intrinsic inductive bias", Proc. NeurIPS, vol. 34, pp. 1-14, 2021.
[15] T. Xiao, Y. Liu, B. Zhou, Y. Jiang and J. Sun, "Unified perceptual parsing for scene understanding", Proc. ECCV, pp. 418-434, 2018.
[16] Q. Zhang, J. Zhang, W. Liu and D. Tao, "Category anchor-guided unsupervised domain adaptation for semantic segmentation", Proc. NeurIPS, vol. 32, pp. 1-11, 2019.
[17] L. Gao, J. Zhang, L. Zhang and D. Tao, "DSP: Dual soft-paste for unsupervised domain adaptive semantic segmentation", Proc. 29th ACM Int. Conf. Multimedia, pp. 2825-2833, Oct. 2021.
[18] D. Wang, B. Du, L. Zhang and Y. Xu, "Adaptive spectral-spatial multiscale contextual feature extraction for hyperspectral image classification", IEEE Trans. Geosci. Remote Sens., vol. 59, no. 3, pp. 2461-2477, Mar. 2020.
[19] L. Zhang, M. Lan, J. Zhang and D. Tao, "Stagewise unsupervised domain adaptation with adversarial self-training for road segmentation of remote-sensing images", IEEE Trans. Geosci. Remote Sens., vol. 60, pp. 1-13, 2022.
[20] D. Wang, B. Du and L. Zhang, "Fully contextual network for hyperspectral scene parsing", IEEE Trans. Geosci. Remote Sens., vol. 60, pp. 1-16, 2022.
[21] G. Cheng, J. Han and X. Lu, "Remote sensing image scene classification: Benchmark and state of the art", Proc. IEEE, vol. 105, no. 10, pp. 1865-1883, Oct. 2017.
[22] J. Deng, W. Dong, R. Socher, L.-J. Li, K. Li and L. Fei-Fei, "ImageNet: A large-scale hierarchical image database", Proc. IEEE Conf. Comput. Vis. Pattern Recognit., pp. 248-255, Jun. 2009.
[23] K. Xu, H. Huang, P. Deng and Y. Li, "Deep feature aggregation framework driven by graph convolutional network for scene classification in remote sensing", IEEE Trans. Neural Netw. Learn. Syst., Apr. 2021.
[24] H. Sun, S. Li, X. Zheng and X. Lu, "Remote sensing scene classification by gated bidirectional network", IEEE Trans. Geosci. Remote Sens., vol. 58, no. 1, pp. 82-96, Jan. 2020.
[25] Q. Zhao, Y. Ma, S. Lyu and L. Chen, "Embedded self-distillation in compact multibranch ensemble network for remote sensing scene classification", IEEE Trans. Geosci. Remote Sens., vol. 60, pp. 1-15, 2022.
[26] Q. Zhao, S. Lyu, Y. Li, Y. Ma and L. Chen, "MGML: Multigranularity multilevel feature ensemble network for remote sensing scene classification", IEEE Trans. Neural Netw. Learn. Syst., Sep. 2021.
[27] J. Kang, R. Fernandez-Beltran, P. Duan, S. Liu and A. J. Plaza, "Deep unsupervised embedding for remotely sensed images based on spatially augmented momentum contrast", IEEE Trans. Geosci. Remote Sens., vol. 59, no. 3, pp. 2598-2610, Mar. 2021.
[28] Y. Long et al., "On creating benchmark dataset for aerial image interpretation: Reviews, guidances, and Million-AID", IEEE J. Sel. Topics Appl. Earth Observ. Remote Sens., vol. 14, pp. 4205-4230, 2021.
[29] Q. Zhang, Y. Xu, J. Zhang and D. Tao, "ViTAEv2: Vision transformer advanced by exploring inductive bias for image recognition and beyond", arXiv:2202.10108, 2022.
[30] X. Wang, S. Wang, C. Ning and H. Zhou, "Enhanced feature pyramid network with deep semantic embedding for remote sensing scene classification", IEEE Trans. Geosci. Remote Sens., vol. 59, no. 9, pp. 7918-7932, Sep. 2021.
