
Domain Adaptation for SAR Target Recognition with Limited Training Data Via Rigid Transformation-Based Feature Conversion


Abstract:

This paper presents a semi-supervised domain adaptation method for SAR target recognition. The proposed method requires only a few labeled real samples. The challenge is that, due to the high angle sensitivity of SAR images, a network can easily overfit the training data at seen angles and fail to classify data at unseen angles. To overcome this, we design a conversion module that can infer what the CNN features of images at unseen angles look like. This conversion module is designed as a rigid-body transformation followed by a conditional generative network. This design enables the network to gain a high-level 3D understanding from 2D images; thus, we name our network the 3D converter. The network learns mainly from fully labeled simulated SAR data, and the knowledge is then adapted to fit the scarcely labeled real SAR data. Our method improves performance over the baseline by 3.48% on the MSTAR benchmark when only 10 images per class are labeled. It also achieves results comparable to strongly supervised methods.
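The conversion module described above can be sketched roughly as follows: the CNN feature map of an image at a seen azimuth is rotated by a rigid in-plane transformation toward the target azimuth, and a small conditional generator then refines the rotated features. This is only a minimal illustration of the idea; the layer sizes, the angle-conditioning scheme, and the class name below are assumptions, not the paper's exact configuration.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class FeatureConverter(nn.Module):
    """Illustrative sketch of a rigid-transformation-based feature converter:
    rotate a CNN feature map by the azimuth offset to an unseen view, then
    refine it with a conditional generator. All sizes are assumptions."""

    def __init__(self, channels=64):
        super().__init__()
        # Conditional generator: refines the rotated features, conditioned on
        # the rotation angle broadcast as an extra input channel (assumption).
        self.refine = nn.Sequential(
            nn.Conv2d(channels + 1, channels, 3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, 3, padding=1),
        )

    def forward(self, feat, delta_azimuth):
        # feat: (B, C, H, W) features at a seen azimuth
        # delta_azimuth: (B,) rotation (radians) toward the unseen azimuth
        b, c, h, w = feat.shape
        cos, sin = torch.cos(delta_azimuth), torch.sin(delta_azimuth)
        # 2x3 affine matrices encoding an in-plane rigid rotation
        theta = torch.zeros(b, 2, 3, device=feat.device)
        theta[:, 0, 0], theta[:, 0, 1] = cos, -sin
        theta[:, 1, 0], theta[:, 1, 1] = sin, cos
        grid = F.affine_grid(theta, feat.shape, align_corners=False)
        rotated = F.grid_sample(feat, grid, align_corners=False)
        # Condition on the angle and let the generator add view-dependent detail
        angle_map = delta_azimuth.view(b, 1, 1, 1).expand(b, 1, h, w)
        return rotated + self.refine(torch.cat([rotated, angle_map], dim=1))
```

For example, FeatureConverter(64)(feat, delta) would map features extracted at a labeled azimuth to an estimate of the features at an azimuth offset by delta, which can then be fed to the classifier.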
Date of Conference: 17-22 July 2022
Date Added to IEEE Xplore: 28 September 2022
Conference Location: Kuala Lumpur, Malaysia

1. Introduction

Synthetic aperture radar (SAR) imaging plays an important role in remote sensing due to its weather-independent, all-day, and wide-range acquisition nature. However, obtaining a large number of labeled SAR images is time-consuming and expensive. Considerable research on deep learning with limited data has therefore been carried out for SAR automatic target recognition (ATR). A-ConvNet [1] is among the most popular approaches: it replaces all parameter-dense fully-connected layers with convolutional layers and achieves 99.13% accuracy on the SAR ATR benchmark dataset MSTAR [2] when trained with around 200 labeled images per class. Nevertheless, if the number of labeled samples decreases to a few per class, the model tends to overfit the training data at seen azimuths, as shown in Figure 1-left. This is because SAR images are highly sensitive to object poses and shooting angles.
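The following is a minimal sketch of the all-convolutional design used by A-ConvNet, in which the parameter-dense fully-connected layers are replaced by convolutions; the channel widths, kernel sizes, and class name are illustrative assumptions and do not reproduce the published architecture.

```python
import torch
import torch.nn as nn

class AllConvClassifier(nn.Module):
    """Sketch of an all-convolutional SAR classifier in the spirit of A-ConvNet:
    the dense layers of a conventional CNN are replaced by convolutions, and the
    final convolution directly produces class scores. Sizes are assumptions."""

    def __init__(self, num_classes=10):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 5), nn.ReLU(inplace=True), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 5), nn.ReLU(inplace=True), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 5), nn.ReLU(inplace=True), nn.MaxPool2d(2),
        )
        # The "fully-connected" stage realized with convolutions only
        self.classifier = nn.Sequential(
            nn.Conv2d(64, 128, 3), nn.ReLU(inplace=True),
            nn.Conv2d(128, num_classes, 3),
        )

    def forward(self, x):                  # x: (B, 1, H, W) SAR image chip
        logits = self.classifier(self.features(x))
        return logits.mean(dim=(2, 3))     # average over remaining spatial extent
```

Because no layer's parameter count depends on the input resolution, such a design keeps the model small, which is one reason it copes relatively well with limited training data.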
