
Progressive Mixup Augmented Teacher-Student Learning for Unsupervised Domain Adaptation



Abstract:

Unsupervised Domain Adaptation (UDA) aims to transfer knowledge learned from a labeled source domain to an unlabeled target domain, mostly by learning a domain-invariant feature representation. Currently, the best-performing UDA methods use category-level domain alignment to capture fine-grained information, yielding significantly better performance than global alignment. While successful, category-level UDA methods suffer from unreliable pseudo-labels for the target data. In this paper, we propose a UDA approach based on teacher-student learning, in which the teacher network provides more reliable target pseudo-labels for the student during training. Furthermore, we use a progressive mixup augmentation strategy that generates intermediate samples which become increasingly target-dominant as training progresses. Aligning the source and intermediate domains allows the model to gradually transfer fine-grained domain knowledge from the source to the target domain while minimizing the negative impact of noisy target pseudo-labels. This progressive mixup augmented teacher-student (PMATS) training strategy achieves state-of-the-art performance on two public UDA benchmark datasets: Office-31 and Office-Home.
Date of Conference: 08-11 October 2023
Conference Location: Kuala Lumpur, Malaysia


1. INTRODUCTION

Unsupervised Domain Adaptation (UDA) has become a popular research topic due to its necessity in applying deep learning models to real-world scenarios. Often, a domain gap exists between training data and real-world test data that degrades model performance at test time. Collecting and labeling data from every relevant domain is impractical, as it is both time-consuming and labor-intensive.
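To make the training strategy described in the abstract concrete, the sketch below illustrates one way its two ingredients could fit together: a teacher network whose weights track an exponential moving average (EMA) of the student, supplying soft pseudo-labels for unlabeled target images, and a mixup coefficient that grows from source-dominant to target-dominant over training. This is a minimal PyTorch-style sketch, not the authors' implementation; the EMA momentum value, the linear mixup schedule, and names such as `ema_update` and `progressive_mixup_step` are illustrative assumptions.

import torch
import torch.nn.functional as F

@torch.no_grad()
def ema_update(teacher, student, momentum=0.999):
    """Teacher weights track an exponential moving average of the student,
    which tends to yield more stable target pseudo-labels.
    (Momentum value is an assumption, not taken from the paper.)"""
    for t_p, s_p in zip(teacher.parameters(), student.parameters()):
        t_p.mul_(momentum).add_(s_p, alpha=1.0 - momentum)

def progressive_mixup_step(student, teacher, x_src, y_src, x_tgt,
                           step, total_steps, num_classes):
    """One training step on mixed samples that become increasingly
    target-dominant as `step` approaches `total_steps`.
    Assumes x_src and x_tgt have equal batch sizes."""
    # Teacher provides soft pseudo-labels for the unlabeled target batch.
    with torch.no_grad():
        y_tgt_soft = F.softmax(teacher(x_tgt), dim=1)
    y_src_soft = F.one_hot(y_src, num_classes).float()

    # Progressive schedule (assumed linear here): lam is the target weight,
    # so intermediate samples drift from source-like to target-like.
    lam = step / float(total_steps)
    x_mix = (1.0 - lam) * x_src + lam * x_tgt
    y_mix = (1.0 - lam) * y_src_soft + lam * y_tgt_soft

    # Soft cross-entropy on intermediate-domain samples; early in training
    # the loss is dominated by reliable source labels, which limits the
    # impact of noisy target pseudo-labels.
    log_probs = F.log_softmax(student(x_mix), dim=1)
    return -(y_mix * log_probs).sum(dim=1).mean()

In a training loop, one would back-propagate this loss through the student only and call `ema_update(teacher, student)` after each optimizer step; the teacher is never updated by gradients, which is what makes its pseudo-labels more stable than the student's own predictions.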
