1. Introduction
Person re-identification (ReID) aims to match images of the same person across different camera views. Thanks to the development of deep convolutional neural networks (CNNs) [14], supervised ReID and unsupervised domain adaptation (UDA) [42] have achieved remarkable performance. However, both require target-domain data for training. In real-world applications, a ReID system will inevitably have to search for persons in unseen domains. Domain generalization (DG) ReID, which targets this more practical setting, has therefore attracted extensive research attention.
Figure: Illustration of our idea. Since target-domain data (green) is unavailable during training, we cannot directly align the source and target distributions. Instead, we align both to a prior distribution (purple): the source domains during training and the target domains during testing. Because high-dimensional ID features are difficult to constrain to a prior distribution, they are first encoded into a latent embedding space. Sharing the same prior distribution and the same decoder guarantees that the generated feature distributions match.
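To make the alignment concrete, below is a minimal PyTorch sketch of the idea described in the figure: ID features are encoded into a low-dimensional latent space, a KL term constrains the latents to a standard Gaussian prior, and a shared decoder maps latents back into the feature space. All names and dimensions here (LatentAligner, feat_dim=2048, latent_dim=128) are illustrative assumptions, not the paper's actual implementation.

```python
# A hypothetical sketch of aligning ID features to a prior via a latent
# encoder/decoder; module names and dimensions are assumptions, not the
# paper's actual architecture.
import torch
import torch.nn as nn

class LatentAligner(nn.Module):
    """Encode high-dimensional ID features into a latent space constrained
    to a standard Gaussian prior, then decode back into feature space."""

    def __init__(self, feat_dim=2048, latent_dim=128):
        super().__init__()
        self.to_mu = nn.Linear(feat_dim, latent_dim)      # posterior mean
        self.to_logvar = nn.Linear(feat_dim, latent_dim)  # posterior log-variance
        self.decoder = nn.Sequential(                     # shared decoder
            nn.Linear(latent_dim, feat_dim),
            nn.ReLU(),
            nn.Linear(feat_dim, feat_dim),
        )

    def forward(self, id_feat):
        mu, logvar = self.to_mu(id_feat), self.to_logvar(id_feat)
        # Reparameterization trick: differentiable sampling of latents.
        z = mu + torch.randn_like(mu) * (0.5 * logvar).exp()
        recon = self.decoder(z)
        # KL divergence to N(0, I): pulls latents of any domain (source at
        # training time, target at testing time) toward the same prior.
        kl = -0.5 * (1 + logvar - mu.pow(2) - logvar.exp()).sum(dim=1).mean()
        return recon, kl

feats = torch.randn(32, 2048)       # a batch of ID features from a backbone
model = LatentAligner()
recon_feats, kl_loss = model(feats)
```

Under this reading, because both source and target latents are pulled toward the same prior and pass through the same decoder, the decoded feature distributions coincide by construction, which is exactly the guarantee the figure describes.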