1. Introduction
The task of person re-identification (re-ID) is to match the identities of a person across non-overlapping camera views [10], [39], [18], [41], [24]. Most existing methods assume that the training and testing images are captured in the same scenario. However, this assumption does not hold in many applications. For instance, person images captured on two different campuses have distinct illumination conditions and backgrounds (BG) (e.g., the Market-1501 [38] and DukeMTMC-reID [29], [41] datasets). In this situation, the bias between the data distributions of the two domains becomes large. A classifier trained directly on one dataset (i.e., the source domain) often performs poorly when tested on another dataset (i.e., the target domain). Therefore, it is important to investigate solutions to this cross-domain issue. For person re-ID, domain adaptation solutions have drawn attention in recent years [4], [7], [35], [36], [42].
Figure: Comparison between different input images for cross-domain person re-ID. Images from Market-1501 and DukeMTMC-reID show a distinct BG shift. Images generated by SPGAN [7] and PTGAN [36] do not suppress BG noise and thus retain the BG shift problem. The hard-mask solution, i.e., JPP-Net [25], damages the foreground (FG). Our SBSGAN takes all of these factors into consideration.