Abstract
Distinguishing identity-unrelated background information from discriminative identity information is a challenge in unsupervised vehicle re-identification (Re-ID). Re-ID models suffer from varying degrees of background interference caused by continuous scene variations. The recently proposed segment anything model (SAM) has demonstrated exceptional performance in zero-shot segmentation tasks, and combining SAM with vehicle Re-ID models enables efficient separation of vehicle identity information from background information. This paper proposes a method that combines SAM-driven masked autoencoder (MAE) pre-training with background-aware meta-learning for unsupervised vehicle Re-ID. The method consists of three sub-modules. First, the segmentation capability of SAM is used to separate the vehicle identity region from the background. Because SAM is not robust in exceptional situations, such as those with ambiguity or occlusion, a spatially constrained vehicle background segmentation method is presented for the vehicle Re-ID downstream task to obtain accurate background segmentation results. Second, SAM-driven MAE pre-training uses these segmentation results to select patches belonging to the vehicle and to mask the remaining patches, allowing the MAE to learn identity-sensitive features in a self-supervised manner. Finally, a background-aware meta-learning method is presented to adapt to varying degrees of background interference across scenarios by combining different background region ratios. Experiments demonstrate that the proposed method achieves state-of-the-art performance in reducing background interference variations.
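To make the SAM-driven masking step concrete, the following is a minimal sketch in PyTorch (an assumption; the abstract does not specify a framework). Given a binary vehicle mask such as one produced by SAM, it flags background-dominated patches to be masked during MAE pre-training; the function name vehicle_patch_mask and the parameters patch_size and keep_thresh are illustrative and not taken from the authors' implementation.

import torch
import torch.nn.functional as F

def vehicle_patch_mask(mask, patch_size=16, keep_thresh=0.5):
    # mask: (H, W) tensor with 1 on vehicle pixels and 0 on background.
    # Returns a (num_patches,) bool tensor where True means the patch is
    # masked (hidden) during MAE pre-training.
    coverage = F.avg_pool2d(mask.float()[None, None],
                            kernel_size=patch_size, stride=patch_size).flatten()
    # Patches dominated by background are flagged as masked, so that, as the
    # abstract describes, the MAE learns features from the vehicle patches.
    return coverage < keep_thresh

# Example: a 224x224 mask whose vehicle region covers the image centre.
m = torch.zeros(224, 224)
m[64:192, 48:208] = 1.0
masked = vehicle_patch_mask(m)   # 196 entries for a 14x14 ViT patch grid
print(int(masked.sum()), "of", masked.numel(), "patches masked")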
Funding
Supported by the National Natural Science Foundation of China under Grant Nos. 62076117 and 62166026,
and by Jiangxi Grant Nos. 20224BAB212011, 20232BAB212008, and 20232BAB202051.