Self-Supervised Learning of Whole and Component-Based Semantic Representations for Person Re-Identification

Kavli Affiliate: Cheng Peng

| First 5 Authors: Siyuan Huang, Yifan Zhou, Ram Prabhakar, Xijun Liu, Yuxiang Guo

| Summary:

Person Re-Identification (ReID) is a challenging problem, focusing on
identifying individuals across diverse settings. However, previous ReID methods
primarily concentrated on a single domain or modality, such as Clothes-Changing
ReID (CC-ReID) and video ReID. Real-world ReID is not constrained by factors
like clothes or input types. Recent approaches emphasize learning semantics
through pre-training to enhance ReID performance but are hindered by coarse
granularity, a focus on clothing, and pre-defined regions. To address these
limitations, we propose a Local Semantic Extraction (LSE) module inspired by
Interactive Segmentation Models. The LSE module captures fine-grained,
biometric, and flexible local semantics, enhancing ReID accuracy. Additionally,
we introduce Semantic ReID (SemReID), a pre-training method that leverages LSE
to learn effective semantics for seamless transfer across various ReID domains
and modalities. Extensive evaluations across nine ReID datasets demonstrate
SemReID’s robust performance across multiple domains, including
clothes-changing ReID, video ReID, unconstrained ReID, and short-term ReID. Our
findings highlight the importance of effective semantics in ReID, as SemReID
achieves strong performance without domain-specific designs.

| Search Query: ArXiv Query: search_query=au:"Cheng Peng"&id_list=&start=0&max_results=3
