Deep Representation Learning With Part Loss for Person Re-Identification
Top 1% of 2019 papers
Abstract
Learning discriminative representations for unseen person images is critical for person Re-Identification (ReID). Most current approaches learn deep representations through classification tasks, which essentially minimize the empirical classification risk on the training set. As shown in our experiments, such representations easily over-fit to a single discriminative human body part in the training set. To gain discriminative power on unseen person images, we propose a deep representation learning procedure named Part Loss Network (PL-Net), which minimizes both the empirical classification risk on training person images and the representation learning risk on unseen person images. The representation learning risk is evaluated by the proposed part loss, which automatically detects human body parts and computes the person classification loss on each part separately. Compared with the traditional global classification loss alone, simultaneously considering the part loss forces the deep network to learn representations for different body parts and thus gain discriminative power on unseen persons. Experimental results on three person ReID datasets, i.e., Market1501, CUHK03, and VIPeR, show that our representation outperforms existing deep representations.
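The combination described in the abstract (a global classification loss plus per-part classification losses) can be sketched as follows. This is a minimal illustration, not the paper's implementation: the function names, the averaging over parts, and the weighting factor `alpha` are assumptions; the actual part detection and loss weighting are defined in the paper's method section.

```python
import numpy as np

def softmax_ce(logits, label):
    # Numerically stable softmax cross-entropy for one logit vector
    # against an integer class label.
    z = logits - logits.max()
    p = np.exp(z) / np.exp(z).sum()
    return -np.log(p[label])

def pl_net_loss(global_logits, part_logits_list, label, alpha=1.0):
    """Total loss = global classification loss + weighted part loss.

    global_logits:    (C,) logits computed from the whole-body representation
    part_logits_list: list of (C,) logits, one per automatically detected
                      body part, each classified separately
    alpha:            hypothetical weight balancing the two terms
    """
    global_loss = softmax_ce(global_logits, label)
    # Part loss: classify the person from each body part alone, then average,
    # so no single part can dominate the learned representation.
    part_loss = np.mean([softmax_ce(p, label) for p in part_logits_list])
    return global_loss + alpha * part_loss
```

Because every part must predict the identity on its own, a representation that relies on one body part incurs a high part loss even when the global loss is small, which is the over-fitting behavior the part loss is designed to penalize.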