Video Person Re-identification with Competitive Snippet-Similarity Aggregation and Co-attentive Snippet Embedding
Top 1% of 2018 papers (citations over time)
Abstract
In this paper, we address video-based person re-identification with competitive snippet-similarity aggregation and co-attentive snippet embedding. Our approach divides long person sequences into multiple short video snippets and aggregates the top-ranked snippet similarities to estimate the sequence similarity. With this strategy, the intra-person visual variation within each sample is reduced for similarity estimation, while diverse appearance and temporal information are preserved. The snippet similarities are estimated by a deep neural network with a novel temporal co-attention mechanism for snippet embedding. The attention weights are obtained from a query feature, which is learned from the whole probe snippet by an LSTM network, making the resulting embeddings less affected by noisy frames. The gallery snippet shares the same query feature with the probe snippet, so the embedding of the gallery snippet presents features more relevant for comparison with the probe snippet, yielding more accurate snippet similarities. Extensive ablation studies verify the effectiveness of both the competitive snippet-similarity aggregation and the temporal co-attentive embedding. Our method significantly outperforms the current state-of-the-art approaches on multiple datasets.
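The two ideas in the abstract can be illustrated with a minimal NumPy sketch. Note the simplifications: per-frame features are assumed to be precomputed vectors, the probe query is a simple mean of probe-frame features rather than the LSTM output the paper describes, and `top_k` is a hypothetical aggregation parameter. The structure, however, mirrors the description: the gallery snippet is attended with the probe-derived query (co-attention), and only the top-ranked snippet-pair similarities are averaged (competitive aggregation).

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def coattentive_embed(frames, query):
    """Temporal attention pooling: weight each frame feature (rows of
    `frames`, shape (T, D)) by its similarity to the query vector (D,),
    then take the weighted average. In the paper the query comes from an
    LSTM over the probe snippet; here it is supplied directly (an
    assumption for this sketch)."""
    weights = softmax(frames @ query)   # (T,) attention over frames
    return weights @ frames             # (D,) attended snippet embedding

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def sequence_similarity(probe_snippets, gallery_snippets, top_k=3):
    """Competitive aggregation: embed every probe/gallery snippet pair
    with the probe-derived query shared by both sides, then average only
    the top-k snippet similarities as the sequence similarity."""
    sims = []
    for p in probe_snippets:            # each snippet: (T, D) array
        query = p.mean(axis=0)          # stand-in for the LSTM query feature
        pe = coattentive_embed(p, query)
        for g in gallery_snippets:
            ge = coattentive_embed(g, query)  # gallery shares the probe query
            sims.append(cosine(pe, ge))
    top = sorted(sims, reverse=True)[:top_k]
    return sum(top) / len(top)
```

In practice the similarity network and attention weights are learned end to end; this sketch only shows how the shared query and top-k aggregation fit together at inference time.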
Related Papers
- Expression Snippet Transformer for Robust Video-based Facial Expression Recognition (2021), 3 citations
- Investigation on effect of snippet on user's relevance judgment of documents (2010)
- Snippet Generation Method Using PCA (2009)
- Learning Snippet Relatedness Based on LSTM for Temporal Action Proposal Generation (2020)