Adversarial Metric Attack and Defense for Person Re-Identification
Abstract
Person re-identification (re-ID) has attracted much attention recently owing to its importance in video surveillance. In general, the distance metrics used to match two person images are expected to be robust under various appearance changes. However, we observe that existing distance metrics are extremely vulnerable to adversarial examples, generated by simply adding human-imperceptible perturbations to person images. Consequently, the security risk is dramatically increased when commercial re-ID systems are deployed in video surveillance. Although adversarial examples have been extensively studied in classification, they are rarely explored in metric-learning tasks such as person re-identification. The most likely reason is the natural gap between the training and testing of re-ID networks: the predictions of a re-ID network cannot be used directly at test time without an effective metric. In this work, we bridge this gap by proposing Adversarial Metric Attack, a methodology parallel to adversarial classification attacks. Comprehensive experiments clearly reveal the adversarial effects in re-ID systems. Meanwhile, we present an early attempt at training a metric-preserving network, thereby defending the metric against adversarial attacks. Finally, by benchmarking various adversarial settings, we hope that our work can facilitate the development of adversarial attack and defense in metric-based applications.
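The core idea of attacking a metric rather than a classifier — perturbing a probe image so that its learned feature moves away from a matching gallery feature under the distance metric — can be sketched with a toy linear embedding. This is a minimal illustrative sketch, not the paper's exact attack: the linear `embed` network, the weight matrix `W`, and the single FGSM-style step are all assumptions made for clarity.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy linear "re-ID network": maps a flattened image vector to an embedding.
# (Assumption for illustration; the paper attacks deep re-ID networks.)
W = rng.standard_normal((8, 16))

def embed(x):
    return W @ x

def sq_dist(x_q, x_g):
    """Squared Euclidean metric between embedded query and gallery images."""
    diff = embed(x_q) - embed(x_g)
    return float(diff @ diff)

def metric_attack_fgsm(x_q, x_g, eps=0.03):
    """One FGSM-style step on the *metric*: increase the distance so a
    matching pair is pushed apart (illustrative, not the paper's method)."""
    # Analytic gradient of ||W x_q - W x_g||^2 with respect to x_q.
    grad = 2.0 * W.T @ (embed(x_q) - embed(x_g))
    # Human-imperceptible perturbation: each pixel moves by at most eps.
    return x_q + eps * np.sign(grad)

x_query = rng.standard_normal(16)
x_gallery = x_query + 0.01 * rng.standard_normal(16)  # same identity, slight change

d_clean = sq_dist(x_query, x_gallery)
x_adv = metric_attack_fgsm(x_query, x_gallery)
d_adv = sq_dist(x_adv, x_gallery)
# The perturbation stays inside the eps-ball, yet the metric distance grows,
# so a correct match can fall below the retrieval threshold.
```

The key contrast with a classification attack is the objective: instead of flipping a predicted label, the gradient step maximizes the pairwise distance that ranking is based on, which is why the attack works at test time even though the network outputs no class scores.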