Hierarchical Convolutional Features for Visual Tracking
Top 1% of 2015 papers
Abstract
Visual object tracking is challenging because target objects often undergo significant appearance changes caused by deformation, abrupt motion, background clutter, and occlusion. In this paper, we exploit features extracted from deep convolutional neural networks trained on object recognition datasets to improve tracking accuracy and robustness. The outputs of the last convolutional layers encode the semantic information of targets, and such representations are robust to significant appearance variations. However, their spatial resolution is too coarse to precisely localize targets. In contrast, earlier convolutional layers provide more precise localization but are less invariant to appearance changes. We interpret the hierarchy of convolutional layers as a nonlinear counterpart of an image pyramid representation and exploit these multiple levels of abstraction for visual tracking. Specifically, we adaptively learn correlation filters on each convolutional layer to encode the target appearance. We hierarchically infer the maximum response of each layer to locate targets. Extensive experimental results on a large-scale benchmark dataset show that the proposed algorithm performs favorably against state-of-the-art methods.
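The coarse-to-fine inference described in the abstract can be sketched as follows. This is a simplified illustration, not the paper's implementation: `filter_response` evaluates a correlation filter in the Fourier domain (circular correlation via the FFT), and `hierarchical_locate` takes per-layer response maps ordered from the deepest (semantic) layer to the shallowest (spatially precise) one, using each coarser peak to constrain the search window at the next finer layer. The `weights` and `radius` hyper-parameters are illustrative assumptions, not values from the paper.

```python
import numpy as np

def filter_response(feature, filt):
    """Correlation-filter response map computed in the Fourier domain
    (circular correlation via the FFT)."""
    return np.real(np.fft.ifft2(np.fft.fft2(feature) * np.conj(np.fft.fft2(filt))))

def hierarchical_locate(responses, weights, radius=2):
    """Coarse-to-fine localization over same-sized per-layer response maps,
    ordered deepest (semantic) to shallowest (spatially precise).
    The peak of the fused coarser responses constrains the search window
    at each finer layer; weights and radius are illustrative choices."""
    h, w = responses[0].shape
    # Start from the semantic layer's global maximum.
    fused = weights[0] * responses[0]
    y, x = np.unravel_index(np.argmax(fused), (h, w))
    for resp, wt in zip(responses[1:], weights[1:]):
        # Accumulate the finer layer's weighted response...
        fused = fused + wt * resp
        # ...but only search a window around the current coarse estimate.
        y0, y1 = max(0, y - radius), min(h, y + radius + 1)
        x0, x1 = max(0, x - radius), min(w, x + radius + 1)
        window = fused[y0:y1, x0:x1]
        dy, dx = np.unravel_index(np.argmax(window), window.shape)
        y, x = y0 + dy, x0 + dx
    return y, x
```

In this sketch, a strong spurious peak in a shallow layer that falls far from the semantic layer's estimate is ignored, which reflects the abstract's motivation: semantic layers provide robustness, earlier layers refine the location.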