Dual Deep Network for Visual Tracking
Top 10% of 2017 papers
Abstract
Visual tracking addresses the problem of identifying and localizing an unknown target in a video, given only a bounding box specified in the first frame. In this paper, we propose a dual network to better exploit features across layers for visual tracking. It is observed that features in higher layers encode semantic context, while those in lower layers are sensitive to discriminative appearance. We therefore exploit the hierarchical features in different layers of a deep model and design a dual structure that obtains better feature representations from the two streams, which has rarely been investigated in previous work. To highlight the geometric contours of the target, we combine the hierarchical feature maps with an edge detector to form coarse prior maps that further embed local details around the target. To improve the robustness of our dual network, we train it with random patches, measuring the similarity between the network activation and the target appearance; this serves as a regularization that enforces the dual network to focus on the target object. The proposed dual network is updated online in a unique manner, based on the observation that the target being tracked in consecutive frames should share more similar feature representations than those of the surrounding background. We also find that, for a given target, the prior maps can further enhance performance by passing messages into the output maps of the dual network. Therefore, an independent component analysis with reference algorithm (ICA-R) is employed to extract target context, using the prior maps as guidance. Online tracking is conducted by maximizing the posterior estimate on the final maps, with stochastic and periodic updates. Quantitative and qualitative evaluations on two large-scale benchmark data sets show that the proposed algorithm performs favourably against state-of-the-art methods.
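The pipeline the abstract describes can be illustrated with a minimal sketch: blend a low-level (appearance) map and a high-level (semantic) map, modulate the result by an edge prior, and localize the target at the maximum of the fused response. This is a toy NumPy illustration under assumed inputs, not the paper's actual network; the function names, the blending weight `alpha`, and the multiplicative use of the edge prior are all simplifying assumptions.

```python
import numpy as np

def fuse_hierarchical_features(low_feat, high_feat, edge_prior, alpha=0.5):
    """Illustrative fusion of a low-level (appearance) response map and a
    high-level (semantic) response map, modulated by an edge prior map.
    All inputs are 2-D arrays of the same spatial size with values in [0, 1].
    NOTE: a toy stand-in for the paper's dual network, not its method."""
    # Weighted blend of the two feature streams (the "dual" structure).
    blended = alpha * low_feat + (1.0 - alpha) * high_feat
    # The edge prior emphasizes geometric contours around the target.
    fused = blended * (1.0 + edge_prior)
    # Normalize to [0, 1] so the map can be read as a posterior-like map.
    fused = fused - fused.min()
    peak = fused.max()
    return fused / peak if peak > 0 else fused

def locate_target(response):
    """MAP-style localization: the coordinates of the maximal response."""
    return np.unravel_index(np.argmax(response), response.shape)

# Usage: a synthetic 5x5 scene with an agreed peak at (2, 3).
low = np.zeros((5, 5)); low[2, 3] = 1.0
high = np.zeros((5, 5)); high[2, 3] = 1.0
edge = np.zeros((5, 5))
response = fuse_hierarchical_features(low, high, edge)
print(locate_target(response))  # -> (2, 3)
```

In the paper, the fusion is learned and the prior maps guide an ICA-R extraction step; here a fixed blend and elementwise modulation merely convey the data flow from feature streams to a final response map.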