Learning Cross-Modal Deep Representations for Robust Pedestrian Detection
Abstract
This paper presents a novel method for detecting pedestrians under adverse illumination conditions. Our approach relies on a cross-modality learning framework organized in two main phases. First, given a multimodal dataset, a deep convolutional network is employed to learn a non-linear mapping that models the relation between RGB and thermal data. Then, the learned feature representations are transferred to a second deep network, which receives an RGB image as input and outputs the detection results. In this way, the learned features are both discriminative and robust to adverse illumination conditions. Importantly, at test time only the second pipeline is used and no thermal data are required. Our extensive evaluation demonstrates that the proposed approach outperforms the state of the art on the challenging KAIST multispectral pedestrian dataset and is competitive with previous methods on the popular Caltech dataset.
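The abstract describes a two-phase pipeline: a convolutional network first learns an RGB-to-thermal mapping on a multimodal dataset, and its learned features then initialize an RGB-only detector. The PyTorch sketch below illustrates that idea under stated assumptions; the layer shapes, the MSE reconstruction loss, and the names `CrossModalNet` and `DetectionNet` are all illustrative, not the paper's actual architecture or training recipe.

```python
import torch
import torch.nn as nn

# Hypothetical sketch of the two-phase cross-modality framework described
# in the abstract. Layer sizes, losses, and hyperparameters are assumptions.

class CrossModalNet(nn.Module):
    """Phase 1: learn a non-linear mapping from RGB images to thermal images."""
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(          # shared convolutional features
            nn.Conv2d(3, 64, 3, padding=1), nn.ReLU(),
            nn.Conv2d(64, 128, 3, padding=1), nn.ReLU(),
        )
        self.decoder = nn.Conv2d(128, 1, 3, padding=1)  # predict 1-ch thermal map

    def forward(self, rgb):
        feats = self.encoder(rgb)
        return self.decoder(feats), feats


class DetectionNet(nn.Module):
    """Phase 2: RGB-only detector initialized with the transferred encoder."""
    def __init__(self, encoder):
        super().__init__()
        self.encoder = encoder                 # features transferred from phase 1
        self.head = nn.Conv2d(128, 2, 1)       # toy pedestrian/background scores

    def forward(self, rgb):
        return self.head(self.encoder(rgb))


# --- Phase 1: supervise with paired thermal images (multimodal dataset) ---
cross_modal = CrossModalNet()
opt = torch.optim.SGD(cross_modal.parameters(), lr=1e-3)
rgb = torch.rand(2, 3, 128, 128)               # dummy RGB batch
thermal = torch.rand(2, 1, 128, 128)           # dummy paired thermal batch
pred_thermal, _ = cross_modal(rgb)
loss = nn.functional.mse_loss(pred_thermal, thermal)  # reconstruction loss
opt.zero_grad()
loss.backward()
opt.step()

# --- Phase 2: transfer the learned encoder; thermal data no longer needed ---
detector = DetectionNet(cross_modal.encoder)
scores = detector(rgb)                         # detection from RGB alone
```

The key design point the sketch captures is that the thermal modality acts only as a training-time supervision signal: once the encoder has absorbed the cross-modal mapping, inference runs on RGB input alone.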