Transfer Learning for Driver Pose Estimation from Synthetic Data
Abstract
Training computer vision models for human pose estimation requires large amounts of data. Since labelling image data with pose keypoints is time-consuming and costly, we aim to alleviate this requirement by using synthetic data during pre-training, thereby reducing the amount of real data needed during fine-tuning. To this end, we investigate the impact of synthetic data on the performance of a 2D keypoint detection model in the context of driver body pose estimation. We present an approach for synthetic data generation that automatically provides large amounts of in-cabin views as training data. The utilization of the generated synthetic data is evaluated in different learning schemes. We achieve a notable performance gain of +30.5% by pre-training with our in-cabin synthetic data when only 1% of real training data from the DriPE dataset is available. The proposed approach also outperforms pre-training with PeopleSansPeople by +8.3% when the reduced DriPE dataset is used for fine-tuning.
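The learning scheme described above, pre-training on abundant synthetic data and then fine-tuning on a small real subset, can be illustrated with a deliberately minimal sketch. This is not the paper's actual pipeline (which trains a 2D keypoint detector on rendered in-cabin images); it is a toy 1-D regression where a hypothetical "synthetic" distribution approximates a slightly shifted "real" one, standing in for the synthetic-to-real domain gap:

```python
import random

random.seed(0)

def make_data(n, w, b, noise):
    # Toy data generator; stands in for rendered synthetic images
    # (or real annotated frames) with ground-truth labels.
    xs = [random.uniform(-1.0, 1.0) for _ in range(n)]
    ys = [w * x + b + random.gauss(0.0, noise) for x in xs]
    return xs, ys

def train(xs, ys, w=0.0, b=0.0, lr=0.1, epochs=20):
    # Full-batch gradient descent on mean-squared error
    # for a 1-D linear model y = w*x + b.
    n = len(xs)
    for _ in range(epochs):
        gw = gb = 0.0
        for x, y in zip(xs, ys):
            err = (w * x + b) - y
            gw += err * x
            gb += err
        w -= lr * gw / n
        b -= lr * gb / n
    return w, b

def mse(xs, ys, w, b):
    return sum(((w * x + b) - y) ** 2 for x, y in zip(xs, ys)) / len(xs)

# Large synthetic set (stand-in for generated in-cabin renders);
# its distribution is close to, but not identical to, the real one.
syn_x, syn_y = make_data(1000, w=2.0, b=1.0, noise=0.05)
# Tiny "real" training set (stand-in for 1% of a real dataset).
real_x, real_y = make_data(10, w=2.2, b=0.9, noise=0.05)
# Held-out real test set.
test_x, test_y = make_data(200, w=2.2, b=0.9, noise=0.05)

# Scheme A: train from scratch on the tiny real set only.
w_s, b_s = train(real_x, real_y)
# Scheme B: pre-train on synthetic data, then fine-tune on the real set.
w_p, b_p = train(syn_x, syn_y, epochs=200)
w_f, b_f = train(real_x, real_y, w=w_p, b=b_p)

print("scratch test MSE:   ", mse(test_x, test_y, w_s, b_s))
print("fine-tuned test MSE:", mse(test_x, test_y, w_f, b_f))
```

Under the same small fine-tuning budget, the pre-trained model starts much closer to the real-data optimum and therefore reaches a lower test error than training from scratch, which is the effect the abstract quantifies on DriPE.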