Hand detection in American Sign Language depth data using domain-driven random forest regression
2015, pp. 1–7
Abstract
In Automatic Sign Language Recognition (ASLR), robust hand tracking and detection are key to good recognition accuracy. We introduce a new dataset of depth data from continuously signed American Sign Language (ASL) sentences. We present analysis showing numerous errors of the Microsoft Kinect Skeleton Tracker (MKST) in cases where the hands are close to the body, close to each other, or when the arms cross. We also propose a Domain-Driven Random Forest Regression (DDRFR) method that predicts real-world 3D hand locations using features generated from depth images. We show that our DDRFR hand detector improves on the MKST by more than 20% within a margin of error of 5 cm from the ground truth.
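The core idea of the abstract can be sketched as multi-output random forest regression: depth-derived feature vectors map to (x, y, z) hand coordinates. A minimal sketch using scikit-learn follows; the paper's domain-driven depth features are not reproduced here, so random vectors stand in for them, and all dimensions and sample counts are placeholders.

```python
# Sketch of random-forest regression for 3D hand-location prediction,
# assuming scikit-learn. Features are placeholders for the paper's
# domain-driven depth features.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)

# Placeholder training data: 200 depth frames, 64 features each,
# labeled with ground-truth (x, y, z) hand positions in metres.
X_train = rng.random((200, 64))
y_train = rng.random((200, 3))

# Multi-output regression: one forest jointly predicts x, y, and z.
model = RandomForestRegressor(n_estimators=50, random_state=0)
model.fit(X_train, y_train)

# Predict the 3D hand location for a new depth frame.
pred = model.predict(rng.random((1, 64)))
print(pred.shape)  # one (x, y, z) estimate per input frame
```

A per-frame evaluation against ground truth (e.g. counting predictions within 5 cm Euclidean distance, as in the abstract) would then quantify detector accuracy.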