ASL recognition based on a coupling between HMMs and 3D motion analysis
Top 1% of 2002 papers by citations
Abstract
We present a framework for recognizing isolated and continuous American Sign Language (ASL) sentences from three-dimensional data. The data are obtained using physics-based three-dimensional tracking methods and then presented as input to Hidden Markov Models (HMMs) for recognition. To improve recognition performance, we model context-dependent HMMs and present a novel method of coupling three-dimensional computer vision methods and HMMs by temporally segmenting the data stream with vision methods. We then use the geometric properties of the segments to constrain the HMM framework for recognition. We show in experiments with a 53-sign vocabulary that three-dimensional features outperform two-dimensional features in recognition performance. Furthermore, we demonstrate that context-dependent modeling and the coupling of vision methods and HMMs improve the accuracy of continuous ASL recognition.
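The core recognition step the abstract describes, scoring a sequence of motion features against per-sign HMMs, can be sketched as follows. This is a minimal illustration, not the paper's implementation: the sign names, the two-state left-to-right topologies, and the discrete observation codes (standing in for quantized 3D hand-motion features from tracking) are all hypothetical.

```python
import math


def _log(p):
    # Log-probability, mapping zero probability to -inf.
    return math.log(p) if p > 0 else float("-inf")


def viterbi_score(obs, pi, A, B):
    """Log-likelihood of the best state path through a
    discrete-observation HMM (pi: initial, A: transition,
    B: emission probabilities)."""
    n = len(pi)
    dp = [_log(pi[s]) + _log(B[s][obs[0]]) for s in range(n)]
    for o in obs[1:]:
        dp = [max(dp[p] + _log(A[p][s]) for p in range(n)) + _log(B[s][o])
              for s in range(n)]
    return max(dp)


# Hypothetical two-state left-to-right models for two signs; observation
# symbols 0..2 stand in for quantized 3D hand-motion directions.
MODELS = {
    "SIGN_A": dict(pi=[1.0, 0.0],
                   A=[[0.6, 0.4], [0.0, 1.0]],
                   B=[[0.8, 0.1, 0.1], [0.1, 0.1, 0.8]]),
    "SIGN_B": dict(pi=[1.0, 0.0],
                   A=[[0.6, 0.4], [0.0, 1.0]],
                   B=[[0.1, 0.8, 0.1], [0.8, 0.1, 0.1]]),
}


def classify(obs):
    # Pick the sign model under which the observation sequence is most likely.
    return max(MODELS, key=lambda m: viterbi_score(obs, **MODELS[m]))


print(classify([0, 0, 2, 2]))  # → SIGN_A
```

The paper's coupling idea corresponds here to restricting which models are scored, or how the sequence is split, using vision-based temporal segmentation, rather than letting the HMMs search over all boundaries.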