Large Scale Sign Language Interpretation
Top 11% of 2019 papers by citations over time
Abstract
Sign language is the primary means of communication for deaf people, but most hearing people do not know how to sign. Deaf people's reliance on interpreters is both inconvenient and costly. Many research groups have experimented with machine learning to develop automatic translators, but these efforts have largely been constrained to restrictive dictionaries or to small pools of signers and signed content. We introduce the world's largest sign language dataset to date: a collection of 50,000 video snippets drawn from a pool of 10,000 unique utterances signed by 50 signers. We further propose several sequence-to-sequence deep learning approaches to automatically translate Chinese sign language into both English and Mandarin written text. These methods utilize body joint positions, facial expressions, and finger articulation. While models can overfit the training set, generalizing to unseen utterances remains challenging with real-world data. The introduced dataset and methods demonstrate how modern machine learning can help close the communication gap between deaf and hearing people.
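The abstract describes models driven by body joint positions, facial expressions, and finger articulation. The sketch below illustrates one plausible way such per-frame cues could be concatenated into a feature sequence and reduced to an utterance embedding for a sequence-to-sequence translator. All dimensions, names, and the mean-pooling "encoder" are illustrative assumptions, not the paper's actual architecture (which a real system would replace with an RNN or Transformer encoder-decoder).

```python
import numpy as np

# Illustrative per-frame feature sizes (assumptions, not from the paper):
N_JOINTS = 18        # body joints, each with an (x, y) position
N_FACE = 10          # facial-expression descriptors
N_FINGERS = 2 * 21   # finger landmarks: 21 per hand, two hands

FRAME_DIM = 2 * N_JOINTS + N_FACE + 2 * N_FINGERS  # 130

def frame_features(joints, face, fingers):
    """Concatenate one frame's cues into a single feature vector."""
    return np.concatenate([joints.ravel(), face.ravel(), fingers.ravel()])

def encode(frames, W):
    """Toy encoder: linearly project each frame, then mean-pool over time
    to get a fixed-size utterance embedding."""
    projected = frames @ W        # (T, FRAME_DIM) -> (T, hidden)
    return projected.mean(axis=0)

# Build a synthetic 30-frame utterance and encode it.
rng = np.random.default_rng(0)
T, hidden = 30, 64
frames = np.stack([
    frame_features(rng.normal(size=(N_JOINTS, 2)),   # joint (x, y) pairs
                   rng.normal(size=N_FACE),          # face descriptors
                   rng.normal(size=2 * N_FINGERS))   # finger (x, y) coords
    for _ in range(T)
])
W = rng.normal(size=(FRAME_DIM, hidden)) / np.sqrt(FRAME_DIM)
utterance_vec = encode(frames, W)
print(frames.shape, utterance_vec.shape)  # (30, 130) (64,)
```

A decoder would then condition on `utterance_vec` (or, better, on the full projected sequence via attention) to emit the written-text tokens.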
Related Papers
- → Training sign language interpreters in Australia (2005), 21 cited
- → Deaf people and PJM interpreters on sign language interpreting in Poland, then and now (2021), 10 cited
- → Sign language interpreter quality: the perspective of deaf sign language users in the Netherlands (2014), 16 cited
- → A phenomenological study of the relationship between deaf students in higher education and their sign language interpreters (2013), 3 cited
- → On Training of Sign-Language Interpreters (2009)