Hand Gesture Feature Extraction Using Deep Convolutional Neural Network for Recognizing American Sign Language
Top 10% of 2018 papers
Abstract
In the current era, Human-Computer Interaction (HCI) is a fascinating field concerned with the interaction between humans and computers, and Hand Gesture Recognition (HGR) is one of the most significant ways for humans to interact with computers and a major part of HCI. Extracting features and detecting hand gestures from color video input is challenging because of the large variation across hands. To address this issue, this paper introduces an effective HGR system for low-cost color video captured with a webcam. In the proposed model, a Deep Convolutional Neural Network (DCNN) extracts discriminative hand features for recognizing American Sign Language (ASL) hand gestures. A Multi-class Support Vector Machine (MCSVM), trained on the CNN-extracted features, then identifies the hand sign. Hand gestures from distinct persons are used for validation. The proposed model shows satisfactory performance, achieving a classification accuracy of 94.57%.
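The two-stage pipeline the abstract describes, convolutional feature extraction followed by a multi-class SVM classifier, can be sketched as below. Everything here is an illustrative assumption rather than the paper's actual setup: the fixed random 3×3 filters stand in for learned DCNN feature maps, the 8×8 synthetic images stand in for webcam hand frames, and scikit-learn's `SVC` (one-vs-rest) stands in for the MCSVM.

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)

def conv_feature(img, kernel):
    """Valid 2-D convolution, ReLU, then global average pooling to one scalar,
    a stand-in for a single DCNN feature map (illustrative, not the paper's network)."""
    kh, kw = kernel.shape
    h, w = img.shape
    out = np.empty((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(img[i:i + kh, j:j + kw] * kernel)
    return np.maximum(out, 0.0).mean()

# Toy data: 8x8 synthetic "hand images" for 3 hypothetical ASL signs,
# separated by mean intensity so the sketch is trivially learnable.
X_imgs, y = [], []
for label in range(3):
    for _ in range(20):
        X_imgs.append(rng.normal(loc=label, scale=0.3, size=(8, 8)))
        y.append(label)

# Four fixed random filters play the role of "learned" convolutional kernels.
kernels = [rng.normal(size=(3, 3)) for _ in range(4)]
X = np.array([[conv_feature(img, k) for k in kernels] for img in X_imgs])

# Multi-class SVM on the extracted features (one-vs-rest decision function).
clf = SVC(kernel="rbf", decision_function_shape="ovr")
clf.fit(X, y)
print("training accuracy:", clf.score(X, y))
```

In a real system the filters would come from a trained DCNN and the feature vector from one of its intermediate layers, but the hand-off is the same: a fixed-length feature vector per frame is passed to the SVM for sign classification.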
Related Papers
- Real-time hand gestures system based on leap motion (2018), 11 citations
- A Review of Sign Language Hand Gesture Recognition Algorithms (2020), 4 citations
- Methods to describe and recognize sign language based on gesture components represented by symbols and numerical values (1998), 10 citations
- Systematic Literature Survey on Sign Language Recognition Systems (2022)
- Gesture Vocalizer for Assisting Deaf and Dumb (2023)