A 3D-CNN Siamese Network for Motion Gesture Sign Language Alphabets Recognition
Conference proceedings article
Authors/Editors
Strategic Research Themes
Publication Details
Author list: Jirathampradub H., Nukoolkit C., Suriyathumrongkul K., Watanapa B.
Publisher: ACM
Publication year: 2020
Title of series: IAIT
Number in series: 11
Volume number: 1
Start page: 1
End page: 6
Number of pages: 6
ISBN: 9781450377591
Languages: English-Great Britain (EN-GB)
Abstract
Sign language recognition is developed to assist hearing- and speech-impaired people who communicate with others through hand gestures as nonverbal communication. The system aims to translate still images or video frames of sign gestures into the corresponding message, word, or letter of a designated language. In the American Sign Language (ASL) alphabet, two of the 26 letters, J and Z, require a motion gesture and cannot be recognized from static two-dimensional images the way the other letters usually can. Their starting gestures also cause confusion with other letters, especially I and D, and samples of these two letters, J and Z, are scarce. This paper develops an n-shot 3D-CNN deep learning Siamese network model to recognize J and Z and address these problems. Two experiments were conducted to give insight into the performance of the proposed 3D-CNN Siamese network and its improvement using regularization and dilation techniques. The effect of the number of shots in the Siamese model on recognition performance is also examined in the second experiment. © 2020 ACM.
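For readers who want a concrete picture of the architecture the abstract describes, the sketch below shows one way a 3D-CNN Siamese network for few-shot video gesture matching can be assembled in PyTorch. It is a minimal illustration, not the authors' code: the layer sizes, the 16-frame 64x64 grayscale clip shape, the dropout rate, the dilation value, and the Embedding3DCNN / Siamese3DCNN class names are all assumptions made for this example.

import torch
import torch.nn as nn

class Embedding3DCNN(nn.Module):
    """Shared 3D-CNN branch: maps a video clip to an embedding vector."""
    def __init__(self, embed_dim=128):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv3d(1, 16, kernel_size=3, padding=1),
            nn.BatchNorm3d(16), nn.ReLU(),
            nn.MaxPool3d(2),                                   # halve time and space
            nn.Conv3d(16, 32, kernel_size=3, padding=2, dilation=2),  # dilated 3D conv
            nn.BatchNorm3d(32), nn.ReLU(),
            nn.MaxPool3d(2),
            nn.AdaptiveAvgPool3d(1),                           # global spatiotemporal pooling
        )
        self.fc = nn.Sequential(nn.Flatten(),
                                nn.Dropout(0.5),               # dropout as regularization
                                nn.Linear(32, embed_dim))

    def forward(self, clip):                                   # clip: (B, 1, T, H, W)
        return self.fc(self.features(clip))

class Siamese3DCNN(nn.Module):
    """Siamese wrapper: both clips pass through the same branch (shared weights)."""
    def __init__(self):
        super().__init__()
        self.branch = Embedding3DCNN()
        self.head = nn.Linear(128, 1)                          # scores the L1 embedding distance

    def forward(self, clip_a, clip_b):
        ea, eb = self.branch(clip_a), self.branch(clip_b)
        return torch.sigmoid(self.head(torch.abs(ea - eb)))    # similarity score in (0, 1)

if __name__ == "__main__":
    model = Siamese3DCNN()
    query = torch.randn(1, 1, 16, 64, 64)                      # one unseen gesture clip
    support = torch.randn(5, 1, 16, 64, 64)                    # n = 5 reference clips of one letter
    scores = model(query.expand(5, -1, -1, -1, -1), support)
    print(scores.mean())                                       # averaged n-shot similarity to that letter

At recognition time, an n-shot setup would typically compare the query clip against the n support clips of each candidate letter and pick the class with the highest averaged similarity; that inference rule, like the layer configuration above, is an assumption for illustration rather than a detail taken from the paper.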
Keywords
3D-Convolutional Neural Network, American Sign Language