A 3D-CNN Siamese Network for Motion Gesture Sign Language Alphabets Recognition

Conference proceedings article




Publication Details

Author list: Jirathampradub H., Nukoolkit C., Suriyathumrongkul K., Watanapa B.

Publisher: ACM

Publication year: 2020

Title of series: IAIT

Number in series: 11

Volume number: 1

Start page: 1

End page: 6

Number of pages: 6

ISBN: 9781450377591

ISSN: 0146-9428

eISSN: 1745-4557

URL: https://www.scopus.com/inward/record.uri?eid=2-s2.0-85089186001&doi=10.1145%2f3406601.3406634&partnerID=40&md5=cd6afa002df0b88fb6e923aed40cd862

Languages: English-Great Britain (EN-GB)




Abstract

Sign language recognition assists hearing- and speech-impaired people who communicate with others through hand gestures as a form of nonverbal communication. Such a system aims to translate still images or video frames of sign gestures into a corresponding message, word, or letter of a designated language. In American Sign Language (ASL), two of the 26 letters, J and Z, require a motion gesture and cannot be recognized from static 2-dimensional images as the other letters usually can. Their starting gestures are also easily confused with other letters, especially I and D, and samples of these two letters, J and Z, are scarce. This paper develops an n-shot 3D-CNN deep learning Siamese network model to recognize J and Z and address these problems. Two experiments were conducted to give insight into the performance of the proposed 3D-CNN Siamese network and its improvement using regularization and dilation techniques. The second experiment also examines the effect of the number of shots in the Siamese model on recognition performance. © 2020 ACM.
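To illustrate the general idea only (this is not the authors' trained model), a Siamese network embeds two video clips with the same shared 3-D convolutional weights and scores their similarity from the distance between the embeddings; an n-shot classifier then assigns a query clip to the class whose support clips are most similar on average. A minimal NumPy toy sketch, where the filter shapes, function names, and the single-layer architecture are all invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

def conv3d_valid(clip, kernels):
    """Valid 3-D convolution of a (T, H, W) clip with (K, t, h, w) kernels."""
    K, kt, kh, kw = kernels.shape
    T, H, W = clip.shape
    out = np.zeros((K, T - kt + 1, H - kh + 1, W - kw + 1))
    for k in range(K):
        for i in range(out.shape[1]):
            for j in range(out.shape[2]):
                for l in range(out.shape[3]):
                    out[k, i, j, l] = np.sum(clip[i:i+kt, j:j+kh, l:l+kw] * kernels[k])
    return out

# Shared (untrained, random) weights: 4 spatio-temporal filters of size 2x3x3.
KERNELS = rng.standard_normal((4, 2, 3, 3))

def embed(clip):
    """Shared branch: 3-D conv -> ReLU -> global average pool, one feature per filter."""
    feat = np.maximum(conv3d_valid(clip, KERNELS), 0.0)
    return feat.mean(axis=(1, 2, 3))

def similarity(clip_a, clip_b):
    """Siamese head: sigmoid(-L1 distance); 0.5 for identical clips, toward 0 as they diverge."""
    d = np.abs(embed(clip_a) - embed(clip_b)).sum()
    return 1.0 / (1.0 + np.exp(d))

def classify(query, support):
    """n-shot decision: pick the class whose support clips are most similar on average."""
    return max(support, key=lambda c: np.mean([similarity(query, s) for s in support[c]]))
```

In a real model the embedding branch would be a trained multi-layer 3D-CNN (possibly with the regularization and dilated convolutions the abstract mentions), but the Siamese structure is the same: one shared encoder, a distance-based similarity, and a support set of n labeled clips per class.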


Keywords

3D-Convolutional Neural Network; American Sign Language

