A 3D-CNN Siamese Network for Motion Gesture Sign Language Alphabets Recognition

Conference proceedings article


Authors/Editors


Strategic research areas


Publication details

Author list: Jirathampradub H., Nukoolkit C., Suriyathumrongkul K., Watanapa B.

Publisher: Hindawi

Year published (CE): 2020

Series title: IAIT

Number in series: 11

Volume number: 1

First page: 1

Last page: 6

Number of pages: 6

ISBN: 9781450377591

ISSN: 0146-9428

eISSN: 1745-4557

URL: https://www.scopus.com/inward/record.uri?eid=2-s2.0-85089186001&doi=10.1145%2f3406601.3406634&partnerID=40&md5=cd6afa002df0b88fb6e923aed40cd862

Language: English-Great Britain (EN-GB)




Abstract

Sign language recognition is developed to assist hearing- and speech-impaired people who communicate with others by using hand gestures as nonverbal communication. The system aims to translate still images or video frames of sign gestures into a corresponding message, word, or letter of a designated language. In the American Sign Language (ASL) alphabet, two of the 26 letters, J and Z, require motion gestures and cannot be recognized from static 2-dimensional images as the other letters usually can. Their starting gestures also cause confusion with other letters, especially I and D, and samples of these two letters, J and Z, are scarce. This paper develops an n-shot 3D-CNN deep learning Siamese network model to recognize J and Z and address these problems. Two experiments were conducted to give insight into the performance of the proposed 3D-CNN Siamese network and its improvement using regularization and dilation techniques. The effect of the number of shots in the Siamese model on recognition performance is also examined in the second experiment. © 2020 ACM.
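To make the abstract's architecture concrete, below is a minimal sketch of a 3D-CNN Siamese network for comparing two motion-gesture clips, written in PyTorch. The layer sizes, the placement of the dilated 3D convolution, the dropout/batch-norm regularization, and the input clip shape are illustrative assumptions for this sketch, not the authors' exact configuration.

import torch
import torch.nn as nn


class Embedding3DCNN(nn.Module):
    """Shared 3D-CNN branch mapping a gesture clip to an embedding vector."""

    def __init__(self, embed_dim: int = 128):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv3d(1, 16, kernel_size=3, padding=1),               # spatio-temporal filters
            nn.BatchNorm3d(16),
            nn.ReLU(inplace=True),
            nn.MaxPool3d(2),
            nn.Conv3d(16, 32, kernel_size=3, padding=2, dilation=2),  # dilated 3D convolution
            nn.BatchNorm3d(32),
            nn.ReLU(inplace=True),
            nn.MaxPool3d(2),
        )
        self.fc = nn.Sequential(
            nn.Flatten(),
            nn.Dropout(0.5),                                          # regularization
            nn.LazyLinear(embed_dim),
        )

    def forward(self, clip: torch.Tensor) -> torch.Tensor:
        return self.fc(self.features(clip))


class Siamese3DCNN(nn.Module):
    """Twin network: both clips pass through one shared 3D-CNN branch."""

    def __init__(self, embed_dim: int = 128):
        super().__init__()
        self.branch = Embedding3DCNN(embed_dim)
        self.head = nn.Linear(embed_dim, 1)

    def forward(self, clip_a: torch.Tensor, clip_b: torch.Tensor) -> torch.Tensor:
        diff = torch.abs(self.branch(clip_a) - self.branch(clip_b))
        return torch.sigmoid(self.head(diff))  # probability both clips show the same letter


if __name__ == "__main__":
    model = Siamese3DCNN()
    # Two dummy grayscale clips: [batch, channels, frames, height, width]
    a = torch.randn(2, 1, 16, 64, 64)
    b = torch.randn(2, 1, 16, 64, 64)
    print(model(a, b).shape)  # torch.Size([2, 1])

In a typical n-shot setup, a query clip would be compared against n support clips per letter and the letter with the highest average similarity score is predicted; this usage pattern is an assumption based on standard Siamese few-shot classification, not a description of the paper's exact protocol.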


Keywords

3D-Convolutional Neural Network, American Sign Language


Last updated 2023-09-25 at 07:36