American Sign Language Words Recognition of Skeletal Videos Using Processed Video Driven Multi-Stacked Deep LSTM
Journal article
Authors/Editors
Strategic research theme
Publication details
Author list: Sunusi Bala Abdullahi and Kosin Chamnongthai
Publisher: MDPI
Year published (CE): 2022
Volume number: 22
Issue number: 4
First page: 1
Last page: 28
Number of pages: 28
ISSN: 1424-8220
eISSN: 1424-8220
URL: https://www.mdpi.com/1424-8220/22/4/1406
Language: English (United States, EN-US)
Abstract
Complex hand gesture interactions among dynamic sign words may lead to misclassification, which affects the recognition accuracy of a ubiquitous sign language recognition system. This paper proposes to augment the feature vector of dynamic sign words with knowledge of hand dynamics as a proxy and to classify dynamic sign words using motion patterns based on the extracted feature vector. In this method, some double-hand dynamic sign words have ambiguous or similar features across a hand motion trajectory, which leads to classification errors. Thus, the similar/ambiguous hand motion trajectory is determined based on the approximation of a probability density function over a time frame. Then, the extracted features are enhanced by a transformation using maximal information correlation. These enhanced features of 3D skeletal videos captured by a Leap Motion controller are fed as a state-transition pattern to a classifier for sign word classification. To evaluate the performance of the proposed method, an experiment was performed with 10 participants on 40 double-hand dynamic ASL words, yielding 97.98% accuracy. The method was further evaluated on the challenging ASL, SHREC, and LMDHG data sets and outperforms conventional methods by 1.47%, 1.56%, and 0.37%, respectively.
Keywords
No relevant data found.
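The abstract describes feeding enhanced skeletal features to a multi-stacked deep LSTM for sign-word classification. As an illustration only (the paper's actual architecture, layer sizes, and feature dimensions are not given in this record; the sequence length, feature width, hidden size, and random weights below are hypothetical), a minimal NumPy sketch of a two-layer stacked LSTM classifying a skeletal feature sequence into one of 40 sign-word classes might look like:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

class LSTMLayer:
    """A single LSTM layer processing a (T, input_dim) feature sequence."""
    def __init__(self, input_dim, hidden_dim, rng):
        scale = 1.0 / np.sqrt(input_dim + hidden_dim)
        # One stacked weight matrix for the 4 gates: input, forget, cell, output.
        self.W = rng.uniform(-scale, scale, (4 * hidden_dim, input_dim + hidden_dim))
        self.b = np.zeros(4 * hidden_dim)
        self.hidden_dim = hidden_dim

    def forward(self, x_seq):
        H = self.hidden_dim
        h, c = np.zeros(H), np.zeros(H)
        outputs = []
        for x_t in x_seq:
            z = self.W @ np.concatenate([x_t, h]) + self.b
            i = sigmoid(z[0:H])          # input gate
            f = sigmoid(z[H:2 * H])      # forget gate
            g = np.tanh(z[2 * H:3 * H])  # candidate cell state
            o = sigmoid(z[3 * H:4 * H])  # output gate
            c = f * c + i * g
            h = o * np.tanh(c)
            outputs.append(h)
        return np.stack(outputs)  # (T, hidden_dim), fed to the next layer

def stacked_lstm_classify(x_seq, layers, W_out, b_out):
    """Pass the sequence through stacked LSTM layers; classify the last hidden state."""
    for layer in layers:
        x_seq = layer.forward(x_seq)
    logits = W_out @ x_seq[-1] + b_out
    e = np.exp(logits - logits.max())
    return e / e.sum()  # softmax class probabilities

# Hypothetical sizes: 60 frames, 18 features per frame, 40 sign-word classes.
rng = np.random.default_rng(0)
T, feat_dim, hidden, n_classes = 60, 18, 32, 40
layers = [LSTMLayer(feat_dim, hidden, rng), LSTMLayer(hidden, hidden, rng)]
W_out = rng.normal(0, 0.1, (n_classes, hidden))
b_out = np.zeros(n_classes)
probs = stacked_lstm_classify(rng.normal(size=(T, feat_dim)), layers, W_out, b_out)
```

In a real system the weights would be learned by backpropagation through time, and the input sequence would be the enhanced Leap Motion skeletal features rather than random data.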