American Sign Language Words Recognition of Skeletal Videos Using Processed Video Driven Multi-Stacked Deep LSTM

Journal article


Authors/Editors


Strategic research area


Publication details

Authors: Sunusi Bala Abdullahi and Kosin Chamnongthai

Publisher: MDPI

Publication year: 2022

Volume number: 22

Issue number: 4

First page: 1

Last page: 28

Number of pages: 28

ISSN: 1424-8220

eISSN: 1424-8220

URL: https://www.mdpi.com/1424-8220/22/4/1406

Language: English-United States (EN-US)




Abstract

Complex hand gesture interactions among dynamic sign words may lead to misclassification, which affects the recognition accuracy of the ubiquitous sign language recognition system. This paper proposes to augment the feature vector of dynamic sign words with knowledge of hand dynamics as a proxy and to classify dynamic sign words using motion patterns based on the extracted feature vector. In this method, some double-hand dynamic sign words have ambiguous or similar features across a hand motion trajectory, which leads to classification errors. Thus, the similar/ambiguous hand motion trajectory is determined based on the approximation of a probability density function over a time frame. Then, the extracted features are enhanced by transformation using maximal information correlation. These enhanced features of 3D skeletal videos captured by a leap motion controller are fed as a state transition pattern to a classifier for sign word classification. To evaluate the performance of the proposed method, an experiment is performed with 10 participants on 40 double-hand dynamic ASL words, which reveals 97.98% accuracy. The method is further evaluated on the challenging ASL, SHREC, and LMDHG data sets and outperforms conventional methods by 1.47%, 1.56%, and 0.37%, respectively.
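The classifier named in the title, a multi-stacked deep LSTM over per-frame skeletal feature vectors, can be illustrated with a minimal NumPy forward-pass sketch. All dimensions, weight initializations, and the two-layer depth here are illustrative assumptions for one feature sequence, not the authors' architecture or training procedure:

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def lstm_layer(x_seq, W, U, b):
    """Run one LSTM layer over a (T, D) sequence; return (T, H) hidden states."""
    T = x_seq.shape[0]
    H = U.shape[0]
    h = np.zeros(H)
    c = np.zeros(H)
    out = np.empty((T, H))
    for t in range(T):
        z = x_seq[t] @ W + h @ U + b      # all four gate pre-activations at once
        i, f, g, o = np.split(z, 4)       # input, forget, candidate, output
        i, f, o = sigmoid(i), sigmoid(f), sigmoid(o)
        c = f * c + i * np.tanh(g)        # update cell state
        h = o * np.tanh(c)                # new hidden state
        out[t] = h
    return out

def make_params(d_in, d_hid, scale=0.1):
    """Random LSTM weights: input (d_in, 4H), recurrent (H, 4H), bias (4H,)."""
    return (scale * rng.standard_normal((d_in, 4 * d_hid)),
            scale * rng.standard_normal((d_hid, 4 * d_hid)),
            np.zeros(4 * d_hid))

# Illustrative dimensions (assumptions): 50 frames, 120 skeletal features per
# frame (e.g. 3D joint coordinates of both hands), 2 stacked layers, 40 classes.
T, D, H, C = 50, 120, 32, 40
x = rng.standard_normal((T, D))           # one pre-extracted feature sequence

h1 = lstm_layer(x, *make_params(D, H))    # first LSTM layer
h2 = lstm_layer(h1, *make_params(H, H))   # second (stacked) LSTM layer

W_out = 0.1 * rng.standard_normal((H, C)) # classification head on the last step
logits = h2[-1] @ W_out
probs = np.exp(logits - logits.max())
probs /= probs.sum()                      # softmax over the 40 sign words
```

Stacking the second LSTM on the hidden-state sequence of the first (rather than on the raw features) is what "multi-stacked" refers to; a trained model would additionally learn all weights by backpropagation through time.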


Keywords

No related data found.


Last updated 2024-09-24 at 00:00