Novel American Sign Language Fingerspelling Recognition in the Wild with Weakly Supervised Learning and Feature Embedding

Conference proceedings article


Publication Details

Author list: Pannattee P., Kumwilaisak W., Hansakunbuntheung C., Thatphithakkul N.

Publisher: IEEE

Publication year: 2021

Start page: 291

End page: 294

Number of pages: 4

ISBN: 9780738111278

ISSN: 0928-4931

eISSN: 1873-0191

URL: https://www.scopus.com/inward/record.uri?eid=2-s2.0-85112848412&doi=10.1109%2fECTI-CON51831.2021.9454677&partnerID=40&md5=9a2e4940120d112d316562a7e25928e8

Languages: English-Great Britain (EN-GB)




Abstract

This paper presents a new method for fingerspelling recognition in highly dynamic video sequences, where sign language videos are labeled only at the video-sequence level. A deep learning network extracts spatial features from video frames with AlexNet and feeds them to a Long Short-Term Memory (LSTM) network that serves as a language model, producing predicted fingerspelling gestures at the frame level. Test video sequences recognized with 100 percent accuracy in this first pass are then used to improve the spatial features: their frame-level predictions supervise the training of a Siamese network built on ResNet-50, which learns an efficient representation of each fingerspelling gesture. The embedded features of each video frame are fed to the LSTM network to predict fingerspelling gestures. In our experiments, the proposed method outperforms state-of-the-art fingerspelling recognition algorithms by almost four percent in recognition accuracy. © 2021 IEEE.
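The record does not specify the exact objective used to train the Siamese network; a common choice for learning such pairwise embeddings is the contrastive loss, which pulls features of the same gesture together and pushes different gestures apart. A minimal sketch of that idea, with plain Python lists standing in for ResNet-50 frame features (all names and the margin value here are illustrative, not taken from the paper):

```python
import math

def contrastive_loss(f1, f2, same_gesture, margin=1.0):
    """Contrastive loss for one Siamese pair of frame embeddings:
    penalize distance between same-gesture pairs, and penalize
    different-gesture pairs that fall within `margin` of each other."""
    d = math.sqrt(sum((a - b) ** 2 for a, b in zip(f1, f2)))
    if same_gesture:
        return 0.5 * d ** 2                      # any separation is penalized
    return 0.5 * max(0.0, margin - d) ** 2       # only closeness is penalized

# Toy frame features (stand-ins for ResNet-50 outputs).
print(contrastive_loss([0.1, 0.2], [0.1, 0.2], True))   # identical positives -> 0.0
print(contrastive_loss([0.0, 0.0], [0.0, 0.5], False))  # negatives too close -> 0.125
```

Summed over many labeled frame pairs (here, frames from the 100-percent-accuracy sequences), minimizing this loss shapes an embedding space in which an LSTM can more easily separate fingerspelling gestures.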


Keywords

Feature embedding; Fingerspelling recognition; Siamese network; Weakly supervised learning


Last updated on 2023-06-10 at 07:37