Novel American Sign Language fingerspelling recognition in the wild with weakly supervised learning and feature embedding
Conference proceedings article
Authors/Editors
Strategic Research Themes
Publication Details
Author list: Pannattee P., Kumwilaisak W., Hansakunbuntheung C., Thatphithakkul N.
Publisher: Elsevier
Publication year: 2021
Start page: 291
End page: 294
Number of pages: 4
ISBN: 9780738111278
ISSN: 0928-4931
eISSN: 1873-0191
Languages: English-Great Britain (EN-GB)
Abstract
This paper presents a new method for fingerspelling recognition in highly dynamic video sequences, where sign language videos are labeled only at the video-sequence level. A deep learning network extracts spatial features from video frames with AlexNet and feeds them to a Long Short-Term Memory (LSTM) network that serves as a language model, yielding predicted fingerspelling gestures at the frame level. Test video sequences recognized with 100 percent accuracy in this first pass are then used to improve the spatial features of video frames: from these recognition results, we construct a Siamese network, built on ResNet-50, to derive an efficient representation of each fingerspelling gesture. The derived features for each video frame are fed to the LSTM network to predict fingerspelling gestures. In our experiments, the proposed method outperforms state-of-the-art fingerspelling recognition algorithms by almost four percent in recognition accuracy. © 2021 IEEE.
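The Siamese network described in the abstract learns frame embeddings by comparing pairs of frames through a shared backbone. As a minimal illustration of that idea, the sketch below implements a standard margin-based contrastive objective on a pair of embedding vectors; the function name, the margin value, and the choice of Euclidean distance are assumptions for illustration, not the paper's exact loss.

```python
import math

def contrastive_loss(emb_a, emb_b, same_label, margin=1.0):
    """Margin-based contrastive loss for one Siamese pair.

    emb_a, emb_b : embedding vectors produced by the shared backbone
                   (ResNet-50 in the paper's setup).
    same_label   : 1 if both frames show the same fingerspelling
                   gesture, 0 otherwise.
    """
    d = math.dist(emb_a, emb_b)                 # Euclidean distance
    if same_label:
        return 0.5 * d ** 2                     # pull matching pairs together
    return 0.5 * max(0.0, margin - d) ** 2      # push mismatched pairs apart

# Toy pairs: identical embeddings (same gesture) incur zero loss;
# nearby embeddings of different gestures are penalized.
print(contrastive_loss([1.0, 0.0], [1.0, 0.0], same_label=1))  # 0.0
print(contrastive_loss([1.0, 0.0], [0.6, 0.0], same_label=0))  # 0.18
```

Minimizing this loss over many labeled pairs drives same-gesture frames toward nearby points in the embedding space and pushes different gestures at least `margin` apart, which is what makes the refined features more useful for the downstream LSTM.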
Keywords
Feature embedding, Fingerspelling recognition, Siamese network, Weakly supervised learning