Trigger detection system for American Sign Language using deep convolutional neural networks

Conference proceedings article




Publication Details

Author list: Debasrita Chakraborty, Deepankar Garg, Ashish Ghosh, Jonathan H Chan

Publication year: 2018

Title of series: Proceedings of the 10th International Conference on Advances in Information Technology

URL: https://dl.acm.org/doi/10.1145/3291280.3291783




Abstract

Automatic trigger-word detection in speech is a well-known technology nowadays. However, for people who are unable to speak, or who are in a silence zone, such voice-activated trigger detection systems are of no use. We have developed a trigger detection system based on the 24 static hand gestures of American Sign Language (ASL). Our model is built primarily on a Deep Convolutional Neural Network (Deep CNN), as such networks are capable of capturing interesting visual features at each hidden layer. We aim to construct a customisable switch that turns 'on' if it finds a given trigger gesture in any video it receives and stays 'off' otherwise. The model was trained on images of various hand gestures in a multi-class classification setting, which allows each user to choose a custom trigger gesture. To test the efficiency of the model in the trigger detection process, we created 7,000 videos (each 10 s long) composed of random images from the test set, which were never shown to the model during training. We show experimentally that the system outperforms other state-of-the-art techniques for static hand gesture recognition. The approach also lends itself to real-time use and can be applied to build small-scale devices that trigger a particular response by capturing the gestures people make.
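
As an illustration of the trigger mechanism the abstract describes, below is a minimal sketch (not taken from the paper): a small Keras-style CNN classifies each video frame into one of the 24 static ASL gesture classes, and a switch turns 'on' the first time the user-chosen trigger gesture is detected with sufficient confidence. The network architecture, the 64x64 grayscale input size, the trigger_switch helper, and the 0.9 confidence threshold are all illustrative assumptions rather than details from the publication.

import numpy as np
from tensorflow.keras import layers, models

NUM_CLASSES = 24          # 24 static ASL hand gestures (per the abstract)
IMG_SHAPE = (64, 64, 1)   # assumed grayscale input size, not specified here

def build_cnn():
    # Plain deep CNN for multi-class gesture classification (illustrative).
    return models.Sequential([
        layers.Input(shape=IMG_SHAPE),
        layers.Conv2D(32, 3, activation="relu"),
        layers.MaxPooling2D(),
        layers.Conv2D(64, 3, activation="relu"),
        layers.MaxPooling2D(),
        layers.Flatten(),
        layers.Dense(128, activation="relu"),
        layers.Dense(NUM_CLASSES, activation="softmax"),
    ])

def trigger_switch(model, frames, trigger_class, threshold=0.9):
    # Classify every frame of a video; return 'on' if any frame is
    # predicted as the trigger gesture with confidence >= threshold.
    probs = model.predict(np.stack(frames), verbose=0)
    hits = (probs.argmax(axis=1) == trigger_class) & (probs.max(axis=1) >= threshold)
    return "on" if hits.any() else "off"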



