Regularizing the loss layer of CNNs for facial expression recognition using crowdsourced labels
Conference proceedings article
Publication Details
Author list: Lu P., Li B., Shama S., King I., Chan J.H.
Publisher: IEEE
Publication year: 2017
Volume number: 2017-January
Start page: 31
End page: 36
Number of pages: 6
ISBN: 9781538607435
ISSN: 0146-9428
eISSN: 1745-4557
Languages: English-Great Britain (EN-GB)
Abstract
Deep convolutional neural networks have become the state-of-the-art method for automatic Facial Expression Recognition (FER). Because of the small size and controlled conditions of most FER datasets, however, models can still overfit the training data and struggle to generalize to new data. We present a novel approach that uses crowdsourced label distributions to improve the generalization performance of convolutional neural networks for FER. We implement this as a loss-layer regularizer in which ground-truth labels are combined with crowdsourced labels to construct a noisy output distribution during training. We use a label disturbance method in which training examples are randomly assigned incorrect labels drawn from the combined label probability distribution. We compare the performance of our disturbed and undisturbed models in cross-validation testing on the extended Cohn-Kanade dataset and in cross-dataset experiments on the MMI, JAFFE, and FER2013 datasets. We find that our proposed method improves test performance on both the MMI and JAFFE datasets. Our results suggest that using nonuniform probability distributions to disturb training can improve the generalization performance of CNNs on other FER datasets. © 2017 IEEE.
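The following Python sketch illustrates the general label-disturbance idea described in the abstract; it is not the authors' exact implementation. Ground-truth labels are mixed with a crowdsourced label distribution, and each training example's label is occasionally replaced by a sample from the mixed distribution. The mixing weight `alpha`, the disturbance probability `disturb_prob`, and the seven-class FER setup are illustrative assumptions.

```python
# Minimal sketch of label disturbance with crowdsourced label distributions.
# All parameter names and values here are illustrative assumptions, not the
# paper's exact settings.
import numpy as np

NUM_CLASSES = 7  # typical FER setting: seven basic expressions


def combined_distribution(true_label, crowd_dist, alpha=0.5):
    """Mix a one-hot ground-truth vector with a crowdsourced label distribution."""
    one_hot = np.zeros(NUM_CLASSES)
    one_hot[true_label] = 1.0
    mixed = alpha * one_hot + (1.0 - alpha) * crowd_dist
    return mixed / mixed.sum()


def disturb_label(true_label, crowd_dist, disturb_prob=0.1, rng=None):
    """With probability `disturb_prob`, replace the training label with one
    sampled from the combined (noisy) distribution; otherwise keep it."""
    rng = rng or np.random.default_rng()
    if rng.random() < disturb_prob:
        probs = combined_distribution(true_label, crowd_dist)
        return int(rng.choice(NUM_CLASSES, p=probs))
    return true_label


# Usage example: crowdsourced votes for one image, normalized to a distribution.
votes = np.array([2, 0, 1, 7, 0, 0, 0], dtype=float)
crowd_dist = votes / votes.sum()
noisy_label = disturb_label(true_label=3, crowd_dist=crowd_dist, disturb_prob=0.1)
```

Applied per example at each training epoch, this introduces controlled label noise whose class proportions follow the crowdsourced distribution rather than a uniform one.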