Regularizing the loss layer of CNNs for facial expression recognition using crowdsourced labels

Conference proceedings article



Publication Details

Author list: Lu P., Li B., Shama S., King I., Chan J.H.

Publisher: IEEE

Publication year: 2017

Volume number: 2017-January

Start page: 31

End page: 36

Number of pages: 6

ISBN: 9781538607435

ISSN: 0146-9428

eISSN: 1745-4557

URL: https://www.scopus.com/inward/record.uri?eid=2-s2.0-85049244980&doi=10.1109%2fIESYS.2017.8233557&partnerID=40&md5=1be94740c1ad4939d4d41740213bef5d

Languages: English-Great Britain (EN-GB)



Abstract

Deep convolutional neural networks have become the state-of-the-art method for automatic Facial Expression Recognition (FER). Because most FER datasets are small and collected under controlled conditions, however, models can still overfit to the training dataset and generalize poorly to new data. We present a novel approach that uses crowdsourced label distributions to improve the generalization performance of convolutional neural networks for FER. We implement this as a loss-layer regularizer: the ground-truth labels are combined with crowdsourced labels to construct a noisy output distribution during training. We use a label-disturbance method in which training examples are randomly assigned incorrect labels drawn from the combined label probability distribution. We compare the performance of our disturbed and undisturbed models in cross-validation testing on the extended Cohn-Kanade dataset and in cross-dataset experiments on the MMI, JAFFE, and FER2013 datasets. With our proposed method, test performance improves on both the MMI and JAFFE datasets. Our results suggest that disturbing training with nonuniform probability distributions can improve the generalization performance of CNNs on other FER datasets. © 2017 IEEE.
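The label-disturbance idea in the abstract can be sketched as follows. This is a minimal illustration, not the authors' implementation: the function name `disturb_labels`, the disturbance rate `alpha`, and the 50/50 mixing of ground-truth and crowdsourced labels are all assumptions made for the example.

```python
import numpy as np

def disturb_labels(y_true, crowd_dist, alpha=0.1, rng=None):
    """Randomly replace a fraction `alpha` of training labels with labels
    sampled from a combination of the ground truth and the crowdsourced
    label distribution (illustrative sketch, not the paper's exact method).

    y_true     : (n,) int array of ground-truth class indices
    crowd_dist : (n, k) array; row i is the crowdsourced label
                 distribution for example i (each row sums to 1)
    """
    rng = rng if rng is not None else np.random.default_rng(0)
    n, k = crowd_dist.shape

    # Combine the one-hot ground truth with the crowd distribution;
    # the 0.5/0.5 weighting here is an assumed choice.
    one_hot = np.eye(k)[y_true]
    combined = 0.5 * one_hot + 0.5 * crowd_dist
    combined /= combined.sum(axis=1, keepdims=True)

    # Pick which examples to disturb, then resample their labels
    # from the combined (nonuniform) distribution.
    y_out = y_true.copy()
    for i in np.where(rng.random(n) < alpha)[0]:
        y_out[i] = rng.choice(k, p=combined[i])
    return y_out
```

The disturbed labels would then be fed to an ordinary cross-entropy loss each epoch, so the noise acts as a regularizer on the loss layer rather than a change to the network architecture.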



Last updated on 2024-02-19 at 19:49