Reducing Overfitting and Improving Generalization in Training Convolutional Neural Network (CNN) under Limited Sample Sizes in Image Recognition

Conference proceedings article





Publication Details

Author list: Thanapol, Panissara; Lavangnananda, Kittichai; Bouvry, Pascal; Pinel, Frederic; Leprevost, Franck

Publisher: IEEE

Publication year: 2020

Start page: 300

End page: 305

Number of pages: 6

ISBN: 9781728166940

ISSN: 0146-9428

eISSN: 1745-4557

URL: https://www.scopus.com/inward/record.uri?eid=2-s2.0-85100158569&doi=10.1109%2fInCIT50588.2020.9310787&partnerID=40&md5=5fc04ad9073ef687d240d0bab2b1af30

Languages: English-Great Britain (EN-GB)




Abstract

In deep learning, Convolutional Neural Networks (CNNs) are applied prolifically to image recognition. Building an effective CNN model assumes that a large number of samples is available in the dataset. However, this assumption may not be practical, or even possible, in some real-world applications. It is well known that training a CNN model on limited samples often leads to overfitting and an inability to generalize. Data augmentation, batch normalization and dropout have been suggested to mitigate these problems. This work studies overfitting and generalization in image recognition on an intentionally contracted CIFAR-10 dataset. The application of these techniques and their combinations is considered, as well as the injection of data augmentation at different epochs. The results reveal that injecting width- and height-shift data augmentation at epoch 30, together with dropout, yields the best performance and overcomes the overfitting effect best. © 2020 IEEE.
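The paper does not include code, but the "injection" strategy described in the abstract (training without augmentation at first, then enabling width/height-shift augmentation from a given epoch onward) can be sketched in plain NumPy. Everything below is an illustrative assumption, not the authors' implementation: the function names, the 10% shift range, and the zero-padding choice are hypothetical.

```python
import numpy as np

def random_shift(image, max_frac=0.1, rng=None):
    """Randomly shift an HxWxC image along width and height by up to
    max_frac of each dimension, zero-padding the vacated pixels."""
    rng = rng or np.random.default_rng()
    h, w = image.shape[:2]
    dy = int(rng.integers(-int(h * max_frac), int(h * max_frac) + 1))
    dx = int(rng.integers(-int(w * max_frac), int(w * max_frac) + 1))
    shifted = np.zeros_like(image)
    # Overlapping source/destination windows for the shifted copy.
    src_y = slice(max(0, -dy), h - max(0, dy))
    dst_y = slice(max(0, dy), h - max(0, -dy))
    src_x = slice(max(0, -dx), w - max(0, dx))
    dst_x = slice(max(0, dx), w - max(0, -dx))
    shifted[dst_y, dst_x] = image[src_y, src_x]
    return shifted

def train_with_injection(model_step, images, epochs=100, inject_at=30, rng=None):
    """Run `epochs` passes over `images`, applying shift augmentation
    only from epoch `inject_at` onward (the injection strategy)."""
    rng = rng or np.random.default_rng(0)
    for epoch in range(epochs):
        augment = epoch >= inject_at
        for img in images:
            x = random_shift(img, rng=rng) if augment else img
            model_step(x, epoch)  # placeholder for one optimizer step
```

In a real Keras or PyTorch pipeline the same effect could be obtained by swapping the training dataset (un-augmented vs. augmented) at the chosen epoch, e.g. via a callback; dropout would live inside the model itself.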


Keywords

CIFAR-10 Dataset; Convolutional neural networks (CNN); Generalization; Image recognition; Overfitting


Last updated on 2023-10-17 at 07:36