Reducing Overfitting and Improving Generalization in Training Convolutional Neural Network (CNN) under Limited Sample Sizes in Image Recognition
Conference proceedings article
Publication Details
Author list: Thanapol, Panissara; Lavangnananda, Kittichai; Bouvry, Pascal; Pinel, Frederic; Leprevost, Franck
Publisher: IEEE
Publication year: 2020
Start page: 300
End page: 305
Number of pages: 6
ISBN: 9781728166940
Languages: English-Great Britain (EN-GB)
Abstract
In deep learning, Convolutional Neural Networks (CNNs) are widely applied to image recognition. Building an effective CNN model typically assumes that a large number of samples is available in the dataset. However, this assumption may not be practical or even possible in some real-world applications. It is well known that training a CNN model on limited samples often leads to overfitting and an inability to generalize. Data augmentation, batch normalization and dropout have been suggested to mitigate these problems. This work studies overfitting and generalization in image recognition on an intentionally contracted CIFAR-10 dataset. The application of these techniques and their combinations is considered, as well as the injection of data augmentation at different epochs. The results reveal that injecting width- and height-shift data augmentation at epoch 30, together with dropout, yields the best performance and overcomes overfitting best. © 2020 IEEE.
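The "injection" idea in the abstract can be illustrated with a minimal sketch: train without augmentation for the first epochs, then switch on random width/height shifts from a chosen epoch onward. This is not the authors' code; `translate`, `train`, `model_step`, and the shift bounds are illustrative assumptions (dropout would live inside the model itself, so it is not shown here).

```python
import numpy as np

def translate(image, dy, dx):
    """Shift `image` by dy rows and dx columns, zero-padding the
    pixels the shift uncovers (a basic width/height-shift augment)."""
    h, w = image.shape[:2]
    out = np.zeros_like(image)
    out[max(dy, 0):h + min(dy, 0), max(dx, 0):w + min(dx, 0)] = \
        image[max(-dy, 0):h + min(-dy, 0), max(-dx, 0):w + min(-dx, 0)]
    return out

INJECT_AT_EPOCH = 30  # the injection point the paper reports as best

def train(model_step, data, epochs=50, max_shift=3, rng=None):
    """Call `model_step(image, label)` for every sample each epoch;
    random shifts are applied only once INJECT_AT_EPOCH is reached."""
    rng = rng or np.random.default_rng(0)
    for epoch in range(epochs):
        for image, label in data:
            if epoch >= INJECT_AT_EPOCH:
                dy = int(rng.integers(-max_shift, max_shift + 1))
                dx = int(rng.integers(-max_shift, max_shift + 1))
                image = translate(image, dy, dx)
            model_step(image, label)
```

For 32×32 CIFAR-10 images, `max_shift=3` corresponds roughly to the ~10% shift range commonly used with width/height-shift augmentation.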
Keywords
CIFAR-10 Dataset, Convolutional neural networks (CNN), Generalization, Image recognition, Overfitting