Evaluating Improvement on Feature Selection for Classification of Implicit Learning on EEG’s Multiscale Entropy Data using BMNABC

Conference proceedings article


Publication Details

Author list: Chayapol Chaiyanan, Boonserm Kaewkamnerdpong

Publication year: 2022


Abstract

People who are good at implicit learning can learn new things faster and adapt more readily in the fast-paced information age. Implicit learning is learning that occurs without explicit instruction; it is commonly seen in young children, who develop the ability to speak their native language without being taught its grammar. The brain can become a better learner by being guided into the learning state more often, and by using neurofeedback to regulate brain state, educators and learners can work together to train the brain to be a better implicit learner. Our research aims to classify implicit learning events from EEG signals in order to identify and moderate such states. This paper examines the feature selection step with the goal of improving classification performance. We used EEG signals previously recorded from participants while they performed cognitive task experiments, and extracted Multiscale Entropy features from those signals. Previously, the Artificial Bee Colony (ABC) algorithm was applied to the Multiscale Entropy features to help classify implicit learning events, with reasonable success. However, because the features are selected in a binary search space, a better-suited optimizer was needed. Binary Multi-Neighborhood Artificial Bee Colony (BMNABC) was chosen as an alternative. The comparison showed that BMNABC raised the accuracy to as high as 90.57%, and it can be regarded as a promising method for identifying implicit learning events.
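
As a sketch of the approach described above, the following Python snippet shows how BMNABC-style binary feature selection can be organized: candidate feature subsets are encoded as bit vectors, employed and onlooker bees flip a variable number of bits (the multi-neighborhood moves), and stagnant food sources are re-initialised by scouts. The synthetic data, the SVM fitness function, the neighborhood sizes, and all parameter values are illustrative assumptions, not the authors' actual implementation or EEG features.

# Minimal sketch of BMNABC-style binary feature selection, assuming:
# - synthetic stand-in data (X, y) instead of the paper's Multiscale Entropy features,
# - an SVM with 3-fold cross-validated accuracy as the fitness function,
# - illustrative neighborhood sizes and ABC parameters (not the authors' settings).
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.standard_normal((120, 40))          # 120 epochs x 40 entropy features (hypothetical)
y = rng.integers(0, 2, 120)                 # binary labels: implicit-learning event or not

def fitness(mask):
    """Cross-validated accuracy of an SVM on the selected feature subset."""
    if mask.sum() == 0:
        return 0.0
    return cross_val_score(SVC(), X[:, mask.astype(bool)], y, cv=3).mean()

def neighbor(mask, k):
    """Multi-neighborhood move: flip k randomly chosen bits of the solution."""
    new = mask.copy()
    idx = rng.choice(len(mask), size=k, replace=False)
    new[idx] ^= 1
    return new

n_sources, n_features, limit, n_iter = 10, X.shape[1], 5, 20
neighborhood_sizes = [1, 2, 4]              # assumed set of neighborhood radii

sources = rng.integers(0, 2, (n_sources, n_features))
fits = np.array([fitness(s) for s in sources])
trials = np.zeros(n_sources, dtype=int)

for _ in range(n_iter):
    # Employed-bee phase: each source tries a move from a randomly chosen neighborhood.
    for i in range(n_sources):
        cand = neighbor(sources[i], rng.choice(neighborhood_sizes))
        f = fitness(cand)
        if f > fits[i]:
            sources[i], fits[i], trials[i] = cand, f, 0
        else:
            trials[i] += 1
    # Onlooker-bee phase: better sources are searched more often.
    probs = fits / fits.sum() if fits.sum() > 0 else np.full(n_sources, 1 / n_sources)
    for _ in range(n_sources):
        i = rng.choice(n_sources, p=probs)
        cand = neighbor(sources[i], rng.choice(neighborhood_sizes))
        f = fitness(cand)
        if f > fits[i]:
            sources[i], fits[i], trials[i] = cand, f, 0
        else:
            trials[i] += 1
    # Scout-bee phase: abandon stagnant sources and re-initialise them randomly.
    for i in np.where(trials > limit)[0]:
        sources[i] = rng.integers(0, 2, n_features)
        fits[i] = fitness(sources[i])
        trials[i] = 0

best = np.argmax(fits)
print(f"best accuracy {fits[best]:.3f} with {sources[best].sum()} features selected")

In the paper's setting, the fitness function would instead score the classifier applied to the selected Multiscale Entropy features, and the best bit vector found indicates which entropy features to keep.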


Keywords

Binary Multi-Neighborhood Artificial Bee Colony, EEG, Implicit Learning, Multiscale Entropy

