Testing the Effectiveness of CNN and GNN and Exploring the Influence of Different Channels on Decoding Covert Speech from EEG Signals

Conference proceedings article


Publication Details

Author list: Serena Liu and Jonathan H. Chan

Publication year: 2021

Start page: 17

End page: 26

Number of pages: 10

URL: https://dl.acm.org/doi/proceedings/10.1145/3486713

Languages: English-United States (EN-US)




Abstract

In this paper, the effectiveness of two deep learning models was tested and the significance of 62 different electroencephalogram (EEG) channels was explored on covert speech classification tasks using time-series EEG signals. Experiments were conducted on the binary classification of the words “in” and “cooperate” from the ASU dataset and on the classification of 11 different prompts from the KaraOne dataset. The two deep learning models used were the 1D convolutional neural network (CNN) and the graphical neural network (GNN). Overall, the CNN model showed decent performance, with an accuracy of around 80% on the classification between “in” and “cooperate”, while the GNN appeared unsuitable for time-series data. By examining the accuracy of the CNN model trained on different EEG channels, the prefrontal and frontal regions appeared to be the most relevant to the model's performance. Although this finding differs noticeably from various previous works, it could offer insights into the cortical activities behind covert speech.
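The record does not reproduce the paper's architecture, but the core operation of a 1D CNN on time-series EEG is sliding learned temporal filters along the time axis of a multichannel window. The sketch below illustrates this with plain NumPy; all shapes, filter counts, and names are hypothetical (only the 62-channel count comes from the abstract), not the authors' implementation:

```python
import numpy as np

def conv1d(signal, kernels, stride=1):
    """Valid-mode 1D convolution across the time axis.

    signal:  (channels, time)        -- one multichannel EEG window
    kernels: (filters, channels, k)  -- bank of temporal filters
    returns: (filters, out_time)     -- one feature map per filter
    """
    channels, time = signal.shape
    filters, _, k = kernels.shape
    out_time = (time - k) // stride + 1
    out = np.zeros((filters, out_time))
    for f in range(filters):
        for t in range(out_time):
            # each output sample is a dot product of the filter with a
            # (channels x k) slice of the signal
            window = signal[:, t * stride : t * stride + k]
            out[f, t] = np.sum(window * kernels[f])
    return out

# Hypothetical example: 62 EEG channels, 256 time samples,
# 8 temporal filters of width 5
rng = np.random.default_rng(0)
eeg = rng.standard_normal((62, 256))
filters = rng.standard_normal((8, 62, 5))
features = conv1d(eeg, filters)
print(features.shape)  # (8, 252)
```

In a full model these feature maps would pass through nonlinearities, pooling, and a classifier head; restricting `signal` to a subset of channel rows is one simple way to probe per-region channel importance, as the abstract describes.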


Keywords

Brain-computer interfaces (BCIs); convolutional neural networks (CNN); covert speech; electroencephalography (EEG); graphical neural networks (GNN); imagined speech


Last updated on 2023-03-10 at 07:36