Testing the Effectiveness of CNN and GNN and Exploring the Influence of Different Channels on Decoding Covert Speech from EEG Signals
Conference proceedings article
Publication details
Author list: Serena Liu and Jonathan H. Chan
Year published (A.D.): 2021
First page: 17
Last page: 26
Number of pages: 10
URL: https://dl.acm.org/doi/proceedings/10.1145/3486713
Language: English-United States (EN-US)
Abstract
In this paper, the effectiveness of two deep learning models was tested, and the significance of 62 different electroencephalogram (EEG) channels was explored, on covert speech classification tasks using time-series EEG signals. Experiments covered the binary classification between the words “in” and “cooperate” from the ASU dataset and the classification among 11 different prompts from the KaraOne dataset. The two deep learning models used were a 1D convolutional neural network (CNN) and a graph neural network (GNN). Overall, the CNN showed decent performance, with an accuracy of around 80% on the classification between “in” and “cooperate”, while the GNN appeared unsuitable for time-series data. Examining the accuracy of CNN models trained on different EEG channels suggested that the prefrontal and frontal regions are the most relevant to model performance. Although this finding differs noticeably from various previous works, it may offer insights into the cortical activity underlying covert speech.
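To make the 1D CNN approach concrete, the sketch below shows a minimal forward pass of a 1D convolutional classifier on a single multi-channel EEG trial. This is an illustrative NumPy reconstruction, not the authors' architecture: the channel count (62) matches the abstract, but the trial length, filter count, and kernel width are assumptions chosen for demonstration.

```python
import numpy as np

# Assumed shapes for illustration: 62 EEG channels, 1000 time samples,
# 8 convolutional filters with a kernel width of 7 (not from the paper).
n_channels, n_samples, n_filters, k = 62, 1000, 8, 7

rng = np.random.default_rng(0)
x = rng.standard_normal((n_channels, n_samples))          # one EEG trial
W = rng.standard_normal((n_filters, n_channels, k)) * 0.01  # conv kernels
b = np.zeros(n_filters)                                     # conv biases

def conv1d(x, W, b):
    """Valid 1D convolution along time; each filter mixes all channels."""
    n_f, _, k = W.shape
    T = x.shape[1] - k + 1
    out = np.empty((n_f, T))
    for f in range(n_f):
        for t in range(T):
            out[f, t] = np.sum(W[f] * x[:, t:t + k]) + b[f]
    return out

h = np.maximum(conv1d(x, W, b), 0.0)      # ReLU activation
z = h.mean(axis=1)                        # global average pooling over time
w_out = rng.standard_normal(n_filters) * 0.01
p = 1.0 / (1.0 + np.exp(-(z @ w_out)))    # probability of one of two words
```

Restricting the input to a single channel (shape `(1, n_samples)`) turns this into the per-channel setup the abstract describes, where separate models are compared across channels to gauge each scalp region's relevance.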
Keywords
Brain-computer interfaces (BCIs), convolutional neural networks (CNN), covert speech, electroencephalography (EEG), graph neural networks (GNN), imagined speech