Abstract
Motor imagery is a concept from cognitive neuroscience: imagining a movement, without executing it, activates neurons in the corresponding brain regions. Traditional CNNs are poorly suited to EEG signals, which are time-series data; they fail to fully exploit temporal correlations and feature information, limiting model performance and accuracy. To address this, this study processes EEG data with dynamic graph convolution and temporal convolution, which together capture the temporal dependencies and dynamic changes between signals, improving performance and accuracy on EEG. Dynamic graph convolution adapts better to the characteristics of time-series data, strengthens feature extraction and prediction, overcomes the limitations of traditional CNNs on EEG signals, and opens new possibilities for brain-computer interface technology and related fields. The method proceeds as follows: EEG signals are first passed through convolutional filters that split them into eight sub-bands, each of which is fed into its own dynamic graph convolutional neural network (DGCNN). The outputs of the eight networks are then concatenated and fed into a temporal convolutional network (TCN) for feature extraction. On publicly available datasets, the DGCNN model achieves a higher average classification accuracy (82.5 ± 4.3%) than the traditional CNN model (68.9 ± 3.6%).
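The pipeline described above (filter into eight sub-bands, apply one graph-convolution stage per band, concatenate, then run a temporal convolution) can be sketched in plain NumPy. Everything below is an illustrative assumption, since the abstract gives no specifics: the channel count, sampling rate, band edges, and class count are invented; a simple FFT mask stands in for the convolutional band-pass filters; and a fixed random adjacency matrix stands in for the adjacency that a real DGCNN learns dynamically during training.

```python
import numpy as np

rng = np.random.default_rng(0)

# --- Illustrative dimensions (not specified in the abstract) ---
n_channels, n_samples, fs, n_classes = 22, 1000, 250, 4

# Synthetic EEG trial: channels x time
eeg = rng.standard_normal((n_channels, n_samples))

# Eight sub-bands spanning 4-36 Hz (assumed band edges)
bands = [(4 + 4 * i, 8 + 4 * i) for i in range(8)]

def bandpass(x, lo, hi, fs):
    """FFT-mask band-pass along the time axis (stand-in for the
    convolutional filter bank mentioned in the abstract)."""
    spec = np.fft.rfft(x, axis=-1)
    freqs = np.fft.rfftfreq(x.shape[-1], d=1.0 / fs)
    spec[..., (freqs < lo) | (freqs > hi)] = 0.0
    return np.fft.irfft(spec, n=x.shape[-1], axis=-1)

# Fixed, row-normalized symmetric adjacency (a real DGCNN learns this)
adj = rng.random((n_channels, n_channels))
adj = (adj + adj.T) / 2
adj /= adj.sum(axis=1, keepdims=True)

def graph_conv(x, adj):
    """One graph-convolution step: each channel aggregates its
    neighbors' signals via the adjacency, then a nonlinearity."""
    return np.tanh(adj @ x)

def causal_conv(x, kernel, dilation=1):
    """Dilated causal 1-D convolution along time (TCN building block)."""
    pad = (len(kernel) - 1) * dilation
    xp = np.pad(x, ((0, 0), (pad, 0)))  # left-pad so output is causal
    t = x.shape[-1]
    out = np.zeros_like(x)
    for i, w in enumerate(kernel):
        out += w * xp[:, pad - i * dilation : pad - i * dilation + t]
    return out

# 1) Filter into eight sub-bands; 2) one graph-conv stage per band
band_feats = [graph_conv(bandpass(eeg, lo, hi, fs), adj) for lo, hi in bands]

# 3) Concatenate the per-band outputs along the channel axis
concat = np.concatenate(band_feats, axis=0)  # shape (8 * n_channels, n_samples)

# 4) TCN stage: dilated causal convolution, then mean-pool over time
tcn_out = causal_conv(concat, kernel=rng.standard_normal(3), dilation=2)
feat = tcn_out.mean(axis=-1)

# 5) Linear classifier head with softmax (random weights, illustration only)
logits = rng.standard_normal((n_classes, feat.size)) @ feat
probs = np.exp(logits - logits.max())
probs /= probs.sum()
```

In a trained model the adjacency, convolution kernels, and classifier weights would all be learned; the sketch only shows how the tensor shapes flow through the four stages the abstract names.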
Source
Computer Science and Application (《计算机科学与应用》), 2024, No. 4, pp. 268-275 (8 pages)