Journal articles: 2 results found
1. EEG Emotion Recognition Using an Attention Mechanism Based on an Optimized Hybrid Model (cited by 2)
Authors: Huiping Jiang, Demeng Wu, Xingqun Tang, Zhongjie Li, Wenbo Wu. Computers, Materials & Continua (SCIE, EI), 2022, Issue 11, pp. 2697-2712.
Emotions serve various functions. Traditional emotion recognition methods are based primarily on readily accessible facial expressions, gestures, and voice signals. However, it is often challenging to ensure that these non-physical signals are valid and reliable in practical applications. Electroencephalogram (EEG) signals are more successful than other signal recognition methods in recognizing these characteristics in real time, since they are difficult to camouflage. Although EEG signals are commonly used in current emotion recognition research, the accuracy is low when using traditional methods. Therefore, this study presented an optimized hybrid pattern with an attention mechanism (FFT_CLA) for EEG emotion recognition. First, the EEG signal was processed via the fast Fourier transform (FFT), after which the convolutional neural network (CNN), long short-term memory (LSTM), and CNN-LSTM-attention (CLA) methods were used to extract and classify the EEG features. Finally, the experiments compared and analyzed the recognition results of three models on the DEAP dataset, namely FFT_CNN, FFT_LSTM, and FFT_CLA. The final experimental results indicated that the recognition rates of the FFT_CNN, FFT_LSTM, and FFT_CLA models on the DEAP dataset were 87.39%, 88.30%, and 92.38%, respectively. The FFT_CLA model improved the accuracy of EEG emotion recognition and used the attention mechanism to address the often-ignored importance of different channels and samples when extracting EEG features.
Keywords: emotion recognition, EEG signal, optimized hybrid model, attention mechanism
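The pipeline this abstract describes (FFT preprocessing, then attention-weighted feature extraction) can be sketched in NumPy. This is a minimal illustration, not the authors' implementation: the band edges, the score vector `w`, and the softmax pooling scheme are all assumptions made for demonstration.

```python
import numpy as np

def fft_band_powers(eeg, fs=128, bands=((4, 8), (8, 13), (13, 30), (30, 45))):
    """Per-channel spectral power in a few EEG bands via the FFT.

    eeg: array of shape (channels, samples); fs: sampling rate in Hz
    (DEAP EEG is distributed downsampled to 128 Hz)."""
    n = eeg.shape[-1]
    freqs = np.fft.rfftfreq(n, d=1.0 / fs)
    power = np.abs(np.fft.rfft(eeg, axis=-1)) ** 2 / n
    return np.stack(
        [power[:, (freqs >= lo) & (freqs < hi)].sum(axis=-1) for lo, hi in bands],
        axis=-1,
    )

def channel_attention(features, w):
    """Score each channel, softmax to weights, and pool: informative
    channels dominate the pooled feature vector."""
    scores = features @ w
    scores = scores - scores.max()                 # numerical stability
    alpha = np.exp(scores) / np.exp(scores).sum()  # attention weights, sum to 1
    return alpha, alpha @ features                 # (channels,), pooled (bands,)

rng = np.random.default_rng(0)
eeg = rng.standard_normal((32, 128))               # 32 channels, 1 s of signal
feats = fft_band_powers(eeg)                       # (32, 4) band-power features
alpha, pooled = channel_attention(feats, rng.standard_normal(4))
print(feats.shape, pooled.shape)
```

In the paper's actual model the attention weights are learned jointly with the CNN and LSTM layers; the fixed random `w` here only shows the mechanism of weighting channels before pooling.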
2. Emotion Analysis: Bimodal Fusion of Facial Expressions and EEG (cited by 1)
Authors: Huiping Jiang, Rui Jiao, Demeng Wu, Wenbo Wu. Computers, Materials & Continua (SCIE, EI), 2021, Issue 8, pp. 2315-2327.
With the rapid development of deep learning and artificial intelligence, affective computing, as a branch field, has attracted increasing research attention. Human emotions are diverse and are directly expressed via physiological indicators, such as electroencephalogram (EEG) signals. However, whether expression-based or EEG-based, these remain single modes of emotion recognition. Multi-mode fusion emotion recognition can improve accuracy by utilizing feature diversity and correlation. Therefore, three different models were established: the single-mode EEG-long short-term memory (LSTM) model, the Facial-LSTM model that uses facial expressions when processing EEG data, and the multi-mode LSTM-convolutional neural network (CNN) model that combines expressions and EEG. Their average classification accuracies were 86.48%, 89.42%, and 93.13%, respectively. Compared with the EEG-LSTM model, the Facial-LSTM model improved by about 3%. This indicated that the expression mode helped eliminate EEG signals that contained few or no emotional features, enhancing emotion recognition accuracy. Compared with the Facial-LSTM model, the classification accuracy of the LSTM-CNN model improved by 3.7%, showing that the addition of facial expressions enriched the EEG features to a certain extent. Therefore, using features from multiple modes for emotion recognition conforms to how humans express emotion, and it improves feature diversity to facilitate further emotion recognition research.
Keywords: single-mode and multi-mode, expressions and EEG, deep learning, LSTM
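The single-mode versus multi-mode comparison above comes down to what feature vector the classifier sees. A minimal sketch of feature-level fusion, assuming per-trial EEG and facial feature vectors have already been extracted (the shapes and names below are illustrative, not taken from the paper):

```python
import numpy as np

def fuse_features(eeg_feat, face_feat):
    """Concatenate per-trial EEG and facial-expression features into a single
    vector, so a downstream classifier can exploit correlation between modes."""
    return np.concatenate([eeg_feat, face_feat], axis=-1)

rng = np.random.default_rng(1)
eeg_feat = rng.standard_normal(160)    # e.g. 32 channels x 5 band powers
face_feat = rng.standard_normal(64)    # e.g. a CNN embedding of the face crop
fused = fuse_features(eeg_feat, face_feat)
print(fused.shape)                     # one combined vector for the classifier
```

Concatenation is only one simple fusion scheme; the paper's LSTM-CNN model learns the combination end to end, but the intuition is the same: the fused representation carries information that neither mode provides alone.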