Multimodal emotion recognition method based on multiscale convolution and self-attention feature fusion
Abstract  Emotion recognition based on physiological signals is affected by noise and other factors, resulting in low accuracy and weak cross-individual generalization. To address this, a multimodal emotion recognition method based on ElectroEncephaloGram (EEG), ElectroCardioGram (ECG), and eye movement signals was proposed. First, multi-scale convolution was applied to the physiological signals to obtain higher-dimensional signal features while reducing the number of parameters. Second, a self-attention mechanism was employed in the fusion of the multimodal signal features to increase the weights of key features and reduce interference between modalities. Finally, a Bi-directional Long Short-Term Memory (Bi-LSTM) network was used to extract temporal information from the fused features and perform classification. Experimental results show that the proposed method achieves recognition accuracies of 90.29%, 91.38%, and 83.53% on the valence, arousal, and valence/arousal four-class tasks, respectively, improvements of 3.46-7.11 and 0.92-3.15 percentage points over the EEG single-modality and EEG+ECG bimodal methods. The proposed method recognizes emotions accurately and shows better recognition stability across individuals.
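The abstract describes a three-stage pipeline: per-modality multi-scale convolution, self-attention fusion across modalities, and a Bi-LSTM classifier. The following is a minimal PyTorch sketch of that structure, not the authors' released code; the channel counts, kernel scales, hidden sizes, and input shapes are illustrative assumptions only.

```python
# Hypothetical sketch of the described architecture (assumed layer sizes and shapes).
import torch
import torch.nn as nn


class MultiScaleConv(nn.Module):
    """Parallel 1D convolutions with different kernel sizes over one modality."""
    def __init__(self, in_channels, out_channels, kernel_sizes=(3, 5, 7)):
        super().__init__()
        self.branches = nn.ModuleList([
            nn.Sequential(
                nn.Conv1d(in_channels, out_channels, k, padding=k // 2),
                nn.BatchNorm1d(out_channels),
                nn.ReLU(),
            )
            for k in kernel_sizes
        ])

    def forward(self, x):                          # x: (batch, channels, time)
        return torch.cat([b(x) for b in self.branches], dim=1)


class FusionEmotionNet(nn.Module):
    """Multi-scale conv per modality -> self-attention fusion -> Bi-LSTM -> classifier."""
    def __init__(self, eeg_ch=32, ecg_ch=2, eye_ch=4, feat=16, num_classes=4):
        super().__init__()
        self.eeg_conv = MultiScaleConv(eeg_ch, feat)
        self.ecg_conv = MultiScaleConv(ecg_ch, feat)
        self.eye_conv = MultiScaleConv(eye_ch, feat)
        d_model = feat * 3 * 3                     # 3 modalities x 3 kernel scales
        self.attn = nn.MultiheadAttention(d_model, num_heads=4, batch_first=True)
        self.bilstm = nn.LSTM(d_model, 64, batch_first=True, bidirectional=True)
        self.classifier = nn.Linear(64 * 2, num_classes)

    def forward(self, eeg, ecg, eye):              # each: (batch, channels, time)
        fused = torch.cat([self.eeg_conv(eeg),
                           self.ecg_conv(ecg),
                           self.eye_conv(eye)], dim=1)    # (batch, d_model, time)
        fused = fused.transpose(1, 2)              # (batch, time, d_model) for attention
        fused, _ = self.attn(fused, fused, fused)  # self-attention re-weights fused features
        out, _ = self.bilstm(fused)                # Bi-LSTM over the time dimension
        return self.classifier(out[:, -1])         # classify from the last time step


# Example forward pass with random tensors shaped (batch, channels, samples).
model = FusionEmotionNet()
logits = model(torch.randn(8, 32, 128), torch.randn(8, 2, 128), torch.randn(8, 4, 128))
print(logits.shape)                                # torch.Size([8, 4])
```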
Authors  CHEN Tian; CAI Conghu; YUAN Xiaohui; LUO Beibei (School of Computer Science and Information Engineering, Hefei University of Technology, Hefei, Anhui 230009, China; Intelligent Interconnected Systems Laboratory of Anhui Province, Hefei, Anhui 230009, China; Anhui Province Key Laboratory of Affective Computing and Advanced Intelligent Machine, Hefei, Anhui 230009, China; Department of Computer Science and Engineering, University of North Texas, Denton, Texas 76207, USA)
Source  Journal of Computer Applications (CSCD; Peking University Core Journal), 2024, Issue 2, pp. 369-376 (8 pages)
Funding  National Natural Science Foundation of China (62174048, 62027815).
Keywords  ElectroEncephaloGram (EEG); self-attention; ElectroCardioGram (ECG); eye movement; multimodal; emotion recognition