Abstract
Aiming at the problems of low recognition accuracy and poor generalization ability in current multimodal emotion recognition algorithms, particularly in modal feature extraction and inter-modal information fusion, a multimodal emotion recognition algorithm based on speech, text, and facial expression is proposed. First, a shallow feature extraction network (Sfen) combined with a parallel convolution module (Pconv) is designed to extract emotional features from speech and text, and a modified Inception-ResnetV2 model is adopted to capture expression features from video sequences. Second, to strengthen the correlation among modalities, a cross-attention module is designed to optimize the fusion of the speech and text modalities. Finally, a bidirectional long short-term memory module with an attention mechanism (BiLSTM-Attention) is used to focus on key information and maintain the temporal correlation between modalities. Comparing different combinations of the three modalities shows that the hierarchical fusion strategy, which fuses speech and text features in advance, significantly improves recognition accuracy. Experimental results on the public emotion datasets CH-SIMS and CMU-MOSI show that the proposed model achieves higher recognition accuracy than the baseline models, with three-class and binary classification accuracy reaching 97.82% and 98.18%, respectively, demonstrating the effectiveness of the model.
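The abstract names three fusion-related components without specifying their internals: the Pconv parallel convolution module, the cross-attention module fusing speech and text, and the BiLSTM-Attention module. The following PyTorch sketch is illustrative only, assuming common designs for each (parallel 1-D convolutions with several kernel sizes, bidirectional multi-head cross attention with residual connections, and additive attention pooling over BiLSTM states); all dimensions, kernel sizes, and class names (Pconv, CrossAttentionFusion, BiLSTMAttention) are invented for the example and are not taken from the paper. Sfen and the Inception-ResnetV2 expression branch are omitted.

```python
# Illustrative sketch of the fusion pipeline described in the abstract.
# Every dimension, kernel size, and class name here is an assumption.
import torch
import torch.nn as nn


class Pconv(nn.Module):
    """Parallel 1-D convolutions over a feature sequence (assumed design)."""

    def __init__(self, d_model: int = 128, kernels=(3, 5, 7)):
        super().__init__()
        # One branch per kernel size; same-length padding keeps seq_len fixed.
        self.branches = nn.ModuleList(
            nn.Conv1d(d_model, d_model, k, padding=k // 2) for k in kernels
        )
        self.proj = nn.Conv1d(len(kernels) * d_model, d_model, kernel_size=1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, seq, d_model); Conv1d expects (batch, channels, seq).
        x = x.transpose(1, 2)
        y = torch.cat([b(x) for b in self.branches], dim=1)
        return self.proj(y).transpose(1, 2)


class CrossAttentionFusion(nn.Module):
    """Fuse speech and text sequences by attending in both directions."""

    def __init__(self, d_model: int = 128, n_heads: int = 4):
        super().__init__()
        self.speech_to_text = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.text_to_speech = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.norm = nn.LayerNorm(d_model)

    def forward(self, speech: torch.Tensor, text: torch.Tensor) -> torch.Tensor:
        # speech, text: (batch, seq_len, d_model)
        s_att, _ = self.speech_to_text(query=speech, key=text, value=text)
        t_att, _ = self.text_to_speech(query=text, key=speech, value=speech)
        # Residual connections keep the original unimodal information.
        return self.norm(torch.cat([speech + s_att, text + t_att], dim=1))


class BiLSTMAttention(nn.Module):
    """BiLSTM over the fused sequence with additive attention pooling."""

    def __init__(self, d_model: int = 128, hidden: int = 64, n_classes: int = 3):
        super().__init__()
        self.bilstm = nn.LSTM(d_model, hidden, batch_first=True, bidirectional=True)
        self.attn = nn.Linear(2 * hidden, 1)  # scores each time step
        self.classifier = nn.Linear(2 * hidden, n_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        h, _ = self.bilstm(x)                         # (batch, seq, 2*hidden)
        weights = torch.softmax(self.attn(h), dim=1)  # attention over time
        pooled = (weights * h).sum(dim=1)             # weighted sum of states
        return self.classifier(pooled)


if __name__ == "__main__":
    speech = Pconv()(torch.randn(2, 50, 128))  # dummy speech features
    text = Pconv()(torch.randn(2, 30, 128))    # dummy text features
    fused = CrossAttentionFusion()(speech, text)
    logits = BiLSTMAttention()(fused)
    print(logits.shape)                        # torch.Size([2, 3])
```

The toy forward pass mirrors the fusion order described in the abstract: speech and text features are fused first via cross attention, and the fused sequence is then pooled and classified by BiLSTM-Attention; the video expression branch would join the pipeline alongside these modalities in the full model.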
Authors
WU Xiao; MOU Xuan; LIU Yinhua; LIU Xiaorui (Automation School, Qingdao University, Qingdao 266071, China; Institute of Future, Qingdao University, Qingdao 266071, China; Shandong Key Laboratory of Industrial Control Technology, Qingdao 266071, China)
Source
Journal of Northwest University (Natural Science Edition)
Indexed in: CAS, CSCD, Peking University Core Journals
2024, No. 2, pp. 177-187 (11 pages)
Funding
National Key R&D Program of China, "Intelligent Robots" Special Project (2020YFB1313600)
Qingdao Natural Science Foundation (23-2-1-126-zyyd-jch)
Shandong Provincial Support Program for Outstanding Youth Innovation Teams in Higher Education Institutions (2022KJ142)
Keywords
multimodal
emotion recognition
parallel convolution
cross attention