Fusion of coordinate and multi-head attention mechanisms for interactive speech emotion recognition
Abstract  Speech Emotion Recognition (SER) is an important and challenging task in human-computer interaction systems. To address the issues of single-feature representation and weak feature interaction in current SER systems, a Multi-input Interactive Attention Network (MIAN) was proposed. The network consists of two sub-networks: a specific-feature coordinate residual attention network and a shared-feature multi-head attention network. The former utilized Res2Net and coordinate attention modules to learn specific features extracted from raw speech and to generate multi-scale feature representations, enhancing the model's ability to represent emotion-related information. The latter fused the features obtained from the forward network into shared features, which were fed through a Bidirectional Long Short-Term Memory (BiLSTM) network into a multi-head attention module; this allows the model to attend simultaneously to relevant information in different feature subspaces, strengthening the interaction among features and capturing highly discriminative features. Through the collaboration of the two sub-networks, the diversity of features was increased and the interaction capability among features was improved. During training, a dual-loss function was applied for joint supervision, making samples of the same class more compact and samples of different classes more separated. Experimental results demonstrate that MIAN achieves weighted average accuracies of 91.43% on the EMO-DB corpus and 76.33% on the IEMOCAP corpus, exhibiting better classification performance than other state-of-the-art models.
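Since the paper itself is not reproduced on this page, the sketch below illustrates in PyTorch the three mechanisms the abstract names: coordinate attention for the specific-feature branch, BiLSTM followed by multi-head self-attention for the shared-feature branch, and dual-loss joint supervision. All layer sizes, the choice of cross-entropy plus center loss as the two losses, and the module names (CoordinateAttention, SharedFeatureBranch, CenterLoss) and the 0.01 loss weight are illustrative assumptions, not the authors' implementation.

```python
# Illustrative sketch only: hyper-parameters, module wiring, and the specific
# pair of losses are assumptions; they are not taken from the MIAN paper.
import torch
import torch.nn as nn
import torch.nn.functional as F


class CoordinateAttention(nn.Module):
    """Coordinate attention: global pooling factorized into two 1-D pools
    so the channel gates retain positional (time/frequency) information."""
    def __init__(self, channels, reduction=8):
        super().__init__()
        mid = max(channels // reduction, 8)
        self.conv1 = nn.Conv2d(channels, mid, kernel_size=1)
        self.bn = nn.BatchNorm2d(mid)
        self.conv_h = nn.Conv2d(mid, channels, kernel_size=1)
        self.conv_w = nn.Conv2d(mid, channels, kernel_size=1)

    def forward(self, x):                        # x: (B, C, H, W)
        _, _, h, w = x.shape
        x_h = x.mean(dim=3, keepdim=True)        # (B, C, H, 1), pool over W
        x_w = x.mean(dim=2, keepdim=True)        # (B, C, 1, W), pool over H
        y = torch.cat([x_h, x_w.transpose(2, 3)], dim=2)  # (B, C, H+W, 1)
        y = F.relu(self.bn(self.conv1(y)))
        y_h, y_w = torch.split(y, [h, w], dim=2)
        a_h = torch.sigmoid(self.conv_h(y_h))                  # (B, C, H, 1)
        a_w = torch.sigmoid(self.conv_w(y_w.transpose(2, 3)))  # (B, C, 1, W)
        return x * a_h * a_w                     # position-aware gating


class SharedFeatureBranch(nn.Module):
    """Shared-feature path: BiLSTM over the fused feature sequence, then
    multi-head self-attention across the resulting time steps."""
    def __init__(self, feat_dim, hidden=128, heads=4, num_classes=4):
        super().__init__()
        self.bilstm = nn.LSTM(feat_dim, hidden, batch_first=True,
                              bidirectional=True)
        self.mha = nn.MultiheadAttention(2 * hidden, heads, batch_first=True)
        self.fc = nn.Linear(2 * hidden, num_classes)

    def forward(self, x):                        # x: (B, T, feat_dim)
        h, _ = self.bilstm(x)                    # (B, T, 2*hidden)
        a, _ = self.mha(h, h, h)                 # heads cover different subspaces
        emb = a.mean(dim=1)                      # utterance-level embedding
        return self.fc(emb), emb


class CenterLoss(nn.Module):
    """Pulls embeddings toward a learned per-class center; paired with
    cross-entropy it compacts classes while keeping them separable."""
    def __init__(self, num_classes, dim):
        super().__init__()
        self.centers = nn.Parameter(torch.randn(num_classes, dim))

    def forward(self, emb, labels):
        return ((emb - self.centers[labels]) ** 2).sum(dim=1).mean()


if __name__ == "__main__":
    branch = SharedFeatureBranch(feat_dim=64)
    center = CenterLoss(num_classes=4, dim=256)  # 256 = 2 * hidden
    feats = torch.randn(8, 100, 64)              # 8 utterances, 100 frames
    labels = torch.randint(0, 4, (8,))
    logits, emb = branch(feats)
    # Dual-loss joint supervision; the 0.01 weight is an assumed value.
    loss = F.cross_entropy(logits, labels) + 0.01 * center(emb, labels)
    loss.backward()
    print(f"joint loss: {loss.item():.4f}")
```

The center-loss term is one standard way to realize "same-class samples more compact, different-class samples more separated"; the paper may pair cross-entropy with a different auxiliary loss.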
Authors  GAO Pengqi (高鹏淇), HUANG Heming (黄鹤鸣), FAN Yonghong (樊永红) — College of Computer, Qinghai Normal University, Xining, Qinghai 810008, China; The State Key Laboratory of Tibetan Intelligent Information Processing and Application, Xining, Qinghai 810008, China
Source  Journal of Computer Applications (《计算机应用》, CSCD, Peking University Core), 2024, No. 8, pp. 2400-2406 (7 pages)
Funding  National Natural Science Foundation of China (620660039); Natural Science Foundation of Qinghai Province (2022-ZJ-925); Programme of Introducing Talents of Discipline to Universities (111 Project) (D20035)
Keywords  Speech Emotion Recognition (SER); coordinate attention mechanism; multi-head attention mechanism; specific feature learning; shared feature learning
