Abstract
To fully learn the interaction information between the text and speech modalities, this paper proposes a bimodal emotion recognition method based on cross-modal interaction information. The method first learns unimodal text and speech representations with deep learning models, and then uses cross-modal information interaction to further obtain contextual information for each modality. Finally, experiments are conducted on an emotion dataset; compared with unimodal and bimodal emotion recognition baselines, the proposed model achieves better emotion classification performance on all evaluation metrics.
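The abstract does not specify the exact network architecture. As an illustrative sketch only, the snippet below shows one common way to realize cross-modal interaction between text and speech encoders, using BiLSTM unimodal encoders and bidirectional cross-attention; the layer types, feature dimensions, and the four-class output are assumptions for illustration, not details taken from the paper.

```python
# Minimal sketch of a bimodal (text + speech) emotion classifier with
# cross-modal interaction. All architectural choices below (BiLSTM encoders,
# multi-head cross-attention, mean pooling, 4 emotion classes) are assumed.
import torch
import torch.nn as nn


class CrossModalEmotionClassifier(nn.Module):
    def __init__(self, text_dim=300, speech_dim=128, hidden=128, n_classes=4):
        super().__init__()
        # Unimodal encoders learn per-modality contextual representations.
        self.text_enc = nn.LSTM(text_dim, hidden, batch_first=True, bidirectional=True)
        self.speech_enc = nn.LSTM(speech_dim, hidden, batch_first=True, bidirectional=True)
        # Cross-modal interaction: each modality attends to the other one.
        self.text_to_speech = nn.MultiheadAttention(2 * hidden, num_heads=4, batch_first=True)
        self.speech_to_text = nn.MultiheadAttention(2 * hidden, num_heads=4, batch_first=True)
        self.classifier = nn.Sequential(
            nn.Linear(8 * hidden, hidden), nn.ReLU(), nn.Linear(hidden, n_classes)
        )

    def forward(self, text_feats, speech_feats):
        t, _ = self.text_enc(text_feats)      # (B, Lt, 2*hidden)
        s, _ = self.speech_enc(speech_feats)  # (B, Ls, 2*hidden)
        # Text queries attend over speech frames, and vice versa.
        t_ctx, _ = self.text_to_speech(t, s, s)
        s_ctx, _ = self.speech_to_text(s, t, t)
        # Pool over time and fuse unimodal and interaction-enhanced features.
        fused = torch.cat([t.mean(1), t_ctx.mean(1), s.mean(1), s_ctx.mean(1)], dim=-1)
        return self.classifier(fused)


if __name__ == "__main__":
    model = CrossModalEmotionClassifier()
    text = torch.randn(2, 20, 300)    # 2 utterances, 20 tokens, 300-d word embeddings
    speech = torch.randn(2, 50, 128)  # 2 utterances, 50 frames, 128-d acoustic features
    print(model(text, speech).shape)  # torch.Size([2, 4])
```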
Authors
XIN Miaomiao, MA Li, HU Bofa
(School of Information Engineering, Hebei GEO University, Shijiazhuang 050031, China; Laboratory of Artificial Intelligence and Machine Learning, Hebei GEO University, Shijiazhuang 050031, China)
Source
Journal of Fujian Computer, 2022, No. 11, pp. 82-84 (3 pages)
Funding
Key Science and Technology Research Project of Higher Education Institutions of Hebei Province (No. ZD2018043)
Doctoral Fund of Hebei GEO University (No. BQ2017045)
Keywords
Bimodality
Emotion Recognition
Interaction Information