

Multimodal Sentiment Analysis Based on Cross-Modal Cross-Attention Network
Abstract: Exploiting intra-modal and inter-modal information helps improve the performance of multimodal sentiment analysis. To this end, a multimodal sentiment analysis method based on a cross-modal cross-attention network is proposed. First, a VGG-16 network maps the multimodal data into a global feature space; simultaneously, a Swin Transformer network maps the multimodal data into a local feature space. Next, intra-modal self-attention and inter-modal cross-attention features are constructed. Then, a cross-modal cross-attention fusion module is designed to achieve deep fusion of intra-modal and inter-modal features, improving the reliability of the multimodal feature representation. Finally, a softmax layer produces the sentiment prediction. On the two open-source datasets CMU-MOSI and CMU-MOSEI, the proposed model achieves accuracies of 45.9% and 54.1% respectively on the seven-class task, improvements of 0.66% and 2.46% over the current classical MCGMF model, a significant overall performance gain.
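The pipeline described above hinges on inter-modal cross-attention: features from one modality form the queries, and features from another modality supply the keys and values. As an illustration only (not the authors' implementation), here is a minimal NumPy sketch of single-head cross-attention with hypothetical feature dimensions; all variable names and sizes are assumptions for demonstration.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax over the given axis.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def cross_attention(q_feats, kv_feats, Wq, Wk, Wv):
    """One modality attends to another: queries come from q_feats,
    keys and values come from kv_feats (scaled dot-product attention)."""
    Q = q_feats @ Wq
    K = kv_feats @ Wk
    V = kv_feats @ Wv
    scores = Q @ K.T / np.sqrt(Q.shape[-1])
    return softmax(scores) @ V

# Hypothetical sizes: 5 text tokens, 7 image patches, 16-dim features.
rng = np.random.default_rng(0)
d_model, d_head = 16, 8
text = rng.standard_normal((5, d_model))
image = rng.standard_normal((7, d_model))
Wq, Wk, Wv = (rng.standard_normal((d_model, d_head)) for _ in range(3))

# Text queries attend over image keys/values: output has one row per text token.
text_attends_image = cross_attention(text, image, Wq, Wk, Wv)
print(text_attends_image.shape)  # (5, 8)
```

Setting `q_feats = kv_feats` recovers ordinary intra-modal self-attention, which is why the two attention types in the paper can share one mechanism.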
Authors: WANG Xuyang; WANG Changrui; ZHANG Jinfeng; XING Mengyi (School of Computer and Communication, Lanzhou University of Technology, Lanzhou, Gansu 730050, China; School of Mechanical and Electrical Engineering, Lanzhou University of Technology, Lanzhou, Gansu 730050, China)
Source: Journal of Guangxi Normal University (Natural Science Edition), 2024, No. 2, pp. 84-93 (indexed in CAS and the PKU Chinese Core Journals list)
Funding: National Natural Science Foundation of China (62161019).
Keywords: sentiment analysis; multimodal; cross-modal cross-attention; self-attention; local and global features
