
Cross-Modal Complementary Network with Hierarchical Fusion for Multimodal Sentiment Classification (Cited by: 4)

Abstract: Multimodal Sentiment Classification (MSC) uses multimodal data, such as images and texts, to identify users' sentiment polarities from the information they post on the Internet. MSC has attracted considerable attention because of its wide applications in social computing and opinion mining. However, improper correlation strategies can cause erroneous fusion, as texts and images that are unrelated to each other may be integrated. Moreover, simply concatenating them modality by modality, even with true correlation, cannot fully capture the features within and between modalities. To solve these problems, this paper proposes a Cross-Modal Complementary Network (CMCN) with hierarchical fusion for MSC. The CMCN is designed as a hierarchical structure with three key modules: the feature extraction module, which extracts features from texts and images; the feature attention module, which learns both text and image attention features generated by an image-text correlation generator; and the cross-modal hierarchical fusion module, which fuses features within and between modalities. The CMCN thus provides a hierarchical fusion framework that can fully integrate different modal features and helps reduce the risk of integrating unrelated modal features. Extensive experimental results on three public datasets show that the proposed approach significantly outperforms state-of-the-art methods.
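The two ideas the abstract highlights, gating cross-modal fusion by an image-text correlation score and fusing hierarchically (within each modality first, then across modalities), can be illustrated with a minimal sketch. This is not the paper's actual CMCN implementation; the blending rule, the use of cosine similarity as a stand-in for the learned correlation generator, and all function names are assumptions for illustration only.

```python
import math

def cosine(u, v):
    """Cosine similarity, used here as a stand-in for the learned
    image-text correlation score produced by the correlation generator."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv) if nu and nv else 0.0

def fuse_intra(feat, attn_feat, alpha=0.5):
    """Intra-modal fusion: blend a modality's raw feature with its
    attention feature (hypothetical linear blending rule)."""
    return [alpha * f + (1 - alpha) * a for f, a in zip(feat, attn_feat)]

def fuse_cross(text_feat, image_feat):
    """Cross-modal fusion gated by the correlation score: when text and
    image are unrelated (low or negative score), the cross-modal term is
    down-weighted, reducing the risk of erroneous fusion."""
    rho = max(cosine(text_feat, image_feat), 0.0)  # unrelated pairs contribute nothing
    return [t + rho * i for t, i in zip(text_feat, image_feat)]

# Toy run: fuse within each modality first, then across modalities.
text = fuse_intra([1.0, 0.0], [0.8, 0.2])    # -> [0.9, 0.1]
image = fuse_intra([0.9, 0.1], [0.7, 0.3])   # -> [0.8, 0.2]
fused = fuse_cross(text, image)
```

The gating step is the key point: with orthogonal (fully unrelated) features, `fuse_cross` returns the text feature unchanged, whereas well-correlated pairs are integrated almost fully.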
Source: Tsinghua Science and Technology (SCIE, EI, CAS, CSCD), 2022, Issue 4, pp. 664-679.
Funding: Supported by the National Key Research and Development Program of China (No. 2020AAA0104903).