Abstract: Recent years have witnessed the rapid spread of multi-modality microblogs such as Twitter and Sina Weibo, whose posts combine image, text, and emoticons. Visual sentiment prediction on such microblog-based social media has attracted ever-increasing research attention and has broad application prospects. In this paper, we give a systematic review of recent advances and cutting-edge techniques for visual sentiment analysis, providing a detailed comparison as well as an experimental evaluation of the state-of-the-art methods. We further discuss future trends and potential directions for visual sentiment prediction.
Funding: Project supported by the Key Project of the National Natural Science Foundation of China (No. U1836220), the National Natural Science Foundation of China (No. 61672267), the Qing Lan Talent Program of Jiangsu Province, China, the Jiangsu Key Laboratory of Security Technology for Industrial Cyberspace, China, the Finnish Cultural Foundation, the Jiangsu Specially-Appointed Professor Program, China (No. 3051107219003), the Jiangsu Joint Research Project of Sino-Foreign Cooperative Education Platform, China, and the Talent Startup Project of Nanjing Institute of Technology, China (No. YKJ201982).
Abstract: Large-scale datasets are driving the rapid development of deep convolutional neural networks for visual sentiment analysis. However, the annotation of large-scale datasets is expensive and time-consuming. Instead, it is easy to obtain weakly labeled web images from the Internet. However, noisy labels still lead to severely degraded performance when such web images are used directly to train networks. To address this drawback, we propose an end-to-end weakly supervised learning network that is robust to mislabeled web images. Specifically, the proposed attention module automatically eliminates the distraction of samples with incorrect labels by reducing their attention scores during training. In addition, a special class activation map module is designed to guide the network to focus on the significant regions of correctly labeled samples in a weakly supervised manner. Beyond feature learning, a regularization term is applied to the classifier to minimize the distance between samples of the same class and maximize the distance between different class centroids. Quantitative and qualitative evaluations on well-labeled and mislabeled web image datasets demonstrate that the proposed algorithm outperforms related methods.
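As an illustration of the classifier regularization described above, the following minimal PyTorch sketch pulls features toward learnable class centroids and pushes the centroids apart; the class count, feature dimension, margin, and learnable-centroid parameterization are assumptions, and the paper's exact loss may differ.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class CentroidRegularizer(nn.Module):
    """Pull features toward their class centroid; push centroids apart.

    A generic sketch in the spirit of the regularization above,
    not the paper's exact formulation.
    """
    def __init__(self, num_classes: int, feat_dim: int, margin: float = 1.0):
        super().__init__()
        # Hypothetical learnable class centroids.
        self.centroids = nn.Parameter(torch.randn(num_classes, feat_dim))
        self.margin = margin

    def forward(self, feats: torch.Tensor, labels: torch.Tensor) -> torch.Tensor:
        # Intra-class term: squared distance of each sample to its own centroid.
        own = self.centroids[labels]                          # (B, D)
        intra = (feats - own).pow(2).sum(dim=1).mean()
        # Inter-class term: hinge penalty on centroid pairs closer than the margin.
        dists = torch.cdist(self.centroids, self.centroids)  # (C, C)
        off_diag = ~torch.eye(len(self.centroids), dtype=torch.bool,
                              device=dists.device)
        inter = F.relu(self.margin - dists[off_diag]).mean()
        return intra + inter
```

In training, this term would be added to the classification loss with a weighting coefficient, tightening clusters of correctly labeled samples while keeping class centroids separated.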
Funding: This paper is supported by the National Natural Science Foundation of China under contract Nos. 71774084 and 72274096, and the National Social Science Fund of China under contract Nos. 16ZDA224 and 17ZDA291.
Abstract: Purpose: Nowadays, public opinion during public emergencies involves not only text but also images. However, existing work focuses mainly on textual content, does not achieve satisfactory sentiment-analysis accuracy, and lacks the combination of multimodal content. In this paper, we propose to combine the texts and images generated on social media to perform sentiment analysis. Design/methodology/approach: We propose a Deep Multimodal Fusion Model (DMFM) that combines textual and visual sentiment analysis. We first train a word2vec model on a large-scale public emergency corpus to obtain semantically rich word vectors as the input of the textual branch. A BiLSTM is employed to generate encoded textual embeddings. To fully exploit the visual information in images, a modified pretrained VGG16-based sentiment analysis network is used with the best-performing fine-tuning strategy. A multimodal fusion method then fuses the textual and visual embeddings to produce the predicted labels. Findings: We performed extensive experiments on Weibo and Twitter public emergency datasets to evaluate the performance of the proposed model. Experimental results demonstrate that the DMFM achieves higher accuracy than the baseline models, and that introducing images boosts sentiment-analysis performance during public emergencies. Research limitations: In the future, we will test our model on wider datasets and consider better ways to learn multimodal fusion information. Practical implications: We build an efficient multimodal sentiment analysis model for social media content during public emergencies. Originality/value: We consider the images posted by online users during public emergencies on social platforms. The proposed method offers a novel scope for sentiment analysis during public emergencies and can support government decision-making when formulating policies.
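The described architecture lends itself to a late-fusion sketch. The PyTorch snippet below pairs a BiLSTM text encoder with pretrained VGG16 features and fuses them by concatenation; the dimensions and the concatenation-based fusion are assumptions rather than the paper's exact DMFM design.

```python
import torch
import torch.nn as nn
from torchvision import models

class DMFMSketch(nn.Module):
    """Minimal BiLSTM + VGG16 late-fusion classifier.

    Fusion by concatenation and all dimensions are assumptions;
    the paper's DMFM may fuse the modalities differently.
    """
    def __init__(self, embed_dim: int = 300, hidden: int = 128,
                 num_classes: int = 2):
        super().__init__()
        # Textual branch: word2vec vectors -> BiLSTM encoding.
        self.bilstm = nn.LSTM(embed_dim, hidden, batch_first=True,
                              bidirectional=True)
        # Visual branch: pretrained VGG16 with its final 1000-way layer dropped,
        # leaving a 4096-dim feature vector.
        vgg = models.vgg16(weights=models.VGG16_Weights.DEFAULT)
        self.visual = nn.Sequential(vgg.features, vgg.avgpool, nn.Flatten(),
                                    *list(vgg.classifier.children())[:-1])
        self.classifier = nn.Linear(2 * hidden + 4096, num_classes)

    def forward(self, word_vecs: torch.Tensor, images: torch.Tensor):
        # word_vecs: (B, T, embed_dim) pretrained word2vec embeddings.
        _, (h, _) = self.bilstm(word_vecs)
        text_feat = torch.cat([h[-2], h[-1]], dim=1)  # forward + backward states
        img_feat = self.visual(images)                # (B, 4096)
        fused = torch.cat([text_feat, img_feat], dim=1)
        return self.classifier(fused)
```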
Abstract: Targeted multimodal sentiment classification (TMSC) aims to identify the sentiment polarity of a target mentioned in a multimodal post. Most current studies on this task map the image and the text into a high-dimensional space to obtain and fuse implicit representations, ignoring the rich semantic information contained in images and the contribution of the visual modality to the multimodal fusion representation, both of which can influence TMSC results. This paper proposes a general model, Improving Targeted Multimodal Sentiment Classification with Semantic Description of Images (ITMSC), to tackle these issues and improve the accuracy of multimodal sentiment analysis. Specifically, the ITMSC model automatically adjusts the contribution of images in the fusion representation by exploiting semantic descriptions of images and text-similarity relations. Further, we propose a target-based attention module to capture target-text relevance, an image-based attention module to capture image-text relevance, and a target-image matching module built on the former two to properly align the target with the image so that fine-grained semantic information can be extracted. Experimental results demonstrate that our model achieves performance comparable to several state-of-the-art approaches on two multimodal sentiment datasets. Our findings indicate that incorporating semantic descriptions of images can deepen the understanding of multimodal content and improve sentiment analysis performance.
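One plausible way to realize the caption-driven adjustment of the visual contribution is a similarity gate, sketched below in PyTorch; the shared embedding space and the cosine-similarity gating are assumptions for illustration, not the ITMSC model's actual mechanism.

```python
import torch
import torch.nn.functional as F

def similarity_gated_fusion(text_feat: torch.Tensor,
                            img_feat: torch.Tensor,
                            caption_feat: torch.Tensor) -> torch.Tensor:
    """Weight visual features by how well the image's semantic
    description (caption) matches the post text.

    All three inputs are assumed to be (B, D) embeddings in a
    shared space; this is a sketch, not the paper's exact design.
    """
    # Cosine similarity between post text and image caption, rescaled to [0, 1].
    gate = (F.cosine_similarity(text_feat, caption_feat, dim=1) + 1) / 2
    gated_img = gate.unsqueeze(1) * img_feat   # down-weight mismatched images
    return torch.cat([text_feat, gated_img], dim=1)
```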
Abstract: To better exploit the semantic information carried by word part-of-speech tags and the non-verbal contextual information that accompanies spoken words, we propose the part-of-speech weighted multi-modal sentiment analysis model with dynamic semantics adjustment (PW-DS). The model takes natural language as the primary modality and uses the bidirectional encoder representation from Transformers (BERT), generalized autoregressive pretraining for language understanding (XLNet), and the robustly optimized BERT pretraining approach (RoBERTa) to produce word embeddings for the text modality. A dynamic semantics adjustment module is created to effectively combine verbal and non-verbal information, and a part-of-speech weighting module extracts and weights word POS tags to refine sentiment discrimination. Comparative experiments against state-of-the-art models such as the tensor fusion network and low-rank multimodal fusion show that PW-DS achieves mean absolute errors of 0.607 and 0.510 and binary classification accuracies of 89.02% and 86.93% on the public CMU-MOSI and CMU-MOSEI datasets, respectively, outperforming the compared models. Ablation experiments analyze the influence of each module and verify the model's effectiveness.
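The part-of-speech weighting module can be pictured as a learnable per-tag scalar applied to token embeddings, as in the PyTorch sketch below; the tag-set size and the initialization are assumptions rather than the paper's specification.

```python
import torch
import torch.nn as nn

class POSWeighting(nn.Module):
    """Scale each token embedding by a learnable weight for its POS tag.

    A minimal sketch of the part-of-speech weighting idea; the tag-set
    size (e.g. the 17 Universal POS tags) is an assumption.
    """
    def __init__(self, num_pos_tags: int = 17):
        super().__init__()
        # One learnable scalar weight per POS tag, initialized to 1.
        self.pos_weight = nn.Parameter(torch.ones(num_pos_tags))

    def forward(self, token_embeds: torch.Tensor,
                pos_ids: torch.Tensor) -> torch.Tensor:
        # token_embeds: (B, T, D) contextual embeddings (e.g. from BERT);
        # pos_ids: (B, T) integer POS-tag ids for each token.
        w = self.pos_weight[pos_ids].unsqueeze(-1)  # (B, T, 1)
        return token_embeds * w
```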
Abstract: To study how microblog users express sentiment, we start from two kinds of text, personalized emotional expression and attitudes toward social events, analyzing individual sentiment changes as well as user emotions around trending events, and we design and implement a sentiment visualization system for microblog (SVSM). The personal-sentiment component records a user's emotional fluctuations along a timeline and analyzes individual differences in sentiment by gender and region; the trending-event component monitors the collective expression of user emotions and characterizes it from the perspectives of time, space, hot words, user attributes, event attributes, and propagation.