Funding: Supported by the Science and Technology Project of Henan Province (No. 222102210081).
Abstract: Joint Multimodal Aspect-based Sentiment Analysis (JMASA) is a significant task in multimodal fine-grained sentiment analysis that combines two subtasks: Multimodal Aspect Term Extraction (MATE) and Multimodal Aspect-oriented Sentiment Classification (MASC). Most existing JMASA models encode text and image features only at a basic level and neglect in-depth analysis of the intrinsic features within each modality; this insufficient intra-modal learning can lead to low accuracy in aspect term extraction and weak sentiment prediction. To address this problem, we propose a Text-Image Feature Fine-grained Learning (TIFFL) model for JMASA. First, we construct an enhanced adjacency matrix of word dependencies and adopt a graph convolutional network to learn syntactic structure features of the text, which alleviates the context interference that arises when identifying different aspect terms. Then, adjective-noun pairs extracted from the image are introduced to make the semantic representation of visual features more explicit, which addresses the ambiguity that arises during image feature learning. As a result, both aspect term extraction and sentiment polarity prediction are further improved. Experiments on two Twitter benchmark datasets demonstrate that TIFFL achieves competitive results on JMASA, MATE, and MASC, validating the effectiveness of the proposed methods.
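To make the dependency-graph step described above concrete, the following PyTorch-style sketch shows one graph-convolution layer over a word-dependency adjacency matrix. It illustrates the general technique only; the class name, normalization, and tensor shapes are assumptions for illustration and are not taken from the TIFFL paper, whose enhanced adjacency construction is not reproduced here.

```python
import torch
import torch.nn as nn

class DependencyGCNLayer(nn.Module):
    """One graph-convolution step over a word-dependency adjacency matrix (illustrative)."""
    def __init__(self, dim):
        super().__init__()
        self.linear = nn.Linear(dim, dim)

    def forward(self, h, adj):
        # h:   (batch, seq_len, dim) token representations
        # adj: (batch, seq_len, seq_len) dependency adjacency (1 where two words are linked)
        adj = adj + torch.eye(adj.size(-1), device=adj.device)   # add self-loops
        deg = adj.sum(dim=-1, keepdim=True).clamp(min=1.0)       # node degrees for normalization
        return torch.relu(torch.bmm(adj / deg, self.linear(h)))  # aggregate neighbor features
```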
Funding: Supported by the National Natural Science Foundation of China under Grant 61976158 and Grant 61673301.
Abstract: The Aspect-Based Sentiment Analysis (ABSA) task aims to judge the sentiment polarity of a particular aspect in a review. Recent studies have shown that graph convolutional networks (GCNs) can capture syntactic and semantic features from dependency graphs generated by dependency trees and from semantic graphs generated by multi-head self-attention (MHSA). However, these approaches do not highlight the sentiment information associated with the aspect in the syntactic and semantic graphs. We propose Aspect-Guided Multi-Graph Convolutional Networks (AGGCN) for aspect-based sentiment classification. Specifically, we reconstruct the two kinds of graphs: the edge weights of the dependency graph are adjusted according to each word's distance from the aspect, and the semantic graph is improved by aspect-guided MHSA. For interactive learning of syntax and semantics, we dynamically fuse the syntactic and semantic graphs into syntactic-semantic graphs to learn sentiment features jointly. In addition, multi-dropout is added to mitigate overfitting of AGGCN during training. Experimental results on extensive datasets show that AGGCN achieves highly competitive results and validate the effectiveness of the model.
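The distance-based reweighting of the dependency graph mentioned above can be sketched as follows. This is a minimal illustration of the idea of down-weighting edges far from the aspect term; the function name, the decay formula, and the alpha parameter are assumptions and may differ from the weighting actually used in AGGCN.

```python
import torch

def reweight_by_aspect_distance(adj, aspect_positions, alpha=0.5):
    # adj: (n, n) dependency adjacency for one sentence
    # aspect_positions: list of token indices belonging to the aspect term
    n = adj.size(0)
    idx = torch.arange(n, dtype=torch.float)
    # distance from each token to its nearest aspect token
    dist = torch.stack([(idx - p).abs() for p in aspect_positions]).min(dim=0).values
    weights = 1.0 / (1.0 + alpha * dist)          # closer to the aspect -> larger weight
    return adj * weights.unsqueeze(0) * weights.unsqueeze(1)  # scale both endpoints of every edge
```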
Funding: Supported by the National Key Research and Development Program of China (No. 2018YFB1702601).
Abstract: In most existing research on aspect category sentiment analysis, the aspects are given in advance for sentiment extraction; this pipeline approach is prone to error accumulation. Moreover, graph convolutional networks used for aspect category sentiment analysis do not fully exploit the dependency-type information between words and therefore cannot enhance feature extraction. This paper proposes an end-to-end aspect category sentiment analysis (ETESA) model based on type graph convolutional networks. The model uses the Bidirectional Encoder Representations from Transformers (BERT) pretrained model to obtain aspect categories and word vectors carrying contextual, dynamic semantic information, which resolves polysemy. When a graph convolutional network (GCN) is used for feature extraction, fusing the word vectors with initialized dependency-type embeddings yields importance values for different dependency types and enhances the text feature representation. By transforming aspect category and sentiment pair extraction into multiple single-label classification problems, aspect categories and sentiments can be extracted simultaneously in an end-to-end way, which avoids error accumulation. Experiments on three public datasets show that the ETESA model achieves higher precision, recall, and F1 scores, proving its effectiveness.
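One way to picture the fusion of word vectors with dependency-type embeddings is the sketch below, where each edge's message is gated by a score computed from the neighbor's word vector and an embedding of the edge's dependency type. The gating mechanism, names, and shapes are illustrative assumptions, not the exact ETESA layer.

```python
import torch
import torch.nn as nn

class TypedGCNLayer(nn.Module):
    """GCN layer whose edge messages are gated by dependency-type embeddings (illustrative)."""
    def __init__(self, dim, num_dep_types):
        super().__init__()
        self.type_emb = nn.Embedding(num_dep_types, dim)
        self.w = nn.Linear(dim, dim)
        self.gate = nn.Linear(2 * dim, 1)

    def forward(self, h, adj, dep_type):
        # h: (n, dim) word vectors, adj: (n, n) 0/1 adjacency, dep_type: (n, n) type id per edge
        t = self.type_emb(dep_type)                                  # (n, n, dim) type embeddings
        neighbor = h.unsqueeze(0).expand_as(t)                       # (n, n, dim) neighbor word vectors
        score = torch.sigmoid(self.gate(torch.cat([neighbor, t], -1))).squeeze(-1)  # edge importance
        weighted = adj * score
        deg = weighted.sum(dim=-1, keepdim=True).clamp(min=1e-6)
        return torch.relu((weighted / deg) @ self.w(h))              # type-aware aggregation
```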
Funding: This work was supported by the National Natural Science Foundation of China (No. 61976247).
Abstract: Aspect-based sentiment analysis (ABSA) consists of two subtasks: aspect term extraction and aspect sentiment prediction. Most methods handle the subtasks in a pipeline manner, which raises problems in both performance and real-world application. In this study, we propose an end-to-end ABSA model, SSi-LSi, which fuses syntactic structure information and lexical semantic information to address the limitation that existing end-to-end methods do not fully exploit the text information. Through two network branches, the model extracts syntactic structure information and lexical semantic information, integrating part of speech, sememes, and context, respectively. Then, on the basis of an attention mechanism, the model fuses the syntactic structure information and the lexical semantic information to obtain higher-quality ABSA results, so that the text information is fully used. Experiments demonstrate that the SSi-LSi model has advantages in using different kinds of text information.
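The attention-based fusion of the two branches can be sketched as a per-token weighting over the syntactic and lexical-semantic representations, as below. This is a minimal sketch under assumed shapes; the class name and scoring function are illustrative and do not come from the SSi-LSi paper.

```python
import torch
import torch.nn as nn

class TwoBranchFusion(nn.Module):
    """Attention-weighted fusion of a syntactic branch and a lexical-semantic branch (illustrative)."""
    def __init__(self, dim):
        super().__init__()
        self.score = nn.Linear(dim, 1)

    def forward(self, h_syn, h_sem):
        # h_syn, h_sem: (batch, seq_len, dim) representations from the two network branches
        stacked = torch.stack([h_syn, h_sem], dim=2)          # (batch, seq_len, 2, dim)
        weights = torch.softmax(self.score(stacked), dim=2)   # per-token weight for each branch
        return (weights * stacked).sum(dim=2)                 # fused (batch, seq_len, dim)
```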
Funding: Supported by the National Natural Science Foundation of China (No. 61976247).
Abstract: Aspect-based sentiment analysis (ABSA) consists of two subtasks: aspect term extraction and aspect sentiment prediction. Existing methods deal with the two subtasks one by one in a pipeline manner, which causes problems in performance and real-world application. This study investigates end-to-end ABSA and proposes a novel multitask multiview network (MTMVN) architecture. Specifically, the architecture takes unified ABSA as the main task and the two subtasks as auxiliary tasks. The representation obtained from the branch network of the main task is regarded as the global view, whereas the representations of the two subtasks are considered two local views with different emphases. Through multitask learning, the main task benefits from additional accurate aspect boundary information and sentiment polarity information. By enhancing the correlations between the views following the idea of multiview learning, the representation of the global view is optimized to improve the overall performance of the model. Experimental results on three benchmark datasets show that the proposed method outperforms existing pipeline methods and end-to-end methods, demonstrating the superiority of the MTMVN architecture.
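The main-task-plus-auxiliary-tasks setup described above can be sketched as a shared encoder feeding three prediction heads, trained with a weighted sum of losses. The head names, label-space sizes, and loss weights below are illustrative assumptions, not the MTMVN configuration.

```python
import torch
import torch.nn as nn

class MultiTaskHeads(nn.Module):
    """Main unified-ABSA head (global view) plus two auxiliary heads (local views), illustrative."""
    def __init__(self, dim, num_unified_tags, num_boundary_tags, num_polarities):
        super().__init__()
        self.main_head = nn.Linear(dim, num_unified_tags)    # unified ABSA tagging
        self.ate_head = nn.Linear(dim, num_boundary_tags)    # aspect boundary tagging
        self.asc_head = nn.Linear(dim, num_polarities)       # sentiment polarity

    def forward(self, h):
        # h: (batch, seq_len, dim) token representations from a shared encoder
        return self.main_head(h), self.ate_head(h), self.asc_head(h)

def multitask_loss(loss_main, loss_ate, loss_asc, lam_ate=0.5, lam_asc=0.5):
    # auxiliary weights are placeholder values for illustration
    return loss_main + lam_ate * loss_ate + lam_asc * loss_asc
```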
Abstract: Aspect Sentiment Triplet Extraction (ASTE) is a highly challenging subtask of aspect-based sentiment analysis that aims to extract the aspect terms, opinion terms, and corresponding sentiment polarities from a given sentence. Existing ASTE models fall into pipeline models and end-to-end models. Pipeline models are prone to error propagation, and most existing end-to-end models ignore the rich syntactic information in sentences. To address these issues, a semantics- and syntax-enhanced dual-channel aspect sentiment triplet extraction model (SSED-ASTE) is proposed. First, a BERT (Bidirectional Encoder Representations from Transformers) encoder encodes the context. Second, a bidirectional long short-term memory (Bi-LSTM) network captures contextual semantic dependencies. Third, two parallel graph convolutional networks (GCNs) extract and fuse semantic features obtained with self-attention and syntactic features obtained with dependency parsing, respectively. Finally, the Grid Tagging Scheme (GTS) is used to extract the triplets. Experiments on four public datasets show that, compared with the GTS-BERT model, the proposed model improves F1 by 0.29, 1.50, 2.93, and 0.78 percentage points, respectively. The results demonstrate that the proposed model can effectively exploit the implicit semantic and syntactic information in sentences to achieve more accurate triplet extraction.
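For reference, the Grid Tagging Scheme mentioned above assigns one label to every word pair, from which aspect terms, opinion terms, and their sentiments are decoded. The sketch below shows a minimal pairwise classification head over fused token representations; the class name, pair construction, and label set size are assumptions and do not reproduce the SSED-ASTE implementation.

```python
import torch
import torch.nn as nn

class GridTaggingHead(nn.Module):
    """Pairwise (grid) tagging head: one label per word pair (illustrative sketch of GTS)."""
    def __init__(self, dim, num_labels):
        super().__init__()
        self.classifier = nn.Linear(2 * dim, num_labels)

    def forward(self, h):
        # h: (batch, seq_len, dim) fused token representations
        n = h.size(1)
        pair = torch.cat([h.unsqueeze(2).expand(-1, -1, n, -1),   # representation of word i
                          h.unsqueeze(1).expand(-1, n, -1, -1)],  # representation of word j
                         dim=-1)
        return self.classifier(pair)                              # (batch, n, n, num_labels)
```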