
Using Position-Enhanced Attention Mechanism for Aspect-Based Sentiment Classification

Cited by: 1
Abstract: Aspect-based sentiment classification aims to accurately identify the sentiment polarity of an aspect mentioned in a review. Most existing methods based on long short-term memory (LSTM) networks exploit only the semantic information of the aspect and its context, while ignoring the relative position information between the two. To address this problem, this paper proposes an LSTM-based model that uses relative position information to enhance attention for aspect-based sentiment classification. First, position vectors are added to the input layer of the context, and two LSTM networks encode the context and the aspect independently. Then, the position vectors are concatenated again to the hidden states of the context, and the aspect hidden vector participates in computing the attention weights over the different context words. Finally, the resulting context representation is used for sentiment classification. The model is evaluated on the Restaurant and Laptop datasets of SemEval 2014 Task 4, which come from two different domains. In the three-class experiments, the accuracies reach 79.7% and 72.1%, respectively; in the two-class experiments, they reach 92.1% and 88.3%. Compared with multiple baseline models, the proposed model achieves consistent improvements in accuracy.
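Based on the architecture described in the abstract, the sketch below illustrates how such a position-enhanced attention LSTM could be wired up. It is a minimal PyTorch illustration, not the authors' released code: the class name, the embedding and hidden dimensions, the mean-pooled aspect summary, and the single linear attention scorer are all assumptions made for readability.

```python
# Minimal sketch of a position-enhanced attention LSTM for aspect-based
# sentiment classification. Names and dimensions are illustrative assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

class PositionEnhancedAttentionLSTM(nn.Module):
    def __init__(self, vocab_size, emb_dim=300, pos_dim=50, hidden_dim=128,
                 max_len=100, num_classes=3):
        super().__init__()
        self.word_emb = nn.Embedding(vocab_size, emb_dim)
        # Relative-position embedding: distance of each context token to the aspect.
        self.pos_emb = nn.Embedding(max_len, pos_dim)
        # Two independent LSTM encoders, one for the context and one for the aspect.
        self.context_lstm = nn.LSTM(emb_dim + pos_dim, hidden_dim, batch_first=True)
        self.aspect_lstm = nn.LSTM(emb_dim, hidden_dim, batch_first=True)
        # Attention score from [context hidden state; position vector; aspect summary].
        self.attn = nn.Linear(hidden_dim + pos_dim + hidden_dim, 1)
        self.classifier = nn.Linear(hidden_dim, num_classes)

    def forward(self, context_ids, position_ids, aspect_ids):
        # 1) Append position vectors to the context input, encode both sequences.
        ctx = torch.cat([self.word_emb(context_ids), self.pos_emb(position_ids)], dim=-1)
        ctx_h, _ = self.context_lstm(ctx)                       # (B, Tc, H)
        asp_h, _ = self.aspect_lstm(self.word_emb(aspect_ids))  # (B, Ta, H)
        asp_summary = asp_h.mean(dim=1, keepdim=True)           # (B, 1, H)

        # 2) Concatenate position vectors to the context hidden states again and
        #    let the aspect representation take part in the attention weights.
        pos = self.pos_emb(position_ids)                         # (B, Tc, P)
        asp_rep = asp_summary.expand(-1, ctx_h.size(1), -1)      # (B, Tc, H)
        scores = self.attn(torch.cat([ctx_h, pos, asp_rep], dim=-1)).squeeze(-1)
        alpha = F.softmax(scores, dim=-1)                        # attention weights

        # 3) Attention-weighted context representation feeds the classifier.
        rep = torch.bmm(alpha.unsqueeze(1), ctx_h).squeeze(1)    # (B, H)
        return self.classifier(rep)
```

A forward pass takes padded context token ids, their clipped relative distances to the aspect, and the aspect token ids, and returns logits over the sentiment classes (three for the three-class setting, two for the binary setting).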
Authors: ZHANG Zhoubin; XIANG Yan; LIANG Junge; YANG Jialin; MA Lei (Faculty of Information Engineering and Automation, Kunming University of Science and Technology, Kunming 650504, China; Kunming University of Science and Technology Asset Management Company Limited, Kunming 650051, China)
Source: Journal of Frontiers of Computer Science and Technology (《计算机科学与探索》), CSCD, Peking University Core Journal, 2020, No. 4, pp. 619-627 (9 pages)
Funding: National Natural Science Foundation of China (Nos. 61462054, 61732005, 61672271, 61741112); Natural Science Foundation of Yunnan Province (No. 2017FB098); China Postdoctoral Science Foundation (No. 2016M592894XB); Science and Technology Department of Yunnan Province Project (No. 2015FB135); Major Science and Technology Project of Yunnan Province (No. 2018ZF017)
Keywords: aspect; sentiment classification; attention mechanism; long short-term memory (LSTM) network; position information
