
Sentiment Analysis Method of Recursive Autoencoder Based on PMI
Abstract: To address the inability of traditional machine-learning methods to capture text semantics, a recursive autoencoder (RAE) is used to learn vector-space representations of phrases through its tree structure, and it achieves good results on commonly used datasets. However, learning these representations typically requires a large amount of labeled data to annotate every node, so the manual labeling workload is heavy. This paper therefore proposes a semi-supervised method that uses PMI to compute the sentiment polarity of terminal nodes, and that accounts for the influence of contextual degree adverbs and negation words on the sentiment orientation and intensity of the sentiment words they modify. Experimental results show that, compared with the traditionally hand-labeled RAE model, labeling nodes with PMI raises accuracy to 88.1% while substantially reducing the manual annotation workload.
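The PMI-based polarity labeling the abstract describes can be illustrated with a minimal sketch: a word's semantic orientation is its summed PMI with positive seed words minus its summed PMI with negative seed words (the SO-PMI idea), and the resulting score is then adjusted by a preceding negator or degree adverb. The toy corpus, seed lexicons, negator/degree lists, and scaling factors below are illustrative assumptions, not the paper's actual data or settings.

```python
import math
from collections import Counter
from itertools import combinations

# Toy tokenized corpus (hypothetical data for illustration only).
corpus = [
    ["movie", "great", "excellent"],
    ["movie", "great", "good"],
    ["plot", "bad", "terrible"],
    ["plot", "bad", "boring"],
    ["movie", "good"],
]

POS_SEEDS = {"good", "excellent"}
NEG_SEEDS = {"bad", "terrible"}

# Count word and unordered-pair co-occurrence at the sentence level.
word_counts = Counter()
pair_counts = Counter()
for sent in corpus:
    uniq = set(sent)
    word_counts.update(uniq)
    for a, b in combinations(sorted(uniq), 2):
        pair_counts[(a, b)] += 1
n_sents = len(corpus)

def pmi(w1, w2):
    """PMI(w1, w2) = log2(P(w1, w2) / (P(w1) P(w2))); 0 if the pair never co-occurs."""
    pair = tuple(sorted((w1, w2)))
    if pair_counts[pair] == 0:
        return 0.0
    p_joint = pair_counts[pair] / n_sents
    return math.log2(p_joint / ((word_counts[w1] / n_sents) * (word_counts[w2] / n_sents)))

def so_pmi(word):
    """Semantic orientation: PMI with positive seeds minus PMI with negative seeds."""
    return (sum(pmi(word, p) for p in POS_SEEDS)
            - sum(pmi(word, n) for n in NEG_SEEDS))

# Context adjustment: negators flip the sign, degree adverbs scale the magnitude.
NEGATORS = {"not"}
DEGREE = {"very": 1.5, "slightly": 0.5}  # illustrative weights

def adjusted_polarity(tokens, i):
    """Polarity of tokens[i], adjusted by the immediately preceding modifier."""
    score = so_pmi(tokens[i])
    if i > 0:
        prev = tokens[i - 1]
        if prev in NEGATORS:
            score = -score
        elif prev in DEGREE:
            score *= DEGREE[prev]
    return score
```

On this toy corpus, `so_pmi("great")` comes out positive (it co-occurs only with positive seeds) and `so_pmi("boring")` negative, while `adjusted_polarity(["not", "great"], 1)` flips the sign; in the paper these terminal-node scores would replace hand labels at the leaves of the RAE tree.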
Authors: SUN Qi; LIANG Yong-quan (College of Computer Science and Engineering, Shandong University of Science and Technology, Qingdao 266590, China)
Source: Software Guide (《软件导刊》), 2021, Issue 6, pp. 59-62 (4 pages)
Keywords: sentiment analysis; recursive autoencoder; PMI

