Multi-feature topic sentiment analysis based on BERT-Attention-BiLSTM
(基于BERT-Attention-BiLSTM的多特征主题情感分析)
Abstract: Paying attention to Weibo users' emotional tendencies toward events helps the platform understand users' voices and provides reference and direction for decision makers handling public opinion. However, most current Weibo sentiment analysis research is still text-based, ignoring elements such as emojis and images. To address this problem, this paper proposes a multi-model fusion sentiment analysis model: building on the BERT pre-trained model, it integrates a sentiment lexicon, uses a bidirectional LSTM (BiLSTM) to extract text features and effectively connect the surrounding context, and introduces an attention mechanism. An emoji feature calculation method is also proposed, yielding a multi-feature topic sentiment analysis model with more accurate sentiment classification.
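The record does not include the authors' implementation; the following is a minimal sketch, assuming PyTorch and the Hugging Face transformers library, of how a BERT + BiLSTM + attention sentiment classifier of the kind the abstract describes could be wired together. The class name BertBiLSTMAttention, the hidden size, and the three-class output are illustrative assumptions, and the sentiment-lexicon and emoji-feature components mentioned in the abstract are omitted because no computational detail is given for them.

```python
# Hypothetical sketch of a BERT + BiLSTM + attention classifier (not the authors' code).
import torch
import torch.nn as nn
from transformers import BertModel


class BertBiLSTMAttention(nn.Module):
    def __init__(self, bert_name="bert-base-chinese", hidden_size=256, num_classes=3):
        super().__init__()
        self.bert = BertModel.from_pretrained(bert_name)            # contextual token embeddings
        self.bilstm = nn.LSTM(self.bert.config.hidden_size, hidden_size,
                              batch_first=True, bidirectional=True)  # forward + backward context
        self.attn = nn.Linear(2 * hidden_size, 1)                    # per-token attention scores
        self.classifier = nn.Linear(2 * hidden_size, num_classes)    # sentiment logits

    def forward(self, input_ids, attention_mask):
        # BERT token representations -> BiLSTM hidden states
        bert_out = self.bert(input_ids=input_ids,
                             attention_mask=attention_mask).last_hidden_state
        lstm_out, _ = self.bilstm(bert_out)                # (batch, seq_len, 2*hidden)
        # Attention pooling: weight each token, ignoring padding positions
        scores = self.attn(lstm_out).squeeze(-1)           # (batch, seq_len)
        scores = scores.masked_fill(attention_mask == 0, -1e9)
        weights = torch.softmax(scores, dim=-1).unsqueeze(-1)
        pooled = (weights * lstm_out).sum(dim=1)           # (batch, 2*hidden)
        return self.classifier(pooled)                     # logits over sentiment classes
```

In a sketch like this, inputs would come from the matching BertTokenizer and the model would be trained with a standard cross-entropy loss over the sentiment labels.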
Author: MA Lüqian (马律倩), School of Computer Science and Technology, Zhejiang Sci-Tech University, Hangzhou 310018, China
Source: Intelligent Computer and Applications (《智能计算机与应用》), 2024, No. 5, pp. 205-208 (4 pages)
Keywords: sentiment analysis; attention mechanism; pre-training model; deep learning
