Abstract
[Objective] This study systematically examines the principles of traditional deep representation models and the latest pre-trained models, and investigates their performance differences in text mining tasks. [Methods] Using a comparative approach, we compared traditional and state-of-the-art models from both the model side and the experimental side on six datasets: CR, MR, MPQA, Subj, SST-2, and TREC. [Results] Across the six tasks, XLNet achieved the highest average F1 score (0.9186), outperforming ELMo (0.8090), BERT (0.8983), Word2Vec (0.7692), GloVe (0.7576), and FastText (0.7506). [Limitations] Due to space constraints, the empirical study focuses on classification tasks in text mining and does not compare word representation learning methods on other tasks such as machine translation and question answering. [Conclusions] Traditional deep representation learning models and the latest pre-trained models perform markedly differently in text mining tasks.
Authors
Yu Chuanming; Wang Manyi; Lin Hongjun; Zhu Xingyu; Huang Tingting; An Lu
(School of Information and Safety Engineering, Zhongnan University of Economics and Law, Wuhan 430073, China; School of Statistics and Mathematics, Zhongnan University of Economics and Law, Wuhan 430073, China; School of Information Management, Wuhan University, Wuhan 430072, China)
Source
Data Analysis and Knowledge Discovery (《数据分析与知识发现》)
Indexed in: CSSCI; CSCD; PKU Core Journals
2020, No. 8, pp. 28-40 (13 pages)
Funding
This research was supported by the National Natural Science Foundation of China General Program "Research on Domain Knowledge Representation and Fusion Models for Cross-Lingual Opinion Summarization" (Grant No. 71974202) and by the Fundamental Research Funds for the Central Universities at Zhongnan University of Economics and Law, project "Opinion Mining on the China-US Trade War from a Big Data Perspective" (Grant No. 2722019JX007).
Keywords
Word Representation Learning
Knowledge Representation
Deep Learning
Text Mining