Abstract
[Objective] This paper proposes a new semi-supervised text classification method that efficiently classifies texts for which only a small amount of labeled data is available. [Methods] The proposed DW-TCI method uses dual-channel feature extraction to obtain two sets of feature input vectors for the base classifier group. It then introduces a disagreement-based semi-supervised classification method together with the idea of ensemble learning, adding unlabeled samples whose predictions reach consensus to the training set. Finally, the class of a test text is obtained by equal-weighted voting. [Results] On two different data sets, trained with 20% labeled samples, DW-TCI reached classification accuracies of 92.32% and 87.01%, at least 5.54% and 5.65% higher than those of other semi-supervised methods. [Limitations] Only a small number of data sets were used; the method has not been validated on more data sets. [Conclusions] DW-TCI substantially reduces the labeling workload for training samples and provides effective support for service providers performing efficient text classification.
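The abstract outlines a two-view (dual-channel) scheme: two base classifiers are trained on different feature channels, unlabeled samples on which the classifiers agree are added to the training set as consensus pseudo-labels, and final predictions are made by equal-weighted voting. The following is a minimal illustrative sketch of that general idea; the class name, the choice of base classifiers, and the number of rounds are assumptions for demonstration, not the paper's actual DW-TCI configuration.

```python
# Hypothetical sketch of a two-view, disagreement-based semi-supervised
# loop with equal-weighted voting, in the spirit of the method described
# in the abstract. All names and parameters here are illustrative.
import numpy as np


class TwoViewSemiSupervised:
    def __init__(self, clf_a, clf_b):
        # One base classifier per feature channel (view).
        self.clf_a, self.clf_b = clf_a, clf_b

    def fit(self, Xa, Xb, y, Xa_u, Xb_u, rounds=3):
        # Xa/Xb: labeled features from the two channels; y: labels.
        # Xa_u/Xb_u: the same unlabeled samples in both channels.
        Xa, Xb = np.asarray(Xa), np.asarray(Xb)
        Xa_u, Xb_u = np.asarray(Xa_u), np.asarray(Xb_u)
        y = np.asarray(y)
        for _ in range(rounds):
            self.clf_a.fit(Xa, y)
            self.clf_b.fit(Xb, y)
            if len(Xa_u) == 0:
                break
            pa = self.clf_a.predict(Xa_u)
            pb = self.clf_b.predict(Xb_u)
            agree = pa == pb  # keep only consensus pseudo-labels
            if not agree.any():
                break
            Xa = np.vstack([Xa, Xa_u[agree]])
            Xb = np.vstack([Xb, Xb_u[agree]])
            y = np.concatenate([y, pa[agree]])
            Xa_u, Xb_u = Xa_u[~agree], Xb_u[~agree]
        return self

    def predict(self, Xa, Xb):
        # Equal-weighted vote: average the per-view class probabilities.
        proba = (self.clf_a.predict_proba(Xa) +
                 self.clf_b.predict_proba(Xb)) / 2
        return proba.argmax(axis=1)
```

Any pair of classifiers exposing scikit-learn's `fit`/`predict`/`predict_proba` interface can serve as the two base learners in this sketch.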
Authors
Yu Bengong (余本功); Ji Haomin (汲浩敏)
School of Management, Hefei University of Technology, Hefei 230009, China; Key Laboratory of Process Optimization & Intelligent Decision-Making, Ministry of Education, Hefei University of Technology, Hefei 230009, China
Source
Data Analysis and Knowledge Discovery (《数据分析与知识发现》), indexed in CSSCI, CSCD, and the Peking University Core Journals list
2020, No. 10, pp. 58-69 (12 pages)
Funding
National Natural Science Foundation of China project "Research on Product R&D Knowledge Integration and Service Mechanisms Based on Manufacturing Big Data" (Grant No. 71671057)
Supported in part by an open project of the Key Laboratory of Process Optimization & Intelligent Decision-Making, Ministry of Education
Keywords
Semi-Supervised Classification
Sample Divergence
Classifier Divergence
Ensemble Learning