

Semi-supervised Self-Training Algorithm for Density Peak Membership Optimization
Abstract  In practice, most data carry only a few labels because obtaining labels is expensive. Compared with supervised and unsupervised learning, semi-supervised learning can achieve high learning performance at low labeling cost by fully exploiting the large amount of unlabeled data and the small amount of labeled data in a dataset. Self-Training is a classical semi-supervised learning algorithm: while iteratively optimizing the classifier, it repeatedly selects high-confidence samples from the unlabeled data, has the base classifier assign them pseudo-labels, and adds these samples with their pseudo-labels to the training set. Selecting high-confidence samples is the key step of Self-Training. Inspired by the density peaks clustering (DPC) algorithm, this paper uses density peaks to select high-confidence samples and proposes a semi-supervised Self-Training algorithm based on density peak membership optimization (STDPM). First, STDPM uses density peaks to discover the latent spatial structure of the samples and constructs a prototype tree. Second, it searches the prototype tree for the unlabeled direct-relative nodes of each labeled sample, defines the density peaks of the direct-relative nodes belonging to different clusters as cluster peaks, and normalizes the cluster peaks into density peak memberships. Finally, samples whose membership exceeds a preset threshold are treated as high-confidence samples, labeled by the base classifier, and added to the training set. STDPM makes full use of the density and distance information implied by density peaks, which improves the quality of the selected high-confidence samples and, in turn, the classification performance. Comparative experiments on 8 benchmark datasets verify the effectiveness of STDPM.
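The first step of the pipeline described above (compute density peaks, then link each sample to its nearest higher-density neighbor to form tree edges) can be illustrated with a minimal DPC-style sketch. This is a hedged illustration, not the authors' implementation: the cutoff distance `d_c` and the handling of density ties are assumptions, and the `parent` links are merely the kind of nearest-higher-density edges from which a prototype tree could be built.

```python
import math

def density_peaks(points, d_c):
    """DPC-style sketch: local density (rho), distance to the nearest
    higher-density point (delta), and that point's index (parent).
    parent == -1 marks a density peak (a tree root)."""
    n = len(points)
    dist = [[math.dist(points[i], points[j]) for j in range(n)] for i in range(n)]
    # rho_i: number of neighbors within the cutoff distance d_c (an assumption;
    # DPC also admits Gaussian-kernel densities).
    rho = [sum(1 for j in range(n) if j != i and dist[i][j] < d_c)
           for i in range(n)]
    delta, parent = [], []
    for i in range(n):
        higher = [j for j in range(n) if rho[j] > rho[i]]
        if not higher:
            # A peak: conventionally assigned the maximum distance.
            delta.append(max(dist[i]))
            parent.append(-1)
        else:
            j_star = min(higher, key=lambda j: dist[i][j])
            delta.append(dist[i][j_star])
            parent.append(j_star)
    return rho, delta, parent

# Toy two-cluster data: each cluster is four corners around a denser center.
points = [(0, 0), (1, 0), (0, 1), (1, 1), (0.5, 0.5),
          (10, 10), (11, 10), (10, 11), (11, 11), (10.5, 10.5)]
rho, delta, parent = density_peaks(points, d_c=1.2)
```

On this toy set the two cluster centers have the highest density and no denser neighbor, so they come out as roots (`parent == -1`), while every corner point links to its cluster's center, matching DPC's intuition that peaks combine high density with a large distance to any denser point.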
Authors  LIU Xuewen; WANG Jikui; YANG Zhengguo; LI Bing; NIE Feiping (School of Information Engineering, Lanzhou University of Finance and Economics, Lanzhou 730020, China; Center for Optical Imagery Analysis and Learning, Northwestern Polytechnical University, Xi'an 710072, China)
Source  Journal of Frontiers of Computer Science and Technology (CSCD; Peking University Core), 2022, No. 9, pp. 2078-2088 (11 pages)
Funding  National Natural Science Foundation of China (61772427, 11801345); Innovation Ability Improvement Project of Higher Education Institutions of Gansu Province (2019B-97, 2019A-069); Research Projects of Lanzhou University of Finance and Economics (Lzufe2020B-0010, Lzufe2020B-011); Science and Technology Program of Gansu Province (20CX9ZA057).
Keywords  density peak membership; cluster peak; prototype tree; direct-relative node sets; self-training