Journal articles: 2 results found
1. Robust AUC maximization for classification with pairwise confidence comparisons
Authors: Haochen Shi, Mingkun Xie, Shengjun Huang. Frontiers of Computer Science (SCIE, EI, CSCD), 2024, No. 4, pp. 73-83 (11 pages)
Supervised learning often requires a large number of labeled examples, which becomes a critical bottleneck when manually annotating class labels is costly. To mitigate this issue, a new framework called pairwise comparison (Pcomp) classification has been proposed, which allows training examples to be only weakly annotated with pairwise comparisons, i.e., which of two examples is more likely to be positive. Previous work solves Pcomp problems by minimizing the classification error, which may yield a less robust model due to its sensitivity to the class distribution. In this paper, we propose a robust learning framework for Pcomp data along with a pairwise surrogate loss called Pcomp-AUC. It provides an unbiased estimator that equivalently maximizes AUC without access to the precise class labels. Theoretically, we prove consistency with respect to AUC and further provide an estimation error bound for the proposed method. Empirical studies on multiple datasets validate the effectiveness of the proposed method.
Keywords: method, pairwise, weakly
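The abstract's core idea (train a scorer from pairwise comparisons by maximizing an AUC surrogate rather than classification error) can be sketched as follows. This is not the paper's Pcomp-AUC estimator; it is a minimal illustration assuming a linear scorer, a logistic pairwise surrogate on score differences, and plain gradient descent, with all function names hypothetical.

```python
import numpy as np

def pairwise_auc_loss(w, x_more, x_less):
    """Logistic surrogate on score differences: for each comparison pair,
    encourage the more-likely-positive example to score higher."""
    d = x_more @ w - x_less @ w
    return np.mean(np.log1p(np.exp(-d)))

def pairwise_auc_grad(w, x_more, x_less):
    """Gradient of the surrogate w.r.t. the linear scorer's weights."""
    d = x_more @ w - x_less @ w
    s = -1.0 / (1.0 + np.exp(d))  # d/dd of log1p(exp(-d))
    return ((x_more - x_less) * s[:, None]).mean(axis=0)

def fit_pcomp_auc(x_more, x_less, lr=0.5, steps=500):
    """Gradient descent on the pairwise surrogate; no class labels needed,
    only comparisons (x_more is 'more likely positive' than x_less)."""
    w = np.zeros(x_more.shape[1])
    for _ in range(steps):
        w = w - lr * pairwise_auc_grad(w, x_more, x_less)
    return w
```

Because the loss depends only on score differences over pairs, minimizing it directly targets the ranking quality that AUC measures, which is the intuition behind replacing classification error with an AUC surrogate.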
2. Deep active sampling with self-supervised learning
Authors: Haochen Shi, Hui Zhou. Frontiers of Computer Science (SCIE, EI, CSCD), 2023, No. 4, pp. 221-223 (3 pages)
1 Introduction. Recently, some research efforts [1] have tried to combine self-supervised learning and active learning to reduce the cost of labeling samples. However, this approach struggles to improve model performance effectively because it does not consider how well the examples are represented on the pretext task. To overcome this shortcoming, we propose a deep active sampling framework with self-supervised representation learning.
Keywords: tried, learning, overcome
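The selection rule hinted at by this abstract (query the examples whose representations perform worst on the pretext task) can be sketched with a toy stand-in. The paper's framework uses deep self-supervised learning; here, purely for illustration, the pretext model is a hypothetical linear subspace reconstruction, and examples with the highest reconstruction error are proposed for labeling.

```python
import numpy as np

def pretext_error(X, W):
    """Per-example error of a (hypothetical) linear pretext model:
    project onto the learned subspace W and measure what is lost."""
    recon = X @ W @ W.T
    return np.mean((X - recon) ** 2, axis=1)

def select_for_labeling(X, W, budget):
    """Active sampling rule sketched from the abstract: query the
    examples the self-supervised representation handles worst."""
    return np.argsort(pretext_error(X, W))[-budget:]
```

The design choice being illustrated: instead of sampling by classifier uncertainty alone, the pretext task's own error signal identifies examples the learned representation fails to capture, which are plausible candidates for manual labeling.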