Funding: Natural Science Foundation of Jiangsu Province, China (BK20222012, BK20211517); National Key R&D Program of China (2020AAA0107000); National Natural Science Foundation of China (Grant No. 62222605).
Abstract: Supervised learning often requires a large number of labeled examples, which becomes a critical bottleneck when manually annotating class labels is costly. To mitigate this issue, a new framework called pairwise comparison (Pcomp) classification has been proposed, in which training examples are only weakly annotated with pairwise comparisons, i.e., which of two examples is more likely to be positive. The previous study solves Pcomp problems by minimizing the classification error, which may lead to a less robust model owing to its sensitivity to the class distribution. In this paper, we propose a robust learning framework for Pcomp data along with a pairwise surrogate loss called Pcomp-AUC. It provides an unbiased estimator that equivalently maximizes AUC without access to the precise class labels. Theoretically, we prove consistency with respect to AUC and further provide an estimation error bound for the proposed method. Empirical studies on multiple datasets validate the effectiveness of the proposed method.
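To make the idea concrete, the following is a minimal sketch of a pairwise AUC-type surrogate on Pcomp data. The logistic surrogate and the linear scoring function are illustrative assumptions, not the paper's unbiased Pcomp-AUC estimator; the sketch only shows how each comparison pair pushes the score of the example judged more likely to be positive above the score of its partner.

# Minimal sketch of a pairwise (AUC-style) surrogate on Pcomp pairs.
# NOTE: the logistic surrogate and linear scorer are assumptions made for
# illustration, not the unbiased Pcomp-AUC estimator from the paper.
import numpy as np

def pairwise_auc_surrogate(scores_more_pos, scores_less_pos):
    # Logistic loss on the score margin f(x_i) - f(x_j), where x_i is the
    # example of each pair judged more likely to be positive.
    margins = scores_more_pos - scores_less_pos
    return np.mean(np.log1p(np.exp(-margins)))

# Toy usage with a linear scorer f(x) = x @ w on synthetic comparison pairs.
rng = np.random.default_rng(0)
dim, n_pairs = 5, 100
w = rng.normal(size=dim)
x_more = rng.normal(size=(n_pairs, dim)) + 0.5  # judged more likely positive
x_less = rng.normal(size=(n_pairs, dim))        # judged less likely positive
print(f"surrogate loss: {pairwise_auc_surrogate(x_more @ w, x_less @ w):.4f}")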
Abstract: Recently, some research efforts [1] have tried to combine self-supervised learning and active learning to reduce the cost of labeling samples. However, this approach struggles to improve model performance effectively because it does not consider how well the examples' feature representations perform on the pretext task. To overcome this shortcoming, we propose a deep active sampling framework with self-supervised representation learning.
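As a rough illustration of the underlying idea, the sketch below scores unlabeled examples by their loss on a self-supervised pretext task and queries the worst-represented ones for annotation. The reconstruction-style pretext loss and the selection rule are hypothetical stand-ins, not the framework proposed in the paper.

# Sketch of pretext-loss-guided active sampling (hypothetical scoring rule,
# not the paper's exact framework): examples whose representations are
# poorly explained by the self-supervised pretext task are labeled first.
import numpy as np

def select_for_labeling(unlabeled_pool, pretext_loss_fn, budget):
    # Rank unlabeled examples by pretext-task loss and return the indices
    # of the `budget` examples the annotator should label next.
    losses = np.array([pretext_loss_fn(x) for x in unlabeled_pool])
    return np.argsort(-losses)[:budget]  # highest pretext loss first

# Toy usage: reconstruction error under a random projection stands in for
# a real pretext task (e.g., rotation prediction or contrastive learning).
rng = np.random.default_rng(0)
pool = rng.normal(size=(200, 16))
proj, _ = np.linalg.qr(rng.normal(size=(16, 4)))  # orthonormal projection
toy_pretext_loss = lambda x: float(np.sum((x - (x @ proj) @ proj.T) ** 2))
print("query these indices:", select_for_labeling(pool, toy_pretext_loss, 10))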