Funding: This work was supported by the National Natural Science Foundation of China under Grant Nos. 62176128 and 61702273, the Natural Science Foundation of Jiangsu Province under Grant No. BK20170956, the Open Projects Program of the National Laboratory of Pattern Recognition under Grant No. 202000007, the Fundamental Research Funds for the Central Universities under Grant No. NJ2019010, the Priority Academic Program Development of Jiangsu Higher Education Institutions (PAPD) fund, and the Postgraduate Research & Practice Innovation Program of Jiangsu Province under Grant No. KYCX21_1006; it was also sponsored by the Qing Lan Project.
Abstract: 1 Introduction. As an emerging machine learning paradigm, unsupervised domain adaptation (UDA) aims to train an effective model for an unlabeled target domain by leveraging knowledge from a related but distribution-inconsistent source domain. Most existing UDA methods [2] align class-wise distributions by resorting to target-domain pseudo-labels; however, hard labels may be misguided by misclassifications, while soft labels are confounded by trivial noise, so both tend to degrade performance. To overcome these drawbacks, as shown in Fig. 1, we propose to achieve UDA through self-adaptive label filtering learning (SALFL) from both the statistical and the geometrical perspectives, which filters out misclassified pseudo-labels to reduce negative transfer. Specifically, the proposed SALFL first predicts labels for the target-domain instances by graph-based random walking and then filters out the noisy labels with a self-adaptive learning strategy.
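The two stages outlined above, graph-based random-walk label prediction followed by filtering of unreliable pseudo-labels, can be illustrated roughly as follows. This is a minimal sketch rather than the actual SALFL formulation: the k-NN affinity graph, the diffusion-style walk, and the fixed confidence threshold (standing in for the self-adaptive filtering strategy) are illustrative assumptions, and all function names are hypothetical.

```python
# Sketch of random-walk pseudo-labeling on a kNN graph plus confidence-based
# filtering. Parameters and thresholds are illustrative, not from the paper.
import numpy as np
from sklearn.neighbors import kneighbors_graph

def random_walk_pseudo_labels(Xs, ys, Xt, n_classes, k=10, alpha=0.99, n_iter=20):
    """Propagate source labels to target samples by diffusion over a kNN graph."""
    X = np.vstack([Xs, Xt])
    W = kneighbors_graph(X, n_neighbors=k, mode='connectivity', include_self=False)
    W = 0.5 * (W + W.T)                              # symmetrize the affinity graph
    d = np.asarray(W.sum(axis=1)).ravel()
    D_inv_sqrt = np.diag(1.0 / np.sqrt(d + 1e-12))
    S = D_inv_sqrt @ W.toarray() @ D_inv_sqrt        # normalized transition matrix

    Y0 = np.zeros((X.shape[0], n_classes))
    Y0[np.arange(len(ys)), ys] = 1.0                 # seed with ground-truth source labels
    F = Y0.copy()
    for _ in range(n_iter):                          # iterative random-walk diffusion
        F = alpha * S @ F + (1 - alpha) * Y0
    return F[len(ys):]                               # soft label scores for target samples

def filter_pseudo_labels(scores, threshold=0.8):
    """Keep only target pseudo-labels whose normalized score is sufficiently confident."""
    probs = scores / (scores.sum(axis=1, keepdims=True) + 1e-12)
    labels = probs.argmax(axis=1)
    keep = probs.max(axis=1) >= threshold            # stand-in for self-adaptive filtering
    return labels, keep
```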
Funding: This work was partially supported by the National Natural Science Foundation of China under Grant Nos. 61702273 and 62076062, the Natural Science Foundation of Jiangsu Province of China under Grant No. BK20170956, and the Open Projects Program of the National Laboratory of Pattern Recognition under Grant No. 20200007; it was also sponsored by the Qing Lan Project.
Abstract: Unsupervised domain adaptation (UDA) has achieved great success in handling cross-domain machine learning applications. It typically benefits the model training of an unlabeled target domain by leveraging knowledge from a labeled source domain. For this purpose, minimizing the marginal and conditional distribution divergences between the source and target domains is widely adopted in existing work. Nevertheless, for the sake of privacy preservation, the source domain often provides only a trained predictor (e.g., a classifier) rather than training data. This renders the above studies infeasible, because the marginal and conditional distributions of the source domain become incalculable. To this end, this article proposes a source-free UDA method that jointly models domain adaptation and sample transport learning, namely Sample Transport Domain Adaptation (STDA). Specifically, STDA constructs a pseudo source domain according to the aggregated decision boundaries that multiple source classifiers produce on the target domain. It then refines the pseudo source domain by augmenting it with transported high-confidence target samples, and consequently generates labels for the target domain. We train the STDA model by alternating between domain adaptation and sample transport, eventually achieving knowledge adaptation to the target domain and attaining confident labels for it. Finally, evaluation results validate the effectiveness and superiority of the proposed method.
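The alternating procedure described above can be sketched roughly as follows. This is not the actual STDA algorithm: it assumes each source model exposes a scikit-learn-style predict_proba interface, approximates the pseudo source domain by the target samples on which the aggregated source classifiers are most confident, and approximates sample transport by re-admitting high-confidence target samples on each round. All names and thresholds are hypothetical stand-ins intended only to make the alternation concrete.

```python
# Sketch of a source-free alternation between adaptation on a pseudo source
# domain and transport of confident target samples. Illustrative only.
import numpy as np

def aggregate_source_predictions(source_models, Xt):
    """Average the class probabilities of multiple pre-trained source classifiers."""
    return np.mean([m.predict_proba(Xt) for m in source_models], axis=0)

def build_pseudo_source(Xt, probs, threshold=0.9):
    """Initialize the pseudo source domain with confidently predicted target samples."""
    conf = probs.max(axis=1)
    idx = np.where(conf >= threshold)[0]
    return idx, probs.argmax(axis=1)[idx]

def stda_like_loop(source_models, Xt, target_model, n_rounds=5, threshold=0.9):
    """Alternate between adapting a target model and transporting confident samples."""
    probs = aggregate_source_predictions(source_models, Xt)
    idx, y_pseudo = build_pseudo_source(Xt, probs, threshold)
    for _ in range(n_rounds):
        target_model.fit(Xt[idx], y_pseudo)          # adapt on the current pseudo source
        probs_t = target_model.predict_proba(Xt)     # re-score all target samples
        conf = probs_t.max(axis=1)
        idx = np.where(conf >= threshold)[0]         # transport confident samples
        y_pseudo = probs_t.argmax(axis=1)[idx]
    return target_model.predict(Xt)                  # final target-domain labels
```

In the paper the refinement step is driven by an explicit sample-transport formulation; the fixed confidence threshold here merely stands in for that mechanism.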