
A Robust Few-shot Learning Method (cited by: 3)

Robust Few-shot Learning Method
Abstract: Few-shot learning is currently an active topic in machine learning; it aims to learn a good classification model from only a small number of labeled samples. However, in uncertain, noisy environments, the generalization ability of traditional few-shot learning models is weak. To address this problem, we propose a Robust Few-Shot Learning method (RFSL). First, kernel density estimation (KDE) and image filtering are used to add different random noises to the training set, forming multiple training sets under different noises, from which support sets and query sets are generated. Second, the relation module of the relation network is used to learn multiple base classifiers end to end on these training sets. Finally, voting is used to fuse the nonlinear classification results of the final Sigmoid layer of each base classifier. Experimental results show that RFSL promotes fast convergence of few-shot learning and, compared with R-Net and other mainstream few-shot learning methods, achieves higher classification accuracy and stronger robustness.
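The following is a minimal sketch of the noise-injection and vote-fusion steps described in the abstract, not the authors' implementation: the relation-network base learner is only indicated by a comment, and the function names, the 0.5 threshold, and the noise mixing factor are illustrative assumptions rather than details from the paper.

```python
import numpy as np
from scipy.stats import gaussian_kde
from scipy.ndimage import gaussian_filter


def make_noisy_copies(images, n_copies=3, sigma=1.0, noise_scale=0.1):
    """Build several training-set variants by injecting KDE-sampled pixel noise
    and smoothing each image with a Gaussian filter (the image-filtering step)."""
    kde = gaussian_kde(images.ravel())               # 1-D density over pixel intensities
    copies = []
    for _ in range(n_copies):
        noise = kde.resample(images.size).reshape(images.shape)
        noisy = images + noise_scale * (noise - images.mean())
        copies.append(gaussian_filter(noisy, sigma=(0, sigma, sigma)))
    return copies


def fuse_by_voting(sigmoid_scores, threshold=0.5):
    """Fuse base classifiers: threshold each classifier's Sigmoid relation scores,
    then take a majority vote across classifiers."""
    votes = (np.stack(sigmoid_scores) > threshold).astype(int)
    return (votes.sum(axis=0) * 2 > len(sigmoid_scores)).astype(int)


if __name__ == "__main__":
    train_images = np.random.rand(20, 28, 28)         # toy stand-in for a few-shot training set
    noisy_sets = make_noisy_copies(train_images)      # step 1: multiple noisy training sets
    # step 2 (omitted): train one relation-network base classifier per noisy training set
    scores = [np.random.rand(5) for _ in noisy_sets]  # stand-in Sigmoid outputs per classifier
    print(fuse_by_voting(scores))                     # step 3: vote-based fusion
```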
Authors: DAI Lei-chao, FENG Lin, YANG Yu-ting, SHANG Xing-lin, SU Han (School of Computer Science, Sichuan Normal University, Chengdu 610101, China)
Source: Journal of Chinese Computer Systems (小型微型计算机系统; CSCD, Peking University Core Journal), 2021, Issue 2, pp. 340-347 (8 pages)
Funding: Supported by the National Natural Science Foundation of China (61876158).
Keywords: few-shot learning; deep learning; R-Net; random noise


