Abstract
Deep learning algorithms built on big data have become increasingly mature; however, learning effectively when only very few training samples are available remains an important and highly challenging problem in neural network research. This paper first defines the few-shot learning problem. It then groups existing few-shot learning methods into three categories, namely data augmentation, metric learning, and meta-learning, and analyzes each category in terms of the models used, the datasets, and the corresponding experimental results. Finally, it summarizes the shortcomings of existing methods and discusses directions for future few-shot learning research.
Authors
LU Yiyong, CAI Jianyong, ZHENG Hua, ZENG Yuanqiang (College of Photonic and Electronic Engineering, Fujian Normal University, Fuzhou 350007, China)
Source
Telecommunication Engineering (《电讯技术》)
Peking University Core Journal (北大核心)
2021, Issue 1, pp. 125-130 (6 pages)
Funding
Natural Science Foundation of Fujian Province (2017J01744).
Keywords
deep neural network
few-shot learning
data augmentation
metric learning
meta-learning