Abstract
To address the slow convergence of full-batch learning and the susceptibility of single-sample learning to random fluctuations in shallow neural networks, this paper draws on the subbatch training methods used for deep neural networks and proposes a subbatch learning method for shallow neural networks, together with a method for optimally configuring the subbatch learning parameters. Numerical experiments show that subbatch learning in shallow neural networks is a fast and stably convergent algorithm; that parameter settings such as batch size and learning rate strongly affect the network's convergence, convergence time, and generalization ability; and that optimizing the learning parameters can substantially reduce the number of iterations to convergence and the training time while improving classification accuracy.
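The method described in the abstract is, in essence, mini-batch gradient descent applied to a shallow (single-hidden-layer) BP network, with batch size and learning rate as the learning parameters to be tuned. Below is a minimal sketch, assuming a sigmoid network trained on mean squared error; the function name train_subbatch, the XOR data, and all hyperparameter values are illustrative assumptions, not the configuration used in the paper. Setting batch_size = 1 recovers single-sample learning and batch_size = len(X) recovers full-batch learning, the two extremes the paper compares against.

# Sketch of subbatch (mini-batch) BP training for a shallow network.
# The architecture, sigmoid activation, MSE loss, and synthetic XOR data
# are illustrative assumptions; they are not taken from the paper.
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def train_subbatch(X, Y, n_hidden=16, batch_size=32, lr=0.1, epochs=200, seed=0):
    """Train a one-hidden-layer network with mini-batch gradient descent.

    batch_size = 1 gives single-sample (online) learning;
    batch_size = len(X) gives full-batch learning.
    """
    rng = np.random.default_rng(seed)
    n_in, n_out = X.shape[1], Y.shape[1]
    W1 = rng.normal(0, 0.5, (n_in, n_hidden))
    b1 = np.zeros(n_hidden)
    W2 = rng.normal(0, 0.5, (n_hidden, n_out))
    b2 = np.zeros(n_out)
    for _ in range(epochs):
        order = rng.permutation(len(X))          # reshuffle each epoch
        for start in range(0, len(X), batch_size):
            idx = order[start:start + batch_size]
            xb, yb = X[idx], Y[idx]
            # forward pass
            h = sigmoid(xb @ W1 + b1)
            out = sigmoid(h @ W2 + b2)
            # backward pass for MSE loss with sigmoid outputs
            d_out = (out - yb) * out * (1 - out)
            d_h = (d_out @ W2.T) * h * (1 - h)
            m = len(idx)                          # average gradients over the subbatch
            W2 -= lr * h.T @ d_out / m
            b2 -= lr * d_out.mean(axis=0)
            W1 -= lr * xb.T @ d_h / m
            b1 -= lr * d_h.mean(axis=0)
    return W1, b1, W2, b2

# Usage example: XOR, with batch_size and lr as the tunable learning parameters
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], float)
Y = np.array([[0], [1], [1], [0]], float)
W1, b1, W2, b2 = train_subbatch(X, Y, batch_size=2, lr=0.5, epochs=5000)
pred = sigmoid(sigmoid(X @ W1 + b1) @ W2 + b2)
print(np.round(pred.ravel(), 2))  # expected to approach [0, 1, 1, 0]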
Source
《智能系统学报》(CAAI Transactions on Intelligent Systems), 2016, No. 2, pp. 226-232 (7 pages). Indexed in CSCD and the Peking University Core Journal list (北大核心).
Funding
National Natural Science Foundation of China (51304114, 71371091)
Keywords
subbatch learning
neural network
backpropagation (BP) algorithm
batch size
training method evaluation
classification