Abstract
To address the slow convergence that gradient-descent-based BP learning algorithms suffer when training enters the saturation region of the activation function, a new fast BP learning algorithm based on error amplification is proposed to eliminate the influence of the saturation region on the later stages of training. By adaptively amplifying the error term in the weight-update function, the algorithm prevents the weight-update process from stalling in the saturation region, so that BP converges quickly to the desired accuracy. Simulation experiments on the 3-parity problem and the Soybean classification problem show that, compared with the widely used Delta-bar-Delta method, the momentum method, and the Prime Offset method, the proposed algorithm learns more efficiently.
A back-propagation neural network based on error amplification is proposed to improve the learning speed of multilayer artificial neural networks with sigmoid activation functions. It addresses the flat spots that play a significant role in the slow convergence of back propagation (BP). The advantages of the proposed algorithm are that it is easy to implement and converges to a minimal mean square error. It updates the weights of the neural network effectively by amplifying the error term of each output unit, and maintains a high learning rate so that the convergence criterion is met quickly. Experiments on well-established benchmarks, such as the 3-parity and Soybean data sets, show that the algorithm is more efficacious in learning than existing algorithms such as the Delta-bar-Delta algorithm, the momentum algorithm, and the Prime Offset algorithm, and that it is less computationally intensive and requires less memory than the Levenberg-Marquardt (LM) method.
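The abstract does not give the exact amplification rule, so the following is only a minimal sketch of the idea it describes: standard one-hidden-layer BP in which the output-layer error term is scaled up by a factor `beta` (a hypothetical stand-in for the paper's adaptive amplification) so that weight updates do not stall when the sigmoid derivative vanishes in the saturation region. The function name, `beta`, and all hyperparameters are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def train_bp_error_amplified(X, T, n_hidden=4, lr=0.5, beta=2.0,
                             epochs=2000, seed=0):
    """One-hidden-layer BP with an amplified output error term.

    beta > 1 scales the output delta, counteracting the vanishing
    sigmoid derivative y*(1-y) in flat spots; beta = 1 recovers
    plain gradient-descent BP. (Hypothetical sketch: the paper's
    amplification is adaptive, which is not reproduced here.)
    """
    rng = np.random.default_rng(seed)
    W1 = rng.uniform(-0.5, 0.5, (X.shape[1], n_hidden))
    b1 = np.zeros(n_hidden)
    W2 = rng.uniform(-0.5, 0.5, (n_hidden, T.shape[1]))
    b2 = np.zeros(T.shape[1])
    losses = []
    for _ in range(epochs):
        H = sigmoid(X @ W1 + b1)      # hidden activations
        Y = sigmoid(H @ W2 + b2)      # network outputs
        err = T - Y
        losses.append(float(np.mean(err ** 2)))
        # Amplified output delta: beta enlarges the error term so the
        # update survives the small derivative Y*(1-Y) near saturation.
        d_out = beta * err * Y * (1.0 - Y)
        d_hid = (d_out @ W2.T) * H * (1.0 - H)
        W2 += lr * H.T @ d_out
        b2 += lr * d_out.sum(axis=0)
        W1 += lr * X.T @ d_hid
        b1 += lr * d_hid.sum(axis=0)
    return (W1, b1, W2, b2), losses

# 3-parity benchmark: the target is 1 iff an odd number of inputs are 1.
X = np.array([[a, b, c] for a in (0, 1) for b in (0, 1) for c in (0, 1)],
             dtype=float)
T = X.sum(axis=1, keepdims=True) % 2
params, losses = train_bp_error_amplified(X, T)
```

The 3-parity task is used here because it is one of the two benchmarks the abstract reports; the mean-squared-error trace in `losses` can be compared against a `beta = 1.0` run to see the effect of the amplification.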
Source
计算机研究与发展 (Journal of Computer Research and Development)
EI
CSCD
Peking University Core Journals (北大核心)
2004, No. 5, pp. 774-779 (6 pages)
Funding
National Natural Science Foundation of China (grants 69975005 and 60273083)