Abstract
By analyzing the influence of hidden-layer neuron saturation on the performance of multi-layer feedforward neural networks, a new error function was constructed and an adaptive method for magnifying the error signal was designed, yielding an improved back-propagation (BP) learning algorithm. The algorithm's flow is simple and requires no heavy computation. Simulation results show that, in terms of convergence rate and the ability to keep the error function out of local minima, the new algorithm clearly outperforms other traditional BP methods.
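The record above does not give the paper's exact error function or magnification rule, so the following is only an illustrative sketch of the general idea: during standard BP training, measure how saturated the hidden-layer sigmoid outputs are (here assumed to be the mean distance from 0.5) and adaptively scale the output error signal upward when saturation is high, compensating for the vanishing sigmoid derivative. The saturation measure, the gain schedule `1 + sat`, and the XOR toy problem are all assumptions for demonstration, not the authors' formulas.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

rng = np.random.default_rng(0)

# Toy stand-in problem: XOR with a 2-4-1 sigmoid network.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

W1 = rng.normal(0, 1, (2, 4)); b1 = np.zeros(4)
W2 = rng.normal(0, 1, (4, 1)); b2 = np.zeros(1)
lr = 0.5

def forward(X):
    h = sigmoid(X @ W1 + b1)          # hidden-layer outputs
    return h, sigmoid(h @ W2 + b2)    # network output

h, out = forward(X)
initial_mse = float(np.mean((y - out) ** 2))

for _ in range(5000):
    h, out = forward(X)

    # Assumed saturation degree: 0 when hidden outputs sit at 0.5
    # (unsaturated), approaching 1 as they pin to 0 or 1.
    sat = np.mean(2.0 * np.abs(h - 0.5))

    # Assumed adaptive rule: magnify the error signal as saturation
    # grows, to offset the shrinking sigmoid derivative.
    gain = 1.0 + sat

    err = y - out
    delta_out = gain * err * out * (1 - out)
    delta_h = (delta_out @ W2.T) * h * (1 - h)

    # Standard gradient-ascent-on-(-MSE) weight updates.
    W2 += lr * h.T @ delta_out; b2 += lr * delta_out.sum(axis=0)
    W1 += lr * X.T @ delta_h;   b1 += lr * delta_h.sum(axis=0)

h, out = forward(X)
final_mse = float(np.mean((y - out) ** 2))
```

The gain multiplies only the back-propagated error term, so the forward pass and weight-update structure of plain BP are unchanged, which is consistent with the abstract's claim that the modification adds little computational cost.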
Source
《系统仿真学报》
EI
CAS
CSCD
Peking University Core Journal (北大核心)
2007, No. 19, pp. 4591-4593, 4598 (4 pages)
Journal of System Simulation
Keywords
feedforward neural networks
learning algorithm
saturation degree
local minima
error signal