Abstract
This paper considers self-tuning of the learning rate in the BP algorithm using derivative information. Starting from the principles of the BP algorithm, it identifies the root cause of the algorithm's slow convergence and presents an efficient way to derive the first and second derivatives of the objective function with respect to the learning rate. A Newton-like method and a line search based on a linear expansion of the actual outputs are then applied to obtain a dynamically optimal step size. The storage needed for these first and second derivatives has the same structure as in the standard BP algorithm, so the computational and storage burdens scale with network size exactly as in standard BP, the memory overhead is negligible, and the method is easy to implement. Computer simulation results show that the method is effective and that convergence of the BP algorithm is remarkably accelerated.
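To make the Newton-like step size concrete, a short sketch follows; the notation, the toy quadratic objective, and the finite-difference curvature estimate are illustrative assumptions, not the paper's actual derivation (the paper obtains the derivatives analytically, via a linear expansion of the network outputs). Writing $g = \nabla E(w)$ for the gradient and $\varphi(\eta) = E(w - \eta g)$ for the error along the steepest-descent direction, one has $\varphi'(0) = -g^\top g$ and $\varphi''(0) = g^\top H g$, with $H$ the Hessian of $E$; a single Newton step from $\eta = 0$ then gives

    \eta^* = -\frac{\varphi'(0)}{\varphi''(0)} = \frac{g^\top g}{g^\top H g}

A minimal Python sketch of gradient descent with this self-tuned learning rate (the linear model, function names, and fallback rate are hypothetical):

    import numpy as np

    def objective(w, X, y):
        # Sum-of-squares error of a linear model X w ~ y, standing in for a BP network.
        r = X @ w - y
        return 0.5 * r @ r

    def gradient(w, X, y):
        return X.T @ (X @ w - y)

    def hessian_vector_product(w, v, X, y, eps=1e-6):
        # Central-difference estimate of H v, so the full Hessian is never formed;
        # storage stays at the size of a gradient, as in standard BP.
        return (gradient(w + eps * v, X, y) - gradient(w - eps * v, X, y)) / (2 * eps)

    def newton_optimal_rate(w, X, y, fallback=1e-2):
        # Newton-like step size eta* = (g.g) / (g.Hg) along the negative gradient.
        g = gradient(w, X, y)
        curvature = g @ hessian_vector_product(w, g, X, y)
        if curvature <= 0:
            return fallback, g  # guard against non-positive curvature
        return (g @ g) / curvature, g

    rng = np.random.default_rng(0)
    X = rng.normal(size=(50, 5))
    y = rng.normal(size=50)
    w = np.zeros(5)
    for _ in range(20):
        eta, g = newton_optimal_rate(w, X, y)
        w -= eta * g            # gradient step with the dynamically optimal rate
    print("final error:", objective(w, X, y))

For this quadratic objective the Newton step along the negative gradient is the exact line-search minimizer, so each iteration takes the largest useful step; for a nonlinear BP network the same formula yields a locally optimal, self-tuned learning rate at every iteration.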
Source
《河北工业大学学报》
2000, No. 3, pp. 106-108 (3 pages)
Journal of Hebei University of Technology
Keywords
BP neural network
optimal step size
line search method
BP algorithm
backpropagation algorithm
optimized learning rate
linear expansion
Newton-like method