Abstract
To overcome the slow convergence of the BP algorithm and its tendency to become trapped in local minima, a momentum term based on a class of nonlinear characteristics is introduced into the gradient search of BP, yielding a general and simple global training algorithm for feed-forward neural networks (FNN), named IBPM. Combined with a temperature-raising strategy, the algorithm achieves considerable improvement in optimization accuracy and training speed. Simulation results verify the efficiency of the algorithm, and some conclusions on parameter selection are provided.
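As a rough illustration of the idea described above, the sketch below trains a one-hidden-layer FNN with standard BP gradients plus a momentum term whose coefficient is modulated nonlinearly and scaled by a rising "temperature". The specific nonlinear form (a tanh of the gradient norm) and the linear temperature schedule are assumptions for illustration only; the abstract does not give the exact expressions used by the IBPM algorithm.

```python
# Minimal sketch: BP gradient descent with an assumed nonlinear momentum term
# and an assumed rising-temperature schedule (not the paper's exact formulas).
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def train_fnn(X, Y, n_hidden=8, eta=0.5, epochs=2000, seed=0):
    rng = np.random.default_rng(seed)
    n_in, n_out = X.shape[1], Y.shape[1]
    W1 = rng.normal(scale=0.5, size=(n_in, n_hidden))
    W2 = rng.normal(scale=0.5, size=(n_hidden, n_out))
    dW1_prev = np.zeros_like(W1)   # previous weight updates (momentum memory)
    dW2_prev = np.zeros_like(W2)

    for t in range(epochs):
        # forward pass
        H = sigmoid(X @ W1)
        O = sigmoid(H @ W2)
        err = O - Y

        # backward pass: standard BP gradients for squared error
        delta_o = err * O * (1 - O)
        delta_h = (delta_o @ W2.T) * H * (1 - H)
        gW2 = H.T @ delta_o
        gW1 = X.T @ delta_h

        # assumed nonlinear momentum: the coefficient shrinks when gradients are
        # large and grows as training flattens out; the rising "temperature"
        # strengthens momentum over time to help escape shallow local minima
        temperature = 0.5 + 0.5 * t / epochs
        alpha = temperature * (1.0 - np.tanh(np.linalg.norm(gW1) + np.linalg.norm(gW2)))

        dW1 = -eta * gW1 + alpha * dW1_prev
        dW2 = -eta * gW2 + alpha * dW2_prev
        W1 += dW1
        W2 += dW2
        dW1_prev, dW2_prev = dW1, dW2

    return W1, W2

# usage: XOR, a small benchmark with local minima for plain BP
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
Y = np.array([[0], [1], [1], [0]], dtype=float)
W1, W2 = train_fnn(X, Y)
print(np.round(sigmoid(sigmoid(X @ W1) @ W2), 2))
```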
Source
《控制与决策》
EI
CSCD
Peking University Core Journal
2000, No. 1, pp. 19-22 (4 pages)
Control and Decision
Funding
National Natural Science Foundation of China (69684001)
National Climbing Program