Abstract
The local minimum problem of the SPDS algorithm, a training algorithm for BP neural networks, is studied. Since the SPDS algorithm, which is based on the single-parameter dynamic search algorithm, searches the variables one by one, it is proved that the equivalent error function at each iteration is quasi-convex, so that its minimum point exists and can be found. The set of initial values from which the iteration is guaranteed to converge is defined as the global minimum region. To address the local minimum problem, the L-SPDS algorithm is proposed, and it is proved that the global minimum region of the L-SPDS algorithm is the region obtained by expanding that of the SPDS algorithm along the coordinate-axis directions. Consequently, the probability that the algorithm converges to the global minimum point is greatly increased, which is also confirmed by simulation experiments.
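The variable-by-variable search described above, where each one-dimensional subproblem is quasi-convex and therefore has a findable minimum, can be sketched as coordinate-wise minimization. This is a minimal illustration only, not the paper's SPDS algorithm: the function names, the search interval, and the use of ternary search (valid for quasi-convex one-dimensional functions) are all assumptions for the sketch.

```python
# Hypothetical sketch: minimize f(x) by searching one variable at a time,
# solving each 1-D subproblem with ternary search, which is guaranteed to
# locate the minimum when the 1-D slice is quasi-convex (unimodal).
# All names and parameters here are illustrative, not from the paper.

def ternary_search(g, lo, hi, tol=1e-8):
    """Minimize a quasi-convex 1-D function g on the interval [lo, hi]."""
    while hi - lo > tol:
        m1 = lo + (hi - lo) / 3.0
        m2 = hi - (hi - lo) / 3.0
        if g(m1) < g(m2):
            hi = m2  # minimum lies in [lo, m2]
        else:
            lo = m1  # minimum lies in [m1, hi]
    return (lo + hi) / 2.0

def coordinate_search(f, x0, lo=-10.0, hi=10.0, sweeps=50):
    """Search the variables one by one, repeating full sweeps over all
    coordinates; each coordinate update solves a quasi-convex 1-D problem."""
    x = list(x0)
    for _ in range(sweeps):
        for i in range(len(x)):
            def g(t, i=i):
                y = list(x)
                y[i] = t  # vary only coordinate i
                return f(y)
            x[i] = ternary_search(g, lo, hi)
    return x

# Example: a convex quadratic, so every coordinate slice is quasi-convex.
f = lambda v: (v[0] - 1.0) ** 2 + (v[1] + 2.0) ** 2
xmin = coordinate_search(f, [0.0, 0.0])
```

For this separable quadratic the sweep converges to the global minimizer near (1, -2); the quasi-convexity of each slice is what lets the 1-D search succeed at every step, mirroring the role the equivalent error function's quasi-convexity plays in the abstract's argument.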
Source
Journal of Harbin Institute of Technology (《哈尔滨工业大学学报》)
Indexed in: EI, CAS, CSCD, PKU Core (北大核心)
2013, No. 11, pp. 125-128 (4 pages)
Funding
Supported by the National Natural Science Foundation of China (61173034)