Abstract
To address the poor adaptive adjustment and the lack of effective control over the learning process exhibited by single training algorithms for process neural networks (PNN), a solution algorithm combining gradient descent with Newton iteration, termed the hybrid error gradient descent algorithm, is proposed. In the initial training phase, gradient descent is applied to the network's training objective function; it requires only a numerical formula for the first derivative of the objective function, so its computational complexity is low and the error decreases quickly. When the learning efficiency of gradient descent drops, Newton iteration is introduced, with the gradient-descent result substituted into the objective function as the initial parameters. The problem is thereby transformed into solving a system of nonlinear equations, which requires no one-dimensional line search and improves network training efficiency. Switching between the two algorithms is adaptively controlled through learning-efficiency analysis until the stopping condition is met. The algorithm is applied to time-varying signal pattern classification, and the experimental results show that it considerably improves the training efficiency of PNN.
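The two-phase scheme described in the abstract can be sketched on a toy objective. The following is a minimal illustration, not the paper's PNN training code: the objective, the switching threshold `switch_ratio`, and all function names are assumptions for demonstration. Phase one runs gradient descent using first derivatives only; when the relative error decrease per step becomes small (the "learning efficiency" dropping), the current parameters seed a Newton iteration that solves the resulting nonlinear system directly, with no line search.

```python
import numpy as np

def objective(w, X, y):
    """Toy least-squares training error (stand-in for the PNN objective)."""
    r = X @ w - y
    return 0.5 * r @ r

def gradient(w, X, y):
    """First derivative only, as used in the gradient-descent phase."""
    return X.T @ (X @ w - y)

def hessian(w, X, y):
    """Second derivative, needed only in the Newton phase."""
    return X.T @ X

def hybrid_train(X, y, lr=0.01, switch_ratio=1e-3, tol=1e-10, max_iter=500):
    w = np.zeros(X.shape[1])
    prev = objective(w, X, y)
    # Phase 1: gradient descent -- cheap per step, fast initial error drop.
    for _ in range(max_iter):
        w -= lr * gradient(w, X, y)
        cur = objective(w, X, y)
        # Switch when the relative error decrease per step falls below
        # the threshold, i.e. gradient descent's learning efficiency is low.
        if prev - cur < switch_ratio * prev:
            break
        prev = cur
    # Phase 2: Newton iteration, seeded with the gradient-phase result;
    # each step solves the linear system H p = -g (no 1-D line search).
    for _ in range(max_iter):
        g = gradient(w, X, y)
        if np.linalg.norm(g) < tol:
            break
        w -= np.linalg.solve(hessian(w, X, y), g)
    return w

rng = np.random.default_rng(0)
X = rng.normal(size=(50, 3))
w_true = np.array([1.0, -2.0, 0.5])
y = X @ w_true
w_hat = hybrid_train(X, y)
```

On this quadratic toy problem the Newton phase converges in essentially one step once seeded, which mirrors the abstract's rationale: use the cheap first-order method while it is effective, then hand off to the second-order method near the solution.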
Source
Journal of Northeast Petroleum University (《东北石油大学学报》)
Indexed in: CAS; Peking University Core Journals (北大核心)
2014, No. 4, pp. 92-96, 11-12 (5 pages in total)
Funding
National Natural Science Foundation of China (Grant No. 61170132)
Keywords
process neural networks
algorithm efficiency
Newton iteration
gradient descent
hybrid error gradient descent algorithm