Abstract
Combining the strengths of the genetic algorithm and the gradient descent method, a hybrid optimization algorithm for training neural network weights is proposed that can also optimize the network structure. First, a genetic algorithm with reliable global search ability, using a hierarchical encoding scheme and an adaptive mutation probability, optimizes the network weights and structure simultaneously; by the end of the evolution it reaches a point near the global optimum. Then, starting from this point, the gradient descent method, with its strong local search ability, performs a local search and finally meets the network's training objective. Compared with the genetic algorithm or the gradient descent method alone, the hybrid optimization algorithm converges significantly faster.
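The following is a minimal sketch of the two-stage idea only, not the paper's implementation: a genetic algorithm with an adaptive mutation probability coarsely searches the weight space of a small fixed-structure network, and gradient descent then refines the best individual it finds. The XOR task, network size, GA hyperparameters, and the particular diversity-based mutation rule are all illustrative assumptions; the paper's hierarchical encoding, which also evolves the network structure, is not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(0)

# Tiny 1-hidden-layer network for XOR; all weights are flattened into one vector.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)
N_HID = 4
N_W = 2 * N_HID + N_HID + N_HID + 1   # W1 (2x4) + b1 (4) + W2 (4x1) + b2 (1)

def unpack(w):
    W1 = w[:2 * N_HID].reshape(2, N_HID)
    b1 = w[2 * N_HID:3 * N_HID]
    W2 = w[3 * N_HID:4 * N_HID].reshape(N_HID, 1)
    b2 = w[-1:]
    return W1, b1, W2, b2

def mse(w):
    W1, b1, W2, b2 = unpack(w)
    h = np.tanh(X @ W1 + b1)
    out = 1.0 / (1.0 + np.exp(-(h @ W2 + b2)))
    return float(np.mean((out - y) ** 2))

# --- Stage 1: genetic algorithm with adaptive mutation probability ---
POP, GENS = 40, 200
pop = rng.normal(0.0, 1.0, size=(POP, N_W))
for gen in range(GENS):
    fit = np.array([mse(ind) for ind in pop])
    pop = pop[np.argsort(fit)]             # sort best-first (lowest error)
    elite = pop[:POP // 2]
    # Adaptive mutation: raise the mutation probability when the elite has converged.
    diversity = np.mean(np.std(elite, axis=0))
    p_mut = 0.3 if diversity < 0.1 else 0.05
    children = []
    while len(children) < POP - len(elite):
        a, b = elite[rng.integers(len(elite), size=2)]
        mask = rng.random(N_W) < 0.5       # uniform crossover
        child = np.where(mask, a, b)
        mutate = rng.random(N_W) < p_mut
        child = child + mutate * rng.normal(0.0, 0.3, N_W)
        children.append(child)
    pop = np.vstack([elite, np.array(children)])

fit = np.array([mse(ind) for ind in pop])
best = pop[int(np.argmin(fit))].copy()     # near-optimal point found by the GA

# --- Stage 2: gradient descent refinement starting from the GA solution ---
lr, w = 0.5, best
for step in range(2000):
    W1, b1, W2, b2 = unpack(w)
    h = np.tanh(X @ W1 + b1)
    out = 1.0 / (1.0 + np.exp(-(h @ W2 + b2)))
    d_out = 2 * (out - y) / len(X) * out * (1 - out)   # dMSE w.r.t. output logits
    dW2 = h.T @ d_out
    db2 = d_out.sum(axis=0)
    d_h = (d_out @ W2.T) * (1 - h ** 2)
    dW1 = X.T @ d_h
    db1 = d_h.sum(axis=0)
    grad = np.concatenate([dW1.ravel(), db1, dW2.ravel(), db2])
    w = w - lr * grad

print("GA error:", mse(best), "-> hybrid error:", mse(w))
```

The division of labor mirrors the abstract: the GA only needs to land near the global optimum, so its mutation step stays coarse, while the gradient stage is responsible for the final precision that the GA alone would reach slowly.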
Source
《微计算机信息》 (Control & Automation)
Peking University Core Journal (北大核心)
2005, Issue 10S, pp. 49-51 (3 pages)
Funding
Naval directive scientific research project
Grant number not disclosed
Keywords
genetic algorithm
neural networks
gradient descent algorithm
adaptive mutation