Abstract
To address the slow convergence and the tendency to become trapped in local minima of conventional BP neural network training, a fast hybrid learning algorithm based on training chips is proposed, combining the genetic algorithm with a BP learning algorithm that uses a magnified error signal. The algorithm divides the training process of a conventional neural network into many small training chips and exploits the parallel search capability of the genetic algorithm to supervise the error-magnified BP training. By promptly identifying fast-converging individuals and filtering out individuals trapped in local minima, it ensures a high training success rate and achieves rapid convergence toward the globally optimal region. Simulation experiments show that, without increasing the number of hidden-layer nodes, the algorithm significantly improves the convergence accuracy and generalization ability of the network.
A hybrid algorithm based on chips (HABC) is proposed to speed up the training of back-propagation neural networks and to improve their performance. The algorithm divides the training of a neural network into many training chips, on which an improved BP algorithm based on a magnified error signal is performed. When a chip's training is completed, a genetic algorithm is applied to optimize its results, and the next chip is then trained on the optimized result. Through these optimization operations the HABC gains the ability to search for the globally optimal solution, and it is easy to parallelize. Simulation experiments show that the algorithm effectively avoids training failures caused by randomly initialized weights and thresholds, and overcomes the slow convergence caused by flat spots, where the error signal becomes too small.
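The abstract does not give implementation details, so the following is only a minimal Python sketch of the chip-based scheme as described: short error-magnified BP runs ("chips") on a population of weight vectors, with GA-style selection between chips that keeps fast-converging individuals and replaces stagnating ones with mutated copies of the current best. The tiny XOR network, the `gain` factor, the chip length, and the mutation scheme are all illustrative assumptions, not the paper's actual design or parameters.

```python
import numpy as np

rng = np.random.default_rng(0)

# Tiny 2-2-1 network on XOR; all weights and biases flattened into one genome.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], float)
y = np.array([[0], [1], [1], [0]], float)
N_W = 2 * 2 + 2 + 2 * 1 + 1  # 9 parameters

def unpack(w):
    W1 = w[:4].reshape(2, 2); b1 = w[4:6]
    W2 = w[6:8].reshape(2, 1); b2 = w[8:9]
    return W1, b1, W2, b2

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def mse(w):
    W1, b1, W2, b2 = unpack(w)
    out = sigmoid(sigmoid(X @ W1 + b1) @ W2 + b2)
    return float(np.mean((out - y) ** 2))

def bp_chip(w, epochs=50, lr=0.5, gain=2.0):
    """One training chip: a short BP run with a magnified error signal.
    `gain` scales the output-layer delta to push units off flat spots
    (an assumed form of the paper's error-amplification step)."""
    w = w.copy()
    for _ in range(epochs):
        W1, b1, W2, b2 = unpack(w)
        h = sigmoid(X @ W1 + b1)
        out = sigmoid(h @ W2 + b2)
        d2 = gain * (out - y) * out * (1 - out)   # magnified output delta
        d1 = (d2 @ W2.T) * h * (1 - h)
        grad = np.concatenate([(X.T @ d1).ravel(), d1.sum(0),
                               (h.T @ d2).ravel(), d2.sum(0)])
        w -= lr * grad / len(X)
    return w

# GA supervision between chips: sort by error, keep the better half,
# replace the rest (treated as trapped or slow) with mutated copies
# of the current best individual.
init_pop = [rng.normal(0, 1, N_W) for _ in range(8)]
pop = init_pop
for chip in range(5):
    pop = [bp_chip(w) for w in pop]
    pop.sort(key=mse)
    best = pop[0]
    pop = pop[:4] + [best + rng.normal(0, 0.3, N_W) for _ in range(4)]

best_init = min(mse(w) for w in init_pop)
best_final = min(mse(w) for w in pop)
print(f"best MSE: {best_init:.4f} -> {best_final:.4f}")
```

The chip loop is also where the parallelism noted in the abstract would enter: each individual's `bp_chip` run is independent, so a real implementation could train the population members concurrently and synchronize only at the GA selection step.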
Source
《哈尔滨工业大学学报》
EI
CAS
CSCD
Peking University Core Journal
2006, No. 5, pp. 685-688 (4 pages)
Journal of Harbin Institute of Technology
Funding
Supported by the National Natural Science Foundation of China (60273083)
Keywords
Back-Propagation (BP) algorithm
artificial neural network
genetic algorithm
flat spots (saturation region)
local optimum