Abstract
An excessive number of hidden-layer nodes in a BP (back-propagation) neural network degrades the network's generalization performance and efficiency. The self-configuring learning algorithm deletes and merges hidden nodes by examining the correlations among the outputs of the hidden-layer nodes. However, the self-configuring algorithm suffers from inconsistent network convergence when nodes are deleted and merged. Therefore, the concept of randomness degree is introduced into the self-configuring algorithm, and a circular self-configuring algorithm based on the divide-and-conquer strategy is proposed to optimize the network structure. Comparative experiments in Matlab verify that the circular self-configuring algorithm prunes networks starting from the same or different numbers of hidden nodes down to a consistent structure, and optimizes the network to its most compact form.
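The core mechanism the abstract describes — removing a hidden node when its output is strongly correlated with another node's output — can be sketched as follows. This is a minimal illustration, not the paper's algorithm: the function name, the Pearson-correlation choice, and the fixed threshold are all assumptions for demonstration.

```python
import numpy as np

def prune_correlated_hidden_units(H, threshold=0.95):
    """Illustrative sketch of correlation-based hidden-node pruning.

    H: array of shape (n_samples, n_hidden) holding the hidden-layer
    activations recorded over a dataset. Returns the indices of the
    hidden units to keep: a unit is dropped when its activations are
    strongly correlated (|r| >= threshold) with an already-kept unit.
    """
    n_units = H.shape[1]
    # Pairwise Pearson correlations between hidden-unit output columns
    corr = np.corrcoef(H, rowvar=False)
    keep = []
    for j in range(n_units):
        # Keep unit j only if it is not redundant with any kept unit
        if all(abs(corr[j, k]) < threshold for k in keep):
            keep.append(j)
    return keep
```

For example, if one hidden unit's output is a scalar multiple of another's, their correlation is 1 and one of the pair is pruned; the paper's full method additionally merges nodes and reassigns weights, which this sketch omits.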
Source
Computer Engineering and Design (《计算机工程与设计》)
Indexed in: CSCD; Peking University Core (北大核心)
2008, No. 2, pp. 411-413, 417 (4 pages)
Funding
Jiangxi Provincial Science and Technology Research Project (20041B100100)
Keywords
circular self-configuring
BP neural network
structure optimization
generalization
randomness