Abstract
The number of neurons in the hidden layers of a feedforward neural network is closely related to its learning and generalization abilities. A new pruning algorithm for feedforward neural networks is obtained by improving the Neural Network Self-Configuring Learning (NNSCL) algorithm, using the Generalized Inverse Matrix (GIM) method to solve the least-squares problem. Applied to a large trained network, the new algorithm can remove redundant neurons from the hidden layers, yielding a minimal neural network that preserves the original performance without retraining and generalizes well. Simulation results demonstrate the effectiveness and feasibility of the algorithm.
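To make the idea concrete, below is a minimal Python sketch of pseudoinverse-based least-squares pruning: a candidate hidden neuron is tentatively removed, the output weights are re-solved on the reduced activation matrix via the Moore-Penrose generalized inverse, and the neuron is dropped if the fitting error does not grow beyond a tolerance. This is an illustration of the general technique, not the paper's NNSCL-based procedure; the function `prune_hidden_neurons` and the parameter `tol` are hypothetical names introduced here.

```python
import numpy as np

def prune_hidden_neurons(H, T, tol=1e-3):
    """Greedy least-squares pruning via the generalized inverse (sketch).

    H   : (n_samples, n_hidden) hidden-layer activations of a trained net
    T   : (n_samples, n_outputs) targets the network should reproduce
    tol : maximum allowed increase in least-squares error per removal

    Returns the indices of hidden neurons to keep and the re-fitted
    output weights for those neurons (no retraining of hidden weights).
    """
    keep = list(range(H.shape[1]))
    # Baseline least-squares fit and error using all hidden neurons.
    W = np.linalg.pinv(H) @ T
    base_err = np.linalg.norm(H @ W - T)

    pruned = True
    while pruned and len(keep) > 1:
        pruned = False
        for j in list(keep):
            trial = [k for k in keep if k != j]
            # Re-solve output weights on the reduced activation matrix.
            W_trial = np.linalg.pinv(H[:, trial]) @ T
            err = np.linalg.norm(H[:, trial] @ W_trial - T)
            if err - base_err <= tol:
                # Neuron j is redundant: its contribution is absorbed
                # by re-fitting the remaining output weights.
                keep, W = trial, W_trial
                pruned = True
                break
    return keep, W
```

Because only the output-layer weights are re-solved in closed form, the pruned network needs no gradient retraining, which mirrors the property claimed in the abstract.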
Source
《四川大学学报(自然科学版)》
Indexed in: CAS, CSCD, PKU Core (北大核心)
2008, No. 6, pp. 1352-1356 (5 pages)
Journal of Sichuan University (Natural Science Edition)