Abstract
When the hidden layer of a fault-diagnosis neural network has too few or too many nodes, the learning process may fail to converge, the network's generalization ability weakens, and its fault tolerance degrades. To address these problems, an optimized learning method is proposed that dynamically deletes redundant hidden nodes during weight learning, based on the significant differences in how individual hidden nodes affect the outputs of the output layer. Applying the method to a fault-diagnosis neural network shows that it increases training speed and improves diagnostic accuracy.
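As an illustration of the general idea only (not the paper's exact algorithm), the sketch below trains a deliberately oversized one-hidden-layer network and periodically deletes hidden nodes whose effect on the output layer is insignificant. The significance measure used here (mean absolute activation times outgoing weight), the pruning threshold, and all other parameter values are assumptions made for the example.

```python
import numpy as np

# Sketch: prune hidden nodes with insignificant effect on the outputs
# during weight learning. The significance measure below is an assumed
# stand-in for the criterion defined in the paper.

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

class PrunableMLP:
    def __init__(self, n_in, n_hidden, n_out):
        self.W1 = rng.normal(scale=0.5, size=(n_in, n_hidden))
        self.b1 = np.zeros(n_hidden)
        self.W2 = rng.normal(scale=0.5, size=(n_hidden, n_out))
        self.b2 = np.zeros(n_out)

    def forward(self, X):
        self.H = sigmoid(X @ self.W1 + self.b1)       # hidden activations
        self.Y = sigmoid(self.H @ self.W2 + self.b2)  # network outputs
        return self.Y

    def train_step(self, X, T, lr=0.5):
        Y = self.forward(X)
        # Backpropagation for squared-error loss
        dY = (Y - T) * Y * (1 - Y)
        dH = (dY @ self.W2.T) * self.H * (1 - self.H)
        self.W2 -= lr * self.H.T @ dY / len(X)
        self.b2 -= lr * dY.mean(axis=0)
        self.W1 -= lr * X.T @ dH / len(X)
        self.b1 -= lr * dH.mean(axis=0)
        return float(np.mean((Y - T) ** 2))

    def prune(self, X, threshold=0.05):
        """Delete hidden nodes whose effect on the outputs is insignificant."""
        H = sigmoid(X @ self.W1 + self.b1)
        # Significance of node j: average magnitude of its contribution to
        # the output-layer inputs (assumed measure, see lead-in paragraph).
        contrib = np.mean(np.abs(H[:, :, None] * self.W2[None, :, :]), axis=(0, 2))
        keep = contrib >= threshold * contrib.max()
        self.W1, self.b1 = self.W1[:, keep], self.b1[keep]
        self.W2 = self.W2[keep, :]
        return int((~keep).sum())

if __name__ == "__main__":
    # Toy stand-in for fault-symptom data (XOR-like patterns)
    X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], float)
    T = np.array([[0], [1], [1], [0]], float)
    net = PrunableMLP(n_in=2, n_hidden=12, n_out=1)   # oversized hidden layer
    for epoch in range(1, 5001):
        loss = net.train_step(X, T)
        if epoch % 1000 == 0:                         # prune periodically
            removed = net.prune(X)
            print(f"epoch {epoch}: mse={loss:.4f}, pruned {removed} nodes, "
                  f"{net.W1.shape[1]} hidden nodes remain")
```

In this sketch, pruning is interleaved with ordinary gradient-descent weight updates, so the remaining weights continue to adapt after each deletion; the interval and threshold would need tuning for a real diagnosis task.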
Source
Journal of Zhengzhou University of Technology (《郑州工业大学学报》)
1999, No. 1, pp. 36-38 (3 pages)
Funding
Henan Provincial Science and Technology Research Project (河南省科技攻关项目)