Abstract
Poor generalization is a major problem encountered when using neural networks to find structure in noisy data sets. Controlling network complexity is a common way to address it. In this paper, however, a novel additive penalty term that reflects the features extracted by the hidden units is introduced to eliminate overtraining in multilayer feedforward networks. Computer simulations demonstrate that this penalty term, which operates in an unsupervised fashion, greatly improves generalization ability.
When neural networks are used to extract features from noisy data, generalization ability is a pressing problem. The usual approach is to control the complexity of the network. In this paper, we propose a class of penalty terms based on an entropy factor; the term correctly reflects the data features extracted by the hidden-layer units and thus effectively avoids overtraining of feedforward networks. Computer simulation results show that the algorithm greatly improves the generalization ability of the network.
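The abstract does not give the exact form of the penalty, so the following is only a minimal sketch of the general idea it describes: an unsupervised, entropy-style term computed from the hidden-unit activations is added to the ordinary training loss. The function names, the per-sample normalization of activations, and the weight `lam` are illustrative assumptions, not the paper's definition.

```python
import numpy as np

def hidden_entropy_penalty(h, eps=1e-12):
    """Entropy of normalized hidden activations, averaged over the batch (assumed form)."""
    p = np.abs(h) / (np.abs(h).sum(axis=1, keepdims=True) + eps)  # normalize each sample's activations
    return -np.mean(np.sum(p * np.log(p + eps), axis=1))

def total_loss(y_pred, y_true, h, lam=0.1):
    """Usual squared error plus the unsupervised penalty on hidden activations."""
    mse = np.mean((y_pred - y_true) ** 2)
    return mse + lam * hidden_entropy_penalty(h)

# Example: batch of 4 samples, 5 hidden units, 1 output
h = np.random.rand(4, 5)
y_pred, y_true = np.random.rand(4, 1), np.random.rand(4, 1)
print(total_loss(y_pred, y_true, h))
```

Note that the penalty depends only on the hidden activations, not on the targets, which is what makes it unsupervised in the sense used above.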