
Using an Entropy Penalty Term for Improving the Generalization Ability of Multilayer Feedforward Networks *

Abstract: Generalization ability is a major problem encountered when using neural networks to find structure in noisy data sets. A common remedy is to control the network's complexity. In this paper, however, a novel additive penalty term that represents the features extracted by the hidden units is introduced to eliminate overtraining of multilayer feedforward networks. Computer simulations demonstrate that this unsupervised penalty term greatly improves the generalization ability.
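The abstract does not give the penalty's exact functional form, so the following is only a minimal sketch of the general idea: an entropy-style term computed from the hidden-unit activations is added to the usual squared-error cost. The network sizes, the batch-wise normalization of activations into per-unit distributions, the toy data, and the weight `lam` are all assumptions for illustration, not the authors' formulation.

```python
# Hypothetical sketch of an entropy penalty on hidden-unit activations.
# This is NOT the paper's exact method; it illustrates one plausible reading
# of "a penalty term representing the features extracted by hidden units".
import torch
import torch.nn as nn

class MLP(nn.Module):
    def __init__(self, n_in=5, n_hidden=10, n_out=1):
        super().__init__()
        self.hidden = nn.Linear(n_in, n_hidden)
        self.out = nn.Linear(n_hidden, n_out)

    def forward(self, x):
        h = torch.sigmoid(self.hidden(x))   # hidden-unit activations
        return self.out(h), h

def entropy_penalty(h, eps=1e-8):
    # Normalize each hidden unit's activations over the batch into a
    # distribution, then sum the Shannon entropies of the units.
    p = h / (h.sum(dim=0, keepdim=True) + eps)
    return -(p * torch.log(p + eps)).sum()

# Toy training loop: noisy samples of a simple target function.
torch.manual_seed(0)
x = torch.randn(200, 5)
y = x.sum(dim=1, keepdim=True) + 0.3 * torch.randn(200, 1)

model = MLP()
opt = torch.optim.SGD(model.parameters(), lr=0.05)
lam = 1e-3                                  # penalty weight (assumed)

for epoch in range(500):
    opt.zero_grad()
    y_hat, h = model(x)
    loss = nn.functional.mse_loss(y_hat, y) + lam * entropy_penalty(h)
    loss.backward()
    opt.step()
```

Because the penalty depends only on the hidden activations and not on the targets, it is computed in an unsupervised fashion, which matches the abstract's description of the penalty term.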
Source: Journal of Southeast University (English Edition), indexed in EI and CAS, 1998, Issue 1, pp. 29-34 (6 pages)
Keywords: generalization, overtraining, entropy