Abstract
Based on an analysis of the construction process of existing neural network ensembles, a novel compact ensemble model of neural networks is proposed, in which the training of member networks and the optimization of their combination weights are carried out simultaneously in the same learning process, with all parameters adjusted to improve the generalization performance of the ensemble. Compared with existing ensemble models, the construction process is more compact: it merges the individual-network generation stage and the conclusion-combination stage into one, and the information exchange among member networks is based on the real-time, dynamic structure of the ensemble, so that the information conveyed between member training and output combination always remains consistent. To verify the effectiveness and advantages of this model, four well-known classification datasets were used to compare the generalization performance of the compact ensemble with that of CNNE, Bagging, Boosting, and other existing ensemble models. The experimental results show that the compact ensemble model reduces the error rate on the test sets by 8% to 16%.
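The core idea described above, training the member networks and their combination weights in one learning process rather than in two separate stages, can be sketched as follows. This is a minimal illustration under assumed choices (tiny one-hidden-layer members, squared-error loss, a toy XOR-like dataset, softmax-parameterized combination weights), not the paper's actual algorithm:

```python
import numpy as np

# Sketch of a "compact ensemble": member networks and their combination
# weights receive gradients from the same ensemble-level loss in every
# step, instead of training members first and fitting weights afterwards.
# All sizes, the learning rate, and the dataset are illustrative.

rng = np.random.default_rng(0)

# Toy 2-D classification task (XOR-like quadrant labels)
X = rng.uniform(-1.0, 1.0, size=(200, 2))
y = (X[:, 0] * X[:, 1] > 0).astype(float)

M, H, lr = 3, 8, 0.5                  # members, hidden units, step size
W1 = rng.normal(0.0, 1.0, (M, 2, H))  # per-member input->hidden weights
W2 = rng.normal(0.0, 1.0, (M, H))     # per-member hidden->output weights
alpha = np.zeros(M)                   # logits of the combination weights

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

losses = []
for _ in range(3000):
    w = np.exp(alpha) / np.exp(alpha).sum()           # softmax weights, sum to 1
    Hid = np.tanh(np.einsum('nd,mdh->mnh', X, W1))    # (M, N, H) hidden activations
    outs = sigmoid(np.einsum('mnh,mh->mn', Hid, W2))  # (M, N) member outputs
    pred = w @ outs                                   # weighted ensemble output
    losses.append(0.5 * np.mean((pred - y) ** 2))

    err = (pred - y) / len(y)                         # dL/dpred
    g_out = err * w[:, None] * outs * (1.0 - outs)    # back through sigmoid
    g_W2 = np.einsum('mn,mnh->mh', g_out, Hid)
    delta = g_out[:, :, None] * W2[:, None, :] * (1.0 - Hid ** 2)
    g_W1 = np.einsum('mnh,nd->mdh', delta, X)
    gw = outs @ err                                   # dL/dw_i
    g_alpha = w * (gw - w @ gw)                       # back through softmax

    W1 -= lr * g_W1                                   # one joint update for
    W2 -= lr * g_W2                                   # the members *and*
    alpha -= lr * g_alpha                             # the combination weights

w = np.exp(alpha) / np.exp(alpha).sum()
```

In a staged ensemble such as Bagging, `W1` and `W2` would be trained first and frozen before any combination rule is chosen; here every parameter is updated from the same ensemble loss in each iteration, which is the "compactness" the abstract refers to.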
Source
Journal of Xi'an Jiaotong University (《西安交通大学学报》)
Indexed in EI, CAS, CSCD, and the Peking University Core Journals list (北大核心)
2007, No. 3, pp. 295-298 (4 pages)
Fund
Supported by the National Natural Science Foundation of China (Grant No. 50575179)
Keywords
neural network ensemble
compact ensemble model
combination weight
generalization ability