Learning of BP Neural Networks Based on Zero-Error Density Criterion Function

Abstract: BP neural network learning usually takes the mean squared error (MSE) function as its objective function; when the target variable does not follow a Gaussian distribution, the result may deviate from the true optimum. The zero-error density (ZED) function uses the Parzen window method from non-parametric estimation to obtain the probability density of the error at the origin. When the ZED function is used as the objective function of a BP network, a suitable choice of the smoothing parameter lets the new objective function handle desired outputs with an arbitrary distribution. Simulation experiments compare BP network learning with the ZED function and with the MSE function as the objective function on function-approximation tasks; the results show that the ZED function has a wider range of applicability than the MSE function.
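
The criterion described in the abstract, estimating the density of the training error at the origin with a Parzen window and maximizing it in place of minimizing the MSE, can be made concrete with a short sketch. The Python code below is an illustrative assumption rather than the authors' implementation: the Gaussian kernel, the smoothing parameter h, the function names, and the toy one-weight regression with heavy-tailed noise are all choices made here for demonstration; only the ZED and MSE formulas follow the description in the abstract.

```python
import numpy as np

SQRT_2PI = np.sqrt(2.0 * np.pi)

def zed_objective(errors, h=0.5):
    """Parzen-window (Gaussian kernel) estimate of the error density at zero.

    errors -- array of (target - output) values over the training set
    h      -- smoothing parameter of the kernel (assumed user-chosen)

    Training maximizes this quantity, concentrating the errors around zero
    without assuming the targets are Gaussian-distributed.
    """
    n = errors.size
    return np.sum(np.exp(-errors**2 / (2.0 * h**2))) / (n * h * SQRT_2PI)

def zed_grad_wrt_errors(errors, h=0.5):
    """Derivative of the ZED objective with respect to each error e_i;
    this term replaces the MSE error term in the backpropagation delta."""
    n = errors.size
    return -errors * np.exp(-errors**2 / (2.0 * h**2)) / (n * h**3 * SQRT_2PI)

def mse_objective(errors):
    """Conventional mean squared error objective, for comparison."""
    return np.mean(errors**2)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Toy regression t = 2x with heavy-tailed (non-Gaussian) target noise.
    x = rng.uniform(-1.0, 1.0, 200)
    t = 2.0 * x + 0.3 * rng.standard_t(df=1.5, size=200)

    w, lr, h = 0.0, 0.5, 0.5           # single weight, model y = w * x
    for _ in range(1000):
        e = t - w * x                  # per-sample errors
        # Chain rule: dZED/dw = sum_i dZED/de_i * de_i/dw, with de_i/dw = -x_i
        w += lr * np.sum(zed_grad_wrt_errors(e, h) * (-x))   # gradient ascent
    print(f"ZED-trained weight: {w:.3f} (true slope 2.0)")
```

In a full BP network, the zed_grad_wrt_errors term would be back-propagated through the output layer exactly like the MSE derivative, with the sign flipped because the ZED criterion is maximized rather than minimized.
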
Source: Journal of Huaiyin Teachers College (Natural Science Edition) 《淮阴师范学院学报(自然科学版)》, CAS, 2010, Issue 4, pp. 322-325 (4 pages)
Keywords: BP networks; mean squared error function; zero-error density function; non-Gaussian distribution