
一种基于误差放大的快速BP学习算法 (Cited by: 10)

An Algorithm for Fast Convergence of Back Propagation by Enlarging Error
Abstract: BP learning algorithms based on gradient descent tend to slow down when sigmoid units enter their saturation regions (flat spots). To eliminate the influence of these regions on late-stage training, a fast BP learning algorithm based on error enlargement is proposed. By adaptively amplifying the error term in the weight-update rule, the algorithm prevents weight correction from stalling in the saturation regions, so the network converges quickly to the desired accuracy while keeping a high learning rate. The method is easy to implement and converges with minimal mean square error. Simulations on the well-established 3-parity and Soybean classification benchmarks show that it learns faster and more effectively than the commonly used Delta-bar-Delta, momentum, and Prime-Offset methods, and requires less computation and memory than the Levenberg-Marquardt (LM) method.
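The core idea described above can be sketched in a minimal experiment: a single sigmoid unit is started deep in its saturation region, where the standard BP delta (error times the sigmoid derivative) nearly vanishes, and an error-enlarging rule keeps the update alive. The specific amplification rule below (dropping the derivative factor when it falls under a threshold) is a hypothetical illustration, not the paper's exact adaptive formula.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def train(X, t, enlarge=False, lr=0.5, epochs=2000, seed=0):
    """Train one sigmoid unit by batch gradient descent.

    Standard BP output delta: delta = (t - y) * y * (1 - y).
    The y*(1-y) factor vanishes when y saturates near 0 or 1 (a flat spot).
    With enlarge=True, a hypothetical error-enlarging rule is used: where
    the derivative is tiny, the raw error replaces the damped delta.
    """
    rng = np.random.default_rng(seed)
    w = rng.normal(scale=0.1, size=X.shape[1])
    b = 10.0  # start deep in the saturation region to expose the flat spot
    for _ in range(epochs):
        y = sigmoid(X @ w + b)
        err = t - y
        deriv = y * (1 - y)
        if enlarge:
            # amplify the error term where the derivative has collapsed
            delta = np.where(deriv < 0.05, err, err * deriv)
        else:
            delta = err * deriv
        w += lr * X.T @ delta / len(X)
        b += lr * delta.mean()
    return np.mean((t - sigmoid(X @ w + b)) ** 2)

# Toy AND problem; both runs start saturated (b = 10).
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], float)
t = np.array([0, 0, 0, 1], float)
mse_std = train(X, t, enlarge=False)  # stalls in the flat spot
mse_enl = train(X, t, enlarge=True)   # escapes and converges
```

With the standard rule the unit barely moves out of saturation, while the enlarged-error run escapes the flat spot within a few epochs and then trains normally; note that `delta = err` is also the cross-entropy gradient for a sigmoid output, which is one common way of removing the flat-spot factor.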
Source: Journal of Computer Research and Development (《计算机研究与发展》; indexed in EI and CSCD, Peking University core journal list), 2004, No. 5, pp. 774-779 (6 pages).
Funding: National Natural Science Foundation of China (69975005, 60273083)
Keywords: back propagation; multi-layer artificial neural network; error enlargement; flat spots (saturation regions); parity problem; Soybean data set

