
An Active Learning Algorithm with Variable Error for Feedforward Neural Networks (cited by: 1)

Active Back-Propagation Algorithm Based on an Adjusting Error for Multilayer Feedforward Neural Networks
Abstract: This paper studies an active learning method for error back-propagation multilayer feedforward neural networks (MLFNN). It analyzes the characteristics and shortcomings of the improved BP algorithms currently used to train feedforward networks, and on that basis proposes a training scheme in which the network's learning error is actively adjusted step by step: guided by the trend of the output error, the algorithm varies the adjusting error δpl of the output layer, so that the updates of the weights Wkji and thresholds θkj are controlled by feedback on the effect of each learning step. The result is an active back-propagation (ABP) algorithm with variable error. Experiments comparing the ABP algorithm with improved BP algorithms show that, when training multilayer feedforward networks, the active variable-error algorithm learns markedly more efficiently.
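The abstract only outlines the mechanism. Below is a minimal NumPy sketch of how such a variable-error update might look; the function name train_abp, the gain factors gain_up and gain_down, and the specific rule for raising or lowering the error scale are illustrative assumptions, not the paper's exact formulation.

# Hypothetical sketch of the "active variable-error" idea from the abstract:
# scale the output-layer adjusting error (delta_pl) by a factor that is raised
# or lowered according to the trend of the total output error. The adaptation
# rule and constants are assumptions for illustration only.
import numpy as np

def train_abp(X, T, n_hidden=8, epochs=1000, lr=0.5,
              gain_up=1.05, gain_down=0.7, seed=0):
    rng = np.random.default_rng(seed)
    W1 = rng.normal(scale=0.5, size=(X.shape[1], n_hidden))   # input -> hidden weights
    b1 = np.zeros(n_hidden)                                    # hidden thresholds
    W2 = rng.normal(scale=0.5, size=(n_hidden, T.shape[1]))   # hidden -> output weights
    b2 = np.zeros(T.shape[1])                                  # output thresholds
    sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

    scale = 1.0          # multiplier applied to the output-layer adjusting error
    prev_err = np.inf
    for _ in range(epochs):
        # forward pass
        H = sigmoid(X @ W1 + b1)
        Y = sigmoid(H @ W2 + b2)
        err = 0.5 * np.sum((T - Y) ** 2)

        # "active" part: adapt the error scale from the output-error trend
        scale *= gain_up if err < prev_err else gain_down
        prev_err = err

        # standard BP deltas, with the output-layer delta scaled
        delta_out = scale * (T - Y) * Y * (1.0 - Y)            # scaled delta_pl
        delta_hid = (delta_out @ W2.T) * H * (1.0 - H)

        # weight and threshold updates (Wkji, theta_kj in the paper's notation)
        W2 += lr * H.T @ delta_out
        b2 += lr * delta_out.sum(axis=0)
        W1 += lr * X.T @ delta_hid
        b1 += lr * delta_hid.sum(axis=0)
    return W1, b1, W2, b2

For example, calling train_abp on the XOR patterns X = [[0,0],[0,1],[1,0],[1,1]] with targets T = [[0],[1],[1],[0]] trains a 2-8-1 network; here the up/down gains stand in for the per-step feedback on learning effect described in the abstract.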
Source: Journal of Beijing University of Aeronautics and Astronautics (《北京航空航天大学学报》), 1998, No. 3, pp. 350-353 (4 pages); indexed in EI, CAS, CSCD, and the Peking University Core list.
Keywords: feedforward neural networks; error back-propagation (BP) algorithm; improved back-propagation algorithm; active back-propagation algorithm

