
A new algorithm for improving the generalization performance and real-time ability of feedforward neural networks (Cited by: 7)
Abstract: A fast hybrid learning algorithm for feedforward neural networks based on regularized least squares is presented. The algorithm combines the advantages of regularization methods with those of a local fast algorithm based on single weights, and adds a scheme for pruning superfluous hidden units. As a result, it greatly improves the generalization performance and real-time ability of feedforward networks, converges quickly, and achieves high accuracy. Simulation results demonstrate the effectiveness of the hybrid algorithm.
Source: Electric Machines and Control (《电机与控制学报》), EI / CSCD / PKU Core Journal, 2002, No. 3, pp. 241-244, 264 (5 pages)
Keywords: feedforward neural networks; generalization performance; real-time ability; new algorithm; regularization method
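The record contains no code, but the scheme described in the abstract can be sketched in general terms: solve a regularized (ridge) least-squares problem for the output-layer weights, then prune hidden units whose contribution is negligible and refit. The Python snippet below is only a minimal illustration under assumptions not taken from the paper (a random tanh hidden layer, the regularization strength `lam`, and the pruning threshold `prune_tol` are all hypothetical); it does not reproduce the paper's single-weight local fast algorithm or its exact pruning criterion.

```python
# Illustrative sketch only: regularized least squares for the output weights of a
# single-hidden-layer network, followed by pruning of low-contribution hidden units.
# Hidden-layer construction, lam, and prune_tol are assumptions for demonstration.
import numpy as np

def ridge_weights(H, y, lam):
    """Solve (H^T H + lam*I) w = H^T y for the output-layer weights."""
    M = H.shape[1]
    return np.linalg.solve(H.T @ H + lam * np.eye(M), H.T @ y)

def fit_and_prune(X, y, n_hidden=20, lam=1e-2, prune_tol=1e-3, seed=None):
    rng = np.random.default_rng(seed)
    # A random tanh hidden layer stands in for whatever hidden mapping is used.
    W_in = rng.standard_normal((X.shape[1], n_hidden))
    b = rng.standard_normal(n_hidden)
    H = np.tanh(X @ W_in + b)

    # First pass: regularized least-squares estimate of the output weights.
    w = ridge_weights(H, y, lam)

    # Prune hidden units whose weighted output is negligible, then refit.
    keep = np.abs(w) * H.std(axis=0) > prune_tol
    H, W_in, b = H[:, keep], W_in[:, keep], b[keep]
    w = ridge_weights(H, y, lam)
    return W_in, b, w

# Small usage example on synthetic data.
if __name__ == "__main__":
    rng = np.random.default_rng(0)
    X = rng.uniform(-1, 1, size=(200, 2))
    y = np.sin(np.pi * X[:, 0]) + 0.1 * rng.standard_normal(200)
    W_in, b, w = fit_and_prune(X, y, n_hidden=30, seed=1)
    pred = np.tanh(X @ W_in + b) @ w
    print("hidden units kept:", W_in.shape[1], "train MSE:", np.mean((pred - y) ** 2))
```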
Related Literature

References (6)

  • 1. Setiono R. A penalty-function approach for pruning feedforward neural networks[J]. Neural Computation, 1997, 9(1): 185-204.
  • 2. Chen S, Chng E S, Alkadhimi K. Regularized orthogonal least squares algorithm for constructing radial basis function networks[J]. Int J Control, 1996, 64(5): 829-837.
  • 3. Wang Zhengou, Lin Chen. A fast learning algorithm for feedforward neural networks and its application in system identification[J]. Acta Automatica Sinica, 1997, 23(6): 728-735. (Cited by: 19)
  • 4. Zhu Tao. Research on the real-time and generalization performance of feedforward networks[D]. Tianjin: Tianjin University, 1999.
  • 5. Chen S, Billings S A, Grant P M. Recursive hybrid algorithm for nonlinear system identification using radial basis function networks[J]. Int J Control, 1992, 55(5): 1051-1070.
  • 6. Geva A B. ScaleNet - multiscale neural network architecture for time series prediction[J]. IEEE Trans Neural Networks, 1998, 9(5): 1471-1482.

Secondary References (1)

  • 1. Chen S. Int J Control, 1990, 51(6): 1191.

Co-citing Literature (18)

Co-cited Literature (33)

  • 1. Bian Zhaoqi. Pattern Recognition[M]. Tsinghua University Press, 1999.
  • 2. Rumelhart D E, Hinton G E, Williams R J. Learning internal representations by error propagation[M]// Parallel Distributed Processing: Explorations in the Microstructure of Cognition. Cambridge, MA: MIT Press, 1986: 318-362.
  • 3. Singhal S, Wu L. Training feed-forward networks with the extended Kalman algorithm[C]// Proc of the IEEE Int Conf on Acoustics, Speech and Signal Processing. Glasgow, 1989: 1187-1190.
  • 4. Kollias S, Anastassiou D. An adaptive least squares algorithm for the efficient training of artificial neural networks[J]. IEEE Trans Circuits and Systems, 1990, 36: 1092-1101.
  • 5. Shah S, Palmieri F, Datum M. Optimal filtering algorithms for fast learning in feedforward neural networks[J]. Neural Networks, 1992, 5: 779-787.
  • 6. Hagan M T, Demuth H B, Beale M. Neural Network Design[M]. Boston: PWS Publishing Company, 1996: 60-97.
  • 7. Ergezinger S, Thomsen E. An accelerated learning algorithm for multilayer perceptrons: optimization layer by layer[J]. IEEE Trans on Neural Networks, 1995, 6(1): 31-42.
  • 8. Scalero R S, Tepedelenlioglu N. A fast new algorithm for training feedforward networks[J]. IEEE Trans on Signal Processing, 1992, 40(1): 202-210.
  • 9. Chng E S, Chen S, Mulgrew B. Gradient radial basis function networks for nonlinear and nonstationary time series prediction[J]. IEEE Trans on Neural Networks, 1996, 7(1): 190-194.
  • 10. Das G, Gunopulos D. Finding similar time series[C]// Proc of the Conference on Principles of Knowledge Discovery and Data Mining. Trondheim, Norway, 1997: 124-135.

Citing Literature (7)

Secondary Citing Literature (14)
