
Reduce Training Error of Extreme Learning Machine by Selecting Appropriate Hidden Layer Output Matrix (cited by 1)

Abstract: Extreme learning machine (ELM) is a feedforward neural network with a single layer of hidden nodes, where the weights and biases connecting inputs to hidden nodes are randomly assigned. The output weights between hidden nodes and outputs are learned by a linear model. It is interesting to ask whether the training error of ELM is significantly affected by the hidden layer output matrix H, because a positive answer would enable us to obtain a smaller training error from a better H. For a single hidden layer feedforward neural network (SLFN) with one input neuron, there is a significant difference between the training errors of different Hs. We find a reliably strong negative rank correlation between the training errors and some singular values of the Moore-Penrose generalized inverse of H. Based on this rank correlation, a selection algorithm is proposed that chooses a robust, appropriate H among numerous candidates to achieve a smaller training error. Extensive experiments, including tests on a real data set, are carried out to validate the selection algorithm. The results show that it achieves better performance in validity, speed and robustness.
Source: Journal of Systems Science and Systems Engineering (SCIE, EI, CSCD), 2021, Issue 5, pp. 552-571 (20 pages).
Funding: Supported by the National Key Research and Development Program of China under Grant No. 2020YFA0714200.
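The mechanism described in the abstract (random hidden weights, output weights from the Moore-Penrose pseudoinverse, and selection among candidate H matrices via singular values of pinv(H)) can be sketched as follows. This is a minimal illustration, not the paper's algorithm: the exact selection criterion is an assumption here, chosen only to reflect the reported negative rank correlation between training error and some singular values of pinv(H).

```python
import numpy as np

def elm_train(X, T, n_hidden, rng):
    """Train a basic ELM: random hidden layer, analytic output weights."""
    W = rng.standard_normal((X.shape[1], n_hidden))  # random input weights
    b = rng.standard_normal(n_hidden)                # random biases
    H = np.tanh(X @ W + b)                           # hidden layer output matrix
    beta = np.linalg.pinv(H) @ T                     # Moore-Penrose solution
    err = np.mean((H @ beta - T) ** 2)               # training MSE
    return H, beta, err

def select_h(X, T, n_hidden, n_candidates=20, seed=0):
    """Pick one of several random Hs by a singular-value score.

    Hypothetical criterion: prefer the H whose pseudoinverse has the
    largest leading singular value, since the paper reports a negative
    rank correlation between training error and such singular values.
    """
    rng = np.random.default_rng(seed)
    best = None
    for _ in range(n_candidates):
        H, beta, err = elm_train(X, T, n_hidden, rng)
        s = np.linalg.svd(np.linalg.pinv(H), compute_uv=False)
        score = s[0]  # leading singular value of pinv(H)
        if best is None or score > best[0]:
            best = (score, beta, err)
    return best
```

The appeal of this scheme is that the score is computed from H alone, so candidate hidden layers can be ranked before (or alongside) solving for the output weights.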
