
Lasso Extreme Learning Machine (cited 7 times)

Abstract: The Extreme Learning Machine (ELM) is a neural-network training algorithm with rapid learning capability. It reduces training time by choosing the parameters of the hidden-layer nodes at random and solving for the output weights with the least-squares method, but at the cost of requiring a large number of hidden nodes. This paper proposes an ELM optimised by iterative Lasso regression (Lasso-ELM), which has the following advantages: (1) it greatly reduces the number of hidden-layer nodes; (2) the resulting network generalises better. Experiments show that the overall performance of Lasso-ELM is superior to that of ELM, BP, and SVM.
Source: Computer Applications and Software (《计算机应用与软件》, CSCD, Peking University core journal), 2013, No. 2, pp. 6-9 (4 pages).
Funding: National Natural Science Foundation of China (Grant No. 40776044).
Keywords: extreme learning machine, Lasso, neural network
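The abstract describes the two ingredients of the method: the standard ELM step (a random, untrained hidden layer whose output weights are solved by least squares) and a Lasso step that zeroes out the weights of unneeded hidden nodes. The paper's exact iterative Lasso-ELM procedure is not given on this page, so the following is only a minimal sketch of the general idea; the use of `sklearn.linear_model.Lasso`, the toy data, and all parameter values (hidden-layer size, `alpha`) are illustrative assumptions, not the authors' implementation.

```python
# Sketch: ELM with a deliberately oversized random hidden layer, then
# Lasso regression on the output weights to prune useless hidden nodes.
# All names and hyperparameters here are illustrative assumptions.
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(0)

# Toy regression data: y = sin(x) plus noise.
X = rng.uniform(-3, 3, size=(200, 1))
y = np.sin(X[:, 0]) + 0.05 * rng.standard_normal(200)

# --- Standard ELM step: random hidden layer + least squares ---
L = 100                                   # oversized hidden layer, as in plain ELM
W = rng.standard_normal((X.shape[1], L))  # random input weights (never trained)
b = rng.standard_normal(L)                # random biases (never trained)
H = np.tanh(X @ W + b)                    # hidden-layer output matrix

beta_ls, *_ = np.linalg.lstsq(H, y, rcond=None)  # output weights via least squares

# --- Lasso step: L1-shrink the output weights; zeroed nodes are pruned ---
lasso = Lasso(alpha=1e-3, max_iter=10000).fit(H, y)
keep = np.flatnonzero(lasso.coef_)        # hidden nodes with nonzero weight survive

print("hidden nodes before pruning:", L)
print("hidden nodes after pruning: ", keep.size)

# Refit least squares on the surviving nodes only.
H_small = H[:, keep]
beta_small, *_ = np.linalg.lstsq(H_small, y, rcond=None)
mse = np.mean((H_small @ beta_small - y) ** 2)
print(f"MSE with pruned network: {mse:.4f}")
```

The design choice Lasso brings over ridge regression (reference 6 below) is that the L1 penalty drives many output weights exactly to zero, so pruning falls out of the fit itself rather than requiring a separate node-selection heuristic.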

References (8)

  • 1 Poggio T, Girosi F. A Theory of Networks for Approximation and Learning. Cambridge, MA: MIT, A.I. Memo No. 1140, 1989.
  • 2 邓万宇, 郑庆华, 陈琳, 许学斌. 神经网络极速学习方法研究 (Extreme learning of neural networks). 计算机学报 (Chinese Journal of Computers), 2010, 33(2): 279-287.
  • 3 Huang G B, Zhu Q Y, Siew C K. Extreme learning machine: A new learning scheme of feedforward neural networks. 2004: 985-990.
  • 4 Huang G B, Zhu Q Y, Siew C K. Extreme learning machine: Theory and applications. Neurocomputing, 2006, 70(1-3): 489-501. doi:10.1016/j.neucom.2005.12.126.
  • 5 Huang G B, Wang Dianhui, Lan Yuan. Extreme learning machines: A survey. 2011.
  • 6 Hoerl A E, Kennard R W. Ridge regression: Applications to nonorthogonal problems. Technometrics, 1970, 12(1): 69-82.
  • 7 Tibshirani R. Regression shrinkage and selection via the Lasso. Journal of the Royal Statistical Society, Series B: Statistical Methodology, 1996, 58(1): 267-288.
  • 8 Perkins S, Lacker K, Theiler J. Grafting: Fast, incremental feature selection by gradient descent in function space. Journal of Machine Learning Research, 2003, 3: 1333-1356.

