
A Sequential Minimal Optimization Method for Optimization Extreme Learning Machine

Cited by: 18
Abstract  A single-variable sequential minimal optimization (SSMO) algorithm is proposed to address the slow speed and low efficiency of training the optimization-method-based extreme learning machine (OMELM) with a traditional quadratic programming solver. The algorithm minimizes the objective function by optimizing the Lagrange multipliers within box constraints: first, the Lagrange multiplier that produces the largest decrease in the objective function is selected from the initial multipliers and treated as the sole variable of the objective function; the objective function is then minimized with respect to that variable and the multiplier's value is updated; this process repeats until all Lagrange multipliers satisfy the Karush-Kuhn-Tucker (KKT) conditions of the quadratic programming problem. Experimental results show that the SSMO algorithm achieves sufficiently good generalization performance while requiring only a few parameter values to be tuned; the OMELM method trained with SSMO generalizes better than a support vector machine (SVM) trained with the sequential minimal optimization (SMO) algorithm; and SSMO is robust in trials on random datasets.
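The iteration described in the abstract can be sketched as follows. This is a minimal illustration, not the authors' code: it assumes the standard box-constrained dual form f(a) = ½ aᵀQa − Σaᵢ with 0 ≤ aᵢ ≤ C (no equality constraint, which is what makes single-variable updates possible), and all function and variable names are ours.

```python
import numpy as np

def ssmo(Q, C=1.0, tol=1e-6, max_iter=1000):
    """Single-variable SMO sketch: minimize f(a) = 0.5 a^T Q a - sum(a)
    subject to 0 <= a_i <= C, updating one multiplier per iteration."""
    n = Q.shape[0]
    a = np.zeros(n)
    g = -np.ones(n)                 # gradient of f at a = 0 is Q a - 1 = -1
    diag = np.diag(Q)
    for _ in range(max_iter):
        # Unconstrained single-coordinate minimizers, clipped to the box.
        step = np.where(diag > 0, -g / diag, 0.0)
        new = np.clip(a + step, 0.0, C)
        d = new - a
        # Exact change in f for each candidate single-coordinate move.
        delta = g * d + 0.5 * diag * d * d
        k = int(np.argmin(delta))   # multiplier with the largest decrease
        if delta[k] > -tol:         # no real decrease left: KKT holds
            break
        a[k] = new[k]
        g += Q[:, k] * d[k]         # rank-one gradient update
    return a
```

The stopping test doubles as the KKT check: when no single clipped coordinate move can still decrease f, every interior multiplier has near-zero gradient and every boundary multiplier has a gradient pointing out of the box.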
Source: Journal of Xi'an Jiaotong University (EI, CAS, CSCD, Peking University Core Journal), 2011, No. 6, pp. 7-12, 19 (7 pages)
Funding: National High-Tech Research and Development Program of China (863 Program) project 2008AA01Z136
Keywords: extreme learning machine; support vector machine; sequential minimal optimization

References (11)

  • 1 HUANG Guangbin, ZHU Qinyu, SIEW C K. Extreme learning machine: theory and applications [J]. Neurocomputing, 2006, 70(1/2/3): 489-501.
  • 2 HUANG Guangbin, ZHU Qinyu, SIEW C K. Real-time learning capability of neural networks [J]. IEEE Transactions on Neural Networks, 2006, 17(2): 863-878.
  • 3 CHENG Song, YAN Jianwei, ZHAO Dengfu, WANG Quan, WANG Haiming. An ensemble improved extreme learning machine method for short-term load forecasting [J]. Journal of Xi'an Jiaotong University, 2009, 43(2): 106-110.
  • 4 DENG Wanyu, ZHENG Qinghua, CHEN Lin, XU Xuebin. Research on extreme learning of neural networks [J]. Chinese Journal of Computers, 2010, 33(2): 279-287.
  • 5 BARTLETT P L. The sample complexity of pattern classification with neural networks: the size of the weights is more important than the size of the network [J]. IEEE Transactions on Information Theory, 1998, 44(2): 525-536.
  • 6 HUANG Guangbin, DING Xiaojian, ZHOU Hongming. Optimization method based extreme learning machine for classification [J]. Neurocomputing, 2010, 74(1/2/3): 155-163.
  • 7 PLATT J. Fast training of support vector machines using sequential minimal optimization [M]// Advances in Kernel Methods: Support Vector Learning. Cambridge, MA, USA: MIT Press, 1999: 185-208.
  • 8 FLETCHER R. Practical methods of optimization: constrained optimization [M]. New York, USA: John Wiley and Sons, 1981: 2.
  • 9 BLAKE C L, MERZ C J. UCI repository of machine learning databases [EB/OL]. (1998-04-02)[2010-02-12]. http://www.ics.uci.edu/~mlearn/MLRepository.html.
  • 10 MICHIE D, SPIEGELHALTER D J, TAYLOR C C. Machine learning, neural and statistical classification [M]. Englewood Cliffs, NJ, USA: Prentice Hall, 1994.


