
Optimal kernel extreme learning machine algorithm based on flexible polyhedron

Cited by: 1
Abstract: To address the difficulty of parameter optimization in the kernel extreme learning machine (KELM), an optimal KELM algorithm based on a grid-searched flexible polyhedron is proposed. A two-dimensional grid is constructed over the Gaussian kernel parameter and the penalty parameter, and the parameter points with the smallest objective-function values on the grid are selected to construct the initial flexible polyhedron, resolving the polyhedron's sensitivity to its initial values. Weights are added to the polyhedron's deformation search parameters to distinguish the degree to which the kernel parameter and the penalty parameter influence KELM classification performance. The optimal parameters are then found by iterating the flexible polyhedron, and a KELM constructed with these parameters is used for data classification. Comparisons with other optimized KELM algorithms on UCI, KEEL, and artificial datasets verify the feasibility of the proposed algorithm.
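The pipeline described in the abstract (coarse grid search over the Gaussian kernel parameter and penalty parameter, an initial simplex seeded from the grid, then flexible-polyhedron refinement) can be sketched in Python. This is a minimal illustration on toy data, not the paper's implementation: all names and the dataset are invented for the example, the simplex is seeded from the single best grid point with a small perturbation rather than the paper's several lowest-objective points, and SciPy's standard Nelder-Mead coefficients stand in for the paper's weighted deformation parameters.

```python
import numpy as np
from scipy.optimize import minimize

def rbf_kernel(A, B, gamma):
    """Gaussian (RBF) kernel matrix between the rows of A and B."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def kelm_train(X, Y, gamma, C):
    """Closed-form KELM output weights: beta = (K + I/C)^{-1} Y."""
    K = rbf_kernel(X, X, gamma)
    return np.linalg.solve(K + np.eye(len(X)) / C, Y)

def kelm_predict(X_train, beta, X_new, gamma):
    """Decision values for new samples against the training set."""
    return rbf_kernel(X_new, X_train, gamma) @ beta

def objective(theta, Xtr, Ytr, Xva, yva):
    """Validation error for log-space parameters theta = (log gamma, log C)."""
    gamma, C = np.exp(theta)
    beta = kelm_train(Xtr, Ytr, gamma, C)
    pred = kelm_predict(Xtr, beta, Xva, gamma).argmax(axis=1)
    return np.mean(pred != yva)

# Toy two-class data (illustrative only).
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 1, (40, 2)), rng.normal(3, 1, (40, 2))])
y = np.repeat([0, 1], 40)
idx = rng.permutation(80)
tr, va = idx[:60], idx[60:]
Ytr = np.eye(2)[y[tr]]  # one-hot training targets

# Step 1: coarse 2D grid over (log gamma, log C); keep the best point.
grid = [np.log([g, c]) for g in (0.01, 0.1, 1.0, 10.0)
                       for c in (0.01, 0.1, 1.0, 10.0, 100.0)]
best = min(grid, key=lambda t: objective(t, X[tr], Ytr, X[va], y[va]))

# Step 2: initial simplex seeded at the best grid point (perturbed so the
# three vertices are non-degenerate).
simplex = np.array([best, best + [0.5, 0.0], best + [0.0, 0.5]])

# Step 3: flexible-polyhedron (Nelder-Mead) refinement from that simplex.
res = minimize(objective, best, args=(X[tr], Ytr, X[va], y[va]),
               method='Nelder-Mead',
               options={'initial_simplex': simplex, 'xatol': 1e-3})

# Final KELM with the optimized parameters.
gamma_opt, C_opt = np.exp(res.x)
beta = kelm_train(X[tr], Ytr, gamma_opt, C_opt)
acc = np.mean(kelm_predict(X[tr], beta, X[va], gamma_opt).argmax(1) == y[va])
```

Working in log space keeps both parameters positive and lets the simplex take comparable steps in the kernel and penalty directions, which is the role the paper's deformation weights play more directly.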
Authors: SU Yi-dan; MA Xiao-xuan; QIN Hua; WANG Bao-feng (School of Computer, Electronics and Information, Guangxi University, Nanning 530004, China)
Source: Computer Engineering and Design (《计算机工程与设计》, Peking University Core journal), 2020, No. 9, pp. 2454-2459 (6 pages)
Funding: National Natural Science Foundation of China (51667004, 61762009)
Keywords: kernel extreme learning machine; parameter optimization; grid search; flexible polyhedron optimization search; classification accuracy

相关作者

内容加载中请稍等...

相关机构

内容加载中请稍等...

相关主题

内容加载中请稍等...

浏览历史

内容加载中请稍等...
;
使用帮助 返回顶部