
Deep Extreme Learning Machine Based on Stacking Structure
Abstract: To address the problems that the extreme learning machine (ELM) cannot learn the knowledge implicit in the original data, and that improper assignment of hidden nodes causes overfitting, a fast-training deep stacking extreme learning machine (FT-DSELM) is proposed. The stacking structure takes the ELM as its basic building module and cascades multiple modules: the original instances and the decision information of the current module are fused layer by layer, so that the classification knowledge in each sub-classifier is fully exploited. The idea of dropout is also introduced to increase the diversity of the ensemble. Experiments on benchmark datasets show that FT-DSELM not only develops a deeper insight into the original information, but also achieves satisfactory recognition performance with very fast learning speed.
Authors: DONG Shuai, SHEN Qing, ZHANG Xiongtao (School of Information Engineering, Huzhou University, Huzhou 313000, China; School of Science and Engineering, Huzhou College, Huzhou 313000, China)
Source: Journal of Huzhou University, 2022, No. 4, pp. 65-71, 77 (8 pages)
Funding: Zhejiang Provincial Key R&D Program (2020C01097)
Keywords: extreme learning machine; deep learning; ensemble learning; stacking structure; fast training
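The stacking scheme described in the abstract can be illustrated with a minimal NumPy sketch: each ELM module uses random hidden weights and a pseudo-inverse solution for its output weights, each subsequent module is trained on the original features concatenated with the previous module's decisions, and a dropout-style mask on those decisions adds diversity. The hidden-layer size, tanh activation, dropout placement, and rate below are illustrative assumptions, not the paper's exact configuration.

```python
import numpy as np

def elm_fit(X, y_onehot, n_hidden, rng):
    """Train one ELM module: random input weights, analytic output weights."""
    W = rng.standard_normal((X.shape[1], n_hidden))
    b = rng.standard_normal(n_hidden)
    H = np.tanh(X @ W + b)                  # random hidden-layer features
    beta = np.linalg.pinv(H) @ y_onehot     # Moore-Penrose pseudo-inverse solution
    return W, b, beta

def elm_predict(X, W, b, beta):
    return np.tanh(X @ W + b) @ beta

def stacked_elm_fit(X, y_onehot, n_layers=3, n_hidden=50, drop_rate=0.2, seed=0):
    """Cascade ELM modules; each layer sees the original features
    concatenated with the (masked) decisions of the previous module."""
    rng = np.random.default_rng(seed)
    modules, Z = [], X
    for _ in range(n_layers):
        W, b, beta = elm_fit(Z, y_onehot, n_hidden, rng)
        scores = elm_predict(Z, W, b, beta)
        # dropout-style masking of decision features for ensemble diversity
        mask = rng.random(scores.shape) >= drop_rate
        Z = np.hstack([X, scores * mask])
        modules.append((W, b, beta))
    return modules

def stacked_elm_predict(X, modules):
    Z = X
    for W, b, beta in modules:
        scores = elm_predict(Z, W, b, beta)
        Z = np.hstack([X, scores])          # no masking at inference time
    return scores.argmax(axis=1)
```

Because every module is solved analytically (no iterative gradient descent), training cost is dominated by a few pseudo-inverse computations, which is consistent with the fast-training claim.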

