
Extreme Learning Machine Based on Stacked Denoising Sparse Auto-Encoder (cited: 10)
Abstract: The Extreme Learning Machine (ELM) selects its network input weights and hidden-layer biases at random, which leads to a complex network structure and weak robustness. To address this problem, this paper proposes an ELM algorithm based on a stacked Denoising Sparse Auto-Encoder (sDSAE-ELM). Exploiting the sparse network of the sDSAE, the algorithm mines deep features of the target data and uses them to generate the input weights and hidden-layer biases for the ELM; the hidden-layer output weights are then solved to complete the training of the classifier. Sparsity constraints are also added to optimize the network structure and improve classification accuracy. Experimental results show that, compared with the ELM, PCA-ELM, ELM-AE, and DAE-ELM algorithms, the proposed algorithm achieves higher classification accuracy on high-dimensional noisy data and exhibits stronger robustness.
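The ELM training step described in the abstract can be sketched as follows. This is a minimal illustration, not the authors' implementation: the hidden-layer output weights are obtained as the least-squares solution via the Moore-Penrose pseudoinverse. Here `W` and `b` are drawn at random, as in plain ELM; in the proposed sDSAE-ELM they would instead be taken from the encoder weights of a trained stacked denoising sparse auto-encoder. All names and shapes below are illustrative.

```python
import numpy as np

def train_elm(X, T, W, b):
    """Given input weights W and hidden bias b, compute the ELM output
    weights beta as the least-squares solution beta = pinv(H) @ T,
    where H is the hidden-layer activation matrix."""
    H = 1.0 / (1.0 + np.exp(-(X @ W + b)))   # sigmoid hidden-layer output
    beta = np.linalg.pinv(H) @ T             # Moore-Penrose pseudoinverse
    return beta

def predict_elm(X, W, b, beta):
    H = 1.0 / (1.0 + np.exp(-(X @ W + b)))
    return H @ beta

# Toy data: 100 samples, 20 features, 3 classes (one-hot targets).
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 20))
T = np.eye(3)[rng.integers(0, 3, size=100)]

# Plain ELM: random input weights and bias for 50 hidden nodes.
# sDSAE-ELM would substitute the encoder's learned weights here.
W = rng.normal(size=(20, 50))
b = rng.normal(size=(1, 50))

beta = train_elm(X, T, W, b)
pred = predict_elm(X, W, b, beta).argmax(axis=1)
```

The closed-form pseudoinverse solution is what makes ELM training fast: no iterative back-propagation is needed once `W` and `b` are fixed, which is why the quality of those fixed weights (random versus autoencoder-derived) matters.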
Authors: ZHANG Guoling; WANG Xiaodan; LI Rui; LAI Jie; XIANG Qian (Air and Missile Defense College, Air Force Engineering University, Xi'an 710051, China)
Source: Computer Engineering (《计算机工程》), indexed in CAS, CSCD, and the PKU Core Journal list, 2020, No. 9, pp. 61-67 (7 pages)
Funding: National Natural Science Foundation of China (61876189, 61273275, 61806219, 61703426)
Keywords: Extreme Learning Machine (ELM); Denoising Sparse Auto-Encoder (DSAE); sparsity; deep learning; feature extraction

相关作者

内容加载中请稍等...

相关机构

内容加载中请稍等...

相关主题

内容加载中请稍等...

浏览历史

内容加载中请稍等...
;
使用帮助 返回顶部