

ELM Optimized Deep Autoencoder Classification Algorithm
Abstract: To address the long training time of autoencoder neural networks, an improved deep autoencoder neural network algorithm is proposed. First, an extreme learning machine (ELM) is used as the autoencoder building block to construct a multilayer autoencoder neural network and improve classification accuracy; because ELM avoids extensive iterative training, the network training time is reduced. Second, to perform classification, label nodes are added to each output layer and the actual output is compared with the expected label of each sample, turning the originally unsupervised autoencoder learning into a supervised learning process, so that classification training is carried out within deep learning. To verify the effectiveness of the method, extensive tests are conducted on multiple UCI datasets. The experimental results show that, compared with other autoencoder networks and the radial basis function (RBF) neural network, the proposed method achieves good classification accuracy and effectively improves training speed.
Authors: XU Yi; DONG Qing; DAI Xin; SONG Wei (School of Internet of Things Engineering, Jiangnan University, Wuxi, Jiangsu 214122, China)
Source: Journal of Frontiers of Computer Science and Technology (《计算机科学与探索》), 2018, Issue 5, pp. 820-827 (8 pages); indexed in CSCD and the Peking University Core Journals list
Funding: National Natural Science Foundation of China No. 61673193; Fundamental Research Funds for the Central Universities Nos. JUSRP51635B, JUSRP51510; Natural Science Foundation of Jiangsu Province No. BK20150159
Keywords: deep neural network; extreme learning machine; autoencoder; classification
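
The following is a minimal, illustrative Python/NumPy sketch of the general scheme the abstract describes: an ELM autoencoder block whose input weights are assigned randomly, whose output weights are solved in closed form by regularized least squares, and whose reconstruction target has label nodes appended so that stacked blocks are trained with supervision. The function name elm_autoencoder_layer, the regularization constant C, the tanh activation, and the way the label columns are concatenated and sliced off for the next layer are assumptions for illustration, not the authors' exact formulation, and the sketch does not reproduce the paper's full training procedure.

import numpy as np

def elm_autoencoder_layer(X, n_hidden, labels=None, C=1e3, rng=None):
    # One ELM autoencoder block (illustrative sketch).
    # Random input weights map X to a hidden representation H; the output
    # weights beta are obtained in closed form by regularized least squares
    # so that H @ beta reconstructs the target. When one-hot `labels` are
    # given, they are appended to the target as label nodes (one reading of
    # the supervised variant described in the abstract).
    rng = np.random.default_rng(rng)
    n_features = X.shape[1]

    # Random, fixed input weights and biases -- no iterative training.
    W = rng.standard_normal((n_features, n_hidden))
    b = rng.standard_normal(n_hidden)
    H = np.tanh(X @ W + b)                     # hidden-layer output

    # Reconstruction target: the input itself, optionally with label nodes.
    T = X if labels is None else np.hstack([X, labels])

    # Closed-form ridge solution: beta = (H^T H + I/C)^{-1} H^T T
    beta = np.linalg.solve(H.T @ H + np.eye(n_hidden) / C, H.T @ T)

    # Use the feature-reconstruction part of beta to encode X for the next
    # block (the slicing choice here is an assumption of this sketch).
    encoding = np.tanh(X @ beta[:, :n_features].T)
    return encoding, beta

# Toy usage: stack two blocks on random data with one-hot labels.
X = np.random.default_rng(0).standard_normal((100, 20))
Y = np.eye(3)[np.random.default_rng(1).integers(0, 3, 100)]
h1, _ = elm_autoencoder_layer(X, n_hidden=64, labels=Y, rng=0)
h2, _ = elm_autoencoder_layer(h1, n_hidden=32, labels=Y, rng=1)
print(h1.shape, h2.shape)   # (100, 64) (100, 32)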
