
Deep learning algorithm optimization based on a combination of auto-encoders (Cited by: 43)
Abstract: To improve the learning accuracy of the Auto-Encoder (AE) algorithm and further reduce the error rate on classification tasks, a method is proposed that combines the Sparse Auto-Encoder (SAE) and the marginalized Denoising Auto-Encoder (mDAE) into a Sparse marginalized Denoising Auto-Encoder (SmDAE). The constraints of the SAE and the mDAE are imposed on a single auto-encoder, so that it simultaneously carries the SAE's sparsity constraint and the mDAE's marginalized denoising constraint, strengthening the learning ability of the auto-encoder. Experiments show that SmDAE achieves higher learning accuracy than both SAE and mDAE on several classification tasks; a comparative experiment with a Convolutional Neural Network (CNN) further shows that the more robust SmDAE model, which incorporates the marginalized denoising constraint, also surpasses the CNN in classification accuracy.
Source: Journal of Computer Applications (《计算机应用》), CSCD, Peking University Core Journals, 2016, Issue 3, pp. 697-702 (6 pages)
Funding: National Natural Science Foundation of China (61273225); National Key Technology Research and Development Program of China (2012BAC22B01)
Keywords: deep learning; Auto-Encoder (AE); Sparse Auto-Encoder (SAE); Denoising Auto-Encoder (DAE); Convolutional Neural Network (CNN)
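
The abstract describes loading two constraints onto a single auto-encoder: the sparsity penalty of an SAE and the denoising criterion of an mDAE. Below is a minimal sketch of how such a combined objective can look, assuming a one-layer PyTorch auto-encoder; all names here (SmDAE, smdae_loss) and all hyperparameter values are illustrative, not taken from the paper. The sparsity term is the usual KL-divergence penalty on mean hidden activations, and the denoising term averages reconstruction error over several corrupted copies of the input, a Monte Carlo stand-in for the analytic marginalization that mDAE performs.

import torch
import torch.nn as nn
import torch.nn.functional as F

class SmDAE(nn.Module):
    """Illustrative one-layer auto-encoder (not the paper's exact model)."""
    def __init__(self, n_in, n_hidden):
        super().__init__()
        self.encoder = nn.Linear(n_in, n_hidden)
        self.decoder = nn.Linear(n_hidden, n_in)

    def forward(self, x):
        h = torch.sigmoid(self.encoder(x))         # hidden code
        return torch.sigmoid(self.decoder(h)), h   # reconstruction, code

def smdae_loss(model, x, rho=0.05, beta=3.0, p_drop=0.3, n_noise=5):
    # Denoising term: reconstruct the clean x from masked copies,
    # averaged over n_noise corruption draws (approximating mDAE's
    # marginalization over the noise distribution).
    recon = 0.0
    for _ in range(n_noise):
        mask = (torch.rand_like(x) > p_drop).float()  # masking noise
        x_hat, _ = model(x * mask)
        recon = recon + F.mse_loss(x_hat, x)
    recon = recon / n_noise

    # Sparsity term: KL divergence between the target activation rate rho
    # and the mean hidden activation, as in a sparse auto-encoder.
    _, h = model(x)
    rho_hat = h.mean(dim=0).clamp(1e-6, 1 - 1e-6)
    kl = (rho * torch.log(rho / rho_hat)
          + (1 - rho) * torch.log((1 - rho) / (1 - rho_hat))).sum()

    return recon + beta * kl

Training then proceeds as with any auto-encoder, e.g. loss = smdae_loss(model, x_batch) followed by loss.backward() and an optimizer step; beta trades off sparsity against reconstruction, and p_drop controls corruption strength.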