
深度自动编码器的研究与展望 (Cited by: 40)

Research and Prospect of Deep Auto-encoders
Abstract: Deep learning, a branch of machine learning, has opened a new era in the development of neural networks. As one of the main components of deep learning architectures, the deep auto-encoder is chiefly used for transformation-learning tasks and also plays a crucial role in unsupervised learning and non-linear feature extraction. This paper first introduces the origin, basic concepts, and principles of deep auto-encoders; it then describes how they are constructed, along with the general steps of pre-training and fine-tuning, and summarizes the different types of deep auto-encoders. Finally, building on an in-depth analysis of the problems deep auto-encoders currently face, it looks ahead to their future development trends.
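The construction, greedy layer-wise pre-training, and encode/decode structure that the abstract mentions can be illustrated with a minimal sketch. This is not code from the paper; it assumes sigmoid units, tied weights, squared reconstruction error, and plain gradient descent, all chosen here for brevity:

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

class AutoEncoder:
    """One auto-encoder layer with tied weights (the decoder reuses W.T)."""

    def __init__(self, n_in, n_hidden, lr=0.5):
        self.W = rng.normal(0.0, 0.1, (n_in, n_hidden))
        self.b = np.zeros(n_hidden)   # encoder bias
        self.c = np.zeros(n_in)       # decoder bias
        self.lr = lr

    def encode(self, x):
        return sigmoid(x @ self.W + self.b)

    def decode(self, h):
        return sigmoid(h @ self.W.T + self.c)

    def train_step(self, x):
        """One gradient-descent step on squared reconstruction error."""
        h = self.encode(x)
        y = self.decode(h)
        dy = (y - x) * y * (1.0 - y)          # gradient at decoder pre-activation
        dh = (dy @ self.W) * h * (1.0 - h)    # back-propagated to encoder pre-activation
        self.W -= self.lr * (np.outer(dy, h) + np.outer(x, dh))
        self.c -= self.lr * dy
        self.b -= self.lr * dh
        return 0.5 * np.sum((y - x) ** 2)

def pretrain(data, layer_sizes, epochs=500):
    """Greedy layer-wise pre-training: train each layer to reconstruct the
    encoded output of the layer below, then stack the trained encoders."""
    layers, inputs = [], data
    for n_in, n_hidden in zip(layer_sizes[:-1], layer_sizes[1:]):
        ae = AutoEncoder(n_in, n_hidden)
        for _ in range(epochs):
            for x in inputs:
                ae.train_step(x)
        layers.append(ae)
        inputs = np.array([ae.encode(x) for x in inputs])
    return layers
```

After pre-training, the stacked encoders would normally be fine-tuned end-to-end (e.g. by back-propagation on a supervised objective), which this sketch omits.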
Source: 《计算机与现代化》 (Computer and Modernization), 2014, No. 8, pp. 128-134 (7 pages)
Keywords: deep learning; deep auto-encoder (DAE); pre-training; fine-tuning; neural network
