
Deep belief network algorithm based on multi-innovation theory

Cited by: 5
Abstract: To address the small gradients, low learning rate, and slow error convergence that arise when the Deep Belief Network (DBN) algorithm corrects the network's connection weights and biases by back propagation, an improved algorithm combining the standard DBN with multi-innovation theory, named Multi-Innovation DBN (MI-DBN), was proposed. MI-DBN remodels the back-propagation process of the standard DBN so that, instead of the single innovation used originally, the innovations of multiple previous cycles are fully exploited, which significantly speeds up error convergence. MI-DBN and other representative classification algorithms were compared in dataset-classification experiments. The results show that MI-DBN converges faster than the other classifiers and achieves the smallest recognition error on the MNIST and Caltech101 datasets.
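To make the mechanism described in the abstract concrete, the sketch below contrasts a plain single-innovation weight update with a multi-innovation-style update that buffers the errors of the previous p cycles and applies them jointly. It is only a minimal illustration under assumed choices: the names MultiInnovationUpdater, p, lr and decay, the decaying weighting, and treating the per-cycle layer error as the innovation are all hypothetical, and the paper's exact remodeling of DBN back-propagation is not reproduced here.

```python
import numpy as np

# Hypothetical illustration only, not the paper's formulation.
# A standard back-propagation step for one layer uses only the current
# innovation (error term); the multi-innovation variant below keeps the
# innovations of the last p cycles and applies them together.

def single_innovation_step(W, x, e, lr=0.1):
    """Standard update with one innovation: W <- W + lr * e x^T."""
    return W + lr * np.outer(e, x)

class MultiInnovationUpdater:
    """Buffers the last p innovations (and their inputs) and reuses them all."""

    def __init__(self, p=5, lr=0.1, decay=0.9):
        self.p = p          # innovation length: how many past cycles to reuse
        self.lr = lr
        self.decay = decay  # assumed down-weighting of older innovations
        self.errors = []    # e_t, e_{t-1}, ..., e_{t-p+1}
        self.inputs = []    # matching layer inputs

    def step(self, W, x, e):
        # push the newest innovation, keep at most p of them
        self.errors = ([e] + self.errors)[:self.p]
        self.inputs = ([x] + self.inputs)[:self.p]
        # accumulate contributions from every buffered innovation
        update = np.zeros_like(W)
        for k, (ek, xk) in enumerate(zip(self.errors, self.inputs)):
            update += (self.decay ** k) * np.outer(ek, xk)
        return W + self.lr * update
```

With p = 1 this update reduces to the single-innovation step; reusing the innovations of several previous cycles (p > 1) is the mechanism the abstract credits with the faster error convergence.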
Source: Journal of Computer Applications (《计算机应用》; CSCD; Peking University Core Journal), 2016, Issue 9: 2521-2525, 2534 (6 pages in total)
Fund: Supported by the Natural Science Foundation of Shanxi Province (2015011045)
Keywords: Deep Belief Network (DBN) algorithm; error convergence rate; multi-innovation theory; back-propagation

