
A Krylov Subspace Optimization Method in Artificial Neural Network (人工神经网络中的一种Krylov子空间优化算法)

Cited by: 3
Abstract: This work reviews the state of research on second-order optimization algorithms for artificial neural networks and improves the KSD (Krylov Subspace Descent) algorithm for optimizing the loss function of an artificial neural network. Whereas KSD uses a fixed Krylov subspace dimension, the proposed MKSD (Modified KSD) algorithm adapts the subspace dimension according to the computed results. Numerical examples are given in which fully connected neural networks for several problems are trained with the MKSD, KSD, and SGD (Stochastic Gradient Descent) algorithms. The results show that MKSD has certain advantages over the other algorithms. A schematic sketch of the KSD/MKSD iteration follows the bibliographic record below.
Authors: ZHANG Zhenyu (张振宇) and LIN Muyang (林沐阳), School of Mathematics, Shanghai University of Finance and Economics, Shanghai 200433; Shanghai University of Finance and Economics Zhejiang College, Jinhua 321013
Source: Chinese Journal of Engineering Mathematics (《工程数学学报》), CSCD, Peking University Core Journal, 2022, No. 5, pp. 681-694 (14 pages)
Funding: National Natural Science Foundation of China (11671246)
Keywords: artificial neural network; Krylov subspace; optimization method
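
For orientation, here is a minimal, self-contained sketch of the idea the abstract describes: at each step a Krylov basis {g, Hg, ..., H^(m-1)g} is built from the gradient g and Hessian-vector products, the loss is minimized over that subspace, and, in the MKSD spirit, the dimension m is adjusted from one outer step to the next. This is not the authors' published algorithm: the inner BFGS solve, the adaptation rule based on relative loss decrease, and the names ksd_step and mksd_train are illustrative assumptions, demonstrated on a toy quadratic loss rather than a neural network.

import numpy as np
from scipy.optimize import minimize

def ksd_step(loss, grad, hvp, w, m):
    """One KSD-style step: build an orthonormal basis of the Krylov
    subspace span{g, Hg, ..., H^(m-1)g} via Hessian-vector products,
    then minimize the loss over coefficients in that subspace."""
    g = grad(w)
    V = [g / (np.linalg.norm(g) + 1e-12)]
    for _ in range(m - 1):
        v = hvp(w, V[-1])                  # next Krylov direction H v
        for u in V:                        # Gram-Schmidt orthogonalization
            v = v - (u @ v) * u
        n = np.linalg.norm(v)
        if n < 1e-10:                      # subspace became invariant early
            break
        V.append(v / n)
    V = np.stack(V, axis=1)                # d x k basis matrix, k <= m
    # Inner solve over the k subspace coefficients; plain BFGS on this
    # small problem stands in for the inner solver of the KSD method.
    res = minimize(lambda a: loss(w + V @ a), np.zeros(V.shape[1]),
                   method="BFGS")
    return w + V @ res.x

def mksd_train(loss, grad, hvp, w, m=4, m_min=2, m_max=10, iters=50):
    """MKSD-flavored outer loop: adapt the subspace dimension m from
    the observed progress (an illustrative rule, not the paper's)."""
    for _ in range(iters):
        f_old = loss(w)
        w = ksd_step(loss, grad, hvp, w, m)
        rho = (f_old - loss(w)) / (abs(f_old) + 1e-12)
        # Grow the subspace when relative progress stalls; shrink it
        # when a small subspace already gives good decrease.
        m = min(m + 1, m_max) if rho < 1e-3 else max(m - 1, m_min)
    return w

if __name__ == "__main__":
    # Toy strictly convex quadratic standing in for a network loss.
    rng = np.random.default_rng(0)
    A = rng.standard_normal((20, 20))
    A = A @ A.T + np.eye(20)               # symmetric positive definite
    b = rng.standard_normal(20)
    loss = lambda w: 0.5 * w @ A @ w - b @ w
    grad = lambda w: A @ w - b
    hvp = lambda w, v: A @ v               # exact Hessian-vector product
    w = mksd_train(loss, grad, hvp, np.zeros(20))
    print("final loss:", loss(w))

In the paper's actual setting the Hessian-vector products would come from the network's loss (e.g., via automatic differentiation) rather than an explicit matrix, and the subspace optimization would run on mini-batches, which is where the adaptive choice of m matters.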


