
Gradient Convergence of Deep Learning-Based Numerical Methods for BSDEs

Abstract: The authors prove the gradient convergence of the deep learning-based numerical method for high-dimensional parabolic partial differential equations and backward stochastic differential equations, which is based on the time discretization of stochastic differential equations (SDEs for short) and the stochastic approximation method for nonconvex stochastic programming problems. They adopt the stochastic gradient descent method, a quadratic loss function, and the sigmoid activation function in the setting of the neural network. Combining classical techniques of randomized stochastic gradients, the Euler scheme for SDEs, and the convergence of neural networks, they obtain an O(K^(-1/4)) rate of gradient convergence, with K being the total number of iterative steps.
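The record does not include the scheme itself. As an illustration only, the following is a minimal one-dimensional sketch of a deep BSDE-style solver combining the ingredients the abstract names: Euler discretization of the forward SDE and the BSDE, a sigmoid neural network, a quadratic terminal loss, and stochastic gradient descent. Everything here is an assumption for illustration, not the authors' method or code: the toy PDE is the heat equation u_t + (1/2)u_xx = 0 with terminal condition g(x) = x^2 (so the BSDE driver is f ≡ 0), a single small sigmoid network approximating Z = u_x is shared across time steps, and its gradients are written out by hand instead of by backpropagation through a deep-learning framework.

```python
import numpy as np

def sigmoid(a):
    return 1.0 / (1.0 + np.exp(-a))

def deep_bsde_sgd(T=1.0, n_steps=10, hidden=8, batch=64, iters=2000,
                  lr=0.02, seed=0):
    """Toy deep-BSDE-style solver (illustrative sketch, not the paper's code).

    Trains y0 ~ u(0, x0) and a sigmoid network Z(x) ~ u_x(., x) by SGD on the
    quadratic terminal loss E[(Y_N - g(X_N))^2].  Returns (y0, losses).
    """
    rng = np.random.default_rng(seed)
    dt = T / n_steps
    # One small network Z(x) = w2 . sigmoid(w1*x + b1) + b2, shared over steps.
    w1 = rng.normal(0.0, 0.3, hidden); b1 = np.zeros(hidden)
    w2 = rng.normal(0.0, 0.3, hidden); b2 = 0.0
    y0 = 0.0                                  # trainable guess for u(0, x0=0)
    losses = []
    for _ in range(iters):
        dW = rng.normal(0.0, np.sqrt(dt), (batch, n_steps))
        X = np.zeros(batch)                   # forward SDE: plain Brownian motion
        Y = np.full(batch, y0)
        # Per-sample accumulators for dY_N/dtheta = sum_n dW_n * dZ(X_n)/dtheta.
        Gw1 = np.zeros((batch, hidden)); Gb1 = np.zeros((batch, hidden))
        Gw2 = np.zeros((batch, hidden)); Gb2 = np.zeros(batch)
        for n in range(n_steps):
            a = np.outer(X, w1) + b1          # pre-activations, shape (batch, hidden)
            s = sigmoid(a)
            Z = s @ w2 + b2                   # network output, approximates u_x(t_n, X_n)
            Y = Y + Z * dW[:, n]              # Euler step of the BSDE (driver f = 0)
            ds = s * (1.0 - s)                # derivative of the sigmoid
            Gw2 += s * dW[:, n, None]
            Gb2 += dW[:, n]
            Gw1 += (w2 * ds) * X[:, None] * dW[:, n, None]
            Gb1 += (w2 * ds) * dW[:, n, None]
            X = X + dW[:, n]                  # Euler step of the forward SDE
        r = Y - X**2                          # residual against g(x) = x^2
        losses.append(float(np.mean(r**2)))   # quadratic loss
        # Stochastic gradient descent step on all trainable parameters.
        y0 -= lr * np.mean(2.0 * r)
        w2 -= lr * np.mean(2.0 * r[:, None] * Gw2, axis=0)
        b2 -= lr * np.mean(2.0 * r * Gb2)
        w1 -= lr * np.mean(2.0 * r[:, None] * Gw1, axis=0)
        b1 -= lr * np.mean(2.0 * r[:, None] * Gb1, axis=0)
    return y0, losses
```

For this toy problem the exact solution is u(t, x) = x^2 + (T - t), so the trained y0 should approach u(0, 0) = T = 1 as the SGD iterations accumulate, while the terminal loss decreases as the network learns to hedge the stochastic integral.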
Source: Chinese Annals of Mathematics, Series B (数学年刊 B辑英文版), SCIE/CSCD, 2021, Issue 2, pp. 199-216 (18 pages).
Funding: This work was supported by the National Key R&D Program of China (No. 2018YFA0703900) and the National Natural Science Foundation of China (No. 11631004).