Abstract
In recent years, deep neural networks have become a research hotspot, showing strong performance in modeling complex data sets. There is a deep connection between deep neural networks and dynamical systems, so it is of great theoretical and practical significance to study deep neural networks using the theory and methods of dynamical systems. This paper first introduces the three types of reversible neural networks and the stability theorems given by Haber et al., and then presents Lin Jie's contribution to the stability of the continuous models. Next, a counterexample is given to show that the stability theorem for the discrete models stated by Haber et al. is not rigorous; the theorem is then refined and improved to obtain a new criterion for judging the stability of the Euler scheme. Finally, the stability theorem is applied to a class of Hamiltonian networks.
Source
Advances in Applied Mathematics (《应用数学进展》), 2023, No. 7, pp. 3250–3260 (11 pages)