Abstract
This paper addresses the a priori estimation problem that arises when neural networks are used to solve differential equations. An Extreme Learning Machine (ELM) embedded in a particle swarm optimization algorithm is compared with a class of deep learning methods trained by gradient updates, and both are applied to solving a class of differential equations. Unlike deep learning methods, which optimize the loss function using gradient information, ELM minimizes the loss by computing its output weights with the Moore-Penrose generalized inverse instead of gradient updates. Our experiments show that ELM places no requirements on the solution interval of the equation and trains quickly, but when the solution is discontinuous it restricts the type of discontinuity that can be handled. The deep learning methods use deeper networks; their advantages are that they learn more abstract features and admit discontinuous solutions, while their disadvantages are strict requirements on the solution interval (otherwise the gradients vanish, as verified by the experimental results) and long training times, and they are often combined with pruning algorithms to compress the network. The conclusions of this paper can be used for a priori algorithm selection, improving the efficiency of the solution process.
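To make the contrast concrete, the following is a minimal sketch (not the code used in the paper) of how an ELM can solve a linear ODE: the hidden-layer weights and biases are random and fixed, and the output weights are obtained in a single Moore-Penrose pseudoinverse solve rather than by gradient updates. The test problem y' + y = 0, y(0) = 1 on [0, 1], the network size, and the collocation grid are illustrative assumptions, not taken from the paper.

import numpy as np

rng = np.random.default_rng(0)
n_hidden, n_points = 50, 100

# Random, fixed hidden-layer parameters (the "extreme learning" part).
w = rng.normal(size=n_hidden)           # input weights
b = rng.normal(size=n_hidden)           # biases

x = np.linspace(0.0, 1.0, n_points)     # collocation points

def tanh(z):
    return np.tanh(z)

def tanh_prime(z):
    return 1.0 - np.tanh(z) ** 2

Z = np.outer(x, w) + b                  # shape (n_points, n_hidden)
H   = tanh(Z)                           # trial solution:      u(x)  = H   @ beta
H_x = tanh_prime(Z) * w                 # its x-derivative:    u'(x) = H_x @ beta

# Collocation rows enforce the residual u' + u = 0; the last row enforces u(0) = 1.
A   = np.vstack([H_x + H, tanh(0.0 * w + b)[None, :]])
rhs = np.concatenate([np.zeros(n_points), [1.0]])

# One least-squares solve via the Moore-Penrose pseudoinverse replaces gradient training.
beta = np.linalg.pinv(A) @ rhs

u = H @ beta
print("max abs error vs exp(-x):", np.max(np.abs(u - np.exp(-x))))

Because the output weights come from a single linear solve, there is no iterative training loop, which is the source of the short training times noted above; a gradient-based deep learning solver would instead minimize the same residual by repeated backpropagation steps.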
Source
《应用数学进展》
2022, No. 12, pp. 8740-8749 (10 pages)
Advances in Applied Mathematics