Abstract: An efficient data-driven approach for predicting steady airfoil flows is proposed, based on the Fourier neural operator (FNO), a recently introduced neural-network framework. Theoretical arguments and experimental results support the necessity and effectiveness of the improvements made to the FNO, which add a branch neural operator to approximate the contribution of boundary conditions to steady solutions. The proposed approach runs several orders of magnitude faster than traditional numerical methods. Predictions of flows around airfoils and ellipses demonstrate the superior accuracy and speed of this novel approach. Furthermore, its zero-shot super-resolution property overcomes the limitations of predicting airfoil flows on Cartesian grids, thereby improving accuracy in the near-wall region. The unprecedented speed and accuracy in forecasting steady airfoil flows offer substantial benefits for airfoil design and optimization.
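The core building block of the FNO referenced above is a spectral convolution: transform the input field to Fourier space, multiply a truncated set of low-frequency modes by learned complex weights, and transform back. The following is a minimal one-dimensional, single-channel NumPy sketch of that operation; the weight array, mode count, and test signal are illustrative, not taken from the paper.

```python
import numpy as np

def spectral_conv_1d(v, weights, n_modes):
    """One FNO-style spectral convolution on a periodic 1-D signal:
    FFT -> multiply the lowest n_modes coefficients by (learned)
    complex weights -> inverse FFT; all higher modes are truncated."""
    v_hat = np.fft.rfft(v)
    out_hat = np.zeros_like(v_hat)
    out_hat[:n_modes] = weights[:n_modes] * v_hat[:n_modes]
    return np.fft.irfft(out_hat, n=v.size)

rng = np.random.default_rng(0)
x = np.linspace(0, 2 * np.pi, 128, endpoint=False)
v = np.sin(x) + 0.3 * np.sin(5 * x)     # band-limited test signal (modes 1 and 5)
w = rng.standard_normal(16) + 1j * rng.standard_normal(16)

out = spectral_conv_1d(v, w, n_modes=16)
# Sanity check: identity weights reproduce a band-limited input exactly,
# since all of the signal's modes lie below the truncation threshold.
ident = spectral_conv_1d(v, np.ones(16, dtype=complex), n_modes=16)
```

In the full FNO this spectral convolution is combined with a pointwise linear transform and a nonlinearity, stacked over several layers with multi-channel weight tensors.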
Funding: Supported by the National Natural Science Foundation of China (Nos. 91952104, 92052301, 12172161, and 12161141017), the National Numerical Windtunnel Project (No. NNW2019ZT1-A04), the Shenzhen Science and Technology Program (No. KQTD20180411143441009), the Key Special Project for Introduced Talents Team of Southern Marine Science and Engineering Guangdong Laboratory (Guangzhou) (No. GML2019ZD0103), the CAAI-Huawei MindSpore Open Fund, and the Department of Science and Technology of Guangdong Province (No. 2019B21203001); also supported by the Center for Computational Science and Engineering of Southern University of Science and Technology.
Abstract: A Fourier neural operator (FNO) model is developed for large eddy simulation (LES) of three-dimensional (3D) turbulence. Velocity fields of isotropic turbulence generated by direct numerical simulation (DNS) are used to train the FNO model to predict the filtered velocity field at a given time. The input to the FNO model is the filtered velocity fields at several previous time nodes with a large time lag. In the a posteriori study of LES, the FNO model outperforms the dynamic Smagorinsky model (DSM) and the dynamic mixed model (DMM) in predicting the velocity spectrum, the probability density functions (PDFs) of vorticity and velocity increments, and the instantaneous flow structures. Moreover, the proposed model significantly reduces the computational cost and generalizes well to LES of turbulence at higher Taylor-Reynolds numbers.
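The filtered velocity fields that serve as the model's inputs come from low-pass filtering DNS data. A one-dimensional sharp spectral filter conveys the idea; the paper works with 3D fields, and the cutoff wavenumber and signal below are illustrative only.

```python
import numpy as np

def sharp_spectral_filter(u, k_cut):
    """Low-pass filter a periodic 1-D velocity signal by zeroing all
    Fourier modes above k_cut -- the kind of filtered field an
    LES-oriented model takes as input."""
    u_hat = np.fft.rfft(u)
    u_hat[k_cut + 1:] = 0.0
    return np.fft.irfft(u_hat, n=u.size)

x = np.linspace(0, 2 * np.pi, 256, endpoint=False)
u = np.sin(x) + 0.5 * np.sin(20 * x)        # "DNS" field: large + small scale
u_bar = sharp_spectral_filter(u, k_cut=10)  # filtered ("resolved") field
```

After filtering, only the large-scale component survives; the subgrid scales removed here are exactly what classical models such as the DSM and DMM must account for.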
Funding: Supported by the NSFC under Grant Nos. 11925108 and 11731014, and the NSFC under Grant No. 11975306.
Abstract: In this paper, we develop the deep learning-based Fourier neural operator (FNO) approach to find parametric mappings that approximately display abundant wave structures in the nonlinear Schrödinger (NLS) equation, the Hirota equation, and the NLS equation with generalized PT-symmetric Scarf-II potentials. Specifically, we analyze the state transitions of different types of solitons (e.g., bright solitons, breathers, peakons, rogons, and periodic waves) appearing in these complex nonlinear wave equations. By checking the absolute errors between the predicted and exact solutions, we find that the FNO with the GELU activation function performs well in all cases, even though the solution parameters strongly influence the wave structures. Moreover, we find that the approximation errors of physics-informed neural networks (PINNs) are similar in magnitude to those of the FNO. However, the FNO learns the entire family of solutions under a given distribution each time, whereas PINNs learn only one specific solution at a time. The results obtained in this paper will be useful for exploring the physical mechanisms of soliton excitations in nonlinear wave equations and for applying the FNO to other nonlinear wave equations.
Funding: Supported by the U.S. Air Force under agreement number FA865019-2-2204.
Abstract: Partial differential equations (PDEs) play a dominant role in the mathematical modeling of many complex dynamical processes. Solving these PDEs often requires prohibitively high computational costs, especially when multiple evaluations must be made for different parameters or conditions. After training, neural operators can provide PDE solutions significantly faster than traditional PDE solvers. In this work, the invariance properties and computational complexity of two neural operators are examined for a transport PDE of a scalar quantity. A neural operator based on the graph kernel network (GKN) operates on graph-structured data to incorporate nonlocal dependencies; here we propose a modified formulation of the GKN to achieve frame invariance. The vector cloud neural network (VCNN) is an alternative neural operator with embedded frame invariance that operates on point-cloud data. The GKN-based neural operator demonstrates slightly better predictive performance than the VCNN. However, the GKN incurs an excessively high computational cost that increases quadratically with the number of discretized objects, compared with a linear increase for the VCNN.
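The quadratic-versus-linear cost contrast shows up directly in the shapes involved: a GKN-style nonlocal layer forms a kernel over all point pairs, while a pointwise map touches each point once. The sketch below is a caricature, not either paper's actual architecture; the Gaussian kernel, feature sizes, and point counts are stand-ins.

```python
import numpy as np

rng = np.random.default_rng(1)

def gkn_style_layer(points, feats):
    """Nonlocal update via a kernel over all point pairs.
    Materializes a dense n x n kernel: O(n^2) time and memory."""
    d = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=-1)
    kernel = np.exp(-d)                      # stand-in for a learned kernel
    return kernel @ feats / points.shape[0]  # aggregate over all neighbours

def pointwise_style_layer(feats, w):
    """Pointwise update applied independently to each point: O(n)."""
    return feats @ w

n = 64
pts = rng.standard_normal((n, 2))            # point cloud in 2-D
f = rng.standard_normal((n, 3))              # per-point features
w = rng.standard_normal((3, 3))

out_gkn = gkn_style_layer(pts, f)            # built an n x n kernel on the way
out_pw = pointwise_style_layer(f, w)         # never materializes anything n x n
```

Doubling `n` quadruples the work in the first layer but only doubles it in the second, which is the scaling behaviour the abstract reports.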
Funding: Supported by the National Natural Science Foundation of China (61179041, 61101240) and the Zhejiang Provincial Natural Science Foundation of China (Y6110117).
Abstract: In this paper, the technique of approximate partition of unity is used to construct a class of neural network operators with sigmoidal activation functions. Using the modulus of continuity of a function as a metric, the errors of these operators in approximating continuous functions defined on a compact interval are estimated. Furthermore, Bochner-Riesz means operators of double Fourier series are used to construct network operators for approximating bivariate functions, and the corresponding approximation errors are estimated.
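The partition-of-unity construction can be illustrated concretely: a difference of shifted sigmoids forms a bell that sums to a constant over integer shifts, and sampling f at the nodes k/n yields a single-hidden-layer operator. The NumPy sketch below uses the logistic sigmoid; the node range and test function are illustrative, and the paper's operators and error bounds are more refined than this.

```python
import numpy as np

def sigmoid(t):
    return 1.0 / (1.0 + np.exp(-t))

def bell(t):
    # sigma(t+1) - sigma(t-1) sums to 2 over all integer shifts of t,
    # so bell(t)/2 acts as an (approximate) partition of unity.
    return sigmoid(t + 1.0) - sigmoid(t - 1.0)

def network_operator(f, n, x):
    """Single-hidden-layer operator G_n(f)(x) = sum_k f(k/n) bell(n x - k)/2,
    with the sum over k truncated to a range whose tails are negligible."""
    k = np.arange(-30, n + 31)
    return (bell(n * x[:, None] - k[None, :]) / 2.0 * f(k / n)).sum(axis=1)

f = lambda t: t ** 2
x = np.linspace(0.0, 1.0, 201)
err = np.max(np.abs(network_operator(f, 50, x) - f(x)))
```

As n grows the nodes k/n densify and the error shrinks on the order of the modulus of continuity of f at scale 1/n, which is the quantity the paper's estimates are phrased in.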
Funding: Supported by the National Science Fund for Distinguished Young Scholars (51925505), the General Program of the National Natural Science Foundation of China (52275491), the Major Program of the National Natural Science Foundation of China (52090052), the Joint Funds of the National Natural Science Foundation of China (U21B2081), the National Key R&D Program of China (2022YFB3402600), and the New Cornerstone Science Foundation through the XPLORER PRIZE.
Abstract: Learning mappings between functions (operators) defined on complex computational domains is a common theoretical challenge in machine learning. Existing operator learning methods mainly focus on regular computational domains and include many components that rely on Euclidean structured data. However, many real-life operator learning problems involve complex computational domains such as surfaces and solids, which are non-Euclidean and widely referred to as Riemannian manifolds. Here, we report a new concept, the neural operator on Riemannian manifolds (NORM), which generalises the neural operator from Euclidean spaces to Riemannian manifolds and can learn operators defined on complex geometries while preserving a discretisation-independent model structure. NORM shifts the function-to-function mapping to a finite-dimensional mapping in the subspace spanned by the Laplacian eigenfunctions of the geometry, and it holds the universal approximation property even with only one fundamental block. Theoretical and experimental analyses prove the strong performance of NORM in operator learning and show its potential for many scientific discoveries and engineering applications.
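NORM's key move, replacing functions on a geometry by their coefficients in the leading Laplacian eigenfunctions, can be sketched on the simplest possible "geometry": a discretized interval represented as a path graph. A learned mapping would then act on the small coefficient vectors rather than on the full fields. All sizes below are illustrative, not from the paper.

```python
import numpy as np

# Discrete (Dirichlet-type) Laplacian of a path graph with m nodes.
m = 100
L = 2 * np.eye(m) - np.eye(m, k=1) - np.eye(m, k=-1)
evals, evecs = np.linalg.eigh(L)   # columns = "eigenfunctions" of the geometry

def project(f_vals, n_modes):
    """Represent a function on the geometry by its first n_modes
    Laplacian-eigenfunction coefficients (a finite-dimensional subspace)."""
    return evecs[:, :n_modes].T @ f_vals

def reconstruct(coeffs):
    return evecs[:, :coeffs.size] @ coeffs

x = np.linspace(0, 1, m)
f = np.sin(np.pi * x) * x                       # sample function on the geometry
err_small = np.linalg.norm(reconstruct(project(f, 5)) - f)
err_large = np.linalg.norm(reconstruct(project(f, 40)) - f)
```

More eigenfunctions give a better representation (`err_large < err_small`); the same construction carries over to meshes of surfaces and solids, where the eigenbasis comes from the Laplace-Beltrami operator of the manifold.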
Abstract: Recently, Li [16] introduced three kinds of single-hidden-layer feedforward neural networks (FNNs) with optimized piecewise-linear activation functions and fixed weights, and obtained upper and lower bound estimates on the approximation accuracy of the FNNs for continuous functions defined on bounded intervals. In the present paper, we point out that there are errors both in the definitions of the FNNs and in the proof of the upper estimates in [16]. Using new methods, we also give correct approximation rate estimates for Li's neural networks.
Funding: Sponsored by the National Key R&D Program of China Grant Nos. 2019YFA0709503 (Z.X.) and 2020YFA0712000 (Z.M.), the Shanghai Sailing Program (Z.X.), the Natural Science Foundation of Shanghai Grant No. 20ZR1429000 (Z.X.), the National Natural Science Foundation of China Grant Nos. 62002221 (Z.X.), 12101401 (T.L.), 12101402 (Y.Z.), and 12031013 (Z.M.), the Shanghai Municipal Science and Technology Project Grant No. 20JC1419500 (Y.Z.), the Lingang Laboratory Grant No. LG-QS-202202-08 (Y.Z.), the Shanghai Municipal Science and Technology Major Project No. 2021SHZDZX0102, the HPC of the School of Mathematical Sciences, and the Student Innovation Center at Shanghai Jiao Tong University.
Abstract: In this paper, we propose a machine learning approach via the model-operator-data network (MOD-Net) for solving PDEs. A MOD-Net is driven by a model to solve PDEs based on an operator representation, with regularization from data. For linear PDEs, we use a DNN to parameterize the Green's function and obtain a neural operator that approximates the solution according to Green's method. To train the DNN, the empirical risk consists of the mean squared loss with either the least-squares formulation or the variational formulation of the governing equation and boundary conditions. For complicated problems, the empirical risk also includes a few labels, which are computed on coarse grid points at cheap computational cost and significantly improve model accuracy. Intuitively, the labeled dataset acts as a regularization in addition to the model constraints. The MOD-Net solves a family of PDEs rather than a specific one and is much more efficient than the original neural operator because few expensive labels are required. We show numerically that MOD-Net is very efficient in solving the Poisson equation and the one-dimensional radiative transfer equation. For nonlinear PDEs, a nonlinear MOD-Net can similarly be used as an ansatz, as exemplified by solving several nonlinear PDE problems such as the Burgers equation.
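For a linear PDE, Green's method writes the solution as an integral of the Green's function against the source term; the DNN in MOD-Net plays the role of G below. As a sanity check of that representation on a problem with a known closed form (the 1D Poisson problem with homogeneous Dirichlet conditions; the quadrature sizes are illustrative):

```python
import numpy as np

def greens_poisson(x, y):
    """Exact Green's function of -u'' = f on [0,1] with u(0) = u(1) = 0."""
    return np.minimum(x, y) * (1.0 - np.maximum(x, y))

# Solve -u'' = 1 by quadrature: u(x) = integral_0^1 G(x, y) f(y) dy with f = 1.
y = np.linspace(0.0, 1.0, 2001)
x = np.linspace(0.0, 1.0, 101)
G = greens_poisson(x[:, None], y[None, :])
dy = y[1] - y[0]
u = ((G[:, :-1] + G[:, 1:]) / 2.0).sum(axis=1) * dy   # trapezoid rule

u_exact = x * (1.0 - x) / 2.0                         # closed-form solution
max_err = np.max(np.abs(u - u_exact))
```

Replacing `greens_poisson` with a trained network (and the quadrature with a suitable discretization) recovers the structure of the linear MOD-Net: one learned G yields solutions for every right-hand side f, which is why the approach solves a family of PDEs rather than a single instance.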