Abstract: The emerging push of the differentiable programming paradigm in scientific computing is conducive to training deep learning turbulence models using indirect observations. This paper demonstrates the viability of this approach and presents an end-to-end differentiable framework for training deep neural networks to learn eddy viscosity models from indirect observations derived from the velocity and pressure fields. The framework consists of a Reynolds-averaged Navier–Stokes (RANS) solver and a neural-network-represented turbulence model, each accompanied by its derivative computations. For computing the sensitivities of the indirect observations to the Reynolds stress field, we use the continuous adjoint equations for the RANS equations, while the gradient of the neural network is obtained via its built-in automatic differentiation capability. We demonstrate the ability of this approach to learn the true underlying turbulence closure when one exists by training models using synthetic velocity data from linear and nonlinear closures. We also train a linear eddy viscosity model using synthetic velocity measurements from direct numerical simulations of the Navier–Stokes equations, for which no true underlying linear closure exists. The trained deep-neural-network turbulence model showed predictive capability on similar flows.
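To make the gradient coupling concrete, below is a minimal NumPy sketch of the chain rule that such a training step relies on: the adjoint RANS solve supplies the sensitivity of the objective to the Reynolds stress field, and reverse-mode differentiation through the network propagates that sensitivity to the weights. The two-layer tanh network, all dimensions, the feature inputs, and the adjoint sensitivity values here are placeholders for illustration, not the paper's actual solver, features, or architecture.

import numpy as np

# Hypothetical sizes: mesh cells, input features, hidden units, stress components.
n_cells, n_feat, n_hid, n_out = 200, 5, 16, 6

rng = np.random.default_rng(0)
W1 = rng.normal(size=(n_feat, n_hid)) * 0.1
b1 = np.zeros(n_hid)
W2 = rng.normal(size=(n_hid, n_out)) * 0.1
b2 = np.zeros(n_out)

def tau_net(x):
    """Pointwise closure: mean-flow features -> Reynolds stress components."""
    h = np.tanh(x @ W1 + b1)
    return h @ W2 + b2, h

x = rng.normal(size=(n_cells, n_feat))   # mean-flow features (placeholder)
tau, h = tau_net(x)

# dJ/dtau as delivered by the adjoint RANS solve (placeholder values here).
g = rng.normal(size=(n_cells, n_out))

# Reverse mode through the network: dJ/dtheta = (dJ/dtau) (dtau/dtheta).
gW2 = h.T @ g                            # gradient w.r.t. W2
gb2 = g.sum(axis=0)                      # gradient w.r.t. b2
gh = (g @ W2.T) * (1.0 - h**2)           # back through tanh
gW1 = x.T @ gh                           # gradient w.r.t. W1
gb1 = gh.sum(axis=0)                     # gradient w.r.t. b1

In the actual framework this vector–Jacobian product is delivered by the network library's built-in automatic differentiation; writing it out by hand simply exposes how the adjoint sensitivities and the network gradients compose.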
Funding: The work of the first author was supported by the National Natural Science Foundation of China (91330203). The work of the second author was supported by the National Natural Science Foundation of China (10371218) and the Initiative Scientific Research Program of Tsinghua University.
Abstract: In this paper, we formulate the interface problem and the Neumann elliptic boundary value problem as linear operator equations with self-adjoint positive definite operators. We prove that at the discrete level the condition number of these operators is independent of the mesh size. Therefore, given a prescribed error tolerance, the classical conjugate gradient algorithm converges within a fixed number of iterations. The main computational task at each iteration is to solve a Dirichlet Poisson boundary value problem on a rectangular domain, which can be handled by a fast Poisson solver. The overall computational complexity is essentially of linear scaling.
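The per-iteration kernel described above can be sketched directly: a type-I discrete sine transform diagonalizes the five-point Dirichlet Laplacian on a rectangle, giving an O(n^2 log n) Poisson solve, which a textbook conjugate gradient loop then calls once per iteration. The operator A used in the demo (identity plus the discrete inverse Laplacian) is only a stand-in SPD operator with mesh-independent conditioning, not the paper's actual interface or Neumann operator.

import numpy as np
from scipy.fft import dstn, idstn

def fast_poisson_solve(f, h):
    """Solve -Lap(u) = f on the unit square, homogeneous Dirichlet BCs,
    five-point stencil, via a type-I discrete sine transform."""
    n = f.shape[0]                                            # n x n interior grid
    k = np.arange(1, n + 1)
    lam = (2.0 - 2.0 * np.cos(np.pi * k / (n + 1))) / h**2    # 1-D eigenvalues
    fh = dstn(f, type=1)                                      # diagonalize the Laplacian
    uh = fh / (lam[:, None] + lam[None, :])
    return idstn(uh, type=1)

def cg(apply_A, b, tol=1e-10, maxit=50):
    """Textbook conjugate gradients for a self-adjoint positive definite operator."""
    x = np.zeros_like(b)
    r = b.copy()
    p = r.copy()
    rs = np.vdot(r, r)
    for it in range(maxit):
        Ap = apply_A(p)
        alpha = rs / np.vdot(p, Ap)
        x += alpha * p
        r -= alpha * Ap
        rs_new = np.vdot(r, r)
        if np.sqrt(rs_new) < tol:
            break
        p = r + (rs_new / rs) * p
        rs = rs_new
    return x, it + 1

# Demo with a stand-in SPD operator A = I + (-Lap_h)^{-1}; the paper's actual
# operator differs, but the dominant per-iteration cost is the same: one fast
# Poisson solve on a rectangle.
n = 127
h = 1.0 / (n + 1)
apply_A = lambda v: v + fast_poisson_solve(v, h)
b = np.random.default_rng(1).normal(size=(n, n))
x, iters = cg(apply_A, b)
print("CG iterations:", iters)   # iteration count stays flat as n grows

Because the eigenvalues of the stand-in operator lie in a fixed interval independent of n, the iteration count printed above does not grow with the mesh, which is the mechanism behind the linear-scaling claim in the abstract.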