Abstract: INTERNODES is a general-purpose method for dealing with non-conforming discretizations of partial differential equations on 2D and 3D regions partitioned into two or several disjoint subdomains. It exploits two intergrid interpolation operators, one for transferring the Dirichlet trace across the interfaces and the other for the Neumann trace. In this paper, in every subdomain the original problem is discretized by either the finite element method (FEM) or the spectral element method (SEM or hp-FEM), using a priori non-matching grids and piecewise polynomials of different degrees; other discretization methods, however, can be used. INTERNODES can also be applied to heterogeneous or multiphysics problems, that is, problems that feature different differential operators inside adjacent subdomains. For instance, in this paper we apply the INTERNODES method to a Stokes-Darcy coupled problem that models the filtration of fluids in porous media. Our results highlight the flexibility of the method as well as its optimal rate of convergence with respect to the grid size and the polynomial degree.
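As a rough illustration of the interpolation ingredient behind the two intergrid operators, the sketch below builds Lagrange interpolation matrices between two a priori non-matching interface grids in 1D. All node counts and the test datum are hypothetical, and this is not the authors' implementation: in the full method the Neumann trace travels through a second, independently constructed operator (together with interface mass matrices), not the transpose of the first; only the interpolation piece is shown.

```python
import numpy as np

def lagrange_matrix(src, tgt):
    """Matrix P such that P @ f(src) gives the values at the target
    nodes tgt of the global Lagrange interpolant built on src."""
    P = np.ones((len(tgt), len(src)))
    for j, xj in enumerate(src):
        for k, xk in enumerate(src):
            if k != j:
                P[:, j] *= (tgt - xk) / (xj - xk)
    return P

# Two a priori non-matching trace grids on the interface [0, 1]
# (hypothetical node counts, chosen only for illustration).
x1 = np.linspace(0.0, 1.0, 5)                 # side 1: 5 equispaced nodes
x2 = np.cos(np.linspace(np.pi, 0.0, 8))       # side 2: 8 Chebyshev nodes
x2 = 0.5 * (x2 + 1.0)                         # mapped to [0, 1]

Pi_12 = lagrange_matrix(x2, x1)   # Dirichlet trace: side 2 -> side 1
Pi_21 = lagrange_matrix(x1, x2)   # second operator:  side 1 -> side 2

g = lambda x: np.sin(np.pi * x)   # a smooth interface datum
print(np.max(np.abs(Pi_12 @ g(x2) - g(x1))))  # trace passed 2 -> 1
print(np.max(np.abs(Pi_21 @ g(x1) - g(x2))))  # trace passed 1 -> 2
```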
Abstract: We propose an adaptive strategy for solving high-frequency Helmholtz scattering problems. The method is based on the uniaxial PML method, which truncates the scattering problem, originally posed on an unbounded domain, to a bounded computational domain. The parameters in the uniaxial PML method are determined by the sharp a posteriori error estimates developed by Chen and Wu [8]. An hp-adaptive finite element strategy is proposed to solve the uniaxial PML equation. Numerical experiments are included that indicate the desirable exponential decay of the error.
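The following minimal 1D finite-difference sketch (hypothetical parameters, far simpler than the paper's hp-adaptive FEM) shows the PML mechanism itself: the coordinate is complex-stretched inside an absorbing layer, so the outgoing wave decays exponentially there and the truncated problem reproduces the free-space solution in the physical region.

```python
import numpy as np

k = 20.0            # wavenumber (moderately high frequency, hypothetical)
L, d = 1.0, 0.5     # physical domain [0, L], PML layer [L, L + d]
n = 1000            # grid intervals
h = (L + d) / n
x = np.linspace(0.0, L + d, n + 1)

def sigma(y):
    """Quadratic absorption profile, active only inside the layer."""
    return 40.0 * np.maximum(y - L, 0.0) ** 2 / d ** 2

s = 1.0 + 1j * sigma(x) / k                          # stretching s(x) at nodes
sm = 1.0 + 1j * sigma(0.5 * (x[:-1] + x[1:])) / k    # s(x) at cell midpoints

# Assemble (1/s) d/dx((1/s) du/dx) + k^2 u = 0 with u(0)=1, u(L+d)=0.
A = np.zeros((n + 1, n + 1), dtype=complex)
b = np.zeros(n + 1, dtype=complex)
for j in range(1, n):
    A[j, j - 1] = 1.0 / (sm[j - 1] * s[j] * h ** 2)
    A[j, j + 1] = 1.0 / (sm[j] * s[j] * h ** 2)
    A[j, j] = -(A[j, j - 1] + A[j, j + 1]) + k ** 2
A[0, 0] = A[n, n] = 1.0
b[0] = 1.0          # Dirichlet trace of the outgoing wave exp(ikx) at x = 0

u = np.linalg.solve(A, b)
phys = x <= L       # compare only in the physical region
err = np.max(np.abs(u[phys] - np.exp(1j * k * x[phys])))
print(f"max error in the physical region: {err:.2e}")
```

Increasing the layer strength or thickness drives the reflection from the artificial boundary down exponentially, which is the decay property the a posteriori estimates exploit when choosing the PML parameters.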
Funding: The work of the third author was supported in part by NSF CAREER Award CCF-0347791.
Abstract: The goal of efficient and robust error control, through local mesh adaptation in the computational solution of partial differential equations, is predicated on the ability to identify in an a posteriori way those localized regions whose refinement will lead to the most significant reductions in the error. The development of a posteriori error estimation schemes and of a refinement infrastructure both facilitate this goal; however, they are incomplete in the sense that they do not provide an answer as to where the maximal impact of refinement may be gained, or which type of refinement, elemental partitioning (h-refinement) or polynomial enrichment (p-refinement), will best lead to that gain. In essence, one also requires knowledge of the sensitivity of the error to both the location and the type of refinement. In this communication we propose the use of adjoint-based sensitivity analysis to discriminate both where and how to refine. We present both an adjoint-based and an algebraic perspective on defining and using sensitivities, and then demonstrate through several one-dimensional model problem experiments the feasibility and benefits of our approach.
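A minimal algebraic sketch of the idea on a hypothetical one-dimensional model problem (not the paper's experiments): for -u'' = f with a point-value goal functional, the discrete adjoint solution weights the local residual of a coarse solution injected into a finer space; the weighted residual yields both an estimate of the goal error and an indicator of where refinement pays off.

```python
import numpy as np

def fd_poisson(n):
    """-u'' on (0,1) with homogeneous Dirichlet data, n interior nodes."""
    h = 1.0 / (n + 1)
    x = np.linspace(h, 1.0 - h, n)
    A = (2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)) / h ** 2
    return A, x

f = lambda x: 100.0 * np.exp(-200.0 * (x - 0.3) ** 2)   # localized load

# Coarse primal solve
Ac, xc = fd_poisson(31)
uc = np.linalg.solve(Ac, f(xc))

# Inject the coarse solution into a finer space and form its residual there
Af, xf = fd_poisson(63)
uf_c = np.interp(xf, np.concatenate(([0.0], xc, [1.0])),
                 np.concatenate(([0.0], uc, [0.0])))
r = f(xf) - Af @ uf_c

# Discrete adjoint for the goal J(u) = u(x0) with x0 near 0.7
i0 = np.argmin(np.abs(xf - 0.7))
rhs = np.zeros_like(xf)
rhs[i0] = 1.0
z = np.linalg.solve(Af.T, rhs)

eta = np.abs(z * r)            # adjoint-weighted local indicators
print("estimated goal error:", z @ r)
print("nodes flagged for refinement:", xf[eta > 0.5 * eta.max()])
```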
Funding: Funded by grant CGL2007-66440-C04-01 from the Ministerio de Educación y Ciencia de España.
Abstract: We compare, in terms of accuracy and CPU time, second-order BDF semi-Lagrangian and Lagrange-Galerkin schemes in combination with high-order finite element methods. The numerical results show that for polynomials of degree 2, semi-Lagrangian schemes are faster than Lagrange-Galerkin schemes for the same number of degrees of freedom; however, for the same level of accuracy both methods require about the same CPU time. For polynomials of degree larger than 2, Lagrange-Galerkin schemes behave better than semi-Lagrangian schemes in terms of both accuracy and CPU time, especially for polynomials of degree 8 or larger. We have also tested the parallelization of these schemes; the speedup obtained is quasi-optimal even with more than 100 processors.
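A minimal sketch of a second-order BDF semi-Lagrangian step for constant-coefficient 1D advection (periodic grid, cubic-spline interpolation; all parameters hypothetical and far simpler than the paper's high-order finite element setting): each node value is updated from interpolated values at the one-step and two-step departure points of the characteristic through that node.

```python
import numpy as np
from scipy.interpolate import CubicSpline

a, T = 1.0, 1.0                 # advection speed, final time
n, nsteps = 256, 500            # grid points, time steps (hypothetical)
dt = T / nsteps
x = np.arange(n) / n            # periodic grid on [0, 1)

def interp_periodic(v, xq):
    """Periodic cubic-spline interpolation of grid values v at points xq."""
    spline = CubicSpline(np.append(x, 1.0), np.append(v, v[0]),
                         bc_type='periodic')
    return spline(np.mod(xq, 1.0))

u_prev = np.exp(-200.0 * (x - 0.5) ** 2)      # u^0: a smooth pulse
u = interp_periodic(u_prev, x - a * dt)       # u^1: one exact SL step

for _ in range(nsteps - 1):
    # BDF2 along characteristics:
    # (3 u^{n+1}(x) - 4 u^n(x - a*dt) + u^{n-1}(x - 2*a*dt)) / (2*dt) = 0
    u, u_prev = (4.0 * interp_periodic(u, x - a * dt)
                 - interp_periodic(u_prev, x - 2.0 * a * dt)) / 3.0, u

# After time T = 1 the pulse has made one full revolution of the torus
exact = np.exp(-200.0 * (np.mod(x - a * T, 1.0) - 0.5) ** 2)
print("max error:", np.max(np.abs(u - exact)))
```

Because the characteristics are tracked exactly here, the remaining error comes from the interpolation at the departure points, which is why the interpolation order (the polynomial degree in the paper's finite element setting) governs the accuracy of semi-Lagrangian schemes.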