We continue to consider one of the cybernetic methods in biology related to the study of DNA chains. Specifically, we consider the problem of reconstructing the distance matrix for DNA chains. Such a matrix can be formed on the basis of any of the possible algorithms for determining the distance between DNA chains and for any specific object of study. At the same time, for example, practical programming results show that on an average modern computer it takes about a day to build such a 30 × 30 matrix for mtDNAs using the Needleman-Wunsch algorithm; therefore, for a 300 × 300 matrix, about 3 months of continuous computer operation would be expected. Thus, even for a relatively small number of species, calculating the distance matrix on conventional computers is hardly feasible, and supercomputers are usually not available. Therefore, we first published our variants of algorithms for calculating the distance between two DNA chains, and now we publish algorithms for restoring partially filled matrices, i.e., for the inverse problem of matrix processing. Previously we used the branch-and-bound method, but in this paper we propose another new algorithm for restoring the distance matrix for DNA chains. Our recent work has shown that an even greater improvement in the quality of the algorithm can often be achieved without improving the auxiliary heuristics of the branch-and-bound method. Thus, we improve only the algorithms that form the greedy function of this method.
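For orientation, here is a minimal sketch of the Needleman-Wunsch global alignment score; the scoring parameters are illustrative assumptions, not the settings used in the experiments above. Its O(nm) time and memory in the chain lengths is what makes full pairwise distance matrices so expensive.

```python
# Minimal Needleman-Wunsch global alignment score (dynamic programming).
# Scoring parameters are illustrative; the paper's actual settings may differ.
def needleman_wunsch(a: str, b: str, match=1, mismatch=-1, gap=-2) -> int:
    n, m = len(a), len(b)
    dp = [[0] * (m + 1) for _ in range(n + 1)]  # dp[i][j]: best score for a[:i], b[:j]
    for i in range(1, n + 1):
        dp[i][0] = i * gap
    for j in range(1, m + 1):
        dp[0][j] = j * gap
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            s = match if a[i - 1] == b[j - 1] else mismatch
            dp[i][j] = max(dp[i - 1][j - 1] + s,   # align a[i-1] with b[j-1]
                           dp[i - 1][j] + gap,     # gap in b
                           dp[i][j - 1] + gap)     # gap in a
    return dp[n][m]

print(needleman_wunsch("GATTACA", "GCATGCT"))
```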
We investigate the decomposition of codes and finite languages. A prime decomposition is a decomposition of a code or language into a concatenation of nontrivial prime codes or languages. A code is prime if it cannot be decomposed into a concatenation of at least two nontrivial codes; the same holds for languages. In the paper, a linear-time algorithm is designed which finds the prime decomposition. If a code or finite language is given by its minimal deterministic automaton, then from the point of view of abstract algebra and graph theory this automaton has special properties. The study was conducted using the GAP system for computational discrete algebra.
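As a toy illustration (not the linear-time algorithm of the paper), a finite language L can be tested for decomposability by brute force: for a candidate left factor A drawn from the prefixes of L, the right factor is forced to be the intersection of quotients B = ⋂_{a∈A} a⁻¹L, and it remains to check A·B = L.

```python
from itertools import combinations

def quotient(w, lang):
    """Left quotient w^{-1}L = {v : wv in L}."""
    return {v[len(w):] for v in lang if v.startswith(w)}

def decompose(lang):
    """Brute-force search for lang = A . B with nontrivial A and B.
    Exponential; for tiny examples only (the paper's algorithm is linear-time)."""
    prefixes = {w[:i] for w in lang for i in range(len(w) + 1)}
    candidates = sorted(prefixes - {""})          # exclude the trivial factor {""}
    for r in range(1, len(candidates) + 1):
        for A in combinations(candidates, r):
            # If lang = A.B, then B lies inside every quotient a^{-1}lang.
            B = set.intersection(*(quotient(a, lang) for a in A))
            if B and B != {""} and {a + b for a in A for b in B} == lang:
                return set(A), B
    return None                                   # no nontrivial decomposition found

print(decompose({"aa", "ab", "ba", "bb"}))        # ({'a', 'b'}, {'a', 'b'})
```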
The paper describes some implementation aspects of an algorithm for the approximate solution of the traveling salesman problem based on the construction of convex closed contours on the initial set of points (“cities”) and their subsequent combination into a closed path (the so-called contour algorithm, or “onion husk” algorithm). A number of heuristics related to the different stages of the algorithm are considered, and several variants of the algorithm based on these heuristics are analyzed. Sets of randomly generated points of different sizes (from 4 to 90 and from 500 to 10,000) were used to test the algorithms. The numerical results obtained are compared with the results of two well-known combinatorial optimization algorithms, namely the branch-and-bound method and the simulated annealing algorithm.
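One possible skeleton of the contour stage is sketched below: SciPy's ConvexHull peels the nested contours, and a simple cheapest-insertion pass (one conceivable merging heuristic, not necessarily among those analyzed in the paper) combines them into a single closed path.

```python
import numpy as np
from scipy.spatial import ConvexHull

def hull_layers(points):
    """Peel the point set into nested convex contours (the 'onion husk')."""
    pts = np.asarray(points, dtype=float)
    idx = np.arange(len(pts))
    layers = []
    while len(idx) >= 3:
        hull = ConvexHull(pts[idx])
        layers.append(idx[hull.vertices])     # contour, as indices into pts
        idx = np.delete(idx, hull.vertices)   # remove hull points and repeat
    if len(idx):
        layers.append(idx)                    # one or two leftover points
    return layers

def merge_layers(pts, layers):
    """Merge inner contours into the outer tour by cheapest insertion."""
    tour = list(layers[0])
    for layer in layers[1:]:
        for p in layer:
            best, best_cost = 1, float("inf")
            for i in range(len(tour)):
                a, b = tour[i], tour[(i + 1) % len(tour)]
                cost = (np.linalg.norm(pts[a] - pts[p])
                        + np.linalg.norm(pts[p] - pts[b])
                        - np.linalg.norm(pts[a] - pts[b]))
                if cost < best_cost:
                    best, best_cost = i + 1, cost
            tour.insert(best, p)
    return tour

pts = np.random.rand(30, 2)
print(merge_layers(pts, hull_layers(pts)))
```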
To date, it is unknown whether a complete graph invariant can be computed in polynomial time, so fast algorithms for checking non-isomorphism, including heuristic algorithms, are important; for successful implementations of such heuristics, both modifications of previously described graph invariants and descriptions of new invariants remain relevant. Many of the described invariants make it possible to distinguish a large number of graphs in real computer-program time. In this paper, we propose an invariant for a special kind of directed graphs, namely tournaments. Tournaments are interesting to us because, for a fixed order of vertices, the number of different tournaments is exactly equal to the number of undirected graphs with the same fixed order of vertices. The proposed invariant iterates over all sub-tournaments induced by subsets of vertices of a given tournament, with the arcs inherited from it. For each such subset tournament, the places of the participants are calculated in the usual way, and these places are summed to obtain the final point totals of the vertices; these totals form the proposed invariant. As we expected, calculations of the new invariant showed that it does not coincide with the most natural invariant for tournaments, in which the number of points is simply calculated for each participant. So far we have conducted a small number of computational experiments, and the minimum value of the pairwise correlation between the sequences representing these two invariants that we found was attained at dimension 15.
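A sketch of our reading of this definition follows; the place assignment for tied scores is an assumption here (tied participants share the best available place).

```python
from itertools import combinations

def subset_place_invariant(adj):
    """adj[i][j] == 1 iff vertex i beats vertex j in the tournament.
    For every induced sub-tournament, rank its vertices by number of wins
    (place 1 = most wins; equal scores share a place) and accumulate each
    vertex's place into its total. The sorted totals form the invariant."""
    n = len(adj)
    total = [0] * n
    for k in range(2, n + 1):
        for sub in combinations(range(n), k):
            wins = {v: sum(adj[v][u] for u in sub) for v in sub}
            scores = sorted(set(wins.values()), reverse=True)
            for v in sub:
                total[v] += scores.index(wins[v]) + 1   # v's place
    return sorted(total)   # sorting makes the result relabeling-invariant

# The 3-cycle tournament 0 -> 1 -> 2 -> 0: all vertices are equivalent.
adj = [[0, 1, 0],
       [0, 0, 1],
       [1, 0, 0]]
print(subset_place_invariant(adj))   # [4, 4, 4]
```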
It has been shown that the first principle of thermodynamics follows from the conservation laws for energy and linear momentum, and that the second principle of thermodynamics follows from the first under realization of the integrating factor (namely, temperature) and is itself a conservation law. The significance of the first principle of thermodynamics consists in the fact that it specifies the thermodynamic system state, which depends on the interaction between conservation laws and is non-equilibrium due to the non-commutativity of conservation laws. The realization of the second principle of thermodynamics points to a transition of the thermodynamic system state into a locally equilibrium state. Phase transitions are examples of such transitions.
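In standard notation, the two statements amount to energy conservation and to temperature serving as the integrating factor that turns the inexact heat differential into the exact entropy differential:

```latex
\delta Q = dU + \delta W, \qquad dS = \frac{\delta Q_{\mathrm{rev}}}{T},
```

where the first equality is the first principle (energy conservation) and the second shows the temperature T acting as the integrating factor for the heat differential.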
This paper presents the results of numerical simulation of plasma equilibrium and stability in the MEPHIST-0 tokamak with the SIEMNED software, and a comparison of simulation results with experiments. The determined characteristics of the vacuum chamber show that it significantly affects the entire discharge. For various scenarios of the inductor operation, experimental data were compared with the simulated currents and magnetic fields induced in the chamber. For steady-state tokamak operation, a numerical study of equilibrium plasma configurations was carried out depending on the currents in the poloidal magnetic field coils and the plasma current. The vertical plasma instability was investigated, and the limiting values of plasma ellipticity preventing this instability were numerically determined. Numerical simulations show that plasma equilibrium is supported by induced currents. It was also shown numerically that magnetic configurations with a ‘zero of higher order’ were obtained before the plasma shot, suggesting consistency between the simulation results and observations. Funding: the Ministry of Education and Science of the Russian Federation as part of the program of the Mathematical Center for Fundamental and Applied Mathematics (No. 075-15-2019-162); partly RFBR (No. 20-07-00391).
The aim is to study the set of subsets of grids of the Waterloo language from the point of view of abstract algebra and graph theory. The study was conducted using the NFALib library for working with transition graphs of nondeterministic finite automata, implemented by one of the authors in C#, as well as statistical methods for analyzing algorithms. The results are regularities obtained when considering semilattices on the set of subsets of grids of the Waterloo language. It follows from the results obtained that the minimum covering automaton equivalent to the Waterloo automaton can be obtained by adding one additional grid to the minimum covering set of grids.
Based on the standard definition of the product (concatenation), the natural non-negative degree of a language is introduced. Root extraction is the operation inverse to it, and it can be defined in several different ways. Despite the simplicity of the formulation of the root extraction problem, the authors could not find any description of it in the literature (or on the Internet), including even its formulation. Most of the material in this article is devoted to the simplest version of the formulation: the root of the 2nd degree over a 1-letter alphabet, but many of the propositions of the article generalize to more complex cases. Apparently, before a polynomial algorithm can be described for at least one of the stated root extraction problems, it is first necessary to analyze this special case in detail: that is, either describe the necessary polynomial algorithm or, conversely, show that the problem belongs to the class of NP-complete problems. Thus, in this article we do not propose a polynomial algorithm for the problems under consideration; however, the models described here should help in constructing appropriate heuristic algorithms for their solution. A detailed description of the possible further application of such heuristic algorithms is beyond the scope of this article.
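Over a 1-letter alphabet a language is identified with its set of word lengths, concatenation becomes the sumset, and the 2nd-degree root problem asks for a set S with S + S = T. A brute-force sketch (deliberately exponential, in line with the article's remark that no polynomial algorithm is proposed):

```python
from itertools import combinations

def sumset(s):
    return {a + b for a in s for b in s}

def unary_sqrt(t):
    """Find S with S + S == T, where a language over {a} is encoded as the
    set of its word lengths. Brute force over subsets of feasible lengths."""
    lo, hi = min(t), max(t)
    if lo % 2 or hi % 2:
        return None                    # since 2*min(S) = min(T), 2*max(S) = max(T)
    must = {lo // 2, hi // 2}          # forced elements of any root
    pool = list(range(lo // 2 + 1, hi // 2))
    for r in range(len(pool) + 1):
        for extra in combinations(pool, r):
            s = must | set(extra)
            if sumset(s) == t:
                return s
    return None

# T = {2, 3, 4} encodes {a^2, a^3, a^4} = ({a, a^2})^2, so S = {1, 2}.
print(unary_sqrt({2, 3, 4}))
```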
The present paper continues the topic of our recent paper in the same journal, aiming to show the role of structural stability in financial modeling. In the context of financial market modeling, structural stability means that a specific “no-arbitrage” property is unaffected by small (with respect to the Pompeiu–Hausdorff metric) perturbations of the model’s dynamics. Based on our economic interpretation, we formulate a new requirement concerning “no-arbitrage” properties, which we call the “uncertainty principle”. In the case of no trading constraints, this principle is equivalent to structural stability. We demonstrate that structural stability is essential for a correct model approximation (which is used in our numerical method for superhedging price computation). We also show that structural stability is important for the continuity of superhedging prices and discuss sufficient conditions for this continuity.
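For reference, the Pompeiu–Hausdorff distance between nonempty compact sets A and B, with respect to which the perturbations of the model’s dynamics are measured:

```latex
d_{PH}(A, B) = \max\Bigl\{ \sup_{a \in A} \inf_{b \in B} \|a - b\|,\;
                           \sup_{b \in B} \inf_{a \in A} \|a - b\| \Bigr\}.
```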
Laser ektacytometry is a technique widely used for measuring the deformability of red blood cells (erythrocytes) in blood samples in vitro. In an ektacytometer, a flow of highly diluted suspension of erythrocytes under variable shear stress conditions is illuminated with a laser beam to obtain a diffraction pattern. The diffraction pattern provides information about the shapes (shear-induced elongations) of the cells under investigation. This paper is dedicated to developing the technique of laser ektacytometry so that it would enable one to measure the distribution of the erythrocytes in deformability. We discuss the problem of calibration of the laser ektacytometer and test a novel data processing algorithm allowing one to determine the parameters of the distribution of erythrocyte deformability. Experimentally, we examined 12 specimens of rat blood under the action of 4 shear stresses. Analysis of the data shows that, under the conditions of a limited range of digitizing the diffraction patterns, the measurement errors of our method for the mean deformability, the deformability scatter, and the skewness of the erythrocyte distribution in deformability are 15%, 20%, and 20%, respectively. Funding: RFBR grants No. 13-02-01372 and No. 12-02-01329.
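The three reported quantities are the first moments of the deformability distribution. Schematically (on a hypothetical per-cell sample; the actual estimates are extracted from the diffraction patterns themselves):

```python
import numpy as np
from scipy.stats import skew

# Hypothetical per-cell deformability sample; the real pipeline fits
# the diffraction pattern rather than observing cells one by one.
d = np.random.default_rng(0).gamma(shape=5.0, scale=0.08, size=1000)

mean_deformability = d.mean()    # mean deformability
scatter = d.std(ddof=1)          # deformability scatter
asymmetry = skew(d)              # skewness of the distribution

print(mean_deformability, scatter, asymmetry)
```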
We consider a deterministic model of market evolution with trading constraints and apply a game-theoretic approach to the superhedging problem. We obtain sufficient conditions for the game equilibrium and, under these conditions, prove the existence of a Borel-measurable transition kernel describing the dependence on price prehistory of the most unfavourable mixed strategy of the market. Funding: Moscow Center of Fundamental and Applied Mathematics (No. 75-15-2022-284).
With the trend toward multiple energies and flexible demand in power systems, binary variables appear in system-wide constraints, which are the foundation of marginal pricing as currently used in markets. An appropriate pricing method incentivizes compliance of market participants; otherwise, compliance can be incentivized by paying discriminatory uplift payments, which jeopardize the transparency of markets. This paper proposes two theorems to examine whether the binary variables brought by multiple energies and flexible demand will impact compliance under marginal pricing. The first theorem gives sufficient conditions under which marginal pricing with fixed binary variables incentivizes compliance, while the second theorem gives sufficient conditions under which uplift payments are required. To improve transparency by reducing uplift payments in cases covered by the second theorem, this paper further proposes a pricing method that combines 1) designed constraints to price binary variables in system-wide constraints, and 2) convex hull pricing to price binary variables in private constraints. The effectiveness of the proposed theorems and pricing method is verified on an electricity-gas case (consisting of the IEEE 30-bus system and the NGS 10-node system) and on the IEEE 118-bus test system. Funding: the National Natural Science Foundation of China (52177072).
The Ahmed model is a standard bluff body used to study the flow behavior around an automobile. An important issue when investigating turbulent flow fields is the large computational load of accurate prediction approaches, such as large eddy simulation. In this paper, we present a powerful parallel solver based on a domain decomposition method to efficiently utilize existing supercomputer resources. The 3D unsteady incompressible Navier–Stokes equations with a subgrid-scale (SGS) fluid model are discretized on a purely unstructured tetrahedral grid by a stable P1-P1 finite element method in space, while an implicit second-order backward differentiation formula is employed for the time discretization. We then solve the nonlinear algebraic system by means of the Newton–Krylov–Schwarz method, imposing a restricted additive Schwarz (RAS) right preconditioner in the parallel setting. We validate the proposed method by comparing the flow field, including the velocity profiles and flow structures, with experimental investigations, and we show the parallel efficiency and scalability of the solver with up to 8192 processors. Funding: the work of Yan was partially supported by the Natural Science Foundation of China under Grant No. 11901559 and the Shenzhen Sci-Tech Fund Nos. JCYJ20180507182506416 and JCYJ20200109115422828; the work of Li and Wang was partially supported by the NSF of China No. 11971221, Guangdong NSF Major Fund No. 2021ZDZX1001, the Shenzhen Sci-Tech Fund Nos. RCJC20200714114556020 and JCYJ20190809150413261, and the Guangdong Provincial Key Laboratory of Computational Science and Material Design No. 2019B030301001.
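The time discretization mentioned above is the standard second-order backward differentiation formula: with time step Δt and F denoting the spatially discretized operator,

```latex
\frac{3 u^{n+1} - 4 u^{n} + u^{n-1}}{2\,\Delta t} = F\bigl(u^{n+1}\bigr),
```

which is implicit, so each step requires solving the nonlinear algebraic system addressed by the Newton–Krylov–Schwarz method.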
This study addresses the parameter identification problem in a system of time-dependent quasi-linear partial differential equations (PDEs). Using the integral equation method, we prove the uniqueness of the inverse problem in nonlinear PDEs. Moreover, using the method of successive approximations, we develop a novel iterative algorithm to estimate sorption isotherms. The stability of the algorithm is proven under both a priori and a posteriori stopping rules. A numerical example is given to show the efficiency and robustness of the proposed new approach. Funding: the National Natural Science Foundation of China (No. 12171036), Beijing Natural Science Foundation (No. Z210001), NSF of China No. 11971221, Guangdong NSF Major Fund No. 2021ZDZX1001, Shenzhen Sci-Tech Fund Nos. RCJC20200714114556020, JCYJ20200109115422828 and JCYJ20190809150413261, National Center for Applied Mathematics Shenzhen, and SUSTech International Center for Mathematics.
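In its generic fixed-point form, the method of successive approximations with an a posteriori stopping rule looks as follows; the map phi and the tolerance are placeholders, not the paper's sorption-isotherm operator.

```python
import math

def successive_approximations(phi, x0, tol=1e-10, max_iter=1000):
    """Iterate x_{k+1} = phi(x_k) until the a posteriori criterion
    |x_{k+1} - x_k| < tol fires (or max_iter is exhausted)."""
    x = x0
    for _ in range(max_iter):
        x_next = phi(x)
        if abs(x_next - x) < tol:    # a posteriori stopping rule
            return x_next
        x = x_next
    raise RuntimeError("no convergence within max_iter iterations")

# Toy contraction: the fixed point of cos(x) near 0.739.
print(successive_approximations(math.cos, 1.0))
```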
Systemic risk research is gaining traction across diverse disciplinary research communities, but has not yet been strongly linked to traditional, well-established risk analysis research. This is due in part to the fact that systemic risk research focuses on the connections between elements within a system, while risk analysis research focuses more on the individual risk to single elements. We therefore investigate how current systemic risk research can be related to traditional risk analysis approaches from a conceptual as well as an empirical point of view. Based on Sklar's Theorem, which provides a one-to-one relationship between multivariate distributions and copulas, we suggest a reframing of the concept of copulas based on a network perspective. This provides a promising way forward for integrating individual risk (in the form of probability distributions) and systemic risk (in the form of copulas describing the dependencies among such distributions) across research domains. Copulas can link continuous node states, characterizing individual risks, with a gradual dependency of the coupling strength between nodes on their states, characterizing systemic risk. When copulas are used to describe such refined coupling between nodes, they can provide a more accurate quantification of a system's network structure. This enables more realistic systemic risk assessments and is especially useful when extreme events (which occur with low probability but have high impact) affect a system's nodes. In this way, copulas can be informative in measuring and quantifying changes in systemic risk and therefore helpful in its management. We discuss the advantages and limitations of copulas for integrative risk analyses from the perspectives of modeling, measurement, and management.
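Sklar's Theorem, on which the suggested reframing rests, states that every d-variate distribution H with margins F_1, ..., F_d factors through a copula C (uniquely when the margins are continuous):

```latex
H(x_1, \dots, x_d) = C\bigl(F_1(x_1), \dots, F_d(x_d)\bigr).
```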
We present an extension of the flux globalization based well-balanced path-conservative central-upwind scheme to the one- and two-dimensional thermal rotating shallow water equations. The scheme is well-balanced in the sense that it can exactly preserve a variety of physically relevant steady states. In the one-dimensional case, it can preserve different “lake-at-rest” equilibria, thermo-geostrophic equilibria, as well as general moving-water steady states. In the two-dimensional case, preserving general moving-water steady states is difficult, and to the best of our knowledge none of the existing schemes can achieve this ultimate goal. The proposed scheme can exactly preserve the x- and y-directional jets in the rotational frame as well as certain genuinely two-dimensional equilibria. Furthermore, our approach employs a path-conservative technique for discretizing nonconservative product terms, which are incorporated into the global fluxes. This allows the developed scheme to exactly preserve some of the discontinuous steady states as well. We provide a number of numerical examples to demonstrate the advantages of the proposed scheme over some alternative finite-volume methods. Funding: China Postdoctoral Science Foundation (No. 2022M721481); the work of A. Kurganov was supported in part by NSFC grant 12171226 and by the fund of the Guangdong Provincial Key Laboratory of Computational Science and Material Design (No. 2019B030301001); the work of Y. Liu was supported in part by SNSF grants 200020204917 and FZEB-0-166980.
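For instance, the simplest of the one-dimensional steady states, the classical “lake-at-rest” equilibrium over bottom topography B(x), reads

```latex
u \equiv 0, \qquad w := h + B \equiv \mathrm{const},
```

where h is the water depth and w the free-surface elevation; the thermal variants of this equilibrium additionally involve the buoyancy variable.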