In this paper, the optimal control problem of parabolic integro-differential equations is solved by a gradient recovery based two-grid finite element method. Piecewise linear functions are used to approximate the state and co-state variables, and piecewise constant functions are used to approximate the control variables. Generally, the optimality conditions for the problem are solved iteratively until the control variable reaches the error tolerance. In order to compute all the variables individually and in parallel, we introduce a gradient recovery based two-grid method. First, we solve the small-scale optimal control problem on the coarse grid. Next, we use the gradient recovery technique to recover the gradients of the state and co-state variables. Finally, using the recovered variables, we solve the large-scale optimal control problem for all variables independently. Moreover, we derive a priori error estimates for the proposed scheme and use a numerical example to validate the theoretical results.
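As a minimal illustration of the gradient-recovery (averaging) idea, the sketch below recovers nodal gradients of a piecewise linear function on a one-dimensional mesh by length-weighted averaging of the element gradients; the 1-D setting, mesh, and sample function are assumptions of this sketch, not the paper's parabolic optimal control setting.

```python
import numpy as np

# Minimal sketch of the gradient recovery (averaging) step on a 1-D mesh:
# the recovered gradient at an interior node is the length-weighted average
# of the constant element gradients on the two adjacent elements.

def recover_gradient(x, u):
    """x: sorted node coordinates; u: nodal values of a piecewise linear function."""
    h = np.diff(x)                   # element lengths
    du = np.diff(u) / h              # constant gradient on each element
    g = np.empty_like(u)
    g[0], g[-1] = du[0], du[-1]      # one-sided values at the boundary nodes
    g[1:-1] = (h[:-1] * du[:-1] + h[1:] * du[1:]) / (h[:-1] + h[1:])
    return g

x = np.linspace(0.0, 1.0, 11)
u = np.sin(np.pi * x)                               # sample smooth "state"
G = recover_gradient(x, u)
print("max recovery error:", np.max(np.abs(G - np.pi * np.cos(np.pi * x))))
```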
Climate change is a reality. The burning of fossil fuels from oil, natural gas and coal is responsible for much of the pollution and for the increase in the planet's average temperature, which has intensified discussions on the subject, given the climate-related emergencies. An energy transition to clean and renewable sources is necessary and urgent, but it will not be quick. In this sense, increasing the efficiency of oil extraction from existing sources is crucial to avoid waste and the drilling of new wells. The purpose of this work was to add diffusive and dispersive terms to the Buckley-Leverett equation in order to incorporate extra phenomena in the temporal evolution of the water-oil and oil-water transitions in the pipeline. For this, the modified Buckley-Leverett equation was discretized via weighted essentially non-oscillatory (WENO) schemes, coupled with a three-stage Runge-Kutta method and a fourth-order centered finite difference scheme. Then, computational simulations were performed, and the results showed that new features emerge in the transitions when compared to classical simulations. For instance, the dispersive term inhibits the diffusive term, adding oscillations, which indicates that the absorption of the fluid by the porous medium occurs in a non-homogeneous manner. Therefore, based on research such as this, decisions can be made regarding the replacement of the porous medium or the insertion of new components to delay that replacement.
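For orientation, the sketch below implements the classical Buckley-Leverett fractional-flow function and a three-stage SSP Runge-Kutta step; to stay short it uses first-order upwinding in place of the paper's WENO reconstruction and omits the added diffusive and dispersive terms, and the mobility ratio and initial profile are illustrative assumptions.

```python
import numpy as np

# Sketch of the classical Buckley-Leverett flux and a three-stage SSP
# Runge-Kutta update.  Assumptions: first-order upwinding replaces the
# WENO reconstruction, the mobility ratio is M = 1, and the diffusive and
# dispersive terms of the modified equation are omitted.

def flux(s, M=1.0):
    """Fractional flow of water, f(s) = s^2 / (s^2 + M (1 - s)^2)."""
    return s**2 / (s**2 + M * (1.0 - s)**2)

def rhs(s, dx):
    f = flux(s)
    dfdx = np.zeros_like(s)
    dfdx[1:] = (f[1:] - f[:-1]) / dx      # upwind difference (flow to the right)
    return -dfdx

def ssp_rk3_step(s, dt, dx):
    s1 = s + dt * rhs(s, dx)
    s2 = 0.75 * s + 0.25 * (s1 + dt * rhs(s1, dx))
    return s / 3.0 + 2.0 / 3.0 * (s2 + dt * rhs(s2, dx))

n = 200
dx = 1.0 / n
dt = 0.2 * dx                             # CFL-safe step for max |f'| = 2
x = np.linspace(0.0, 1.0, n)
s = np.where(x < 0.1, 1.0, 0.0)           # water injected at the left end
for _ in range(100):
    s = ssp_rk3_step(s, dt, dx)
print("saturation range after 100 steps:", s.min(), s.max())
```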
The main purpose of this paper is to generalize the effect of two-phased demand and variable deterioration within the EOQ (Economic Order Quantity) framework. The rate of deterioration is a linear function of time. The two-phased demand function is constant for a certain period and a quadratic function of time for the remaining part of the cycle time. Neither shortages nor partial backlogging is allowed to occur. Mathematical expressions are derived for determining the optimal cycle time, order quantity and total cost function. An easy-to-use working procedure is provided to calculate the above quantities. A couple of numerical examples are cited to explain the theoretical results, and a sensitivity analysis of some selected examples is carried out.
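As a point of reference, the sketch below computes the classical EOQ optimum, which the paper's model generalizes, by a simple grid search and compares it with the closed-form value sqrt(2DK/h); the two-phased demand and deterioration terms are not included, and the parameter values are illustrative assumptions.

```python
import numpy as np

# Reference sketch of the classical EOQ baseline that the paper's model
# generalizes (two-phased demand and time-varying deterioration are NOT
# included here).  Parameter values are illustrative assumptions.

def total_cost(Q, D, K, h):
    """Cost per unit time: ordering cost D*K/Q plus holding cost h*Q/2."""
    return D * K / Q + h * Q / 2.0

D, K, h = 1200.0, 50.0, 2.5            # annual demand, order cost, holding cost
Q = np.linspace(10.0, 1000.0, 10000)   # grid search over order quantities
q_star = Q[np.argmin(total_cost(Q, D, K, h))]
print(q_star, np.sqrt(2 * D * K / h))  # grid optimum vs closed form ~219.09
```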
It is well established that a Nash equilibrium exists within the framework of mixed strategies in strategic-form non-cooperative games. However, finding a Nash equilibrium generally belongs to the class of problems known as PPAD (Polynomial Parity Argument on Directed graphs), for which no polynomial-time solution methods are known, even for two-player games. This paper demonstrates that in fixed-sum two-player games (including zero-sum games), the Nash equilibria form a convex set and share a unique expected payoff. Furthermore, these equilibria are Pareto optimal. Additionally, it is shown that a Nash equilibrium of a fixed-sum two-player game can theoretically be found in polynomial time using the primal-dual interior-point method, a solution method for linear programming.
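To make the linear-programming connection concrete, the sketch below computes the row player's optimal mixed strategy and the game value of a small zero-sum game with SciPy's generic LP solver (HiGHS rather than a purpose-built interior-point implementation); the payoff matrix is an illustrative assumption.

```python
import numpy as np
from scipy.optimize import linprog

# Sketch: solving a zero-sum two-player game by linear programming.
# The payoff matrix (payoffs to the row player) is an illustrative
# assumption: a scaled matching-pennies game.

A = np.array([[1.0, -1.0],
              [-1.0, 1.0]])
m, n = A.shape

# Variables: mixed strategy x (length m) and game value v.
# maximize v  <=>  minimize -v,  subject to  A^T x >= v,  sum(x) = 1,  x >= 0.
c = np.concatenate([np.zeros(m), [-1.0]])
A_ub = np.hstack([-A.T, np.ones((n, 1))])   # v - (A^T x)_j <= 0 for each column j
b_ub = np.zeros(n)
A_eq = np.concatenate([np.ones(m), [0.0]]).reshape(1, -1)
b_eq = np.array([1.0])
bounds = [(0.0, None)] * m + [(None, None)]

res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq, bounds=bounds,
              method="highs")
print("row strategy:", res.x[:m], "game value:", res.x[m])
```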
Our Solar System contains eight planets, and all but the inner two planets, Mercury and Venus, have natural satellites. A satellite hosted by a given planet is well protected from the gravitational perturbation of much heavier planets, such as Jupiter and Saturn, if the natural satellite lies deep inside the host planet's Hill sphere. Each planet has a Hill radius a_H and a mean planet radius R_P, and we consider the ratio R_1 = R_P/a_H. For very low R_1 (less than 0.006), the approximation of the CRTBP (circular restricted three-body problem) by a two-body problem is valid, and the planet has a spacious Hill lobe in which to capture a satellite and retain it. This ensures a high probability of capture of a natural satellite by the given planet, and the Sun's perturbation on the planet-satellite binary can be neglected. This is the case for Earth, Mars, Jupiter, Saturn, Neptune and Uranus. But Mercury and Venus have R_1 = R_P/a_H = 0.01 and 5.9862 × 10^-3, respectively; hence they have no satellites. There is a limit to the size of the captured body: it must be a much smaller body, both in size and in mass. The quantitative limit is the subject of an independent study.
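The ratios quoted for Mercury and Venus can be reproduced directly from the standard Hill-radius formula a_H = a (m / 3 M_Sun)^(1/3); the short check below uses rounded textbook values for the orbital radii, masses and planetary radii, which are assumptions of this sketch.

```python
# Worked check of the ratio R_1 = R_P / a_H quoted in the abstract, using the
# standard Hill-radius formula a_H = a * (m / (3 * M_sun))**(1/3).
# The orbital radii, masses and planetary radii are rounded textbook values
# (an assumption of this sketch).

M_SUN = 1.989e30  # kg

planets = {
    #           a [m]      m [kg]    R_P [m]
    "Mercury": (57.91e9,  3.301e23, 2.4397e6),
    "Venus":   (108.21e9, 4.867e24, 6.0518e6),
}

for name, (a, m, r_p) in planets.items():
    a_hill = a * (m / (3.0 * M_SUN)) ** (1.0 / 3.0)
    print(f"{name}: a_H = {a_hill / 1e3:,.0f} km,  R_1 = R_P/a_H = {r_p / a_hill:.4f}")
```

Running this gives R_1 of roughly 0.011 for Mercury and 0.0060 for Venus, consistent with the values cited in the abstract.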
The two-universe multi-granularity fuzzy rough set model is an effective tool for handling uncertainty problems between two domains with the help of binary fuzzy relations. This article applies the idea of neighborhood rough sets to two-universe multi-granularity fuzzy rough sets and discusses the resulting two-universe multi-granularity neighborhood fuzzy rough set model. Firstly, the upper and lower approximation operators of the two-universe multi-granularity neighborhood fuzzy rough set model are defined. Secondly, the properties of the upper and lower approximation operators are discussed. Finally, the properties of the two-universe multi-granularity neighborhood fuzzy rough set model are verified through case studies.
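For context, the sketch below evaluates the classical single-granulation fuzzy rough lower and upper approximation operators over two universes, the building blocks that multi-granularity models aggregate; the fuzzy relation, the fuzzy set, and the neighborhood-free form are illustrative assumptions rather than the paper's exact definitions.

```python
import numpy as np

# Classical single-granulation fuzzy rough approximations over two universes.
# R is a fuzzy relation on U x V and A a fuzzy set on V; all membership
# values below are illustrative assumptions.

def lower_approx(R, A):
    """lower(A)(x) = min_y max(1 - R(x, y), A(y))"""
    return np.min(np.maximum(1.0 - R, A[None, :]), axis=1)

def upper_approx(R, A):
    """upper(A)(x) = max_y min(R(x, y), A(y))"""
    return np.max(np.minimum(R, A[None, :]), axis=1)

R = np.array([[0.9, 0.2, 0.4],     # |U| = 2 objects, |V| = 3 objects
              [0.1, 0.8, 0.6]])
A = np.array([0.7, 0.3, 1.0])      # fuzzy set on V

print("lower:", lower_approx(R, A))
print("upper:", upper_approx(R, A))
```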
In order to improve the fitting accuracy of college students' test scores, this paper proposes a two-component mixed generalized normal distribution, uses the maximum likelihood estimation method and the Expectation Conditional Maximization (ECM) algorithm to estimate the parameters and conduct numerical simulations, and performs a fitting analysis on the test scores of Linear Algebra and Advanced Mathematics at F University. The empirical results show that the two-component mixed generalized normal distribution fits college students' test data better than the commonly used two-component mixed normal distribution and has good application value.
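The sketch below writes down the two-component generalized normal mixture density and the E-step responsibilities that an EM/ECM-type fit would iterate on; the component parameters and the simulated scores are illustrative assumptions, not estimates from the paper.

```python
import numpy as np
from scipy.stats import gennorm

# Two-component generalized normal mixture density and E-step responsibilities.
# Parameter values and the simulated "test scores" are illustrative assumptions.

def mixture_pdf(x, w, params):
    """w: component weights; params: list of (beta, mu, alpha) triples."""
    return sum(wk * gennorm.pdf(x, b, loc=m, scale=a)
               for wk, (b, m, a) in zip(w, params))

def responsibilities(x, w, params):
    comp = np.array([wk * gennorm.pdf(x, b, loc=m, scale=a)
                     for wk, (b, m, a) in zip(w, params)])
    return comp / comp.sum(axis=0)          # posterior probability of each component

rng = np.random.default_rng(0)
scores = np.concatenate([gennorm.rvs(3.0, loc=55, scale=12, size=300, random_state=rng),
                         gennorm.rvs(1.5, loc=85, scale=8, size=200, random_state=rng)])

w = [0.6, 0.4]
params = [(3.0, 55.0, 12.0), (1.5, 85.0, 8.0)]
loglik = np.sum(np.log(mixture_pdf(scores, w, params)))
gamma = responsibilities(scores, w, params)
print("log-likelihood:", loglik, " mean responsibility of component 1:", gamma[0].mean())
```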
Based on the mathematical model for dense two-phase flows presented in a previous paper [1], a dense two-phase flow in a vertical pipeline is solved analytically, and analytic expressions for the velocities of the continuous phase and the dispersed phase are derived. The results show that when the drag force between the two phases depends linearly on their relative velocity, the relative velocity profile in the pipeline coincides with Darcy's law except in the thin layer region near the pipeline wall, and that the theoretical assumptions of the dense two-phase flow theory are reasonable.
A two-agent scheduling problem on parallel machines is considered in this paper. Our objective is to minimize the makespan for agent A, subject to an upper bound on the makespan for agent B. We provide a new approximation algorithm called CLPT. On the one hand, we compare the performance of the CLPT algorithm against the optimal solution and find that the solution obtained by the CLPT algorithm is very close to optimal. On the other hand, we design different experimental frameworks to compare the CLPT algorithm with the A-LS algorithm for a comprehensive performance evaluation. A large number of numerical simulation results show that the CLPT algorithm outperforms the A-LS algorithm.
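Since the abstract does not spell out the CLPT rule itself, the sketch below shows the classical LPT (longest processing time first) list-scheduling baseline for makespan minimization on parallel machines, on which constrained LPT-style heuristics typically build; the job lengths and machine count are illustrative assumptions.

```python
import heapq

# Classical LPT list scheduling: assign each job, longest first, to the
# currently least-loaded machine.  Job lengths and the number of machines
# are illustrative assumptions; this is not the paper's CLPT algorithm.

def lpt_makespan(jobs, n_machines):
    loads = [(0.0, i) for i in range(n_machines)]   # (load, machine id) heap
    heapq.heapify(loads)
    for p in sorted(jobs, reverse=True):
        load, i = heapq.heappop(loads)
        heapq.heappush(loads, (load + p, i))
    return max(load for load, _ in loads)

jobs_a = [7, 5, 4, 3, 3, 2]      # agent A's processing times (illustrative)
print("LPT makespan on 2 machines:", lpt_makespan(jobs_a, 2))
```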
A brain tumor occurs when abnormal cells grow, sometimes very rapidly, into an abnormal mass of tissue. The tumor can infect normal tissue, so there is an interaction between healthy and infected cells. The aim of this paper is to propose some efficient and accurate numerical methods for the computational solution of one-dimensional continuous basic models for the growth and control of brain tumors. After computing the analytical solution, we construct approximations of the solution to the problem using a standard second-order finite difference method for the space discretization and the Crank-Nicolson method for the time discretization. Then, we investigate the convergence behavior of the conjugate gradient and generalized minimal residual (GMRES) methods, as Krylov subspace methods, for solving the tridiagonal Toeplitz matrix system that arises.
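As an illustration of this solver pipeline, the sketch below assembles one Crank-Nicolson step for a pure 1-D diffusion term and solves the resulting tridiagonal Toeplitz system with both CG and GMRES from SciPy; the diffusion coefficient, grid, time step and initial profile are illustrative assumptions, and the model's reaction and control terms are omitted.

```python
import numpy as np
from scipy.sparse import diags, identity
from scipy.sparse.linalg import cg, gmres

# One Crank-Nicolson time step for 1-D diffusion with homogeneous Dirichlet
# boundaries; the linear system matrix is tridiagonal Toeplitz and SPD.
# All parameter values are illustrative assumptions.

n, length, D, dt = 100, 1.0, 1e-3, 0.1
dx = length / (n + 1)
x = np.linspace(dx, length - dx, n)

lap = diags([1.0, -2.0, 1.0], [-1, 0, 1], shape=(n, n)) / dx**2  # discrete Laplacian
I = identity(n)
A = (I - 0.5 * dt * D * lap).tocsr()       # tridiagonal Toeplitz system matrix
B = (I + 0.5 * dt * D * lap).tocsr()

u = np.exp(-200.0 * (x - 0.5) ** 2)        # initial cell density (Gaussian bump)
b = B @ u

u_cg, info_cg = cg(A, b)                   # conjugate gradient
u_gm, info_gm = gmres(A, b)                # GMRES
print(info_cg, info_gm, np.max(np.abs(u_cg - u_gm)))
```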
This paper constructs and studies a nonresident computer virus model with age structure and two delay effects. The non-negativity and boundedness of the solution of the model are discussed, the basic reproduction number is given, and conditions for the existence and stability of the virus-free equilibrium and the computer-virus equilibrium are obtained. Theoretical analysis shows the conditions under which the model undergoes a Hopf bifurcation in three different cases. Numerical examples are provided to demonstrate the theoretical results.
This research focused on the study of heat and mass transfer in a two-phase stratified turbulent fluid flow in a geothermal pipe with a chemical reaction. The derived non-linear partial differential equations governing the flow were solved using the finite difference method. The effects of various physical parameters on the concentration, skin friction, heat transfer, and mass transfer have been determined. Analysis of the results indicated that the skin friction coefficient decreased with an increase in the Reynolds number and the solutal Grashof number; the rate of heat transfer increased with an increase in the Eckert number, the Prandtl number, and the angle of inclination; and the rate of mass transfer increased with an increase in the Reynolds number, the chemical reaction parameter, and the angle of inclination. The findings would be useful to engineers in designing and maintaining geothermal pipelines more effectively.
This paper presents a vehicle localization and tracking methodology that utilizes two-channel LiDAR data for turning movement counts. The proposed methodology uniquely integrates a K-means clustering technique, an inverse sensor model, and a Kalman filter to obtain the final trajectory of each individual vehicle. The objective of applying K-means clustering is to robustly differentiate LiDAR data generated by pedestrians and multiple vehicles and to identify their presence in the LiDAR's field of view (FOV). To localize a detected vehicle, an inverse sensor model is used to calculate the accurate location of the vehicle in the LiDAR's FOV given a known LiDAR position. A constant velocity model based Kalman filter is defined to utilize the localized vehicle information and construct its trajectory by combining LiDAR data from consecutive scanning cycles. To test the accuracy of the proposed methodology, turning movement data were collected from busy intersections located in Newark, NJ. The results show that the proposed method can effectively develop the trajectories of the turning vehicles at the intersections and has an average accuracy of 83.8%. The R-squared values obtained for localizing the vehicles range from 0.87 to 0.89. To measure the accuracy of the proposed method, it is compared with previously developed methods that focused on the application of multiple-channel LiDARs. The comparison shows that the proposed methodology effectively utilizes two-channel LiDAR data, which have a much lower cluster resolution, and achieves acceptable accuracy compared to multiple-channel LiDARs; it can therefore be used as a cost-effective measure for large-scale data collection in smart cities.
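The tracking stage can be pictured with the standard constant-velocity Kalman filter below, which fuses noisy per-scan positions into a trajectory; the scan interval, noise covariances and the simulated measurements are illustrative assumptions rather than the calibrated values used in the study.

```python
import numpy as np

# Constant-velocity Kalman filter chaining per-scan vehicle positions into a
# trajectory.  Scan interval, noise levels and the simulated measurements are
# illustrative assumptions.

dt = 0.1                                   # time between LiDAR scans [s]
F = np.array([[1, 0, dt, 0],               # state: [x, y, vx, vy]
              [0, 1, 0, dt],
              [0, 0, 1, 0],
              [0, 0, 0, 1]], dtype=float)
H = np.array([[1, 0, 0, 0],                # only positions are measured
              [0, 1, 0, 0]], dtype=float)
Q = 0.05 * np.eye(4)                       # process noise
R = 0.5 * np.eye(2)                        # measurement noise

def kf_step(x, P, z):
    # predict
    x = F @ x
    P = F @ P @ F.T + Q
    # update with the localized position z
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    x = x + K @ (z - H @ x)
    P = (np.eye(4) - K @ H) @ P
    return x, P

x, P = np.zeros(4), np.eye(4) * 10.0
rng = np.random.default_rng(1)
track = []
for k in range(50):                        # vehicle moving at ~10 m/s in x
    z = np.array([10.0 * dt * k, 2.0]) + rng.normal(0, 0.3, size=2)
    x, P = kf_step(x, P, z)
    track.append(x[:2].copy())
print("estimated final position:", track[-1], "estimated velocity:", x[2:])
```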
In this paper, we present the a posteriori error estimate of two-grid mixed finite element methods by averaging techniques for semilinear elliptic equations. We first propose the two-grid algorithms to linearize the mixed method equations. Then, the averaging technique is used to construct the a posteriori error estimates of the two-grid mixed finite element method, and a theoretical analysis is given for the error estimators. Finally, we give some numerical examples to verify the reliability and efficiency of the a posteriori error estimator.
Purpose: This study aims to answer the question to what extent different types of networks can be used to predict future co-authorship among authors. Design/methodology/approach: We compare three types of networks: unweighted networks, in which a link represents a past collaboration; weighted networks, in which links are weighted by the number of joint publications; and bipartite author-publication networks. The analysis investigates their relation to positive stability, as well as their potential in predicting links in future versions of the co-authorship network. Several hypotheses are tested. Findings: Among other results, we find that weighted networks do not automatically lead to better predictions. Bipartite networks, however, outperform unweighted networks in almost all cases. Research limitations: Only two relatively small case studies are considered. Practical implications: The study suggests that future link prediction studies on networks should consider using the bipartite network as a training network. Originality/value: This is the first systematic comparison of unweighted, weighted, and bipartite training networks in link prediction.
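To illustrate the difference between a one-mode co-authorship network and the underlying bipartite author-publication network, the toy sketch below builds both with NetworkX and scores unlinked author pairs by a simple common-neighbor count; the author and paper labels are illustrative assumptions, not the study's data.

```python
import networkx as nx
from networkx.algorithms import bipartite

# Toy bipartite author-publication network: authors a1..a4, papers p1..p3.
# All labels are illustrative assumptions.
B = nx.Graph()
B.add_edges_from([("a1", "p1"), ("a2", "p1"), ("a2", "p2"),
                  ("a3", "p2"), ("a3", "p3"), ("a4", "p3")])

# Unweighted co-authorship network: one-mode projection onto the authors,
# where a link means at least one joint paper.
authors = ["a1", "a2", "a3", "a4"]
G = bipartite.projected_graph(B, authors)

# Score non-linked author pairs by the number of common co-authors
# (the simplest unweighted predictor).
candidates = [(u, v) for u in authors for v in authors
              if u < v and not G.has_edge(u, v)]
scores = {(u, v): len(list(nx.common_neighbors(G, u, v))) for u, v in candidates}
print(scores)   # e.g. ('a1', 'a3') share co-author a2 -> score 1
```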
We calculate the production of χ_c and η_c by the two-photon process in ultra-peripheral heavy ion collisions at Relativistic Heavy Ion Collider (RHIC) and Large Hadron Collider (LHC) energies. The differential cross sections of the transverse momentum distribution and the rapidity distribution for H (H = χ_c and η_c) are estimated by using the equivalent photon flux in impact parameter space. The numerical results indicate that the study of χ_c and η_c production in ultra-peripheral heavy ion collisions is feasible at RHIC and LHC energies.
In this paper, we have used two reliable approaches (theorems) to find optimal solutions to transportation problems under variations in costs. In real-life scenarios, transportation costs can fluctuate due to different factors. Finding optimal solutions to the transportation problem in the context of cost variations is vital for ensuring cost efficiency, resource allocation, customer satisfaction, competitive advantage, environmental responsibility, risk mitigation, and operational resilience in practical situations. This paper opens up new directions for the solution of transportation problems by introducing two key theorems. By using these theorems, we can develop an algorithm for identifying the attributes of the optimal solution and permitting accurate quantification of changes in overall transportation costs when constants are added to or subtracted from specific rows or columns of the cost matrix, or when the cost matrix is multiplied by constants. It is anticipated that the two reliable techniques presented in this study will provide theoretical insights and practical solutions to enhance the efficiency and cost-effectiveness of transportation systems. Finally, numerical illustrations are presented to verify the proposed approaches.
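The flavor of such results can be checked numerically: in the balanced toy instance below, adding a constant to every cost in one supply row raises the optimal total cost by exactly that constant times the row's supply, while the original optimal allocation remains optimal; the cost matrix, supplies and demands are illustrative assumptions, and a generic LP solver stands in for a transportation-specific method.

```python
import numpy as np
from scipy.optimize import linprog

# Balanced transportation problem solved as an LP; the data are illustrative
# assumptions.  We verify the row-shift property: adding a constant to one
# supply row changes the optimal cost by (constant x that row's supply) and
# leaves any previously optimal allocation optimal.

def solve_transport(C, supply, demand):
    m, n = C.shape
    A_eq, b_eq = [], []
    for i in range(m):                       # row sums = supply
        row = np.zeros((m, n)); row[i, :] = 1.0
        A_eq.append(row.ravel()); b_eq.append(supply[i])
    for j in range(n):                       # column sums = demand
        col = np.zeros((m, n)); col[:, j] = 1.0
        A_eq.append(col.ravel()); b_eq.append(demand[j])
    res = linprog(C.ravel(), A_eq=np.array(A_eq), b_eq=np.array(b_eq),
                  bounds=[(0, None)] * (m * n), method="highs")
    return res.fun, res.x.reshape(m, n)

C = np.array([[4.0, 6.0, 8.0],
              [5.0, 7.0, 3.0]])
supply, demand = [30.0, 50.0], [20.0, 40.0, 20.0]

cost0, plan0 = solve_transport(C, supply, demand)
shift = 2.0
C_shift = C.copy()
C_shift[0, :] += shift                        # add a constant to row 0

cost1, plan1 = solve_transport(C_shift, supply, demand)
print(np.isclose(cost1 - cost0, shift * supply[0]))   # cost shifts by 2 * 30
print(np.isclose((C_shift * plan0).sum(), cost1))     # old plan still optimal
```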
In the post-genomic biology era, the reconstruction of gene regulatory networks from microarray gene expression data is very important for understanding the underlying biological system, and it has been a challenging task in bioinformatics. The Bayesian network model has been used to reconstruct gene regulatory networks because of its advantages, but how to determine the network structure and parameters still needs to be explored. This paper proposes a two-stage structure learning algorithm that integrates an immune evolution algorithm to build a Bayesian network. The new algorithm is evaluated with the use of both simulated and yeast cell cycle data. The experimental results indicate that the proposed algorithm can find many of the known regulatory relationships reported in the literature and predict unknown ones with high validity and accuracy.