Funding: We thank the National Supercomputer Center in Tianjin for their patient assistance in providing the compilation environment. We thank the editor, Huajian Yao, for handling the manuscript, and Mingming Li and another anonymous reviewer for their constructive comments. The research leading to these results received funding from National Natural Science Foundation of China projects (Grant Nos. 92355302 and 42121005), Taishan Scholar projects (Grant No. tspd20210305), and other projects (Grant Nos. XDB0710000, L2324203, XK2023DXC001, LSKJ202204400, and ZR2021ZD09).
Abstract: The thermal evolution of the Earth's interior and its dynamic effects are a central focus of the Earth sciences. However, the commonly adopted grid-based temperature solvers are prone to numerical oscillations, especially in the presence of sharp thermal gradients, such as when modeling subducting slabs and rising plumes. This phenomenon prevents the correct representation of thermal evolution and may lead to incorrect inferences about geodynamic processes. After examining several approaches for removing these numerical oscillations, we show that the Lagrangian method provides an ideal way to solve this problem. In this study, we propose a particle-in-cell method as a strategy for improving the solution of the energy equation and demonstrate its effectiveness in both one-dimensional and three-dimensional thermal problems, as well as in a global spherical simulation with data assimilation. We have implemented this method in the open-source finite-element code CitcomS, which features a spherical coordinate system, distributed-memory parallel computing, and data assimilation algorithms.
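To illustrate the idea, here is a one-dimensional toy sketch in Python (not the CitcomS implementation; all grid sizes and velocities are invented): a grid-based upwind solver smears a sharp thermal anomaly, while Lagrangian tracers carry temperature exactly and are mapped back to cells by averaging.

```python
import numpy as np

nx, u, dt, nsteps = 200, 1.0, 0.002, 100
x = np.linspace(0.0, 1.0, nx)
dx = x[1] - x[0]
T0 = np.where(np.abs(x - 0.25) < 0.05, 1.0, 0.0)   # sharp thermal anomaly

# Eulerian update: first-order upwind (stable but strongly diffusive).
T = T0.copy()
for _ in range(nsteps):
    T[1:] -= u * dt / dx * (T[1:] - T[:-1])

# Lagrangian update: tracers carry temperature, so advection is exact.
xp = x + u * dt * nsteps          # one tracer per cell, moved with the flow
Tp = T0.copy()                    # temperature is a tracer property

# "In-cell" step: average tracer temperatures back onto the grid.
T_pic, counts = np.zeros(nx), np.zeros(nx)
idx = np.clip(np.round(xp / dx).astype(int), 0, nx - 1)
np.add.at(T_pic, idx, Tp)
np.add.at(counts, idx, 1)
mask = counts > 0
T_pic[mask] /= counts[mask]

print("peak temperature  upwind grid: %.3f   particle-in-cell: %.3f"
      % (T.max(), T_pic.max()))
```

The grid solution's peak drops well below 1.0 after only 100 steps, while the tracer-based field keeps the anomaly's amplitude intact.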
Abstract: The selection and coordinated application of government innovation policies are crucial for guiding the direction of enterprise innovation and unleashing its innovation potential. However, because regional innovation policy texts are lengthy, voluminous, complex, and unstructured, traditional policy classification methods often overlook the reality that these texts cover multiple policy topics, leading to a lack of objectivity. In contrast, topic mining technology can handle large-scale textual data, overcoming challenges such as the abundance of policy content and the difficulty of classification. Although topic models can partition numerous policy texts into topics, they cannot analyze in detail the interplay among policy topics or the impact of policy topic coordination on enterprise innovation. Therefore, we propose a big data analysis scheme for policy coordination paths based on the latent Dirichlet allocation (LDA) model and the fuzzy-set qualitative comparative analysis (fsQCA) method, combining topic models with qualitative comparative analysis. The LDA model was employed to derive the topic distribution of each document and the word distribution of each topic, enabling automatic classification through algorithms and providing reliable and objective textual classification results. Subsequently, the fsQCA method was used to analyze the coordination paths and dynamic characteristics. Finally, experimental analysis was conducted using innovation policy text data from 31 provincial-level administrative regions in China from 2012 to 2021 as research samples. The results suggest that the proposed method effectively partitions innovation policy topics and analyzes the policy configurations driving enterprise innovation in different regions.
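A minimal sketch of the LDA stage with scikit-learn, assuming a toy three-document corpus and two topics (both placeholders; the study used provincial policy texts): the per-document topic distributions it produces are what would then be calibrated as fsQCA conditions.

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

docs = [
    "tax credit for enterprise research and development",
    "talent subsidy and innovation platform construction",
    "intellectual property protection and technology transfer",
]
X = CountVectorizer().fit_transform(docs)
lda = LatentDirichletAllocation(n_components=2, random_state=0).fit(X)

doc_topic = lda.transform(X)    # topic distribution of each document
topic_word = lda.components_    # (unnormalized) word distribution per topic
print(doc_topic.round(2))       # each row would feed fsQCA as a calibrated condition
```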
Funding: Projects (2021RC3007, 2020RC3090) supported by the Science and Technology Innovation Program of Hunan Province, China; Projects (52374150, 52174099) supported by the National Natural Science Foundation of China.
Abstract: Four key stress thresholds exist in the compression process of rocks, i.e., the crack closure stress (σ_cc), crack initiation stress (σ_ci), crack damage stress (σ_cd), and compressive strength (σ_c). Quantitative identification of the first three stress thresholds is of great significance for characterizing microcrack growth and damage evolution in rocks under compression. In this paper, a new method based on a damage constitutive model is proposed to quantitatively measure the stress thresholds of rocks. Firstly, two different damage constitutive models were constructed from acoustic emission (AE) counts and the Weibull distribution function, considering the compaction stage of the rock and the bearing capacity of the damaged element. Then, the accumulative AE counts method (ACLM), the AE count rate method (CRM), and the constitutive model method (CMM) were introduced to determine the stress thresholds of rocks. Finally, the stress thresholds of nine different rocks were identified by the ACLM, CRM, and CMM. The results show that the theoretical stress–strain curves obtained from the two damage constitutive models agree well with the experimental data, and that the differences between the two models arise mainly from the evolutionary differences of their damage variables. The stress thresholds identified by the CMM agree well with those identified by the AE methods, i.e., the ACLM and CRM. Therefore, the proposed CMM can be used to determine the stress thresholds of rocks.
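As a rough illustration of the AE-based damage idea (synthetic AE counts and an assumed modulus, not the paper's two constitutive models): a damage variable is accumulated from AE counts and inserted into an effective-stress law, and the peak of the resulting theoretical curve marks σ_c.

```python
import numpy as np

E = 40e3                              # assumed Young's modulus, MPa
eps = np.linspace(0.0, 0.01, 500)     # axial strain
ae_rate = np.exp(5.0 * eps / 0.01)    # synthetic, accelerating AE count rate
N_cum = np.cumsum(ae_rate)
D = N_cum / N_cum[-1]                 # damage variable from cumulative AE counts
sigma = E * eps * (1.0 - D)           # effective-stress damage law

i_c = int(np.argmax(sigma))           # peak of the theoretical curve
print("compressive strength sigma_c = %.1f MPa at strain %.4f"
      % (sigma[i_c], eps[i_c]))
```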
Abstract: In this paper, we explore bound-preserving and high-order accurate local discontinuous Galerkin (LDG) schemes for a class of chemotaxis models, including the classical Keller–Segel (KS) model and two other density-dependent problems. We use the convex splitting method, the variant energy quadratization method, and the scalar auxiliary variable method, coupled with the LDG method, to construct first-order temporally accurate schemes based on the gradient flow structure of the models. These semi-implicit schemes are decoupled and energy stable, and they can be extended to high-accuracy schemes using the semi-implicit spectral deferred correction method. Many bound-preserving DG discretizations work only with explicit time integration and are difficult to extend to high-order accuracy. To overcome these difficulties, we use Lagrange multipliers to enforce the bound constraints in the implicit or semi-implicit LDG schemes at each time step. This bound-preserving limiter leads to a Karush–Kuhn–Tucker condition, which can be solved by an efficient active-set semi-smooth Newton method. Various numerical experiments illustrate the high-order accuracy and the effect of bound preservation.
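A toy version of such a constrained correction (not the paper's active-set semi-smooth Newton solver): project cell averages with undershoots onto the nonnegative set while conserving total mass. The KKT conditions of this l2 projection reduce to a single scalar multiplier, found here by bisection.

```python
import numpy as np

def positive_projection(u, tol=1e-12):
    """Return v >= 0 with sum(v) == sum(u), closest to u in l2."""
    mass = u.sum()
    lo, hi = u.min() - 1.0, u.max() + 1.0     # bracket for the multiplier
    for _ in range(200):
        lam = 0.5 * (lo + hi)
        v = np.maximum(u - lam, 0.0)          # KKT stationarity: v = max(u - lam, 0)
        if v.sum() > mass:                    # too much mass -> raise multiplier
            lo = lam
        else:
            hi = lam
        if hi - lo < tol:
            break
    return np.maximum(u - 0.5 * (lo + hi), 0.0)

u = np.array([0.8, -0.05, 0.3, -0.02, 0.5])   # undershoots from a high-order step
v = positive_projection(u)
print(v, v.sum(), u.sum())                    # nonnegative, same total mass
```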
Funding: Supported by the National Natural Science Foundation of China (42377354), the Natural Science Foundation of Hubei Province (2024AFB951), and the Chunhui Plan Cooperation Research Project of the Chinese Ministry of Education (202200199).
Abstract: Soil erosion has been recognized as a critical environmental issue worldwide. While previous studies have primarily examined watershed-scale soil erosion vulnerability from a natural-factor perspective, there is a notable gap in understanding the intricate interplay between natural and socio-economic factors, especially in the context of spatial heterogeneity and the nonlinear impacts of human–land interactions. To address this, our study evaluates soil erosion vulnerability at the provincial scale, taking Hubei Province as a case study to explore the combined effects of natural and socio-economic factors. We developed an evaluation index system based on 15 indicators of soil erosion vulnerability covering exposure, sensitivity, and adaptability. The combination weighting method was applied to determine the index weights, and spatial interactions were analyzed using spatial autocorrelation, geographically and temporally weighted regression, and the geographical detector. The results showed an overall decrease in soil erosion intensity in Hubei Province between 2000 and 2020, while soil erosion vulnerability increased before 2000 and then declined. The areas with high soil erosion vulnerability were mainly concentrated in the central and southern regions of Hubei Province (Xiantao, Tianmen, Qianjiang, and Ezhou), with obvious spatial aggregation that intensified over time. Natural factors (habitat quality index) had negative impacts on soil erosion vulnerability, whereas socio-economic factors (population density) showed substantial spatial variability in their influence. There was a positive correlation between soil erosion vulnerability and erosion intensity, with correlation coefficients ranging from −0.41 to 0.93. Increasing slope was found to strengthen the positive correlation between soil erosion vulnerability and intensity.
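A compact sketch of one step described above, the combination weighting: entropy (objective) weights blended with assumed expert (subjective) weights on a synthetic indicator matrix. The blending rule and every number here are placeholders, not the study's calibration.

```python
import numpy as np

X = np.random.default_rng(0).uniform(0.1, 1.0, size=(10, 4))  # 10 units x 4 indicators
P = X / X.sum(axis=0)                                         # normalize per indicator
E = -(P * np.log(P)).sum(axis=0) / np.log(len(X))             # entropy per indicator
w_obj = (1 - E) / (1 - E).sum()                               # objective (entropy) weights

w_sub = np.array([0.4, 0.3, 0.2, 0.1])                        # assumed expert weights
w = (w_obj * w_sub) / (w_obj * w_sub).sum()                   # combined weights

vulnerability = X @ w                                         # weighted index per unit
print(w.round(3), vulnerability.round(3))
```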
Funding: Deanship of Research and Graduate Studies at King Khalid University, through a Large Research Project under Grant Number RGP2/302/45; also supported by the Deanship of Scientific Research, Vice Presidency for Graduate Studies and Scientific Research, King Faisal University, Saudi Arabia (Grant Number A426).
Abstract: According to the World Health Organization (WHO), meningitis is a severe infection of the meninges, the membranes covering the brain and spinal cord. It is a devastating disease and remains a significant public health challenge. This study investigates a bacterial meningitis model in both deterministic and stochastic versions. A four-compartment population dynamics model describes the susceptible, carrier, infected, and recovered populations. The model yields the nonnegative equilibrium points and the reproduction number, i.e., the meningitis-free equilibrium (MFE) and the meningitis-existing equilibrium (MEE). For the stochastic version of the existing deterministic model, two methodologies are studied: transition probabilities and nonparametric perturbations. Positivity, boundedness, extinction, and disease persistence are also studied rigorously with the help of well-known theorems. Standard and nonstandard techniques, such as Euler–Maruyama, stochastic Euler, stochastic Runge–Kutta, and a stochastic nonstandard finite difference scheme in the sense of delay, are presented for the computational analysis of the stochastic model. Unfortunately, the standard methods fail to preserve the biological properties of the model, so the stochastic nonstandard finite difference approximation is offered as an efficient, low-cost alternative that is independent of the time step size. In addition, the convergence and the local and global stability around the equilibria of the nonstandard computational method are studied by assuming that the perturbation effect is zero. Simulations and a comparison of the methods are presented to support the theoretical results and to best visualize the outcomes.
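A hedged Euler–Maruyama sketch for a susceptible–carrier–infected–recovered model with a perturbed transmission term; all rates are invented placeholders, and the crude clamp at the end illustrates precisely the positivity failure of standard schemes that motivates the nonstandard finite difference method.

```python
import numpy as np

rng = np.random.default_rng(1)
beta, delta, gamma, mu, sigma = 0.5, 0.2, 0.1, 0.02, 0.05   # placeholder rates
dt, nsteps = 0.01, 5000
S, C, I, R = 0.9, 0.05, 0.05, 0.0

for _ in range(nsteps):
    dW = rng.normal(0.0, np.sqrt(dt))                 # Brownian increment
    new_inf = beta * S * (C + I)
    dS = (mu - new_inf - mu * S) * dt - sigma * S * (C + I) * dW
    dC = (new_inf - (delta + mu) * C) * dt + sigma * S * (C + I) * dW
    dI = (delta * C - (gamma + mu) * I) * dt
    dR = (gamma * I - mu * R) * dt
    S, C, I, R = S + dS, C + dC, I + dI, R + dR
    S, C, I, R = (max(v, 0.0) for v in (S, C, I, R))  # crude positivity guard

print("endpoint:", round(S, 3), round(C, 3), round(I, 3), round(R, 3))
```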
Funding: Supported by the China Postdoctoral Science Foundation (2021M702304) and the Natural Science Foundation of Shandong Province (ZR20210E260).
Abstract: The production capacity of shale oil reservoirs after hydraulic fracturing is influenced by a complex interplay of geological characteristics, engineering quality, and well conditions. These relationships, nonlinear in nature, are difficult to describe accurately with physical models. While field data provide insights into real-world effects, their limited volume and quality restrict their utility; numerical simulation models offer effective complementary support. To harness the strengths of both data-driven and model-driven approaches, this study established a shale oil production capacity prediction model based on a machine learning combination model. Leveraging fracturing development data from 236 wells in the field, a data-driven method employing the random forest algorithm is used to identify the main controlling factors for different types of shale oil reservoirs. Through a combination model integrating the support vector machine (SVM) algorithm and a back-propagation neural network (BPNN), a model-driven shale oil production capacity prediction model is developed, capable of responding swiftly to shale oil development performance under varying geological, fluid, and well conditions. Numerical experiments show that the proposed method improves R² by 22.5% and 5.8% over the singular machine learning models SVM and BPNN, respectively, demonstrating its superior precision in predicting shale oil production capacity across diverse datasets.
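A scikit-learn sketch of the two stages on synthetic data (the well count matches the abstract, but everything else, including the equal-weight averaging of the two learners, is an assumption):

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.svm import SVR
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(236, 6))                  # 236 wells x 6 factors (synthetic)
y = 2 * X[:, 0] - X[:, 2] + 0.1 * rng.normal(size=236)
Xtr, Xte, ytr, yte = train_test_split(X, y, random_state=0)

# Stage 1: random-forest importances rank the controlling factors.
rf = RandomForestRegressor(random_state=0).fit(Xtr, ytr)
print("factor importances:", rf.feature_importances_.round(2))

# Stage 2: SVM / BP-style combination (here a simple average of predictions).
svm = SVR().fit(Xtr, ytr)
bp = MLPRegressor(hidden_layer_sizes=(16,), max_iter=2000, random_state=0).fit(Xtr, ytr)
y_combo = 0.5 * (svm.predict(Xte) + bp.predict(Xte))
r2 = 1 - ((yte - y_combo) ** 2).sum() / ((yte - yte.mean()) ** 2).sum()
print("combination model R2: %.3f" % r2)
```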
Abstract: The Black-Scholes Model (B-SM) simulates the dynamics of financial markets and contains instruments such as options and puts, which are major indices requiring solution. The B-SM is known to estimate the correct prices of European stock options and establishes the theoretical foundation for option pricing. This paper therefore evaluates the Black-Scholes model in simulating a European call in a cash flow with dependent drift, and focuses on obtaining an analytic and then an approximate solution for the model. The work also examines the Fokker-Planck equation (FPE) and extracts the link between the FPE and the B-SM for non-equilibrium systems. The B-SM is then solved via the Elzaki transform method (ETM). The computational procedures were carried out in MAPLE 18, with the solution provided in the form of a convergent series.
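As an independent reference point (not the Elzaki-transform derivation): the standard closed-form European call price that any convergent series solution of the B-SM should reproduce; the parameter values below are arbitrary.

```python
from math import log, sqrt, exp, erf

def bs_call(S, K, r, sigma, T):
    """Black-Scholes European call, with Phi(x) = 0.5 * (1 + erf(x / sqrt(2)))."""
    d1 = (log(S / K) + (r + 0.5 * sigma**2) * T) / (sigma * sqrt(T))
    d2 = d1 - sigma * sqrt(T)
    Phi = lambda x: 0.5 * (1.0 + erf(x / sqrt(2.0)))
    return S * Phi(d1) - K * exp(-r * T) * Phi(d2)

print(round(bs_call(S=100, K=100, r=0.05, sigma=0.2, T=1.0), 4))  # ~10.45
```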
Abstract: This article compares the probability method and the least squares method in the design of linear predictive models. It points out that the two approaches have distinct theoretical foundations and, depending on the assumptions, can lead to different or similar results in terms of precision and performance. The article underlines the importance of comparing the two approaches in order to choose the one best suited to the context, the available data, and the modeling objectives.
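One concrete instance of the "similar results" case (a sketch under an assumed Gaussian-noise model): maximizing the Gaussian likelihood yields the same normal equations as least squares, so the two estimates coincide.

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.uniform(0, 10, 100)
y = 3.0 + 2.0 * x + rng.normal(0, 1, 100)
A = np.column_stack([np.ones_like(x), x])

beta_ls = np.linalg.lstsq(A, y, rcond=None)[0]   # least squares
# Gaussian ML: maximizing -sum((y - A b)^2) / (2 s^2) gives the normal equations.
beta_ml = np.linalg.solve(A.T @ A, A.T @ y)
print(beta_ls, beta_ml)                          # identical estimates
```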
Abstract: Compositional data, such as relative information, is a crucial aspect of machine learning and related fields. It is typically recorded as closed data that sum to a constant, such as 100%. The linear regression model is the most widely used technique for identifying hidden relationships between underlying random variables of interest, and maximum likelihood estimation (MLE) is the method of choice when estimating its parameters, which are useful for tasks such as future prediction and the partial-effects analysis of independent variables. However, data quality is a significant challenge in machine learning, especially when missing data are present: many datasets contain missing observations, whose recovery can be costly and time-consuming. To address this issue, the expectation-maximization (EM) algorithm has been suggested for situations involving missing data. The EM algorithm iteratively finds the best estimates of parameters in statistical models that depend on unobserved variables or data, in the sense of maximum likelihood or maximum a posteriori (MAP) estimation. Using the current estimate as input, the expectation (E) step constructs the expected log-likelihood function; the maximization (M) step then finds the parameters that maximize the expected log-likelihood determined in the E step. This study examined how well the EM algorithm performed on a simulated compositional dataset with missing observations, using both robust and ordinary least squares regression. The efficacy of the EM algorithm was compared with two alternative imputation techniques, k-Nearest Neighbor (k-NN) and mean imputation, in terms of Aitchison distances and covariance.
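A didactic EM sketch (bivariate Gaussian data rather than the study's compositional setup): missing values in one variable are replaced by their conditional expectations in the E step, and the moments are re-estimated in the M step with the usual conditional-variance correction.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 300
x1 = rng.normal(0, 1, n)
x2 = 1.5 * x1 + rng.normal(0, 0.5, n)
miss = rng.random(n) < 0.3                       # 30% of x2 missing

mu = np.array([x1.mean(), x2[~miss].mean()])     # crude complete-case start
cov = np.cov(x1[~miss], x2[~miss])

for _ in range(50):
    # E-step: E[x2 | x1] for missing entries under the current (mu, cov).
    slope = cov[0, 1] / cov[0, 0]
    x2_f = x2.copy()
    x2_f[miss] = mu[1] + slope * (x1[miss] - mu[0])
    # M-step: refit moments; add the conditional variance for missing rows.
    resid_var = cov[1, 1] - cov[0, 1] ** 2 / cov[0, 0]
    mu = np.array([x1.mean(), x2_f.mean()])
    cov = np.cov(x1, x2_f)
    cov[1, 1] += resid_var * miss.mean()

print("slope estimate:", round(cov[0, 1] / cov[0, 0], 3))  # ~1.5
```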
Abstract: The rapid development of digital education provides new opportunities and challenges for innovation in teaching models. This study explores the application of the BOPPPS (Bridge-in, Objective, Pre-assessment, Participatory learning, Post-assessment, Summary) teaching method in developing a blended teaching model for the Operations Research course in the context of digital education. In response to the characteristics of the course and the needs of the student body, the teaching design is reconstructed around a student-centered approach, adding practical teaching components, improving the assessment and evaluation system, and implementing these changes effectively in conjunction with digital educational technology. The model has shown significant effectiveness in the context of digital education, providing valuable experience and insights for the innovation of the Operations Research course.
Funding: Supported by the Natural Science Foundation of China (Nos. 41404057, 41674077, and 411640034), the Nuclear Energy Development Project of China, and the '555' Project of Gan Po Excellent People.
Abstract: To speed up three-dimensional (3D) DC resistivity modeling, we present a new multigrid method, the aggregation-based algebraic multigrid method (AGMG). We first discretize the differential equation of the secondary potential field with mixed boundary conditions using a seven-point finite-difference method, obtaining a large sparse system of linear equations. We then introduce the theory behind the pairwise aggregation algorithms of AGMG and use the conjugate-gradient method with a V-cycle AGMG preconditioner (AGMG-CG) to solve the linear system. We test the proposed AGMG-CG method on typical geoelectrical models and compare the results with analytical solutions and with the 3DDCXH algorithm for 3D DC modeling. In addition, we apply the AGMG-CG method to different grid sizes and geoelectrical models and compare it with other iterative methods such as ILU-BICGSTAB, ILU-GCR, and SSOR-CG. The AGMG-CG method yields nearly linearly decreasing errors, whereas the number of iterations grows only slowly with increasing grid size. The AGMG-CG method is accurate and converges fast, and can thus improve the computational efficiency of 3D DC resistivity forward modeling.
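A sketch of the same solver pattern using PyAMG, which implements smoothed aggregation rather than the paper's pairwise AGMG; a 3D Poisson matrix from PyAMG's gallery stands in for the discretized secondary-potential system.

```python
import numpy as np
import pyamg

A = pyamg.gallery.poisson((40, 40, 40), format='csr')  # 7-point stencil test matrix
b = np.ones(A.shape[0])

ml = pyamg.smoothed_aggregation_solver(A)              # aggregation-based hierarchy
x = ml.solve(b, tol=1e-8, accel='cg')                  # V-cycle-preconditioned CG
print("residual norm:", np.linalg.norm(b - A @ x))
```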
Abstract: After introducing the fundamental theory and the basic analysis steps of the sub-model method, the strength of a new engine's complex assembly structure was analyzed according to the properties of the engine structures. Some key parts of the engine were analyzed with a refined mesh using the sub-model method, and the error of the FEM solution was estimated by the extrapolation method. The example shows that the sub-model method can not only analyze complex structures without being restricted by computer software and hardware, but also obtain more precise analysis results. This method is well suited to the strength analysis of complex assembly structures.
Funding: Project (51475156) supported by the National Natural Science Foundation of China; Project (2014ZX04002071) supported by the National Key Project of Science and Technology of China; Project (GXKFJ14-08) supported by the Opening Foundation of the Key Laboratory for Non-Ferrous Metal and Featured Material Processing, Guangxi Zhuang Autonomous Region, China.
Abstract: Hot plane strain compression tests of 6013 aluminum alloy were conducted in the temperature range of 613–773 K and the strain rate range of 0.001–10 s⁻¹. Based on the experimental data corrected with temperature compensation, the Kriging method was selected to model the constitutive relationship among flow stress, temperature, strain rate, and strain. The predictability and reliability of the constructed Kriging model were evaluated by statistical measures, comparative analysis, and leave-one-out cross-validation (LOO-CV). The accuracy of the Kriging model is confirmed by an R-value of 0.999 and an AARE of 0.478%, and its superiority is demonstrated by comparison with an improved Arrhenius-type model. Furthermore, the generalization capability of the Kriging model was verified by LOO-CV with 25 rounds of testing. The results indicate that the Kriging method is competent to develop an accurate model for describing the hot deformation behavior and predicting the flow stress, even beyond the experimental conditions of the hot compression tests.
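A sketch of Kriging-style flow-stress regression using scikit-learn's Gaussian process (a standard Kriging equivalent), trained on synthetic (temperature, log strain rate, strain) triples spanning the ranges quoted above; the data-generating relation is invented.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, ConstantKernel

rng = np.random.default_rng(0)
# Inputs: temperature (K), log10 strain rate, strain -- ranges from the tests.
X = rng.uniform([613, -3, 0.05], [773, 1, 0.9], size=(80, 3))
y = 200 - 0.2 * (X[:, 0] - 613) + 15 * X[:, 1] + 30 * X[:, 2]  # synthetic stress, MPa
y += rng.normal(0, 1.0, 80)

gp = GaussianProcessRegressor(kernel=ConstantKernel() * RBF([50, 1, 0.3]),
                              normalize_y=True, alpha=1e-2).fit(X, y)
pred, std = gp.predict([[700, 0.0, 0.5]], return_std=True)
print("flow stress: %.1f ± %.1f MPa" % (pred[0], std[0]))
```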
Funding: Projects (51161011, 11364024) supported by the National Natural Science Foundation of China; Project (1204GKCA065) supported by the Key Technology R&D Program of Gansu Province, China; Project (201210) supported by the Fundamental Research Funds for the Universities of Gansu Province, China; Project (J201304) supported by the Funds for Distinguished Young Scientists of Lanzhou University of Technology, China.
Abstract: A mathematical model combining a projection algorithm with the phase-field method was applied. The adaptive finite element method was adopted to solve the model on a non-uniform grid, and the behavior of dendritic growth from an undercooled nickel melt under forced flow was simulated. The simulation results show that the asymmetric behavior of dendritic growth is caused by the forced flow. When the flow velocity is below a critical value, the asymmetry of the dendrite is little influenced by the forced flow; once the flow velocity reaches or exceeds the critical value, the controlling factor of dendrite growth gradually changes from thermal diffusion to convection. With increasing flow velocity, the deflection angle of the primary dendrite stem toward the upstream direction becomes larger. The effect of dendrite growth on the flow field of the melt is also apparent: as the dendrite grows, vortices appear in the downstream regions and the vortex region gradually enlarges, and dendrite tips appear to remelt. In addition, the adaptive finite element method reduces CPU running time by an order of magnitude compared with the uniform grid method, and the speed-up ratio is proportional to the size of the computational domain.
Funding: Financially supported by the National Hi-tech Research and Development Program of China (863 Program) (No. 2012AA09A20103).
Abstract: Since the ocean bottom is a sedimentary environment in which stratification is well developed, an anisotropic model is best for studying its geology. Beginning with Maxwell's equations for an anisotropic model, we introduce scalar potentials based on the divergence-free character of the electric and magnetic (EM) fields. We then continue the EM fields downward into the deep earth and upward into the seawater, coupling them at the ocean bottom to the transmitting source. By studying both the DC apparent resistivity curves and their polar plots, we can resolve the anisotropy of the ocean bottom. Forward modeling of a high-resistivity thin layer in an anisotropic half-space demonstrates that the marine DC resistivity method in shallow water is very sensitive to a resistive reservoir but is not influenced by airwaves; it is therefore well suited to oil and gas exploration in shallow-water areas. To date, however, most modeling algorithms for marine DC resistivity have been based on isotropic models. In this paper, we investigate one-dimensional anisotropic forward modeling for the marine DC resistivity method, show the algorithm to have high accuracy, and thus provide a theoretical basis for 2D and 3D forward modeling.
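A classic analytic benchmark for validating anisotropic DC codes (given here in its simplest land-surface form, which is an assumption relative to the marine geometry above): over a uniform half-space with horizontal bedding, a surface point electrode senses only the geometric mean of the horizontal and vertical resistivities, so apparent resistivity alone cannot separate them.

```python
import numpy as np

I, rho_h, rho_v = 1.0, 10.0, 40.0
rho_m = np.sqrt(rho_h * rho_v)        # effective (mean) resistivity, 20 ohm-m
r = np.array([1.0, 10.0, 100.0])      # electrode offsets, m
V = I * rho_m / (2 * np.pi * r)       # surface potential of a point source
rho_a = 2 * np.pi * r * V / I         # pole-pole apparent resistivity
print(rho_a)                          # constant 20.0 -> the anisotropy is hidden
```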
Funding: Projects (51174228, 51274249) supported by the National Natural Science Foundation of China.
Abstract: Uniaxial compression creep experiments were conducted on bauxite sandstone from Sanmenxia, and typical creep curves were obtained. From the characteristics of the strain components of the creep curves, the creep strain is composed of an instantaneous elastic strain ε_me, an instantaneous plastic strain ε_mp, a viscoelastic strain ε_ce, and a viscoplastic strain ε_cp. Based on the characteristics of the instantaneous plastic strain, a new instantaneous plastic rheological element was introduced, an instantaneous plastic modulus was defined, and a modified Burgers model was established; the model was then identified by the direct screening method. According to the mechanical properties of the rheological elements, one- and three-dimensional creep equations at different stress levels were obtained. The one-dimensional model parameters were identified by the method of least squares, using Gauss-Newton iteration in the computation. Finally, by fitting the experimental curves, the correctness of the direct-method model was verified, and the posterior exclusive method of model examination was completed. The results show that the improved Burgers model properly embodies the rheological characteristics of the sandstone, enables a microscopic analysis of the creep curves, and verifies the correctness of the comprehensive identification method for rheological models.
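A sketch of the least-squares identification step: fitting a standard (unmodified) Burgers creep law to synthetic creep data with SciPy, whose default Levenberg-Marquardt solver is a damped Gauss-Newton iteration; the stress level and all parameter values are invented.

```python
import numpy as np
from scipy.optimize import curve_fit

s = 20.0                              # applied stress, MPa (assumed)

def burgers(t, E1, E2, eta1, eta2):
    """Burgers creep: instantaneous + viscoelastic + viscous strain."""
    return s / E1 + (s / E2) * (1 - np.exp(-E2 * t / eta2)) + s * t / eta1

t = np.linspace(0, 100, 200)          # time, h
eps_obs = (burgers(t, 5e3, 2e4, 8e5, 3e5)
           + np.random.default_rng(0).normal(0, 2e-5, t.size))

p0 = [1e3, 1e4, 1e6, 1e5]             # initial parameter guess
popt, _ = curve_fit(burgers, t, eps_obs, p0=p0, maxfev=20000)
print("E1=%.0f  E2=%.0f  eta1=%.2e  eta2=%.2e" % tuple(popt))
```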
Abstract: A wavelet collocation method with nonlinear auto-companding is proposed for the behavioral modeling of switched-current (SI) circuits. The companding function is constructed automatically from the initial error distribution obtained by approximating the input-output function of the SI circuit with the conventional wavelet collocation method. In practical applications, the proposed method is a general-purpose approach in which both small-signal and large-signal effects are modeled in a unified formulation, easing the process of modeling and simulation. Compared with published modeling approaches, the proposed nonlinear auto-companding method works more efficiently both in controlling the error distribution and in reducing the modeling errors. To demonstrate the promising features of the method, several SI circuits are modeled and simulated as examples.
Funding: This work is funded by the National Natural Science Foundation of China (Grant Nos. 42377164 and 52079062) and the National Science Fund for Distinguished Young Scholars of China (Grant No. 52222905).
Abstract: Existing landslide susceptibility prediction (LSP) models do not consider the influence of random errors in landslide conditioning factors; instead, the original conditioning factors are taken directly as model inputs, which introduces uncertainty into LSP results. This study aims to reveal how different proportions of random error in the conditioning factors affect LSP uncertainty, and to explore a method that can effectively reduce these random errors. The original conditioning factors are first used to construct original-factor-based LSP models, and random errors of 5%, 10%, 15%, and 20% are then added to these factors to construct error-based LSP models. Next, low-pass-filter-based LSP models are constructed by eliminating the random errors with a low-pass filter. Ruijin County, China, with 370 landslides and 16 conditioning factors, is used as the case study, and three typical machine learning models, the multilayer perceptron (MLP), support vector machine (SVM), and random forest (RF), are selected as LSP models. The resulting LSP uncertainties show that: (1) the low-pass filter can effectively reduce the random errors in the conditioning factors and thus decrease LSP uncertainty; (2) as the proportion of random error increases from 5% to 20%, LSP uncertainty increases continuously; (3) the original-factor-based models are feasible for LSP in the absence of more accurate conditioning factors; (4) the two sources of uncertainty, the choice of machine learning model and the proportion of random error, have large and roughly equal influences on LSP modeling; and (5) Shapley values effectively explain the internal mechanism by which machine learning models predict landslide susceptibility. In conclusion, a greater proportion of random error in the conditioning factors results in higher LSP uncertainty, and a low-pass filter can effectively reduce these random errors.
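A minimal sketch of the error-injection and filtering step on a synthetic factor grid (the Gaussian filter choice and all sizes are assumptions; the study's factors are real rasters): proportional random error is added and then suppressed by low-pass filtering before the factor would feed an LSP model.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

rng = np.random.default_rng(0)
truth = gaussian_filter(rng.normal(size=(100, 100)), sigma=8)  # smooth "factor"
noisy = truth * (1 + 0.10 * rng.normal(size=truth.shape))      # 10% random error
filtered = gaussian_filter(noisy, sigma=2)                     # low-pass recovery

def rmse(a, b):
    return float(np.sqrt(np.mean((a - b) ** 2)))

print("noisy vs true   :", round(rmse(noisy, truth), 4))
print("filtered vs true:", round(rmse(filtered, truth), 4))    # smaller
```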