Effectively managing extensive, multi-source, and multi-level real-scene 3D models for responsive retrieval scheduling and rapid visualization in the Web environment is a significant challenge in the current development of real-scene 3D applications in China. In this paper, we address this challenge by reorganizing spatial and temporal information into a 3D geospatial grid. We introduce the Global 3D Geocoding System (G3DGS), leveraging neighborhood similarity and uniqueness for efficient storage, retrieval, updating, and scheduling of these models. A combination of G3DGS and non-relational databases is implemented, enhancing data storage scalability and flexibility. Additionally, a model detail management scheduling strategy (TLOD) based on G3DGS and an importance factor T is designed. Compared with mainstream commercial and open-source platforms, this method increases the loadable capacity of massive multi-source real-scene 3D models in the Web environment by 33%, improves browsing efficiency by 48%, and accelerates invocation speed by 40%.
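The abstract does not detail how G3DGS keys are constructed; as a purely illustrative sketch, the following shows a Morton (Z-order) code, a standard way to turn a 3D grid cell into a single sortable integer key whose bit prefixes group spatial neighborhoods, which is the property that efficient grid-based storage and retrieval of the kind described above relies on.

```python
def morton3d(x, y, z, bits=21):
    """Interleave the bits of cell indices x, y, z into one integer key.

    Nearby cells tend to share key prefixes, so range/prefix queries on
    the key retrieve spatial neighborhoods efficiently."""
    code = 0
    for i in range(bits):
        code |= ((x >> i) & 1) << (3 * i)       # x occupies bit positions 0, 3, 6, ...
        code |= ((y >> i) & 1) << (3 * i + 1)   # y occupies 1, 4, 7, ...
        code |= ((z >> i) & 1) << (3 * i + 2)   # z occupies 2, 5, 8, ...
    return code

def parent(code):
    """Dropping the lowest 3 bits yields the key of the coarser-level cell."""
    return code >> 3
```

Keys like these pair naturally with a non-relational key-value store: one key per grid cell, multi-level access via prefix truncation.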
The power Internet of Things (IoT) is a significant trend in technology and a requirement for national strategic development. With the deepening digital transformation of the power grid, China's power system has initially built a power IoT architecture comprising a perception, network, and platform application layer. However, owing to the structural complexity of the power system, the construction of the power IoT continues to face problems such as complex access management of massive heterogeneous equipment, diverse IoT protocol access methods, high concurrency of network communications, and weak data security protection. To address these issues, this study optimizes the existing architecture of the power IoT and designs an integrated management framework for the access of multi-source heterogeneous data in the power IoT, comprising cloud, pipe, edge, and terminal parts. It further reviews and analyzes the key technologies involved in the power IoT, such as the unified management of the physical model, high concurrent access, multi-protocol access, multi-source heterogeneous data storage management, and data security control, to provide a more flexible, efficient, secure, and easy-to-use solution for multi-source heterogeneous data access in the power IoT.
This paper presents a comprehensive framework for analyzing phase transitions in collective models such as the Vicsek model under various noise types. The Vicsek model, which focuses on the collective behavior of social animals, is known for its discontinuous phase transitions under vector noise; however, its behavior under scalar noise remains less conclusive. The eigen microstate method, renowned for its efficacy in the analysis of complex systems in both equilibrium and nonequilibrium states, is employed here for a quantitative examination of the phase transitions in the Vicsek model under both vector and scalar noises. The study finds that the Vicsek model exhibits discontinuous phase transitions regardless of noise type. Furthermore, the dichotomy method is utilized to identify the critical points of these phase transitions. A significant finding is the observed increase in the critical point for discontinuous phase transitions as population density escalates.
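For readers unfamiliar with the model, a minimal Vicsek update with scalar (angular) noise can be sketched as follows; the parameter values are illustrative, not those used in the paper.

```python
import math, random

def vicsek_step(xs, ys, thetas, L=10.0, r=1.0, v=0.3, eta=0.5, rng=random):
    """One update of the 2D Vicsek model with scalar noise: each particle
    adopts the mean heading of all neighbors within radius r (itself
    included), plus a uniform angular kick in [-eta/2, eta/2], then moves
    a distance v. Periodic box of side L."""
    new_thetas = []
    for i in range(len(xs)):
        sx = sy = 0.0
        for j in range(len(xs)):
            dx = (xs[i] - xs[j] + L / 2) % L - L / 2   # minimum-image distance
            dy = (ys[i] - ys[j] + L / 2) % L - L / 2
            if dx * dx + dy * dy <= r * r:
                sx += math.cos(thetas[j])
                sy += math.sin(thetas[j])
        noise = eta * (rng.random() - 0.5)             # scalar (angular) noise
        new_thetas.append(math.atan2(sy, sx) + noise)
    xs = [(x + v * math.cos(t)) % L for x, t in zip(xs, new_thetas)]
    ys = [(y + v * math.sin(t)) % L for y, t in zip(ys, new_thetas)]
    return xs, ys, new_thetas

def polar_order(thetas):
    """Order parameter: 1 for full alignment, near 0 for disorder."""
    n = len(thetas)
    return math.hypot(sum(math.cos(t) for t in thetas),
                      sum(math.sin(t) for t in thetas)) / n
```

Sweeping eta (or density) and tracking the order parameter is how the transition, and its discontinuity, is probed.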
The thermal evolution of the Earth's interior and its dynamic effects are a focus of the Earth sciences. However, the commonly adopted grid-based temperature solver is usually prone to numerical oscillations, especially in the presence of sharp thermal gradients, such as when modeling subducting slabs and rising plumes. This phenomenon prevents the correct representation of thermal evolution and may lead to incorrect inferences about geodynamic processes. After examining several approaches for removing these numerical oscillations, we show that the Lagrangian method provides an ideal way to solve this problem. In this study, we propose a particle-in-cell method as a strategy for improving the solution of the energy equation and demonstrate its effectiveness in both one-dimensional and three-dimensional thermal problems, as well as in a global spherical simulation with data assimilation. We have implemented this method in the open-source finite-element code CitcomS, which features a spherical coordinate system, distributed-memory parallel computing, and data assimilation algorithms.
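The advantage of Lagrangian transport can be seen in a toy one-dimensional sketch (an assumed setup, not the CitcomS implementation): when temperature rides on particles, advecting a step-like thermal front just moves the particles, so the front is translated exactly, with none of the overshoot or undershoot that grid-based advection schemes produce near sharp gradients.

```python
def advect_particles(positions, temps, velocity, dt, steps):
    """Pure Lagrangian advection: each particle carries its temperature,
    so transport is exact translation and values never over/undershoot."""
    for _ in range(steps):
        positions = [x + velocity * dt for x in positions]
    return positions, temps

# A step-function temperature front sampled on particles.
xs = [i * 0.01 for i in range(200)]
Ts = [1.0 if x < 1.0 else 0.0 for x in xs]
xs2, Ts2 = advect_particles(xs, Ts, velocity=0.5, dt=0.01, steps=100)
```

In a full particle-in-cell solver the particle temperatures are then interpolated back to the grid for the diffusion step; the sketch above isolates only the oscillation-free advection property.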
Long-runout landslides involve a massive amount of energy and can be extremely hazardous owing to their long movement distance, high mobility, and strong destructive power. Numerical methods have been widely used to predict landslide runout, but a fundamental remaining problem is how to determine reliable numerical parameters. This study proposes a framework to predict the runout of potential landslides through multi-source data collaboration and numerical analysis of historical landslide events. Specifically, for historical landslide cases, the landslide-induced seismic signal, geophysical surveys, and possible in-situ drone/phone videos (multi-source data collaboration) can validate the numerical results in terms of landslide dynamics and deposit features and help calibrate the numerical (rheological) parameters. Subsequently, the calibrated numerical parameters can be used to numerically predict the runout of potential landslides in regions with a geological setting similar to that of the recorded events. Application of the runout prediction approach to the 2020 Jiashanying landslide in Guizhou, China gives reasonable results in comparison to the field observations; the numerical parameters are determined from the multi-source data collaboration analysis of a historical case in the region (the 2019 Shuicheng landslide). The proposed framework for landslide runout prediction can be of great utility for landslide risk assessment and disaster reduction in mountainous regions worldwide.
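As a toy illustration of the back-analysis step (not the paper's rheological model), a one-parameter sliding-block (energy-line) model can be calibrated against a recorded runout and reused for prediction; all numbers below are hypothetical.

```python
def runout(drop_height, mu):
    """Sled model: potential energy m*g*H is dissipated by basal friction
    mu*m*g*L, giving runout L = H / mu."""
    return drop_height / mu

def calibrate_mu(drop_height, observed_runout, candidates):
    """Back-analysis of a historical event: pick the friction coefficient
    whose predicted runout best matches the observation."""
    return min(candidates,
               key=lambda mu: abs(runout(drop_height, mu) - observed_runout))

# Calibrate on a recorded event, then predict a nearby potential slide
# with a similar geological setting (hypothetical geometry).
candidates = [0.05 + 0.01 * i for i in range(50)]
mu_star = calibrate_mu(drop_height=600.0, observed_runout=2400.0,
                       candidates=candidates)
predicted = runout(drop_height=450.0, mu=mu_star)
```

In the actual framework the "observation" is a multi-source composite (seismic inversion, surveyed deposit geometry, video-derived velocities) and the forward model is a full dynamic simulation, but the calibrate-then-predict loop has this shape.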
Four key stress thresholds exist in the compression process of rocks, i.e., crack closure stress (σ_cc), crack initiation stress (σ_ci), crack damage stress (σ_cd), and compressive strength (σ_c). The quantitative identification of the first three stress thresholds is of great significance for characterizing the microcrack growth and damage evolution of rocks under compression. In this paper, a new method based on a damage constitutive model is proposed to quantitatively measure the stress thresholds of rocks. Firstly, two different damage constitutive models were constructed based on acoustic emission (AE) counts and the Weibull distribution function, considering the compaction stage of the rock and the bearing capacity of the damaged element. Then, the accumulative AE counts method (ACLM), the AE count rate method (CRM), and the constitutive model method (CMM) were introduced to determine the stress thresholds of rocks. Finally, the stress thresholds of 9 different rocks were identified by ACLM, CRM, and CMM. The results show that the theoretical stress-strain curves obtained from the two damage constitutive models are in good agreement with the experimental data, and that the differences between the two models mainly come from the evolutionary differences of their damage variables. The stress thresholds identified by the CMM agree well with those identified by the AE methods, i.e., ACLM and CRM. Therefore, the proposed CMM can be used to determine the stress thresholds of rocks.
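The abstract does not give the exact constitutive forms used; the textbook Weibull statistical damage law below is a minimal sketch of the idea (illustrative parameter values, and without the compaction-stage and residual-bearing-capacity refinements the paper adds).

```python
import math

def weibull_stress(strain, E=50e3, eps0=0.004, m=3.0):
    """Statistical damage constitutive law with Weibull-distributed element
    strength: damage D = 1 - exp(-(eps/eps0)^m), stress = E*eps*(1 - D).
    E in MPa, strain dimensionless."""
    D = 1.0 - math.exp(-(strain / eps0) ** m)
    return E * strain * (1.0 - D)

# Sweep the theoretical curve; its peak corresponds to the compressive
# strength, and stress thresholds are read off relative to the curve shape.
strains = [i * 1e-5 for i in range(1, 1001)]
stresses = [weibull_stress(e) for e in strains]
peak_strain = strains[max(range(len(stresses)), key=stresses.__getitem__)]
```

Fitting E, eps0, and m to AE-derived damage data is what lets the constitutive-model method place the thresholds quantitatively rather than by visual inspection of the curve.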
Two-component cold atom systems with anisotropic hopping amplitudes can be phenomenologically described by a two-dimensional Ising-XY coupled model with spatial anisotropy. At low temperatures, theoretical predictions [Phys. Rev. A 72, 053604 (2005)] and [arXiv:0706.1609] indicate the existence of a topological ordered phase characterized by Ising and XY disorder but with 2XY ordering. However, due to ergodic difficulties faced by Monte Carlo methods at low temperatures, this topological phase has not been numerically explored. We propose a linear cluster updating Monte Carlo method, which flips spins without rejection in the anisotropy limit but does not change the energy. Using this scheme together with conventional Monte Carlo methods, we succeed in revealing the nature of topological phases with half-vortices and domain walls. In the constructed global phase diagram, the Ising and XY-type transitions are very close to each other and differ significantly from the schematic phase diagram reported earlier. We also propose and explore a wide range of quantities, including magnetism, superfluidity, specific heat, susceptibility, and even percolation susceptibility, and obtain consistent and reliable results. Furthermore, we observed first-order transitions characterized by common intersection points in the magnetizations for different system sizes, as opposed to conventional phase transitions, where the Binder cumulants of various sizes share common intersections. The critical exponents of the different types of phase transitions are reasonably fitted. These results can help cold atom experiments explore the half-vortex topological phase.
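The Binder cumulant mentioned above is straightforward to compute from magnetization samples; crossing behavior of these curves across system sizes is the standard locator of continuous transitions, whereas (as the abstract notes) at the observed first-order transitions it is the magnetization curves themselves that share a crossing point.

```python
def binder_cumulant(m_samples):
    """U = 1 - <m^4> / (3 <m^2>^2) from a list of magnetization samples.

    For a continuous transition, U(T) curves for different system sizes
    intersect at the critical temperature."""
    n = len(m_samples)
    m2 = sum(m * m for m in m_samples) / n
    m4 = sum(m ** 4 for m in m_samples) / n
    return 1.0 - m4 / (3.0 * m2 * m2)
```

In the fully ordered limit every sample is ±1, so U tends to 2/3; in the disordered (Gaussian) limit it tends to 0 for a scalar order parameter.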
In this paper, we explore bound-preserving and high-order accurate local discontinuous Galerkin (LDG) schemes for a class of chemotaxis models, including the classical Keller-Segel (KS) model and two other density-dependent problems. We use the convex splitting method, the variant energy quadratization method, and the scalar auxiliary variable method, coupled with the LDG method, to construct first-order temporally accurate schemes based on the gradient flow structure of the models. These semi-implicit schemes are decoupled and energy stable, and they can be extended to high-accuracy schemes using the semi-implicit spectral deferred correction method. Many bound-preserving DG discretizations work only with explicit time integration methods, for which high-order accuracy is difficult to attain. To overcome these difficulties, we use Lagrange multipliers to enforce the implicit or semi-implicit LDG schemes to satisfy the bound constraints at each time step. This bound-preserving limiter results in a Karush-Kuhn-Tucker condition, which can be solved by an efficient active-set semi-smooth Newton method. Various numerical experiments illustrate the high-order accuracy and the effect of bound preservation.
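The paper's limiter solves a KKT system with an active-set Newton method; a simpler classical bound-preserving device in the same spirit (not the paper's method) rescales a cell's nodal values about the conserved cell average, in the style of Zhang-Shu-type limiters. Because the average is untouched, conservation survives while the values land in the admissible interval.

```python
def scaling_limiter(node_values, lo=0.0, hi=1.0):
    """Shrink a cell's polynomial node values toward their average just
    enough to fit in [lo, hi]. Requires the average itself to lie in
    [lo, hi] (guaranteed for conservative schemes with admissible means).
    The cell average is preserved, so the scheme stays conservative."""
    avg = sum(node_values) / len(node_values)
    theta = 1.0
    for v in node_values:
        if v > hi:
            theta = min(theta, (hi - avg) / (v - avg))
        elif v < lo:
            theta = min(theta, (lo - avg) / (v - avg))
    return [avg + theta * (v - avg) for v in node_values]
```

The Lagrange-multiplier formulation in the paper generalizes this idea to implicit time stepping, where the admissible correction must be found jointly with the solve rather than as a post-processing rescale.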
Soil erosion has been recognized as a critical environmental issue worldwide. While previous studies have primarily focused on watershed-scale soil erosion vulnerability from a natural-factor perspective, there is a notable gap in understanding the intricate interplay between natural and socio-economic factors, especially in the context of spatial heterogeneity and the nonlinear impacts of human-land interactions. To address this, our study evaluates soil erosion vulnerability at a provincial scale, taking Hubei Province as a case study to explore the combined effects of natural and socio-economic factors. We developed an evaluation index system based on 15 indicators of soil erosion vulnerability covering exposure, sensitivity, and adaptability. In addition, the combination weighting method was applied to determine the index weights, and spatial interaction was analyzed using spatial autocorrelation, geographically and temporally weighted regression, and the geographical detector. The results showed an overall decrease in soil erosion intensity in Hubei Province between 2000 and 2020. The soil erosion vulnerability increased before 2000 and then declined. The areas with high soil erosion vulnerability were mainly concentrated in the central and southern regions of Hubei Province (Xiantao, Tianmen, Qianjiang, and Ezhou), with obvious spatial aggregation that intensified over time. Natural factors (habitat quality index) had negative impacts on soil erosion vulnerability, whereas socio-economic factors (population density) showed substantial spatial variability in their influences. There was a positive correlation between soil erosion vulnerability and erosion intensity, with correlation coefficients ranging from −0.41 to 0.93. An increase in slope was found to enhance the positive correlation between soil erosion vulnerability and intensity.
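The abstract does not specify which schemes the combination weighting blends; the entropy weight method below is one common objective component of such combinations, shown here as an assumption. Indicators whose values vary more across samples carry more information and therefore receive larger weights.

```python
import math

def entropy_weights(matrix):
    """Objective indicator weights by the entropy method.
    matrix[i][j] = (positive) value of indicator j for sample i."""
    n, m = len(matrix), len(matrix[0])
    divergences = []
    for j in range(m):
        col = [row[j] for row in matrix]
        total = sum(col)
        p = [v / total for v in col]
        # Normalized entropy of indicator j across samples.
        e = -sum(pi * math.log(pi) for pi in p if pi > 0) / math.log(n)
        divergences.append(1.0 - e)      # 0 for a constant indicator
    s = sum(divergences)
    return [d / s for d in divergences]
```

A combination weighting scheme would typically average these objective weights with subjective (e.g., AHP-derived) weights before scoring the 15 indicators.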
According to the World Health Organization (WHO), meningitis is a severe infection of the meninges, the membranes covering the brain and spinal cord. It is a devastating disease and remains a significant public health challenge. This study investigates a bacterial meningitis model through deterministic and stochastic versions. Four-compartment population dynamics explain the concept, particularly the susceptible, carrier, infected, and recovered populations. The model predicts the nonnegative equilibrium points and the reproduction number, i.e., the Meningitis-Free Equilibrium (MFE) and the Meningitis-Existing Equilibrium (MEE). For the stochastic version of the existing deterministic model, the two methodologies studied are transition probabilities and non-parametric perturbations. Also, positivity, boundedness, extinction, and disease persistence are studied rigorously with the help of well-known theorems. Standard and nonstandard techniques such as Euler-Maruyama, stochastic Euler, stochastic Runge-Kutta, and the stochastic nonstandard finite difference method in the sense of delay are presented for computational analysis of the stochastic model. Unfortunately, the standard methods fail to preserve the biological properties of the model, so the stochastic nonstandard finite difference approximation is offered as an efficient, low-cost method that is independent of time step size. In addition, the convergence and the local and global stability around the equilibria of the nonstandard computational method are studied by assuming that the perturbation effect is zero. Simulations and comparisons of the methods are presented to support the theoretical results and to visualize them clearly.
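The dynamical-consistency issue that motivates the nonstandard finite difference (NSFD) scheme already shows up for the simplest decay equation dx/dt = -λx, which stands in here for the epidemic model's positivity requirement: explicit Euler can drive a population negative at large step sizes, while the nonstandard discretization stays positive for any step size.

```python
def explicit_euler_decay(x0, lam, h, steps):
    """Standard explicit Euler for dx/dt = -lam*x.
    Overshoots to negative values whenever h > 1/lam."""
    x = x0
    for _ in range(steps):
        x = x - h * lam * x
    return x

def nsfd_decay(x0, lam, h, steps):
    """Nonstandard finite difference: discretize implicitly as
    (x_new - x)/h = -lam*x_new, i.e. x_new = x/(1 + lam*h).
    Positive and decreasing for every h > 0."""
    x = x0
    for _ in range(steps):
        x = x / (1.0 + lam * h)
    return x
```

This is why the abstract can claim the NSFD approximation is "independent of time step size": its qualitative behavior (positivity, monotone decay) holds unconditionally, unlike the standard schemes.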
The objective of reliability-based design optimization (RBDO) is to minimize the optimization objective while satisfying the corresponding reliability requirements. However, the nested loop characteristic reduces the efficiency of RBDO algorithms, which hinders their application to high-dimensional engineering problems. To address these issues, this paper proposes an efficient decoupled RBDO method combining high dimensional model representation (HDMR) and the weight-point estimation method (WPEM). First, we decouple the RBDO model using HDMR and WPEM. Second, Lagrange interpolation is used to approximate a univariate function. Finally, based on the results of the first two steps, the original nested loop reliability optimization model is completely transformed into a deterministic design optimization model that can be solved by a series of mature constrained optimization methods without any additional calculations. Two numerical examples, a planar 10-bar structure and an aviation hydraulic piping system with 28 design variables, are analyzed to illustrate the performance and practicability of the proposed method.
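The second step above uses univariate Lagrange interpolation, which can be sketched as follows.

```python
def lagrange_interpolate(xs, ys, x):
    """Evaluate the Lagrange interpolating polynomial through the points
    (xs[i], ys[i]) at location x. With n+1 nodes this reproduces any
    polynomial of degree <= n exactly."""
    total = 0.0
    for i, (xi, yi) in enumerate(zip(xs, ys)):
        li = 1.0
        for j, xj in enumerate(xs):
            if j != i:
                li *= (x - xj) / (xi - xj)   # basis polynomial l_i(x)
        total += yi * li
    return total
```

In the decoupled RBDO setting, each univariate component function produced by the HDMR decomposition is evaluated at a handful of sample points and then replaced by such an interpolant, so the inner reliability loop never has to call the expensive model again.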
The production capacity of shale oil reservoirs after hydraulic fracturing is influenced by a complex interplay of geological characteristics, engineering quality, and well conditions. These relationships, nonlinear in nature, pose challenges for accurate description through physical models. While field data provide insights into real-world effects, their limited volume and quality restrict their utility; complementing this, numerical simulation models offer effective support. To harness the strengths of both data-driven and model-driven approaches, this study established a shale oil production capacity prediction model based on a machine learning combination model. Leveraging fracturing development data from 236 wells in the field, a data-driven method employing the random forest algorithm is implemented to identify the main controlling factors for different types of shale oil reservoirs. Through a combination model integrating the support vector machine (SVM) algorithm and a back propagation neural network (BPNN), a model-driven shale oil production capacity prediction model is developed, capable of swiftly responding to shale oil development performance under varying geological, fluid, and well conditions. The results of numerical experiments show that the proposed method improves R² by 22.5% and 5.8% compared to the singular machine learning models SVM and BPNN, respectively, showcasing its superior precision in predicting shale oil production capacity across diverse datasets.
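The abstract does not state how the SVM and BPNN outputs are fused; inverse-error weighting is one common combination rule for such two-learner ensembles, shown here as an assumption rather than the paper's exact scheme.

```python
def combine_predictions(pred_a, pred_b, err_a, err_b):
    """Inverse-error weighted combination of two base learners (e.g. an
    SVM regressor and a BP neural network): the model with the smaller
    validation error receives the larger weight."""
    wa = (1.0 / err_a) / (1.0 / err_a + 1.0 / err_b)
    wb = 1.0 - wa
    return [wa * a + wb * b for a, b in zip(pred_a, pred_b)]
```

With validation errors estimated on held-out wells, the combined predictor can outperform either base model, which is consistent with the R² gains reported above.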
The present study proposes a sub-grid scale model for one-dimensional Burgers turbulence based on the neural network and deep learning method. The filtered data of the direct numerical simulation are used to establish the training, validation, and test data sets. The artificial neural network (ANN) method and the back propagation method are employed to train the parameters in the ANN. The developed ANN is applied to construct the sub-grid scale model for the large eddy simulation of Burgers turbulence in one-dimensional space. The proposed model predicts both the time correlation and the space correlation of the Burgers turbulence well.
This study introduces an innovative “Big Model” strategy to enhance Bridge Structural Health Monitoring (SHM) using a Convolutional Neural Network (CNN), time-frequency analysis, and finite element analysis. Leveraging ensemble methods, collaborative learning, and distributed computing, the approach effectively manages the complexity and scale of large-scale bridge data. The CNN employs transfer learning, fine-tuning, and continuous monitoring to optimize models for adaptive and accurate structural health assessments, focusing on extracting meaningful features through time-frequency analysis. By integrating finite element analysis, time-frequency analysis, and CNNs, the strategy provides a comprehensive understanding of bridge health. Utilizing diverse sensor data, sophisticated feature extraction, and an advanced CNN architecture, the model is optimized through rigorous preprocessing and hyperparameter tuning. This approach significantly enhances the ability to make accurate predictions, monitor structural health, and support proactive maintenance practices, thereby ensuring the safety and longevity of critical infrastructure.
The Black-Scholes Model (B-SM) simulates the dynamics of the financial market and covers instruments such as calls and puts, which are major instruments requiring solution. The B-SM is known to estimate the correct prices of European stock options and establishes the theoretical foundation for option pricing. Therefore, this paper evaluates the Black-Scholes model in simulating the European call in a cash flow with dependent drift, and focuses on obtaining an analytic and then an approximate solution for the model. The work also examines the Fokker-Planck Equation (FPE) and extracts the link between the FPE and the B-SM for non-equilibrium systems. The B-SM is then solved via the Elzaki transform method (ETM). The computational procedures were carried out using MAPLE 18, with the solution provided in the form of a convergent series.
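For reference, the closed-form Black-Scholes price of a European call, which any series solution of the constant-coefficient model should reproduce, can be computed directly:

```python
import math

def norm_cdf(x):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def bs_call(S, K, r, sigma, T):
    """Black-Scholes price of a European call:
    C = S*N(d1) - K*exp(-r*T)*N(d2)."""
    d1 = (math.log(S / K) + (r + 0.5 * sigma ** 2) * T) / (sigma * math.sqrt(T))
    d2 = d1 - sigma * math.sqrt(T)
    return S * norm_cdf(d1) - K * math.exp(-r * T) * norm_cdf(d2)
```

For the textbook case S = K = 100, r = 5%, σ = 20%, T = 1 year, this gives approximately 10.45, a convenient check value for transform-based solutions.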
This article explores the comparison between the probability method and the least squares method in the design of linear predictive models. It points out that these two approaches have distinct theoretical foundations and can lead to varied or similar results in terms of precision and performance under certain assumptions. The article underlines the importance of comparing these two approaches in order to choose the one best suited to the context, the available data, and the modeling objectives.
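One case where the two approaches coincide is worth keeping in mind: under independent Gaussian errors, the probability method (maximum likelihood) for the linear model reduces exactly to least squares. The closed-form simple-regression fit both methods then produce is:

```python
def ols_fit(xs, ys):
    """Least-squares slope b and intercept a for y ≈ a + b*x.
    Under i.i.d. Gaussian errors this is also the maximum likelihood fit."""
    n = len(xs)
    xbar = sum(xs) / n
    ybar = sum(ys) / n
    sxx = sum((x - xbar) ** 2 for x in xs)
    sxy = sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys))
    b = sxy / sxx
    a = ybar - b * xbar
    return a, b
```

The two approaches diverge once the error assumptions do: non-Gaussian noise, heteroscedasticity, or priors on the parameters make the probability method's answer differ from plain least squares, which is exactly the comparison the article discusses.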
Compositional data, such as relative information, is a crucial aspect of machine learning and other related fields. It is typically recorded as closed data, i.e., data that sum to a constant such as 100%. The statistical linear model is the most used technique for identifying hidden relationships between underlying random variables of interest, but data quality is a significant challenge in machine learning, especially when missing data are present. The linear regression model is a commonly used statistical modeling technique for finding relationships between variables of interest. When estimating linear regression parameters, which are useful for purposes such as future prediction and partial-effects analysis of independent variables, maximum likelihood estimation (MLE) is the method of choice. However, many datasets contain missing observations, which can lead to costly and time-consuming data recovery. To address this issue, the expectation-maximization (EM) algorithm has been suggested as a solution for situations involving missing data. The EM algorithm iteratively finds the best estimates of parameters in statistical models that depend on unobserved variables or data, in the sense of maximum likelihood or maximum a posteriori (MAP) estimation. Using the current estimate as input, the expectation (E) step constructs the expected log-likelihood function; finding the parameters that maximize this expected log-likelihood is the job of the maximization (M) step. This study examined how well the EM algorithm performed on a simulated compositional dataset with missing observations, using both the robust least squares version and ordinary least squares regression techniques. The efficacy of the EM algorithm was compared with two alternative imputation techniques, k-Nearest Neighbor (k-NN) and mean imputation, in terms of Aitchison distances and covariance.
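A minimal sketch of the E/M iteration for a missing response in simple linear regression (illustrative only; the study's compositional setting is more involved): the E-step fills each missing y with its fitted value under the current line, and the M-step refits least squares on the completed data.

```python
def em_impute_regression(xs, ys, iters=20):
    """EM-style imputation when some ys[i] are None (missing responses)."""
    def fit(pairs):
        # Ordinary least squares on complete (x, y) pairs.
        n = len(pairs)
        xbar = sum(x for x, _ in pairs) / n
        ybar = sum(y for _, y in pairs) / n
        sxx = sum((x - xbar) ** 2 for x, _ in pairs)
        sxy = sum((x - xbar) * (y - ybar) for x, y in pairs)
        b = sxy / sxx
        return ybar - b * xbar, b

    # Initialize from the observed pairs only.
    a, b = fit([(x, y) for x, y in zip(xs, ys) if y is not None])
    for _ in range(iters):
        # E-step: impute missing responses with current fitted values.
        completed = [(x, y if y is not None else a + b * x)
                     for x, y in zip(xs, ys)]
        # M-step: refit on the completed data.
        a, b = fit(completed)
    return a, b
```

In this degenerate case (only responses missing), imputed points lie exactly on the current line, so the iteration reproduces the observed-data fit; the EM machinery earns its keep when predictors are missing too, as in the compositional experiments above.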
The rapid development of digital education provides new opportunities and challenges for teaching model innovation. This study aims to explore the application of the BOPPPS (Bridge-in, Objective, Pre-assessment, Participatory learning, Post-assessment, Summary) teaching method in the development of a blended teaching model for the Operations Research course in the context of digital education. In response to the characteristics of the course and the needs of the student group, the teaching design is reconstructed with a student-centered approach, increasing practical teaching links, improving the assessment and evaluation system, and implementing it effectively in conjunction with digital educational technology. This teaching model has shown significant effectiveness in the context of digital education, providing valuable experience and insights for the innovation of the Operations Research course.
Cyber Threat Intelligence (CTI) is a valuable resource for cybersecurity defense, but it also poses challenges due to its multi-source and heterogeneous nature. Security personnel may be unable to use CTI effectively to understand the condition and trend of a cyberattack and respond promptly. To address these challenges, we propose a novel approach that consists of three steps. First, we construct the attack and defense analysis of the cybersecurity ontology (ADACO) model by integrating multiple cybersecurity databases. Second, we develop the threat evolution prediction algorithm (TEPA), which can automatically detect threats at device nodes, correlate and map multi-source threat information, and dynamically infer the threat evolution process. TEPA leverages knowledge graphs to represent comprehensive threat scenarios and achieves better performance in simulated experiments by combining structural and textual features of entities. Third, we design the intelligent defense decision algorithm (IDDA), which can provide intelligent recommendations for security personnel regarding the most suitable defense techniques. IDDA outperforms the baseline methods in the comparative experiment.
The urban functional area (UFA) is a core scientific issue affecting urban sustainability. The current knowledge gap is mainly reflected in the lack of multi-scale quantitative interpretation methods from the perspective of human-land interaction. In this paper, based on multi-source big data including 250 m × 250 m resolution cell phone data, 1.81 × 10^5 Points of Interest (POI) records, and administrative boundary data, we built a UFA identification method and demonstrated it empirically in Shenyang City, China. We argue that this method can effectively identify multi-scale, multi-type UFAs based on human activity and further reveal the spatial correlation between urban facilities and human activity. The empirical study suggests that the employment functional zones in Shenyang City are more concentrated in the central city than other single functional zones. There are more mixed functional areas in the central city areas, while the planned industrial new cities need to develop comprehensive functions. UFAs exhibit scale effects and human-land interaction patterns. We suggest that city decision makers apply multi-source big data to measure urban functional services in a more refined manner from a supply-demand perspective.
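A minimal sketch of one building block of such UFA identification, labeling a grid cell by its dominant POI category or as "mixed" when category entropy is high; the threshold and categories are hypothetical, and the full method additionally fuses cell-phone activity across scales.

```python
import math
from collections import Counter

def classify_cell(poi_categories, mix_threshold=0.8):
    """Label a grid cell from the POI categories it contains: the dominant
    category wins unless the normalized category entropy (0 = single use,
    1 = perfectly even mix) exceeds mix_threshold, in which case the cell
    is labeled 'mixed'."""
    counts = Counter(poi_categories)
    total = sum(counts.values())
    if len(counts) > 1:
        h = -sum(c / total * math.log(c / total) for c in counts.values())
        h /= math.log(len(counts))   # normalize entropy to [0, 1]
    else:
        h = 0.0
    if h > mix_threshold:
        return "mixed"
    return counts.most_common(1)[0][0]
```

Running this at several grid resolutions is one simple way to expose the scale effects the abstract mentions: cells that look single-use at 250 m can dissolve into mixed-use at coarser aggregation.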
Funding: National Key Research and Development Program of China (No. 2023YFB3907103).
Funding: Supported by the National Key Research and Development Program of China (grant number 2019YFE0123600).
Funding: The National Natural Science Foundation of China (Grant No. 62273033).
Funding: We thank the National Supercomputer Center in Tianjin for their patient assistance in providing the compilation environment. We thank the editor, Huajian Yao, for handling the manuscript, and Mingming Li and another anonymous reviewer for their constructive comments. The research leading to these results has received funding from National Natural Science Foundation of China projects (Grant Nos. 92355302 and 42121005), Taishan Scholar projects (Grant No. tspd20210305), and other projects (Grant Nos. XDB0710000, L2324203, XK2023DXC001, LSKJ202204400, and ZR2021ZD09).
Abstract: The thermal evolution of the Earth's interior and its dynamic effects are a focus of the Earth sciences. However, the commonly adopted grid-based temperature solver is prone to numerical oscillations, especially in the presence of sharp thermal gradients, such as when modeling subducting slabs and rising plumes. This phenomenon prevents the correct representation of thermal evolution and may lead to incorrect implications for geodynamic processes. After examining several approaches for removing these numerical oscillations, we show that the Lagrangian method provides an ideal way to solve this problem. In this study, we propose a particle-in-cell method as a strategy for improving the solution of the energy equation and demonstrate its effectiveness in both one-dimensional and three-dimensional thermal problems, as well as in a global spherical simulation with data assimilation. We have implemented this method in the open-source finite-element code CitcomS, which features a spherical coordinate system, distributed-memory parallel computing, and data assimilation algorithms.
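The core contrast — grid-based advection oscillating at a sharp front while Lagrangian particles preserve bounds by construction — can be illustrated on a deliberately stripped-down 1D problem. This is a toy demonstration of the principle, not the CitcomS implementation; the scheme, grid size, and Courant number are all illustrative.

```python
# Minimal 1D illustration of why a Lagrangian (particle-in-cell style)
# treatment avoids the oscillations of a grid-based advection scheme.
# Constant velocity, sharp temperature step; a toy demo, not CitcomS.

N, c = 64, 0.4                      # grid size, Courant number
T_grid = [1.0 if i < N // 2 else 0.0 for i in range(N)]

# Eulerian update with central differences (dispersive: overshoots the step).
for _ in range(10):
    T_old = T_grid[:]
    T_grid = [T_old[i] - 0.5 * c * (T_old[(i + 1) % N] - T_old[i - 1])
              for i in range(N)]

# Lagrangian particles: positions move, but the carried temperatures never
# change, so the advected field stays exactly within its initial bounds.
particles = [(i + 0.5, 1.0 if i < N // 2 else 0.0) for i in range(N)]
particles = [((x + 10 * c) % N, T) for x, T in particles]

grid_min, grid_max = min(T_grid), max(T_grid)
part_min = min(T for _, T in particles)
part_max = max(T for _, T in particles)
```

The grid field overshoots the physical range [0, 1] at the step, whereas the particle field cannot, which is the property the abstract exploits for slabs and plumes.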
Funding: Supported by the National Natural Science Foundation of China (41977215).
Abstract: Long-runout landslides involve a massive amount of energy and can be extremely hazardous owing to their long movement distance, high mobility, and strong destructive power. Numerical methods have been widely used to predict landslide runout, but a fundamental problem remains: how to determine reliable numerical parameters. This study proposes a framework to predict the runout of potential landslides through multi-source data collaboration and numerical analysis of historical landslide events. Specifically, for historical landslide cases, the landslide-induced seismic signal, geophysical surveys, and possible in-situ drone/phone videos (multi-source data collaboration) can validate the numerical results in terms of landslide dynamics and deposit features and help calibrate the numerical (rheological) parameters. Subsequently, the calibrated numerical parameters can be used to numerically predict the runout of potential landslides in regions with a geological setting similar to the recorded events. Application of the runout prediction approach to the 2020 Jiashanying landslide in Guizhou, China gives reasonable results in comparison to field observations. The numerical parameters are determined from the multi-source data collaboration analysis of a historical case in the region (the 2019 Shuicheng landslide). The proposed framework for landslide runout prediction can be of great utility for landslide risk assessment and disaster reduction in mountainous regions worldwide.
Funding: Projects (2021RC3007, 2020RC3090) supported by the Science and Technology Innovation Program of Hunan Province, China; Projects (52374150, 52174099) supported by the National Natural Science Foundation of China.
Abstract: Four key stress thresholds exist in the compression process of rocks, i.e., crack closure stress (σ_(cc)), crack initiation stress (σ_(ci)), crack damage stress (σ_(cd)), and compressive strength (σ_(c)). The quantitative identification of the first three stress thresholds is of great significance for characterizing microcrack growth and damage evolution in rocks under compression. In this paper, a new method based on a damage constitutive model is proposed to quantitatively measure the stress thresholds of rocks. First, two different damage constitutive models were constructed based on acoustic emission (AE) counts and the Weibull distribution function, considering the compaction stage of the rock and the bearing capacity of the damaged element. Then, the accumulative AE counts method (ACLM), the AE count rate method (CRM), and the constitutive model method (CMM) were introduced to determine the stress thresholds of rocks. Finally, the stress thresholds of nine different rocks were identified by ACLM, CRM, and CMM. The results show that the theoretical stress-strain curves obtained from the two damage constitutive models are in good agreement with the experimental data, and that the differences between the two models mainly come from the evolutionary differences of the damage variables. The stress thresholds identified by the CMM agree well with those identified by the AE methods, i.e., ACLM and CRM. Therefore, the proposed CMM can be used to determine the stress thresholds of rocks.
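A Weibull-based damage constitutive curve of the general family used above can be sketched as follows. The specific form σ(ε) = Eε·exp(−(ε/ε₀)^m) and the parameter values are illustrative, not the paper's calibrated models, and the compaction stage is omitted for brevity.

```python
# Sketch of a Weibull-based damage constitutive curve for rock compression:
# sigma(eps) = E * eps * (1 - D(eps)), with damage D = 1 - exp(-(eps/eps0)^m).
# E, eps0, m are illustrative values, not calibrated to any rock in the paper.
import math

E, eps0, m = 50e3, 0.004, 2.0       # modulus (MPa), scale strain, shape

def damage(eps):
    """Weibull damage variable: 0 at zero strain, approaching 1 at failure."""
    return 1.0 - math.exp(-((eps / eps0) ** m))

def stress(eps):
    """Effective stress of the damaged element."""
    return E * eps * (1.0 - damage(eps))

strains = [i * 1e-5 for i in range(1, 1001)]   # strain sweep, 1e-5 .. 1e-2
stresses = [stress(e) for e in strains]

# The peak of the theoretical curve marks the compressive strength; for this
# form the analytic peak strain is eps0 * (1/m)**(1/m).
peak_strain = strains[stresses.index(max(stresses))]
```

Stress thresholds such as σ_(ci) and σ_(cd) are then read off this theoretical curve where the damage variable reaches characteristic values, which is the idea behind the CMM.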
Funding: Project supported by the Hefei National Research Center for Physical Sciences at the Microscale (Grant No. KF2021002), the Natural Science Foundation of Shanxi Province, China (Grant Nos. 202303021221029 and 202103021224051), the National Natural Science Foundation of China (Grant Nos. 11975024, 12047503, and 12275263), the Anhui Provincial Supporting Program for Excellent Young Talents in Colleges and Universities (Grant No. gxyq ZD2019023), and the National Key Research and Development Program of China (Grant No. 2018YFA0306501).
Abstract: Two-component cold atom systems with anisotropic hopping amplitudes can be phenomenologically described by a two-dimensional Ising-XY coupled model with spatial anisotropy. At low temperatures, theoretical predictions [Phys. Rev. A 72, 053604 (2005)] and [arXiv:0706.1609] indicate the existence of a topologically ordered phase characterized by Ising and XY disorder but with 2XY ordering. However, due to the ergodicity difficulties faced by Monte Carlo methods at low temperatures, this topological phase has not been numerically explored. We propose a linear cluster updating Monte Carlo method, which flips spins without rejection in the anisotropy limit but does not change the energy. Using this scheme and conventional Monte Carlo methods, we succeed in revealing the nature of topological phases with half-vortices and domain walls. In the constructed global phase diagram, Ising and XY-type transitions are very close to each other and differ significantly from the schematic phase diagram reported earlier. We also propose and explore a wide range of quantities, including magnetism, superfluidity, specific heat, susceptibility, and even percolation susceptibility, and obtain consistent and reliable results. Furthermore, we observed first-order transitions characterized by common intersection points in the magnetizations for different system sizes, as opposed to conventional phase transitions, where Binder cumulants of various sizes share common intersections. The critical exponents of the different types of phase transitions are reasonably fitted. These results will help cold atom experiments explore the half-vortex topological phase.
Abstract: In this paper, we explore bound-preserving and high-order accurate local discontinuous Galerkin (LDG) schemes for a class of chemotaxis models, including the classical Keller-Segel (KS) model and two other density-dependent problems. We use the convex splitting method, the variant energy quadratization method, and the scalar auxiliary variable method coupled with the LDG method to construct first-order temporally accurate schemes based on the gradient flow structure of the models. These semi-implicit schemes are decoupled, energy stable, and can be extended to high-accuracy schemes using the semi-implicit spectral deferred correction method. Many bound-preserving DG discretizations work only with explicit time integration methods and are difficult to extend to high-order accuracy. To overcome these difficulties, we use Lagrange multipliers to enforce that the implicit or semi-implicit LDG schemes satisfy the bound constraints at each time step. This bound-preserving limiter results in a Karush-Kuhn-Tucker condition, which can be solved by an efficient active-set semi-smooth Newton method. Various numerical experiments illustrate the high-order accuracy and the effect of bound preservation.
Funding: Supported by the National Natural Science Foundation of China (42377354), the Natural Science Foundation of Hubei Province (2024AFB951), and the Chunhui Plan Cooperation Research Project of the Chinese Ministry of Education (202200199).
Abstract: Soil erosion has been recognized as a critical environmental issue worldwide. While previous studies have primarily focused on watershed-scale soil erosion vulnerability from a natural-factor perspective, there is a notable gap in understanding the intricate interplay between natural and socio-economic factors, especially in the context of spatial heterogeneity and the nonlinear impacts of human-land interactions. To address this, our study evaluates soil erosion vulnerability at the provincial scale, taking Hubei Province as a case study to explore the combined effects of natural and socio-economic factors. We developed an evaluation index system based on 15 indicators of soil erosion vulnerability covering exposure, sensitivity, and adaptability. In addition, the combination weighting method was applied to determine index weights, and spatial interaction was analyzed using spatial autocorrelation, geographically and temporally weighted regression, and the geographical detector. The results showed an overall decrease in soil erosion intensity in Hubei Province between 2000 and 2020. Soil erosion vulnerability increased before 2000 and then declined. The areas with high soil erosion vulnerability were mainly concentrated in the central and southern regions of Hubei Province (Xiantao, Tianmen, Qianjiang, and Ezhou), with obvious spatial aggregation that intensified over time. Natural factors (habitat quality index) had negative impacts on soil erosion vulnerability, whereas socio-economic factors (population density) showed substantial spatial variability in their influence. There was a positive correlation between soil erosion vulnerability and erosion intensity, with correlation coefficients ranging from −0.41 to 0.93. Increasing slope was found to enhance the positive correlation between soil erosion vulnerability and intensity.
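The spatial aggregation reported above is typically quantified with global Moran's I, the standard spatial autocorrelation statistic. A minimal sketch on a toy raster with rook (edge-sharing) adjacency follows; the grid and values are illustrative, not the Hubei vulnerability data.

```python
# Global Moran's I on a toy 4x4 raster with rook adjacency.
# A clustered pattern (left half high, right half low) should give I > 0.
n_rows = n_cols = 4
values = {(r, c): 1.0 if c < 2 else 0.0
          for r in range(n_rows) for c in range(n_cols)}

cells = list(values)
mean = sum(values.values()) / len(cells)
dev = {cell: values[cell] - mean for cell in cells}

def neighbors(cell):
    """Rook (edge-sharing) neighbors within the raster bounds."""
    r, c = cell
    for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
        if 0 <= r + dr < n_rows and 0 <= c + dc < n_cols:
            yield (r + dr, c + dc)

# I = (n / W) * sum_ij w_ij * d_i * d_j / sum_i d_i^2, binary weights w_ij.
num = sum(dev[a] * dev[b] for a in cells for b in neighbors(a))
W = sum(1 for a in cells for _ in neighbors(a))
den = sum(d * d for d in dev.values())
morans_i = (len(cells) / W) * (num / den)
```

Values of I near +1 indicate clustering of similar values (as in the hotspot counties named above), values near 0 spatial randomness, and negative values a checkerboard-like dispersion.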
Funding: The authors thank the Deanship of Research and Graduate Studies at King Khalid University for funding this work through a large Research Project under Grant Number RGP2/302/45; also supported by the Deanship of Scientific Research, Vice Presidency for Graduate Studies and Scientific Research, King Faisal University, Saudi Arabia (Grant Number A426).
Abstract: According to the World Health Organization (WHO), meningitis is a severe infection of the meninges, the membranes covering the brain and spinal cord. It is a devastating disease and remains a significant public health challenge. This study investigates a bacterial meningitis model through deterministic and stochastic versions. A four-compartment population dynamics model explains the concept, comprising the susceptible, carrier, infected, and recovered populations. The model predicts the non-negative equilibrium points and the reproduction number, i.e., the Meningitis-Free Equilibrium (MFE) and the Meningitis-Existing Equilibrium (MEE). For the stochastic version of the existing deterministic model, the two methodologies studied are transition probabilities and non-parametric perturbations. Also, positivity, boundedness, extinction, and disease persistence are studied rigorously with the help of well-known theorems. Standard and nonstandard techniques such as Euler-Maruyama, stochastic Euler, stochastic Runge-Kutta, and stochastic nonstandard finite difference (in the sense of delay) have been presented for computational analysis of the stochastic model. Unfortunately, the standard methods fail to preserve the biological properties of the model, so the stochastic nonstandard finite difference approximation is offered as an efficient, low-cost alternative that is independent of the time step size. In addition, the convergence, local stability, and global stability around the equilibria of the nonstandard computational method are studied by assuming the perturbation effect is zero. Simulations and comparisons of the methods are presented to support the theoretical results and to best visualize the findings.
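The failure of standard schemes to preserve biological properties, and the fix offered by a nonstandard finite difference (NSFD) discretization, can be illustrated on a stripped-down single-compartment equation dI/dt = βI(N−I) − γI. This is a toy stand-in for the full four-compartment stochastic model (the drift part only, with illustrative parameters), not the paper's scheme.

```python
# Explicit Euler vs. nonstandard finite difference (NSFD) for
# dI/dt = beta*I*(N - I) - gamma*I.  With a large step the explicit
# scheme produces negative (non-biological) populations; the NSFD
# scheme, which treats the loss terms implicitly, stays in [0, N].
beta, gamma, N, h, I0, steps = 0.02, 3.0, 100.0, 1.0, 50.0, 20

def euler_path():
    I, path = I0, [I0]
    for _ in range(steps):
        I = I + h * (beta * I * (N - I) - gamma * I)
        path.append(I)
    return path

def nsfd_path():
    # (I_{n+1} - I_n)/h = beta*N*I_n - beta*I_n*I_{n+1} - gamma*I_{n+1},
    # solved for I_{n+1}: positivity and boundedness hold for any h > 0.
    I, path = I0, [I0]
    for _ in range(steps):
        I = I * (1.0 + h * beta * N) / (1.0 + h * beta * I + h * gamma)
        path.append(I)
    return path

euler = euler_path()
nsfd = nsfd_path()
```

The same implicit treatment of loss terms is what makes NSFD approximations attractive for stochastic epidemic models: the dynamical consistency does not depend on shrinking the step size.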
Funding: Supported by the Innovation Fund Project of the Gansu Education Department (Grant No. 2021B-099).
Abstract: The objective of reliability-based design optimization (RBDO) is to minimize the optimization objective while satisfying the corresponding reliability requirements. However, the nested-loop characteristic reduces the efficiency of RBDO algorithms, which hinders their application to high-dimensional engineering problems. To address these issues, this paper proposes an efficient decoupled RBDO method combining high-dimensional model representation (HDMR) and the weight-point estimation method (WPEM). First, we decouple the RBDO model using HDMR and WPEM. Second, Lagrange interpolation is used to approximate a univariate function. Finally, based on the results of the first two steps, the original nested-loop reliability optimization model is completely transformed into a deterministic design optimization model that can be solved by a series of mature constrained optimization methods without any additional calculations. Two numerical examples, a planar 10-bar structure and an aviation hydraulic piping system with 28 design variables, are analyzed to illustrate the performance and practicability of the proposed method.
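The second step above, Lagrange interpolation of a univariate function, can be sketched as follows; the cubic test function and node choice are illustrative, not the paper's component functions.

```python
# Lagrange interpolation of a univariate function from a few nodes.
# With 4 nodes the interpolant reproduces any cubic exactly, which is the
# kind of cheap surrogate a decoupled RBDO formulation can rely on.

def lagrange(nodes, values):
    """Return the Lagrange interpolating polynomial through (nodes, values)."""
    def p(x):
        total = 0.0
        for i, xi in enumerate(nodes):
            basis = 1.0
            for j, xj in enumerate(nodes):
                if i != j:
                    basis *= (x - xj) / (xi - xj)
            total += values[i] * basis
        return total
    return p

f = lambda x: x ** 3 - 2.0 * x          # illustrative univariate function
nodes = [-1.0, 0.0, 1.0, 2.0]
p = lagrange(nodes, [f(x) for x in nodes])
```

Once each univariate component of the HDMR expansion is replaced by such a polynomial, reliability evaluations reduce to cheap function calls inside a deterministic optimizer.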
Funding: Supported by the China Postdoctoral Science Foundation (2021M702304) and the Natural Science Foundation of Shandong Province (ZR20210E260).
Abstract: The production capacity of shale oil reservoirs after hydraulic fracturing is influenced by a complex interplay of geological characteristics, engineering quality, and well conditions. These relationships, nonlinear in nature, pose challenges for accurate description through physical models. While field data provide insights into real-world effects, their limited volume and quality restrict their utility. Complementing this, numerical simulation models offer effective support. To harness the strengths of both data-driven and model-driven approaches, this study established a shale oil production capacity prediction model based on a machine learning combination model. Leveraging fracturing development data from 236 wells in the field, a data-driven method employing the random forest algorithm is implemented to identify the main controlling factors for different types of shale oil reservoirs. Through a combination model integrating the support vector machine (SVM) algorithm and a back-propagation neural network (BPNN), a model-driven shale oil production capacity prediction model is developed, capable of swiftly responding to shale oil development performance under varying geological, fluid, and well conditions. The results of numerical experiments show that the proposed method improves R² by 22.5% and 5.8% compared with the individual machine learning models SVM and BPNN, showcasing its superior precision in predicting shale oil production capacity across diverse datasets.
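The combination-model idea — blend the predictions of two dissimilar base learners — can be sketched with simple stand-ins. Here a least-squares line and a 1-nearest-neighbor regressor replace the paper's SVM and BPNN, and the data are synthetic; this shows the combination mechanism only, not the paper's trained models.

```python
# Combination-model sketch: average the predictions of two dissimilar
# regressors.  A least-squares line and a 1-nearest-neighbor regressor
# stand in for the SVM and BPNN; the training data are synthetic.

train_x = [0.0, 1.0, 2.0, 3.0, 4.0, 5.0]
train_y = [0.1, 1.9, 4.2, 5.8, 8.1, 9.9]      # roughly y = 2x

def fit_line(xs, ys):
    """Ordinary least squares fit of y = a + b*x."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    b = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
        sum((x - mx) ** 2 for x in xs)
    a = my - b * mx
    return lambda x: a + b * x

def fit_1nn(xs, ys):
    """Predict with the y of the nearest training x."""
    return lambda x: ys[min(range(len(xs)), key=lambda i: abs(xs[i] - x))]

line, nn = fit_line(train_x, train_y), fit_1nn(train_x, train_y)
combo = lambda x: 0.5 * (line(x) + nn(x))     # equal-weight combination

test_x, test_y = [0.5, 2.5, 4.5], [1.0, 5.0, 9.0]
def mse(model):
    return sum((model(x) - y) ** 2
               for x, y in zip(test_x, test_y)) / len(test_x)
```

By convexity of the squared error, the equal-weight average can never do worse than the worse of its two base learners, which is one reason simple combinations are robust across datasets.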
Funding: Supported by the National Key R&D Program of China (Grant No. 2022YFB3303500).
Abstract: The present study proposes a sub-grid scale model for one-dimensional Burgers turbulence based on a neural network and the deep learning method. The filtered data of the direct numerical simulation are used to establish the training data set, the validation data set, and the test data set. The artificial neural network (ANN) method and the back-propagation method are employed to train the parameters of the ANN. The developed ANN is applied to construct the sub-grid scale model for large eddy simulation of Burgers turbulence in one-dimensional space. The proposed model predicts the time correlation and the space correlation of the Burgers turbulence well.
Abstract: This study introduces an innovative "Big Model" strategy to enhance bridge structural health monitoring (SHM) using a convolutional neural network (CNN), time-frequency analysis, and finite element analysis. Leveraging ensemble methods, collaborative learning, and distributed computing, the approach effectively manages the complexity and scale of large-scale bridge data. The CNN employs transfer learning, fine-tuning, and continuous monitoring to optimize models for adaptive and accurate structural health assessments, focusing on extracting meaningful features through time-frequency analysis. By integrating finite element analysis, time-frequency analysis, and CNNs, the strategy provides a comprehensive understanding of bridge health. Utilizing diverse sensor data, sophisticated feature extraction, and an advanced CNN architecture, the model is optimized through rigorous preprocessing and hyperparameter tuning. This approach significantly enhances the ability to make accurate predictions, monitor structural health, and support proactive maintenance practices, thereby ensuring the safety and longevity of critical infrastructure.
Abstract: The Black-Scholes Model (B-SM) simulates the dynamics of the financial market and covers instruments such as options and puts, which are major indices requiring solution. B-SM is known to estimate the correct prices of European stock options and establishes the theoretical foundation for option pricing. Therefore, this paper evaluates the Black-Scholes model in simulating the European call in a cash flow with dependent drift and focuses on obtaining an analytic and then an approximate solution for the model. The work also examines the Fokker-Planck Equation (FPE) and extracts the link between the FPE and B-SM for non-equilibrium systems. The B-SM is then solved via the Elzaki transform method (ETM). The computational procedures were carried out using MAPLE 18, with the solution provided in the form of a convergent series.
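For reference, the standard closed-form Black-Scholes prices for a European call and put (under constant drift and volatility, i.e., without the paper's cash-flow-dependent drift or the Elzaki-transform series) can be sketched as follows; the parameter values are illustrative.

```python
# Closed-form Black-Scholes price of a European call and put.
# Constant r and sigma are assumed; this is the textbook baseline,
# not the paper's cash-flow-dependent-drift variant.
import math

def norm_cdf(x):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def bs_call(S, K, r, sigma, T):
    d1 = (math.log(S / K) + (r + 0.5 * sigma ** 2) * T) / (sigma * math.sqrt(T))
    d2 = d1 - sigma * math.sqrt(T)
    return S * norm_cdf(d1) - K * math.exp(-r * T) * norm_cdf(d2)

def bs_put(S, K, r, sigma, T):
    d1 = (math.log(S / K) + (r + 0.5 * sigma ** 2) * T) / (sigma * math.sqrt(T))
    d2 = d1 - sigma * math.sqrt(T)
    return K * math.exp(-r * T) * norm_cdf(-d2) - S * norm_cdf(-d1)

S, K, r, sigma, T = 100.0, 100.0, 0.05, 0.2, 1.0
call, put = bs_call(S, K, r, sigma, T), bs_put(S, K, r, sigma, T)
```

A quick internal check is put-call parity, C − P = S − Ke^(−rT), which holds exactly for these closed forms and is a useful sanity test for any approximate (e.g., series) solution as well.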
Abstract: This article explores the comparison between the probability method and the least squares method in the design of linear predictive models. It points out that these two approaches have distinct theoretical foundations and can lead to varied or similar results in terms of precision and performance under certain assumptions. The article underlines the importance of comparing these two approaches in order to choose the one best suited to the context, the available data, and the modeling objectives.
Abstract: Compositional data, such as relative information, is a crucial aspect of machine learning and other related fields. It is typically recorded as closed data, summing to a constant such as 100%. The statistical linear model is the most commonly used technique for identifying hidden relationships between underlying random variables of interest. However, data quality is a significant challenge in machine learning, especially when missing data are present. The linear regression model is a commonly used statistical modeling technique applied to find relationships between variables of interest. When estimating linear regression parameters, which are useful for tasks such as future prediction and partial-effects analysis of independent variables, maximum likelihood estimation (MLE) is the method of choice. However, many datasets contain missing observations, which can lead to costly and time-consuming data recovery. To address this issue, the expectation-maximization (EM) algorithm has been suggested as a solution for situations involving missing data. The EM algorithm iteratively finds the best estimates of parameters in statistical models that depend on unobserved variables or data, i.e., maximum likelihood or maximum a posteriori (MAP) estimates. Using the current estimate as input, the expectation (E) step constructs the expected log-likelihood function, and the maximization (M) step finds the parameters that maximize it. This study examined how well the EM algorithm performs on a simulated compositional dataset with missing observations, using both the robust least squares and ordinary least squares regression techniques. The efficacy of the EM algorithm was compared with two alternative imputation techniques, k-nearest neighbor (k-NN) and mean imputation, in terms of Aitchison distances and covariance.
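The E-step/M-step loop described above can be sketched for the simplest regression case: a response with missing entries and a fully observed predictor. This is iterative conditional-mean imputation, which omits the variance-correction term of full EM for a bivariate normal; the data are synthetic (y = 2x + 1 exactly), so the loop converges to the exact line.

```python
# EM-style imputation for a response y with missing entries, given a
# fully observed predictor x.  E-step: fill each missing y with the
# current regression's conditional mean.  M-step: refit the regression
# on the completed data.  (The variance correction of full bivariate-
# normal EM is omitted; the synthetic data satisfy y = 2x + 1 exactly.)

x = [0.0, 1.0, 2.0, 3.0, 4.0, 5.0]
y = [1.0, 3.0, None, 7.0, None, 11.0]          # None marks missing y

def ols(xs, ys):
    """Ordinary least squares: return (intercept, slope)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    b = sum((u - mx) * (v - my) for u, v in zip(xs, ys)) / \
        sum((u - mx) ** 2 for u in xs)
    return my - b * mx, b

# Initialize missing values with the observed mean, then iterate.
obs = [(u, v) for u, v in zip(x, y) if v is not None]
fill = sum(v for _, v in obs) / len(obs)
y_hat = [v if v is not None else fill for v in y]

for _ in range(20):                             # E-step + M-step loop
    a, b = ols(x, y_hat)                        # M-step on completed data
    y_hat = [v if v is not None else a + b * u  # E-step: conditional mean
             for u, v in zip(x, y)]
```

With noise-free data the fixed point is the true line, so the imputed values converge to y(2) = 5 and y(4) = 9; on noisy data the same loop converges to the MLE fit, up to the omitted variance term.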
Abstract: The rapid development of digital education provides new opportunities and challenges for teaching model innovation. This study explores the application of the BOPPPS (Bridge-in, Objective, Pre-assessment, Participatory learning, Post-assessment, Summary) teaching method in the development of a blended teaching model for the Operations Research course against the background of digital education. In response to the characteristics of the course and the needs of the student group, the teaching design is reconstructed with a student-centered approach, increasing practical teaching links, improving the assessment and evaluation system, and implementing these changes effectively in conjunction with digital educational technology. This teaching model has shown significant effectiveness in the context of digital education, providing valuable experience and insights for the innovation of the Operations Research course.
Abstract: Cyber Threat Intelligence (CTI) is a valuable resource for cybersecurity defense, but it also poses challenges due to its multi-source and heterogeneous nature. Security personnel may be unable to use CTI effectively to understand the condition and trend of a cyberattack and respond promptly. To address these challenges, we propose a novel approach consisting of three steps. First, we construct the attack and defense analysis of the cybersecurity ontology (ADACO) model by integrating multiple cybersecurity databases. Second, we develop the threat evolution prediction algorithm (TEPA), which can automatically detect threats at device nodes, correlate and map multi-source threat information, and dynamically infer the threat evolution process. TEPA leverages knowledge graphs to represent comprehensive threat scenarios and achieves better performance in simulated experiments by combining structural and textual features of entities. Third, we design the intelligent defense decision algorithm (IDDA), which can provide intelligent recommendations to security personnel regarding the most suitable defense techniques. IDDA outperforms the baseline methods in the comparative experiment.
Funding: Under the auspices of the Natural Science Foundation of China (No. 41971166).
Abstract: The urban functional area (UFA) is a core scientific issue affecting urban sustainability. The current knowledge gap is mainly reflected in the lack of multi-scale quantitative interpretation methods from the perspective of human-land interaction. In this paper, based on multi-source big data, including 250 m × 250 m resolution cell phone data, 1.81×10^5 Points of Interest (POI) records, and administrative boundary data, we built a UFA identification method and demonstrated it empirically in Shenyang City, China. We argue that the method can effectively identify multi-scale, multi-type UFAs based on human activity and further reveal the spatial correlation between urban facilities and human activity. The empirical study suggests that the employment functional zones in Shenyang City are more concentrated in the central city than other single functional zones. There are more mixed functional areas in the central city areas, while the planned industrial new cities in Shenyang need to develop comprehensive functions. UFAs have scale effects and human-land interaction patterns. We suggest that city decision-makers apply multi-source big data to measure urban functional services in a more refined manner from a supply-demand perspective.