This paper presents the results of Rainfall-Runoff modeling and simulation of hydrological responses under changing climate using HEC-HMS model. The basin spatial data was processed by HEC-GeoHMS and imported to HEC-H...This paper presents the results of Rainfall-Runoff modeling and simulation of hydrological responses under changing climate using HEC-HMS model. The basin spatial data was processed by HEC-GeoHMS and imported to HEC-HMS. The calibration and validation of the HEC-HMS model was done using the observed hydrometeorological data (1989-2018) and HEC-GeoHMS output data. The goodness-of-fit of the model was measured using three performance indices: Nash and Sutcliffe coefficient (NSE) = 0.8, Coefficient of Determination (R<sup>2</sup>) = 0.8, and Percent Difference (D) = 0.03, with values showing very good performance of the model. Finally, the optimized HEC-HMS model has been applied to simulate the hydrological responses of Upper Baro Basin to the projected climate change for mid-term (2040s) and long-term (2090s) A1B emission scenarios. The simulation results have shown a mean annual percent decrease of 3.6 and an increase of 8.1 for Baro River flow in the 2040s and 2090s scenarios, respectively, compared to the baseline period (2000s). A pronounced flow variation is rather observed on a seasonal basis, reaching a reduction of 50% in spring and an increase of 50% in autumn for both mid-term and long-term scenarios with respect to the base period. Generally, the rainfall-runoff model is developed to solve, in a complementary way, the two main problems in water resources management: the lack of gauged sites and future hydrological response to climate change data of the basin and the region in general. The study results imply that seasonal and time variation in the hydrologic cycle would most likely cause hydrologic extremes. And hence, the developed model and output data are of paramount importance for adaptive strategies and sustainable water resources development in the basin.展开更多
The Artificial Neural Network (ANN) approach has been successfully used in many hydrological studies especially the rainfall-runoff modeling using continuous data. The present study examines its applicability to model...The Artificial Neural Network (ANN) approach has been successfully used in many hydrological studies especially the rainfall-runoff modeling using continuous data. The present study examines its applicability to model the event-based rainfall-runoff process. A case study has been done for Ajay river basin to develop event-based rainfall-runoff model for the basin to simulate the hourly runoff at Sarath gauging site. The results demonstrate that ANN models are able to provide a good representation of an event-based rainfall-runoff process. The two important parameters, when predicting a flood hydrograph, are the magnitude of the peak discharge and the time to peak discharge. The developed ANN models have been able to predict this information with great accuracy. This shows that ANNs can be very efficient in modeling an event-based rainfall-runoff process for determining the peak discharge and time to the peak discharge very accurately. This is important in water resources design and management applications, where peak discharge and time to peak discharge are important input展开更多
Hydrological models are considered as necessary tools for water and environmental resource management. However, modelling poorly gauged watersheds has been a challenge to hydrologists and hydraulic engineers. Research...Hydrological models are considered as necessary tools for water and environmental resource management. However, modelling poorly gauged watersheds has been a challenge to hydrologists and hydraulic engineers. Research done recently has shown the potential to overcome this challenge through incorporating satellite based hydrological and meteorological data in the measured data. This paper presents results for a study that used the semi-distributed conceptual HBV Light Model to model the rainfall-runoff in the Mara River Basin, Kenya. The model simulates runoff as a function of rainfall. It is built on the basis established between satellite observed and in-situ rainfall, evaporation, temperature and the measured runoff. The model’s performance and reliability were evaluated over two sub-catchments namely: Nyangores and Amala in the Mara River Basin using the Nash-Sutcliffe Efficiency which the model referred to as Reff and the coefficient of determination (R2). The Reff for Nyangores and Amala during the calibration and (validation) period were 0.65 (0.68) and 0.59 (0.62) respectively. The model showed good flow simulations particularly during the recession flows, in the Nyangores sub-catchment whereas it simulated poorly the short term fluctuations of the high-flow for Amala sub-catchment. Results from this study can be used by water resources managers to make informed decision on planning and management of water resources.展开更多
The process of transformation of rainfall into runoff over a catchment is very complex and highly nonlinear and exhibits both tempor al and spatial variabilities. In this article, a rainfall-runoff model using th e ar...The process of transformation of rainfall into runoff over a catchment is very complex and highly nonlinear and exhibits both tempor al and spatial variabilities. In this article, a rainfall-runoff model using th e artificial neural networks (ANN) is proposed for simula ting the runoff in storm events. The study uses the data from a coa stal forest catchment located in Seto Inland Sea, Japan. This article studies the accuracy of the short-term rainfall forecast obta ined by ANN time-series analysis techniques and using antecedent rainfa ll depths and stream flow as the input information. The verification results from the proposed model indicate that the approach of ANN rai nfall-runoff model presented in this paper shows a reasonable agreement in rainfall-runoff modeling with high accuracy.展开更多
Lunar Environment heliospheric X-ray Imager(LEXI)and Solar wind−Magnetosphere−Ionosphere Link Explorer(SMILE)will observe magnetosheath and its boundary motion in soft X-rays for understanding magnetopause reconnectio...Lunar Environment heliospheric X-ray Imager(LEXI)and Solar wind−Magnetosphere−Ionosphere Link Explorer(SMILE)will observe magnetosheath and its boundary motion in soft X-rays for understanding magnetopause reconnection modes under various solar wind conditions after their respective launches in 2024 and 2025.Magnetosheath conditions,namely,plasma density,velocity,and temperature,are key parameters for predicting and analyzing soft X-ray images from the LEXI and SMILE missions.We developed a userfriendly model of magnetosheath that parameterizes number density,velocity,temperature,and magnetic field by utilizing the global Magnetohydrodynamics(MHD)model as well as the pre-existing gas-dynamic and analytic models.Using this parameterized magnetosheath model,scientists can easily reconstruct expected soft X-ray images and utilize them for analysis of observed images of LEXI and SMILE without simulating the complicated global magnetosphere models.First,we created an MHD-based magnetosheath model by running a total of 14 OpenGGCM global MHD simulations under 7 solar wind densities(1,5,10,15,20,25,and 30 cm)and 2 interplanetary magnetic field Bz components(±4 nT),and then parameterizing the results in new magnetosheath conditions.We compared the magnetosheath model result with THEMIS statistical data and it showed good agreement with a weighted Pearson correlation coefficient greater than 0.77,especially for plasma density and plasma velocity.Second,we compiled a suite of magnetosheath models incorporating previous magnetosheath models(gas-dynamic,analytic),and did two case studies to test the performance.The MHD-based model was comparable to or better than the previous models while providing self-consistency among the magnetosheath parameters.Third,we constructed a tool to calculate a soft X-ray image from any given vantage point,which can support the planning and data analysis of the aforementioned LEXI and SMILE missions.A release of the code has been uploaded to a Github repository.展开更多
Both the attribution of historical change and future projections of droughts rely heavily on climate modeling. However,reasonable drought simulations have remained a challenge, and the related performances of the curr...Both the attribution of historical change and future projections of droughts rely heavily on climate modeling. However,reasonable drought simulations have remained a challenge, and the related performances of the current state-of-the-art Coupled Model Intercomparison Project phase 6(CMIP6) models remain unknown. Here, both the strengths and weaknesses of CMIP6 models in simulating droughts and corresponding hydrothermal conditions in drylands are assessed.While the general patterns of simulated meteorological elements in drylands resemble the observations, the annual precipitation is overestimated by ~33%(with a model spread of 2.3%–77.2%), along with an underestimation of potential evapotranspiration(PET) by ~32%(17.5%–47.2%). The water deficit condition, measured by the difference between precipitation and PET, is 50%(29.1%–71.7%) weaker than observations. The CMIP6 models show weaknesses in capturing the climate mean drought characteristics in drylands, particularly with the occurrence and duration largely underestimated in the hyperarid Afro-Asian areas. Nonetheless, the drought-associated meteorological anomalies, including reduced precipitation, warmer temperatures, higher evaporative demand, and increased water deficit conditions, are reasonably reproduced. The simulated magnitude of precipitation(water deficit) associated with dryland droughts is overestimated by 28%(24%) compared to observations. The observed increasing trends in drought fractional area,occurrence, and corresponding meteorological anomalies during 1980–2014 are reasonably reproduced. Still, the increase in drought characteristics, associated precipitation and water deficit are obviously underestimated after the late 1990s,especially for mild and moderate droughts, indicative of a weaker response of dryland drought changes to global warming in CMIP6 models. Our results suggest that it is imperative to employ bias correction approaches in drought-related studies over drylands by using CMIP6 outputs.展开更多
Global images of auroras obtained by cameras on spacecraft are a key tool for studying the near-Earth environment.However,the cameras are sensitive not only to auroral emissions produced by precipitating particles,but...Global images of auroras obtained by cameras on spacecraft are a key tool for studying the near-Earth environment.However,the cameras are sensitive not only to auroral emissions produced by precipitating particles,but also to dayglow emissions produced by photoelectrons induced by sunlight.Nightglow emissions and scattered sunlight can contribute to the background signal.To fully utilize such images in space science,background contamination must be removed to isolate the auroral signal.Here we outline a data-driven approach to modeling the background intensity in multiple images by formulating linear inverse problems based on B-splines and spherical harmonics.The approach is robust,flexible,and iteratively deselects outliers,such as auroral emissions.The final model is smooth across the terminator and accounts for slow temporal variations and large-scale asymmetries in the dayglow.We demonstrate the model by using the three far ultraviolet cameras on the Imager for Magnetopause-to-Aurora Global Exploration(IMAGE)mission.The method can be applied to historical missions and is relevant for upcoming missions,such as the Solar wind Magnetosphere Ionosphere Link Explorer(SMILE)mission.展开更多
New fractional operators, the COVID-19 model has been studied in this paper. By using different numericaltechniques and the time fractional parameters, the mechanical characteristics of the fractional order model arei...New fractional operators, the COVID-19 model has been studied in this paper. By using different numericaltechniques and the time fractional parameters, the mechanical characteristics of the fractional order model areidentified. The uniqueness and existence have been established. Themodel’sUlam-Hyers stability analysis has beenfound. In order to justify the theoretical results, numerical simulations are carried out for the presented methodin the range of fractional order to show the implications of fractional and fractal orders.We applied very effectivenumerical techniques to obtain the solutions of themodel and simulations. Also, we present conditions of existencefor a solution to the proposed epidemicmodel and to calculate the reproduction number in certain state conditionsof the analyzed dynamic system. COVID-19 fractional order model for the case of Wuhan, China, is offered foranalysis with simulations in order to determine the possible efficacy of Coronavirus disease transmission in theCommunity. For this reason, we employed the COVID-19 fractal fractional derivative model in the example ofWuhan, China, with the given beginning conditions. In conclusion, again the mathematical models with fractionaloperators can facilitate the improvement of decision-making for measures to be taken in the management of anepidemic situation.展开更多
This work aimed to construct an epidemic model with fuzzy parameters.Since the classical epidemic model doesnot elaborate on the successful interaction of susceptible and infective people,the constructed fuzzy epidemi...This work aimed to construct an epidemic model with fuzzy parameters.Since the classical epidemic model doesnot elaborate on the successful interaction of susceptible and infective people,the constructed fuzzy epidemicmodel discusses the more detailed versions of the interactions between infective and susceptible people.Thenext-generation matrix approach is employed to find the reproduction number of a deterministic model.Thesensitivity analysis and local stability analysis of the systemare also provided.For solving the fuzzy epidemic model,a numerical scheme is constructed which consists of three time levels.The numerical scheme has an advantage overthe existing forward Euler scheme for determining the conditions of getting the positive solution.The establishedscheme also has an advantage over existing non-standard finite difference methods in terms of order of accuracy.The stability of the scheme for the considered fuzzy model is also provided.From the plotted results,it can beobserved that susceptible people decay by rising interaction parameters.展开更多
BACKGROUND Liver transplantation(LT)is a life-saving intervention for patients with end-stage liver disease.However,the equitable allocation of scarce donor organs remains a formidable challenge.Prognostic tools are p...BACKGROUND Liver transplantation(LT)is a life-saving intervention for patients with end-stage liver disease.However,the equitable allocation of scarce donor organs remains a formidable challenge.Prognostic tools are pivotal in identifying the most suitable transplant candidates.Traditionally,scoring systems like the model for end-stage liver disease have been instrumental in this process.Nevertheless,the landscape of prognostication is undergoing a transformation with the integration of machine learning(ML)and artificial intelligence models.AIM To assess the utility of ML models in prognostication for LT,comparing their performance and reliability to established traditional scoring systems.METHODS Following the Preferred Reporting Items for Systematic Reviews and Meta-Analysis guidelines,we conducted a thorough and standardized literature search using the PubMed/MEDLINE database.Our search imposed no restrictions on publication year,age,or gender.Exclusion criteria encompassed non-English studies,review articles,case reports,conference papers,studies with missing data,or those exhibiting evident methodological flaws.RESULTS Our search yielded a total of 64 articles,with 23 meeting the inclusion criteria.Among the selected studies,60.8%originated from the United States and China combined.Only one pediatric study met the criteria.Notably,91%of the studies were published within the past five years.ML models consistently demonstrated satisfactory to excellent area under the receiver operating characteristic curve values(ranging from 0.6 to 1)across all studies,surpassing the performance of traditional scoring systems.Random forest exhibited superior predictive capabilities for 90-d mortality following LT,sepsis,and acute kidney injury(AKI).In contrast,gradient boosting excelled in predicting the risk of graft-versus-host disease,pneumonia,and AKI.CONCLUSION This study underscores the potential of ML models in guiding decisions related to allograft allocation and LT,marking a significant evolution in the field of prognostication.展开更多
Because of the features involved with their varied kernels,differential operators relying on convolution formulations have been acknowledged as effective mathematical resources for modeling real-world issues.In this p...Because of the features involved with their varied kernels,differential operators relying on convolution formulations have been acknowledged as effective mathematical resources for modeling real-world issues.In this paper,we constructed a stochastic fractional framework of measles spreading mechanisms with dual medication immunization considering the exponential decay and Mittag-Leffler kernels.In this approach,the overall population was separated into five cohorts.Furthermore,the descriptive behavior of the system was investigated,including prerequisites for the positivity of solutions,invariant domain of the solution,presence and stability of equilibrium points,and sensitivity analysis.We included a stochastic element in every cohort and employed linear growth and Lipschitz criteria to show the existence and uniqueness of solutions.Several numerical simulations for various fractional orders and randomization intensities are illustrated.展开更多
Rock fragmentation plays a critical role in rock avalanches,yet conventional approaches such as classical granular flow models or the bonded particle model have limitations in accurately characterizing the progressive...Rock fragmentation plays a critical role in rock avalanches,yet conventional approaches such as classical granular flow models or the bonded particle model have limitations in accurately characterizing the progressive disintegration and kinematics of multi-deformable rock blocks during rockslides.The present study proposes a discrete-continuous numerical model,based on a cohesive zone model,to explicitly incorporate the progressive fragmentation and intricate interparticle interactions inherent in rockslides.Breakable rock granular assemblies are released along an inclined plane and flow onto a horizontal plane.The numerical scenarios are established to incorporate variations in slope angle,initial height,friction coefficient,and particle number.The evolutions of fragmentation,kinematic,runout and depositional characteristics are quantitatively analyzed and compared with experimental and field data.A positive linear relationship between the equivalent friction coefficient and the apparent friction coefficient is identified.In general,the granular mass predominantly exhibits characteristics of a dense granular flow,with the Savage number exhibiting a decreasing trend as the volume of mass increases.The process of particle breakage gradually occurs in a bottom-up manner,leading to a significant increase in the angular velocities of the rock blocks with increasing depth.The simulation results reproduce the field observations of inverse grading and source stratigraphy preservation in the deposit.We propose a disintegration index that incorporates factors such as drop height,rock mass volume,and rock strength.Our findings demonstrate a consistent linear relationship between this index and the fragmentation degree in all tested scenarios.展开更多
In the field of natural language processing(NLP),there have been various pre-training language models in recent years,with question answering systems gaining significant attention.However,as algorithms,data,and comput...In the field of natural language processing(NLP),there have been various pre-training language models in recent years,with question answering systems gaining significant attention.However,as algorithms,data,and computing power advance,the issue of increasingly larger models and a growing number of parameters has surfaced.Consequently,model training has become more costly and less efficient.To enhance the efficiency and accuracy of the training process while reducing themodel volume,this paper proposes a first-order pruningmodel PAL-BERT based on the ALBERT model according to the characteristics of question-answering(QA)system and language model.Firstly,a first-order network pruning method based on the ALBERT model is designed,and the PAL-BERT model is formed.Then,the parameter optimization strategy of the PAL-BERT model is formulated,and the Mish function was used as an activation function instead of ReLU to improve the performance.Finally,after comparison experiments with traditional deep learning models TextCNN and BiLSTM,it is confirmed that PALBERT is a pruning model compression method that can significantly reduce training time and optimize training efficiency.Compared with traditional models,PAL-BERT significantly improves the NLP task’s performance.展开更多
Interval model updating(IMU)methods have been widely used in uncertain model updating due to their low requirements for sample data.However,the surrogate model in IMU methods mostly adopts the one-time construction me...Interval model updating(IMU)methods have been widely used in uncertain model updating due to their low requirements for sample data.However,the surrogate model in IMU methods mostly adopts the one-time construction method.This makes the accuracy of the surrogate model highly dependent on the experience of users and affects the accuracy of IMU methods.Therefore,an improved IMU method via the adaptive Kriging models is proposed.This method transforms the objective function of the IMU problem into two deterministic global optimization problems about the upper bound and the interval diameter through universal grey numbers.These optimization problems are addressed through the adaptive Kriging models and the particle swarm optimization(PSO)method to quantify the uncertain parameters,and the IMU is accomplished.During the construction of these adaptive Kriging models,the sample space is gridded according to sensitivity information.Local sampling is then performed in key subspaces based on the maximum mean square error(MMSE)criterion.The interval division coefficient and random sampling coefficient are adaptively adjusted without human interference until the model meets accuracy requirements.The effectiveness of the proposed method is demonstrated by a numerical example of a three-degree-of-freedom mass-spring system and an experimental example of a butted cylindrical shell.The results show that the updated results of the interval model are in good agreement with the experimental results.展开更多
Deterministic compartment models(CMs)and stochastic models,including stochastic CMs and agent-based models,are widely utilized in epidemic modeling.However,the relationship between CMs and their corresponding stochast...Deterministic compartment models(CMs)and stochastic models,including stochastic CMs and agent-based models,are widely utilized in epidemic modeling.However,the relationship between CMs and their corresponding stochastic models is not well understood.The present study aimed to address this gap by conducting a comparative study using the susceptible,exposed,infectious,and recovered(SEIR)model and its extended CMs from the coronavirus disease 2019 modeling literature.We demonstrated the equivalence of the numerical solution of CMs using the Euler scheme and their stochastic counterparts through theoretical analysis and simulations.Based on this equivalence,we proposed an efficient model calibration method that could replicate the exact solution of CMs in the corresponding stochastic models through parameter adjustment.The advancement in calibration techniques enhanced the accuracy of stochastic modeling in capturing the dynamics of epidemics.However,it should be noted that discrete-time stochastic models cannot perfectly reproduce the exact solution of continuous-time CMs.Additionally,we proposed a new stochastic compartment and agent mixed model as an alternative to agent-based models for large-scale population simulations with a limited number of agents.This model offered a balance between computational efficiency and accuracy.The results of this research contributed to the comparison and unification of deterministic CMs and stochastic models in epidemic modeling.Furthermore,the results had implications for the development of hybrid models that integrated the strengths of both frameworks.Overall,the present study has provided valuable epidemic modeling techniques and their practical applications for understanding and controlling the spread of infectious diseases.展开更多
We used the geological map and published rock density measurements to compile the digital rock density model for the Hong Kong territories.We then estimated the average density for the whole territory.According to our...We used the geological map and published rock density measurements to compile the digital rock density model for the Hong Kong territories.We then estimated the average density for the whole territory.According to our result,the rock density values in Hong Kong vary from 2101 to 2681 kg·m^(-3).These density values are typically smaller than the average density of 2670 kg·m^(-3),often adopted to represent the average density of the upper continental crust in physical geodesy and gravimetric geophysics applications.This finding reflects that the geological configuration in Hong Kong is mainly formed by light volcanic formations and lava flows with overlying sedimentary deposits at many locations,while the percentage of heavier metamorphic rocks is very low(less than 1%).This product will improve the accuracy of a detailed geoid model and orthometric heights.展开更多
The tensile-shear interactive damage(TSID)model is a novel and powerful constitutive model for rock-like materials.This study proposes a methodology to calibrate the TSID model parameters to simulate sandstone.The bas...The tensile-shear interactive damage(TSID)model is a novel and powerful constitutive model for rock-like materials.This study proposes a methodology to calibrate the TSID model parameters to simulate sandstone.The basic parameters of sandstone are determined through a series of static and dynamic tests,including uniaxial compression,Brazilian disc,triaxial compression under varying confining pressures,hydrostatic compression,and dynamic compression and tensile tests with a split Hopkinson pressure bar.Based on the sandstone test results from this study and previous research,a step-by-step procedure for parameter calibration is outlined,which accounts for the categories of the strength surface,equation of state(EOS),strain rate effect,and damage.The calibrated parameters are verified through numerical tests that correspond to the experimental loading conditions.Consistency between numerical results and experimental data indicates the precision and reliability of the calibrated parameters.The methodology presented in this study is scientifically sound,straightforward,and essential for improving the TSID model.Furthermore,it has the potential to contribute to other rock constitutive models,particularly new user-defined models.展开更多
文摘This paper presents the results of Rainfall-Runoff modeling and simulation of hydrological responses under changing climate using HEC-HMS model. The basin spatial data was processed by HEC-GeoHMS and imported to HEC-HMS. The calibration and validation of the HEC-HMS model was done using the observed hydrometeorological data (1989-2018) and HEC-GeoHMS output data. The goodness-of-fit of the model was measured using three performance indices: Nash and Sutcliffe coefficient (NSE) = 0.8, Coefficient of Determination (R<sup>2</sup>) = 0.8, and Percent Difference (D) = 0.03, with values showing very good performance of the model. Finally, the optimized HEC-HMS model has been applied to simulate the hydrological responses of Upper Baro Basin to the projected climate change for mid-term (2040s) and long-term (2090s) A1B emission scenarios. The simulation results have shown a mean annual percent decrease of 3.6 and an increase of 8.1 for Baro River flow in the 2040s and 2090s scenarios, respectively, compared to the baseline period (2000s). A pronounced flow variation is rather observed on a seasonal basis, reaching a reduction of 50% in spring and an increase of 50% in autumn for both mid-term and long-term scenarios with respect to the base period. Generally, the rainfall-runoff model is developed to solve, in a complementary way, the two main problems in water resources management: the lack of gauged sites and future hydrological response to climate change data of the basin and the region in general. The study results imply that seasonal and time variation in the hydrologic cycle would most likely cause hydrologic extremes. And hence, the developed model and output data are of paramount importance for adaptive strategies and sustainable water resources development in the basin.
文摘The Artificial Neural Network (ANN) approach has been successfully used in many hydrological studies especially the rainfall-runoff modeling using continuous data. The present study examines its applicability to model the event-based rainfall-runoff process. A case study has been done for Ajay river basin to develop event-based rainfall-runoff model for the basin to simulate the hourly runoff at Sarath gauging site. The results demonstrate that ANN models are able to provide a good representation of an event-based rainfall-runoff process. The two important parameters, when predicting a flood hydrograph, are the magnitude of the peak discharge and the time to peak discharge. The developed ANN models have been able to predict this information with great accuracy. This shows that ANNs can be very efficient in modeling an event-based rainfall-runoff process for determining the peak discharge and time to the peak discharge very accurately. This is important in water resources design and management applications, where peak discharge and time to peak discharge are important input
文摘Hydrological models are considered as necessary tools for water and environmental resource management. However, modelling poorly gauged watersheds has been a challenge to hydrologists and hydraulic engineers. Research done recently has shown the potential to overcome this challenge through incorporating satellite based hydrological and meteorological data in the measured data. This paper presents results for a study that used the semi-distributed conceptual HBV Light Model to model the rainfall-runoff in the Mara River Basin, Kenya. The model simulates runoff as a function of rainfall. It is built on the basis established between satellite observed and in-situ rainfall, evaporation, temperature and the measured runoff. The model’s performance and reliability were evaluated over two sub-catchments namely: Nyangores and Amala in the Mara River Basin using the Nash-Sutcliffe Efficiency which the model referred to as Reff and the coefficient of determination (R2). The Reff for Nyangores and Amala during the calibration and (validation) period were 0.65 (0.68) and 0.59 (0.62) respectively. The model showed good flow simulations particularly during the recession flows, in the Nyangores sub-catchment whereas it simulated poorly the short term fluctuations of the high-flow for Amala sub-catchment. Results from this study can be used by water resources managers to make informed decision on planning and management of water resources.
文摘The process of transformation of rainfall into runoff over a catchment is very complex and highly nonlinear and exhibits both tempor al and spatial variabilities. In this article, a rainfall-runoff model using th e artificial neural networks (ANN) is proposed for simula ting the runoff in storm events. The study uses the data from a coa stal forest catchment located in Seto Inland Sea, Japan. This article studies the accuracy of the short-term rainfall forecast obta ined by ANN time-series analysis techniques and using antecedent rainfa ll depths and stream flow as the input information. The verification results from the proposed model indicate that the approach of ANN rai nfall-runoff model presented in this paper shows a reasonable agreement in rainfall-runoff modeling with high accuracy.
基金supported by the NSF grant AGS-1928883the NASA grants,80NSSC20K1670 and 80MSFC20C0019+2 种基金support from NASA GSFC IRADHIFISFM funds。
文摘Lunar Environment heliospheric X-ray Imager(LEXI)and Solar wind−Magnetosphere−Ionosphere Link Explorer(SMILE)will observe magnetosheath and its boundary motion in soft X-rays for understanding magnetopause reconnection modes under various solar wind conditions after their respective launches in 2024 and 2025.Magnetosheath conditions,namely,plasma density,velocity,and temperature,are key parameters for predicting and analyzing soft X-ray images from the LEXI and SMILE missions.We developed a userfriendly model of magnetosheath that parameterizes number density,velocity,temperature,and magnetic field by utilizing the global Magnetohydrodynamics(MHD)model as well as the pre-existing gas-dynamic and analytic models.Using this parameterized magnetosheath model,scientists can easily reconstruct expected soft X-ray images and utilize them for analysis of observed images of LEXI and SMILE without simulating the complicated global magnetosphere models.First,we created an MHD-based magnetosheath model by running a total of 14 OpenGGCM global MHD simulations under 7 solar wind densities(1,5,10,15,20,25,and 30 cm)and 2 interplanetary magnetic field Bz components(±4 nT),and then parameterizing the results in new magnetosheath conditions.We compared the magnetosheath model result with THEMIS statistical data and it showed good agreement with a weighted Pearson correlation coefficient greater than 0.77,especially for plasma density and plasma velocity.Second,we compiled a suite of magnetosheath models incorporating previous magnetosheath models(gas-dynamic,analytic),and did two case studies to test the performance.The MHD-based model was comparable to or better than the previous models while providing self-consistency among the magnetosheath parameters.Third,we constructed a tool to calculate a soft X-ray image from any given vantage point,which can support the planning and data analysis of the aforementioned LEXI and SMILE missions.A release of the code has been uploaded to a Github repository.
基金supported by Ministry of Science and Technology of China (Grant No. 2018YFA0606501)National Natural Science Foundation of China (Grant No. 42075037)+1 种基金Key Laboratory Open Research Program of Xinjiang Science and Technology Department (Grant No. 2022D04009)the National Key Scientific and Technological Infrastructure project “Earth System Numerical Simulation Facility” (EarthLab)。
文摘Both the attribution of historical change and future projections of droughts rely heavily on climate modeling. However,reasonable drought simulations have remained a challenge, and the related performances of the current state-of-the-art Coupled Model Intercomparison Project phase 6(CMIP6) models remain unknown. Here, both the strengths and weaknesses of CMIP6 models in simulating droughts and corresponding hydrothermal conditions in drylands are assessed.While the general patterns of simulated meteorological elements in drylands resemble the observations, the annual precipitation is overestimated by ~33%(with a model spread of 2.3%–77.2%), along with an underestimation of potential evapotranspiration(PET) by ~32%(17.5%–47.2%). The water deficit condition, measured by the difference between precipitation and PET, is 50%(29.1%–71.7%) weaker than observations. The CMIP6 models show weaknesses in capturing the climate mean drought characteristics in drylands, particularly with the occurrence and duration largely underestimated in the hyperarid Afro-Asian areas. Nonetheless, the drought-associated meteorological anomalies, including reduced precipitation, warmer temperatures, higher evaporative demand, and increased water deficit conditions, are reasonably reproduced. The simulated magnitude of precipitation(water deficit) associated with dryland droughts is overestimated by 28%(24%) compared to observations. The observed increasing trends in drought fractional area,occurrence, and corresponding meteorological anomalies during 1980–2014 are reasonably reproduced. Still, the increase in drought characteristics, associated precipitation and water deficit are obviously underestimated after the late 1990s,especially for mild and moderate droughts, indicative of a weaker response of dryland drought changes to global warming in CMIP6 models. Our results suggest that it is imperative to employ bias correction approaches in drought-related studies over drylands by using CMIP6 outputs.
基金supported by the Research Council of Norway under contracts 223252/F50 and 300844/F50the Trond Mohn Foundation。
文摘Global images of auroras obtained by cameras on spacecraft are a key tool for studying the near-Earth environment.However,the cameras are sensitive not only to auroral emissions produced by precipitating particles,but also to dayglow emissions produced by photoelectrons induced by sunlight.Nightglow emissions and scattered sunlight can contribute to the background signal.To fully utilize such images in space science,background contamination must be removed to isolate the auroral signal.Here we outline a data-driven approach to modeling the background intensity in multiple images by formulating linear inverse problems based on B-splines and spherical harmonics.The approach is robust,flexible,and iteratively deselects outliers,such as auroral emissions.The final model is smooth across the terminator and accounts for slow temporal variations and large-scale asymmetries in the dayglow.We demonstrate the model by using the three far ultraviolet cameras on the Imager for Magnetopause-to-Aurora Global Exploration(IMAGE)mission.The method can be applied to historical missions and is relevant for upcoming missions,such as the Solar wind Magnetosphere Ionosphere Link Explorer(SMILE)mission.
基金Lucian Blaga University of Sibiu&Hasso Plattner Foundation Research Grants LBUS-IRG-2020-06.
文摘New fractional operators, the COVID-19 model has been studied in this paper. By using different numericaltechniques and the time fractional parameters, the mechanical characteristics of the fractional order model areidentified. The uniqueness and existence have been established. Themodel’sUlam-Hyers stability analysis has beenfound. In order to justify the theoretical results, numerical simulations are carried out for the presented methodin the range of fractional order to show the implications of fractional and fractal orders.We applied very effectivenumerical techniques to obtain the solutions of themodel and simulations. Also, we present conditions of existencefor a solution to the proposed epidemicmodel and to calculate the reproduction number in certain state conditionsof the analyzed dynamic system. COVID-19 fractional order model for the case of Wuhan, China, is offered foranalysis with simulations in order to determine the possible efficacy of Coronavirus disease transmission in theCommunity. For this reason, we employed the COVID-19 fractal fractional derivative model in the example ofWuhan, China, with the given beginning conditions. In conclusion, again the mathematical models with fractionaloperators can facilitate the improvement of decision-making for measures to be taken in the management of anepidemic situation.
基金the support of Prince Sultan University for paying the article processing charges(APC)of this publication.
文摘This work aimed to construct an epidemic model with fuzzy parameters.Since the classical epidemic model doesnot elaborate on the successful interaction of susceptible and infective people,the constructed fuzzy epidemicmodel discusses the more detailed versions of the interactions between infective and susceptible people.Thenext-generation matrix approach is employed to find the reproduction number of a deterministic model.Thesensitivity analysis and local stability analysis of the systemare also provided.For solving the fuzzy epidemic model,a numerical scheme is constructed which consists of three time levels.The numerical scheme has an advantage overthe existing forward Euler scheme for determining the conditions of getting the positive solution.The establishedscheme also has an advantage over existing non-standard finite difference methods in terms of order of accuracy.The stability of the scheme for the considered fuzzy model is also provided.From the plotted results,it can beobserved that susceptible people decay by rising interaction parameters.
文摘BACKGROUND Liver transplantation(LT)is a life-saving intervention for patients with end-stage liver disease.However,the equitable allocation of scarce donor organs remains a formidable challenge.Prognostic tools are pivotal in identifying the most suitable transplant candidates.Traditionally,scoring systems like the model for end-stage liver disease have been instrumental in this process.Nevertheless,the landscape of prognostication is undergoing a transformation with the integration of machine learning(ML)and artificial intelligence models.AIM To assess the utility of ML models in prognostication for LT,comparing their performance and reliability to established traditional scoring systems.METHODS Following the Preferred Reporting Items for Systematic Reviews and Meta-Analysis guidelines,we conducted a thorough and standardized literature search using the PubMed/MEDLINE database.Our search imposed no restrictions on publication year,age,or gender.Exclusion criteria encompassed non-English studies,review articles,case reports,conference papers,studies with missing data,or those exhibiting evident methodological flaws.RESULTS Our search yielded a total of 64 articles,with 23 meeting the inclusion criteria.Among the selected studies,60.8%originated from the United States and China combined.Only one pediatric study met the criteria.Notably,91%of the studies were published within the past five years.ML models consistently demonstrated satisfactory to excellent area under the receiver operating characteristic curve values(ranging from 0.6 to 1)across all studies,surpassing the performance of traditional scoring systems.Random forest exhibited superior predictive capabilities for 90-d mortality following LT,sepsis,and acute kidney injury(AKI).In contrast,gradient boosting excelled in predicting the risk of graft-versus-host disease,pneumonia,and AKI.CONCLUSION This study underscores the potential of ML models in guiding decisions related to allograft allocation and LT,marking a significant evolution in the field of prognostication.
文摘Because of the features involved with their varied kernels,differential operators relying on convolution formulations have been acknowledged as effective mathematical resources for modeling real-world issues.In this paper,we constructed a stochastic fractional framework of measles spreading mechanisms with dual medication immunization considering the exponential decay and Mittag-Leffler kernels.In this approach,the overall population was separated into five cohorts.Furthermore,the descriptive behavior of the system was investigated,including prerequisites for the positivity of solutions,invariant domain of the solution,presence and stability of equilibrium points,and sensitivity analysis.We included a stochastic element in every cohort and employed linear growth and Lipschitz criteria to show the existence and uniqueness of solutions.Several numerical simulations for various fractional orders and randomization intensities are illustrated.
基金support from the National Key R&D plan(Grant No.2022YFC3004303)the National Natural Science Foundation of China(Grant No.42107161)+3 种基金the State Key Laboratory of Hydroscience and Hydraulic Engineering(Grant No.2021-KY-04)the Open Research Fund Program of State Key Laboratory of Hydroscience and Engineering(sklhse-2023-C-01)the Open Research Fund Program of Key Laboratory of the Hydrosphere of the Ministry of Water Resources(mklhs-2023-04)the China Three Gorges Corporation(XLD/2117).
文摘Rock fragmentation plays a critical role in rock avalanches,yet conventional approaches such as classical granular flow models or the bonded particle model have limitations in accurately characterizing the progressive disintegration and kinematics of multi-deformable rock blocks during rockslides.The present study proposes a discrete-continuous numerical model,based on a cohesive zone model,to explicitly incorporate the progressive fragmentation and intricate interparticle interactions inherent in rockslides.Breakable rock granular assemblies are released along an inclined plane and flow onto a horizontal plane.The numerical scenarios are established to incorporate variations in slope angle,initial height,friction coefficient,and particle number.The evolutions of fragmentation,kinematic,runout and depositional characteristics are quantitatively analyzed and compared with experimental and field data.A positive linear relationship between the equivalent friction coefficient and the apparent friction coefficient is identified.In general,the granular mass predominantly exhibits characteristics of a dense granular flow,with the Savage number exhibiting a decreasing trend as the volume of mass increases.The process of particle breakage gradually occurs in a bottom-up manner,leading to a significant increase in the angular velocities of the rock blocks with increasing depth.The simulation results reproduce the field observations of inverse grading and source stratigraphy preservation in the deposit.We propose a disintegration index that incorporates factors such as drop height,rock mass volume,and rock strength.Our findings demonstrate a consistent linear relationship between this index and the fragmentation degree in all tested scenarios.
基金Supported by Sichuan Science and Technology Program(2021YFQ0003,2023YFSY0026,2023YFH0004).
文摘In the field of natural language processing(NLP),there have been various pre-training language models in recent years,with question answering systems gaining significant attention.However,as algorithms,data,and computing power advance,the issue of increasingly larger models and a growing number of parameters has surfaced.Consequently,model training has become more costly and less efficient.To enhance the efficiency and accuracy of the training process while reducing themodel volume,this paper proposes a first-order pruningmodel PAL-BERT based on the ALBERT model according to the characteristics of question-answering(QA)system and language model.Firstly,a first-order network pruning method based on the ALBERT model is designed,and the PAL-BERT model is formed.Then,the parameter optimization strategy of the PAL-BERT model is formulated,and the Mish function was used as an activation function instead of ReLU to improve the performance.Finally,after comparison experiments with traditional deep learning models TextCNN and BiLSTM,it is confirmed that PALBERT is a pruning model compression method that can significantly reduce training time and optimize training efficiency.Compared with traditional models,PAL-BERT significantly improves the NLP task’s performance.
基金Project supported by the National Natural Science Foundation of China(Nos.12272211,12072181,12121002)。
文摘Interval model updating(IMU)methods have been widely used in uncertain model updating due to their low requirements for sample data.However,the surrogate model in IMU methods mostly adopts the one-time construction method.This makes the accuracy of the surrogate model highly dependent on the experience of users and affects the accuracy of IMU methods.Therefore,an improved IMU method via the adaptive Kriging models is proposed.This method transforms the objective function of the IMU problem into two deterministic global optimization problems about the upper bound and the interval diameter through universal grey numbers.These optimization problems are addressed through the adaptive Kriging models and the particle swarm optimization(PSO)method to quantify the uncertain parameters,and the IMU is accomplished.During the construction of these adaptive Kriging models,the sample space is gridded according to sensitivity information.Local sampling is then performed in key subspaces based on the maximum mean square error(MMSE)criterion.The interval division coefficient and random sampling coefficient are adaptively adjusted without human interference until the model meets accuracy requirements.The effectiveness of the proposed method is demonstrated by a numerical example of a three-degree-of-freedom mass-spring system and an experimental example of a butted cylindrical shell.The results show that the updated results of the interval model are in good agreement with the experimental results.
基金supported by the National Natural Science Foundation of China(Grant Nos.82173620 to Yang Zhao and 82041024 to Feng Chen)partially supported by the Bill&Melinda Gates Foundation(Grant No.INV-006371 to Feng Chen)Priority Academic Program Development of Jiangsu Higher Education Institutions.
文摘Deterministic compartment models(CMs)and stochastic models,including stochastic CMs and agent-based models,are widely utilized in epidemic modeling.However,the relationship between CMs and their corresponding stochastic models is not well understood.The present study aimed to address this gap by conducting a comparative study using the susceptible,exposed,infectious,and recovered(SEIR)model and its extended CMs from the coronavirus disease 2019 modeling literature.We demonstrated the equivalence of the numerical solution of CMs using the Euler scheme and their stochastic counterparts through theoretical analysis and simulations.Based on this equivalence,we proposed an efficient model calibration method that could replicate the exact solution of CMs in the corresponding stochastic models through parameter adjustment.The advancement in calibration techniques enhanced the accuracy of stochastic modeling in capturing the dynamics of epidemics.However,it should be noted that discrete-time stochastic models cannot perfectly reproduce the exact solution of continuous-time CMs.Additionally,we proposed a new stochastic compartment and agent mixed model as an alternative to agent-based models for large-scale population simulations with a limited number of agents.This model offered a balance between computational efficiency and accuracy.The results of this research contributed to the comparison and unification of deterministic CMs and stochastic models in epidemic modeling.Furthermore,the results had implications for the development of hybrid models that integrated the strengths of both frameworks.Overall,the present study has provided valuable epidemic modeling techniques and their practical applications for understanding and controlling the spread of infectious diseases.
基金supported by the Hong Kong GRF RGC project 15217222:“Modernization of the leveling network in the Hong Kong territories.”。
文摘We used the geological map and published rock density measurements to compile the digital rock density model for the Hong Kong territories.We then estimated the average density for the whole territory.According to our result,the rock density values in Hong Kong vary from 2101 to 2681 kg·m^(-3).These density values are typically smaller than the average density of 2670 kg·m^(-3),often adopted to represent the average density of the upper continental crust in physical geodesy and gravimetric geophysics applications.This finding reflects that the geological configuration in Hong Kong is mainly formed by light volcanic formations and lava flows with overlying sedimentary deposits at many locations,while the percentage of heavier metamorphic rocks is very low(less than 1%).This product will improve the accuracy of a detailed geoid model and orthometric heights.
基金funded by the National Natural Science Foundation of China(Grant No.12272247)National Key Project(Grant No.GJXM92579)Major Research and Development Project of Metallurgical Corporation of China Ltd.in the Non-Steel Field(Grant No.2021-5).
文摘The tensile-shear interactive damage(TSID)model is a novel and powerful constitutive model for rock-like materials.This study proposes a methodology to calibrate the TSID model parameters to simulate sandstone.The basic parameters of sandstone are determined through a series of static and dynamic tests,including uniaxial compression,Brazilian disc,triaxial compression under varying confining pressures,hydrostatic compression,and dynamic compression and tensile tests with a split Hopkinson pressure bar.Based on the sandstone test results from this study and previous research,a step-by-step procedure for parameter calibration is outlined,which accounts for the categories of the strength surface,equation of state(EOS),strain rate effect,and damage.The calibrated parameters are verified through numerical tests that correspond to the experimental loading conditions.Consistency between numerical results and experimental data indicates the precision and reliability of the calibrated parameters.The methodology presented in this study is scientifically sound,straightforward,and essential for improving the TSID model.Furthermore,it has the potential to contribute to other rock constitutive models,particularly new user-defined models.