The spread of an advantageous mutation through a population is of fundamental interest in population genetics. While the classical Moran model is formulated for a well-mixed population, it has long been recognized that in real-world applications, the population usually has an explicit spatial structure which can significantly influence the dynamics. In the context of cancer initiation in epithelial tissue, several recent works have analyzed the dynamics of advantageous mutant spread on integer lattices, using the biased voter model from particle systems theory. In this spatial version of the Moran model, individuals first reproduce according to their fitness and then replace a neighboring individual. From a biological standpoint, the opposite dynamics, where individuals first die and are then replaced by a neighboring individual according to its fitness, are equally relevant. Here, we investigate this death-birth analogue of the biased voter model. We construct the process mathematically, derive the associated dual process, establish bounds on the survival probability of a single mutant, and prove that the process has an asymptotic shape. We also briefly discuss alternative birth-death and death-birth dynamics, depending on how the mutant fitness advantage affects the dynamics. We show that birth-death and death-birth formulations of the biased voter model are equivalent when fitness affects the former event of each update of the model, whereas the birth-death model is fundamentally different from the death-birth model when fitness affects the latter event.
Precipitous Arctic sea-ice decline and the corresponding increase in Arctic open-water areas in summer months give more space for sea-ice growth in the subsequent cold seasons. Compared to the decline of the entire Arctic multiyear sea ice, changes in newly formed sea ice indicate more thermodynamic and dynamic information on Arctic atmosphere–ocean–ice interaction and northern mid–high latitude atmospheric teleconnections. Here, we use a large multimodel ensemble from phase 6 of the Coupled Model Intercomparison Project (CMIP6) to investigate future changes in wintertime newly formed Arctic sea ice. The commonly used model-democracy approach that gives equal weight to each model essentially assumes that all models are independent and equally plausible, which contradicts the fact that there are large interdependencies in the ensemble and discrepancies in models' performances in reproducing observations. Therefore, instead of using the arithmetic mean of well-performing models or all available models for projections as in previous studies, we employ a newly developed model weighting scheme that weights all models in the ensemble according to their performance and independence to provide more reliable projections. Model democracy leads to evident bias and large intermodel spread in CMIP6 projections of newly formed Arctic sea ice. However, we show that both the bias and the intermodel spread can be effectively reduced by the weighting scheme.
Projections from the weighted models indicate that wintertime newly formed Arctic sea ice is likely to increase dramatically until the middle of this century regardless of the emissions scenario. Thereafter, it may decrease (or remain stable) if the Arctic warming crosses a threshold (or is extensively constrained).
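A performance-and-independence weighting of this kind is commonly written with exponential distance terms: a model's weight rewards closeness to observations and is divided down by its similarity to other ensemble members. The sketch below uses that general form; the distances, shape parameters, and toy ensemble are illustrative assumptions, not values from the study.

```python
import numpy as np

def weight_models(D, S, sigma_d, sigma_s):
    """Performance-and-independence weights for a model ensemble.

    D : (n,) distance of each model to observations (performance term)
    S : (n, n) pairwise inter-model distances (independence term)
    sigma_d, sigma_s : shape parameters controlling how strongly poor
    performance and model dependence are penalized (tuning choices).
    """
    D = np.asarray(D, dtype=float)
    S = np.asarray(S, dtype=float)
    performance = np.exp(-(D / sigma_d) ** 2)
    n = len(D)
    # similarity to every *other* model inflates the denominator
    dependence = np.array([
        1.0 + sum(np.exp(-(S[i, j] / sigma_s) ** 2) for j in range(n) if j != i)
        for i in range(n)
    ])
    w = performance / dependence
    return w / w.sum()  # normalize so the weights sum to 1

# Toy ensemble: model 0 matches observations best; models 1 and 2 are near-duplicates.
D = [0.1, 0.5, 0.5]
S = [[0.0, 1.0, 1.0],
     [1.0, 0.0, 0.05],
     [1.0, 0.05, 0.0]]
w = weight_models(D, S, sigma_d=0.5, sigma_s=0.5)
print(w)  # model 0 gets the largest weight; 1 and 2 share down-weighted mass
```

A weighted projection is then the weighted mean of the models' projected fields, and the weighted spread shrinks relative to the one-model-one-vote ensemble.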
This article elucidates the concept of large model technology, summarizes the research status of large model technology both domestically and internationally, provides an overview of the application status of large models in vertical industries, outlines the challenges and issues confronted in applying large models in the oil and gas sector, and offers prospects for the application of large models in the oil and gas industry. The existing large models can be broadly divided into three categories: large language models, visual large models, and multimodal large models. The application of large models in the oil and gas industry is still in its infancy. Based on open-source large language models, some oil and gas enterprises have released large language model products using methods like fine-tuning and retrieval-augmented generation. Scholars have attempted to develop scenario-specific models for oil and gas operations by using visual/multimodal foundation models. A few researchers have constructed pre-trained foundation models for seismic data processing and interpretation, as well as core analysis. The application of large models in the oil and gas industry faces challenges such as data quantity and quality that are currently insufficient to support the training of large models, high research and development costs, and poor algorithm autonomy and controllability. The application of large models should be guided by the needs of the oil and gas business: taking the application of large models as an opportunity to improve data lifecycle management, enhance data governance capabilities, promote the construction of computing power, strengthen the construction of “artificial intelligence + energy” composite teams, and boost the autonomy and controllability of large model technology.
Since the 1950s, when the Turing Test was introduced, there has been notable progress in machine language intelligence. Language modeling, crucial for AI development, has evolved from statistical to neural models over the last two decades. Recently, transformer-based Pre-trained Language Models (PLMs) have excelled in Natural Language Processing (NLP) tasks by leveraging large-scale training corpora. Increasing the scale of these models enhances performance significantly, introducing abilities like in-context learning that smaller models lack. The advancement in Large Language Models, exemplified by the development of ChatGPT, has made significant impacts both academically and industrially, capturing widespread societal interest. This survey provides an overview of the development and prospects from Large Language Models (LLMs) to Large Multimodal Models (LMMs). It first discusses the contributions and technological advancements of LLMs in the field of natural language processing, especially in text generation and language understanding. Then, it turns to LMMs, which integrate various data modalities such as text, images, and sound, demonstrating advanced capabilities in understanding and generating cross-modal content and paving new pathways for the adaptability and flexibility of AI systems. Finally, the survey highlights the prospects of LMMs in terms of technological development and application potential, while also pointing out challenges in data integration and cross-modal understanding accuracy, providing a comprehensive perspective on the latest developments in this field.
In this paper, we present an improved circuit model for single-photon avalanche diodes (SPADs) without any convergence problems. The device simulation is based on OrCAD PSpice, and all the employed components are available in the standard library of the software. In particular, an intuitive and simple voltage-controlled current source is adopted to characterize the static behavior, which represents the voltage-current relationship better than the traditional model and reduces the computational complexity of the simulation. The derived model can implement the self-sustaining, self-quenching, and recovery processes of the SPAD. The simulation results show that the model can well emulate the avalanche process of the SPAD.
Rock fragmentation plays a critical role in rock avalanches, yet conventional approaches such as classical granular flow models or the bonded particle model have limitations in accurately characterizing the progressive disintegration and kinematics of multi-deformable rock blocks during rockslides. The present study proposes a discrete-continuous numerical model, based on a cohesive zone model, to explicitly incorporate the progressive fragmentation and intricate interparticle interactions inherent in rockslides. Breakable rock granular assemblies are released along an inclined plane and flow onto a horizontal plane. The numerical scenarios are established to incorporate variations in slope angle, initial height, friction coefficient, and particle number. The evolutions of fragmentation, kinematic, runout, and depositional characteristics are quantitatively analyzed and compared with experimental and field data. A positive linear relationship between the equivalent friction coefficient and the apparent friction coefficient is identified. In general, the granular mass predominantly exhibits characteristics of a dense granular flow, with the Savage number exhibiting a decreasing trend as the volume of mass increases. The process of particle breakage gradually occurs in a bottom-up manner, leading to a significant increase in the angular velocities of the rock blocks with increasing depth. The simulation results reproduce the field observations of inverse grading and source stratigraphy preservation in the deposit. We propose a disintegration index that incorporates factors such as drop height, rock mass volume, and rock strength. Our findings demonstrate a consistent linear relationship between this index and the fragmentation degree in all tested scenarios.
Cyclic loads generated by environmental factors, such as winds, waves, and trains, will likely lead to performance degradation in pile foundations, resulting in issues like permanent displacement accumulation and bearing capacity attenuation. This paper presents a semi-analytical solution for predicting the axial cyclic behavior of piles in sands. The solution relies on two enhanced nonlinear load-transfer models considering stress-strain hysteresis and cyclic degradation in the pile-soil interaction. Model parameters are calibrated through cyclic shear tests of the sand-steel interface and laboratory geotechnical testing of sands. A novel aspect involves the meticulous formulation of the shaft load-transfer function using an interface constitutive model, which inherently inherits the interface model's advantages, such as capturing hysteresis, hardening, degradation, and particle breakage. The semi-analytical solution is computed numerically using the matrix displacement method, and the calculated values are validated through model tests performed on non-displacement and displacement piles in sands. The results demonstrate that the predicted values show excellent agreement with the measured values for both the static and cyclic responses of piles in sands. The displacement pile response, including factors such as bearing capacity, mobilized shaft resistance, and convergence rate of permanent settlement, exhibits improvements compared to non-displacement piles, attributed to the soil squeezing effect. This methodology presents an innovative analytical framework, allowing for the integration of cyclic interface models into the theoretical investigation of pile responses.
In the field of natural language processing (NLP), various pre-trained language models have appeared in recent years, with question answering systems gaining significant attention. However, as algorithms, data, and computing power advance, the issue of increasingly larger models and a growing number of parameters has surfaced. Consequently, model training has become more costly and less efficient. To enhance the efficiency and accuracy of the training process while reducing the model volume, this paper proposes a first-order pruning model, PAL-BERT, based on the ALBERT model, according to the characteristics of question-answering (QA) systems and language models. Firstly, a first-order network pruning method based on the ALBERT model is designed, and the PAL-BERT model is formed. Then, the parameter optimization strategy of the PAL-BERT model is formulated, and the Mish function is used as the activation function instead of ReLU to improve performance. Finally, comparison experiments with the traditional deep learning models TextCNN and BiLSTM confirm that PAL-BERT is a pruning model compression method that can significantly reduce training time and optimize training efficiency. Compared with traditional models, PAL-BERT significantly improves NLP task performance.
Deterministic compartment models (CMs) and stochastic models, including stochastic CMs and agent-based models, are widely utilized in epidemic modeling. However, the relationship between CMs and their corresponding stochastic models is not well understood. The present study aimed to address this gap by conducting a comparative study using the susceptible, exposed, infectious, and recovered (SEIR) model and its extended CMs from the coronavirus disease 2019 modeling literature. We demonstrated the equivalence of the numerical solution of CMs using the Euler scheme and their stochastic counterparts through theoretical analysis and simulations. Based on this equivalence, we proposed an efficient model calibration method that could replicate the exact solution of CMs in the corresponding stochastic models through parameter adjustment. The advancement in calibration techniques enhanced the accuracy of stochastic modeling in capturing the dynamics of epidemics. However, it should be noted that discrete-time stochastic models cannot perfectly reproduce the exact solution of continuous-time CMs. Additionally, we proposed a new stochastic compartment and agent mixed model as an alternative to agent-based models for large-scale population simulations with a limited number of agents. This model offered a balance between computational efficiency and accuracy. The results of this research contributed to the comparison and unification of deterministic CMs and stochastic models in epidemic modeling. Furthermore, the results had implications for the development of hybrid models that integrated the strengths of both frameworks. Overall, the present study has provided valuable epidemic modeling techniques and their practical applications for understanding and controlling the spread of infectious diseases.
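The equivalence discussed above can be illustrated with a minimal SEIR sketch (the parameter values are hypothetical, not those of the study): the explicit Euler update uses exactly the per-step transition fractions that define a chain-binomial stochastic counterpart, so the stochastic transitions match the deterministic flows in expectation.

```python
import random

def seir_euler_step(S, E, I, R, beta, sigma, gamma, dt):
    """One explicit-Euler step of the deterministic SEIR model."""
    N = S + E + I + R
    dSE = beta * S * I / N * dt   # new exposures
    dEI = sigma * E * dt          # progressions to infectious
    dIR = gamma * I * dt          # recoveries
    return S - dSE, E + dSE - dEI, I + dEI - dIR, R + dIR

def seir_binomial_step(S, E, I, R, beta, sigma, gamma, dt, rng):
    """Chain-binomial counterpart: each individual moves with the same
    per-step probability that parameterizes the Euler flow, so the
    expected transitions equal the deterministic ones."""
    N = S + E + I + R
    nSE = sum(rng.random() < beta * I / N * dt for _ in range(S))
    nEI = sum(rng.random() < sigma * dt for _ in range(E))
    nIR = sum(rng.random() < gamma * dt for _ in range(I))
    return S - nSE, E + nSE - nEI, I + nEI - nIR, R + nIR

# deterministic trajectory: integrate to t = 100 with dt = 0.1
state = (990.0, 0.0, 10.0, 0.0)
for _ in range(1000):
    state = seir_euler_step(*state, beta=0.3, sigma=0.2, gamma=0.1, dt=0.1)
print(round(state[3]))  # most of the population ends in R

# one stochastic step from the same initial condition
rng = random.Random(42)
s = seir_binomial_step(990, 0, 10, 0, beta=0.3, sigma=0.2, gamma=0.1, dt=0.1, rng=rng)
assert sum(s) == 1000   # population is conserved exactly
```

Note that the chain-binomial step matches the Euler step only in expectation per step; over a full trajectory the discrete-time stochastic model fluctuates around, and does not exactly reproduce, the continuous-time solution.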
Interval model updating (IMU) methods have been widely used in uncertain model updating due to their low requirements for sample data. However, the surrogate model in IMU methods mostly adopts the one-time construction method. This makes the accuracy of the surrogate model highly dependent on the experience of users and affects the accuracy of IMU methods. Therefore, an improved IMU method via adaptive Kriging models is proposed. This method transforms the objective function of the IMU problem into two deterministic global optimization problems about the upper bound and the interval diameter through universal grey numbers. These optimization problems are addressed through the adaptive Kriging models and the particle swarm optimization (PSO) method to quantify the uncertain parameters, and the IMU is accomplished. During the construction of these adaptive Kriging models, the sample space is gridded according to sensitivity information. Local sampling is then performed in key subspaces based on the maximum mean square error (MMSE) criterion. The interval division coefficient and random sampling coefficient are adaptively adjusted without human interference until the model meets accuracy requirements. The effectiveness of the proposed method is demonstrated by a numerical example of a three-degree-of-freedom mass-spring system and an experimental example of a butted cylindrical shell. The results show that the updated results of the interval model are in good agreement with the experimental results.
To ensure agreement between theoretical calculations and experimental data, parameters of selected nuclear physics models are perturbed and fine-tuned in nuclear data evaluations. This approach assumes that the chosen set of models accurately represents the ‘true’ distribution of the considered observables. Furthermore, the models are chosen globally, indicating their applicability across the entire energy range of interest. However, this approach overlooks uncertainties inherent in the models themselves. In this work, we propose that instead of globally selecting a winning model set and proceeding with it as if it were the ‘true’ model set, we instead take a weighted average over multiple models within a Bayesian model averaging (BMA) framework, each weighted by its posterior probability. The method involves executing a set of TALYS calculations by randomly varying multiple nuclear physics models and their parameters to yield a vector of calculated observables. Computed likelihood function values at each incident energy point were then combined with the prior distributions to obtain updated posterior distributions for selected cross sections and the elastic angular distributions. As the cross sections and elastic angular distributions were updated locally on a per-energy-point basis, the approach typically results in discontinuities or “kinks” in the cross section curves, and these were addressed using spline interpolation. The proposed BMA method was applied to the evaluation of proton-induced reactions on ^(58)Ni between 1 and 100 MeV. The results demonstrated a favorable comparison with experimental data as well as with the TENDL-2023 evaluation.
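The per-energy-point combination step can be sketched as follows: posterior model weights are priors times likelihoods, renormalized, and the evaluated observable is the posterior-weighted mean of the model predictions. The numbers below are toy values; in the actual method the likelihoods come from comparing TALYS outputs with experimental data.

```python
def bma_combine(predictions, likelihoods, priors):
    """Bayesian model averaging at one incident energy point.

    predictions : per-model calculated observable (e.g., a cross section)
    likelihoods : per-model likelihood of the experimental data
    priors      : per-model prior probabilities
    Posterior weights are prior * likelihood, renormalized; the BMA
    estimate is the posterior-weighted mean of the model predictions.
    """
    unnorm = [p * L for p, L in zip(priors, likelihoods)]
    Z = sum(unnorm)
    weights = [u / Z for u in unnorm]
    estimate = sum(w * y for w, y in zip(weights, predictions))
    return estimate, weights

est, w = bma_combine(predictions=[1.00, 1.20, 0.80],
                     likelihoods=[0.60, 0.30, 0.10],
                     priors=[1 / 3, 1 / 3, 1 / 3])
print(round(est, 3), [round(x, 2) for x in w])  # -> 1.04 [0.6, 0.3, 0.1]
```

Because the weights are recomputed independently at each energy point, adjacent points can favor different models, which is exactly why the combined curve can show the "kinks" that the study smooths with spline interpolation.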
The inflection point is an important feature of sigmoidal height-diameter (H-D) models. It is often cited as one of the properties favoring sigmoidal model forms. However, there are very few studies analyzing the inflection points of H-D models. The goals of this study were to theoretically and empirically examine the behaviors of the inflection points of six common H-D models with a regional dataset. The six models were the Wykoff (WYK), Schumacher (SCH), Curtis (CUR), Hossfeld IV (HOS), von Bertalanffy-Richards (VBR), and Gompertz (GPZ) models. The models were first fitted in their base forms with tree species as random effects and were then expanded to include functional traits and spatial distribution. The distributions of the estimated inflection points were similar between the two-parameter models WYK, SCH, and CUR, but were different between the three-parameter models HOS, VBR, and GPZ. GPZ produced some of the largest inflection points. HOS and VBR produced concave H-D curves without inflection points for 12.7% and 39.7% of the tree species, respectively. Evergreen species or decreasing shade tolerance resulted in larger inflection points. The trends in the estimated inflection points of HOS and VBR were entirely opposite across the landscape. Furthermore, HOS could produce concave H-D curves for portions of the landscape. Based on the studied behaviors, the choice between two-parameter models may not matter. We recommend comparing several three-parameter model forms for consistency in estimated inflection points before deciding on one. Believing sigmoidal models to have inflection points does not necessarily mean that they will produce fitted curves with one. Our study highlights the need to integrate analysis of inflection points into modeling H-D relationships.
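As an example of such an inflection-point analysis, the Gompertz H-D form admits a closed-form inflection point. Writing the curve as H = 1.3 + a*exp(-b*exp(-c*D)), a common parameterization with a 1.3 m breast-height offset (the parameter values below are illustrative, not estimates from the study), setting the second derivative to zero gives b*exp(-c*D) = 1, i.e. D* = ln(b)/c and H* = 1.3 + a/e.

```python
import math

def gompertz_height(D, a, b, c, bh=1.3):
    """Gompertz height-diameter curve: H = bh + a * exp(-b * exp(-c * D))."""
    return bh + a * math.exp(-b * math.exp(-c * D))

def gompertz_inflection(a, b, c, bh=1.3):
    """Closed-form inflection of the Gompertz H-D curve:
    D* = ln(b)/c, at which H* = bh + a/e."""
    D_star = math.log(b) / c
    return D_star, bh + a / math.e

# hypothetical parameter values for illustration
a, b, c = 30.0, 4.0, 0.08
D_star, H_star = gompertz_inflection(a, b, c)
print(round(D_star, 2), round(H_star, 2))  # -> 17.33 12.34

# numerical check: the slope of the curve is maximal at D*
eps = 1e-4
slope = lambda D: (gompertz_height(D + eps, a, b, c)
                   - gompertz_height(D - eps, a, b, c)) / (2 * eps)
assert slope(D_star) > slope(D_star - 1) and slope(D_star) > slope(D_star + 1)
```

Note that b > 1 is required for the inflection to fall at a positive diameter, which is one way a fitted sigmoidal form can fail to show an inflection point over the observed data range.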
The tensile-shear interactive damage (TSID) model is a novel and powerful constitutive model for rock-like materials. This study proposes a methodology to calibrate the TSID model parameters to simulate sandstone. The basic parameters of sandstone are determined through a series of static and dynamic tests, including uniaxial compression, Brazilian disc, triaxial compression under varying confining pressures, hydrostatic compression, and dynamic compression and tensile tests with a split Hopkinson pressure bar. Based on the sandstone test results from this study and previous research, a step-by-step procedure for parameter calibration is outlined, which accounts for the categories of the strength surface, equation of state (EOS), strain rate effect, and damage. The calibrated parameters are verified through numerical tests that correspond to the experimental loading conditions. Consistency between numerical results and experimental data indicates the precision and reliability of the calibrated parameters. The methodology presented in this study is scientifically sound, straightforward, and essential for improving the TSID model. Furthermore, it has the potential to contribute to other rock constitutive models, particularly new user-defined models.
With the continuous evolution and expanding applications of Large Language Models (LLMs), there has been a noticeable surge in the size of the emerging models. It is not solely the growth in model size, primarily measured by the number of parameters, but also the subsequent escalation in computational demands and hardware and software prerequisites for training, all culminating in a substantial financial investment as well. In this paper, we present novel techniques like supervision, parallelization, and scoring functions to get better results out of chains of smaller language models, rather than relying solely on scaling up model size. Firstly, we propose an approach to quantify the performance of a Smaller Language Model (SLM) by introducing a corresponding supervisor model that incrementally corrects the encountered errors. Secondly, we propose an approach to utilize two smaller language models (in a network) performing the same task and retrieving the best relevant output from the two, ensuring peak performance for a specific task. Experimental evaluations establish quantitative accuracy improvements on financial reasoning and arithmetic calculation tasks from utilizing techniques like supervisor models (in a network-of-models scenario), threshold scoring, and parallel processing over a baseline study.
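A minimal sketch of the two-model network with threshold scoring and a supervisor fallback is given below. All callables are stand-ins: the paper's actual models, scoring functions, and thresholds are not specified here, so the lambdas are purely hypothetical placeholders.

```python
def best_of_two(task, model_a, model_b, score, supervisor=None, threshold=0.5):
    """Run two smaller models on the same task and keep the higher-scoring
    output; if neither clears the threshold, fall back to a supervisor
    model that corrects the better candidate."""
    candidates = [model_a(task), model_b(task)]
    scored = sorted(((score(task, c), c) for c in candidates), reverse=True)
    best_score, best_output = scored[0]
    if best_score < threshold and supervisor is not None:
        return supervisor(task, best_output)
    return best_output

# toy stand-ins: "models" answer an arithmetic task, the score checks the answer
model_a = lambda t: "41"   # wrong answer
model_b = lambda t: "42"   # right answer
score = lambda t, out: 1.0 if out == "42" else 0.0
print(best_of_two("6*7", model_a, model_b, score))  # -> 42
```

In a real deployment the two model calls would run in parallel, and the scoring function would be a learned or rule-based quality estimate rather than an exact-match check.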
In the R&D phase of Gravity-1 (YL-1), a multi-domain modeling and simulation technology based on the Modelica language was introduced, a recent attempt in the practice of modeling and simulation methods for launch vehicles in China. It realizes a complex coupling model within a unified model for different domains, so that technologists can work on one model. It ensured the success of the YL-1 first launch mission and supported rapid iteration, full validation, and tight design collaboration.
Understanding the anisotropic creep behaviors of shale under direct shearing is a challenging issue. In this context, we conducted shear-creep and steady-creep tests on shale with five bedding orientations (i.e. 0°, 30°, 45°, 60°, and 90°), under multiple levels of direct shearing for the first time. The results show that the anisotropic creep of shale exhibits a significant stress-dependent behavior. Under a low shear stress, the creep compliance of shale increases linearly with the logarithm of time at all bedding orientations, and the increase depends on the bedding orientation and creep time. Under high shear stress conditions, the creep compliance of shale is minimal when the bedding orientation is 0°, and the steady-creep rate of shale increases significantly with increasing bedding orientations of 30°, 45°, 60°, and 90°. The stress-strain values corresponding to the inception of the accelerated creep stage show an increasing and then decreasing trend with the bedding orientation. A semilogarithmic model that can reflect the stress dependence of the steady-creep rate while considering the hardening and damage process is proposed. The model minimizes the deviation of the calculated steady-state creep rate from the observed value and reveals the influence of the bedding orientation on the steady-creep rate. The applicability of five classical empirical creep models is quantitatively evaluated. It is shown that the logarithmic model can well explain the experimental creep strain and creep rate, and it can accurately predict long-term shear creep deformation. Based on an improved logarithmic model, the variations in creep parameters with shear stress and bedding orientation are discussed. With the above findings, a mathematical method for constructing an anisotropic shear creep model of shale is proposed, which can characterize the nonlinear dependence of the anisotropic shear creep behavior of shale on the bedding orientation.
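The logarithmic creep law referenced above, with creep strain linear in the logarithm of time, can be sketched as eps(t) = a + b*ln(t) and fitted in closed form by simple linear regression on x = ln(t). The data below are synthetic illustrative values, not measurements from the study.

```python
import math

def fit_log_creep(times, strains):
    """Least-squares fit of the logarithmic creep law eps(t) = a + b*ln(t),
    via closed-form simple linear regression on x = ln(t)."""
    xs = [math.log(t) for t in times]
    n = len(xs)
    xbar = sum(xs) / n
    ybar = sum(strains) / n
    b = (sum((x - xbar) * (y - ybar) for x, y in zip(xs, strains))
         / sum((x - xbar) ** 2 for x in xs))
    a = ybar - b * xbar
    return a, b

# synthetic data generated from a = 0.002, b = 0.0005 (illustrative values)
times = [1, 10, 100, 1000]
strains = [0.002 + 0.0005 * math.log(t) for t in times]
a, b = fit_log_creep(times, strains)
print(round(a, 6), round(b, 6))  # recovers 0.002 and 0.0005
```

In the anisotropic setting, a and b would be fitted per bedding orientation and per stress level, which is the kind of parameter variation the improved logarithmic model discusses.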
BACKGROUND Colorectal cancer (CRC) is a serious threat worldwide. Although early screening is suggested to be the most effective method to prevent and control CRC, the current situation of early screening for CRC is still not optimistic. In China, the incidence of CRC in the Yangtze River Delta region is increasing dramatically, but few studies have been conducted. Therefore, it is necessary to develop a simple and efficient early screening model for CRC.

AIM To develop and validate an early-screening nomogram model to identify individuals at high risk of CRC.

METHODS Data of 64448 participants obtained from Ningbo Hospital, China between 2014 and 2017 were retrospectively analyzed. Of the 64448 individuals in the cohort, 530 were excluded due to missing or incorrect data. Of the remaining 63918, 7607 (11.9%) individuals were considered to be at high risk for CRC, and 56311 (88.1%) were not. The participants were randomly allocated to a training set (44743) or validation set (19175). The discriminatory ability, predictive accuracy, and clinical utility of the model were evaluated by constructing and analyzing receiver operating characteristic (ROC) curves and calibration curves and by decision curve analysis. Finally, the model was validated internally using a bootstrap resampling technique.

RESULTS Seven variables, including demographic, lifestyle, and family history information, were examined. Multifactorial logistic regression analysis revealed that age [odds ratio (OR): 1.03, 95% confidence interval (CI): 1.02-1.03, P<0.001], body mass index (BMI) (OR: 1.07, 95%CI: 1.06-1.08, P<0.001), waist circumference (WC) (OR: 1.03, 95%CI: 1.02-1.03, P<0.001), lifestyle (OR: 0.45, 95%CI: 0.42-0.48, P<0.001), and family history (OR: 4.28, 95%CI: 4.04-4.54, P<0.001) were the most significant predictors of high-risk CRC. A healthy lifestyle was a protective factor, whereas family history was the most significant risk factor. The area under the curve was 0.734 (95%CI: 0.723-0.745) for the final validation set ROC curve and 0.735 (95%CI: 0.728-0.742) for the training set ROC curve. The calibration curve demonstrated a high correlation between the CRC high-risk population predicted by the nomogram model and the actual CRC high-risk population.

CONCLUSION The early-screening nomogram model for CRC prediction in high-risk populations developed in this study based on age, BMI, WC, lifestyle, and family history exhibited high accuracy.
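A nomogram of this kind scores each predictor on the log-odds scale of the underlying logistic model. The sketch below uses the per-unit odds ratios reported in the abstract; the intercept is not reported there, so beta0 is a hypothetical placeholder, and coding a healthy lifestyle as 1 (protective, OR < 1) is likewise an assumption.

```python
import math

# Per-unit odds ratios reported in the abstract; beta0 is NOT reported
# and is a hypothetical placeholder chosen only for illustration.
ODDS_RATIOS = {"age": 1.03, "bmi": 1.07, "wc": 1.03,
               "lifestyle": 0.45, "family_history": 4.28}

def crc_risk(age, bmi, wc, healthy_lifestyle, family_history, beta0=-8.0):
    """Logistic-model sketch: log-odds = beta0 + sum(ln(OR_i) * x_i),
    mapped to a probability with the logistic function."""
    log_odds = beta0
    log_odds += math.log(ODDS_RATIOS["age"]) * age
    log_odds += math.log(ODDS_RATIOS["bmi"]) * bmi
    log_odds += math.log(ODDS_RATIOS["wc"]) * wc
    log_odds += math.log(ODDS_RATIOS["lifestyle"]) * int(healthy_lifestyle)
    log_odds += math.log(ODDS_RATIOS["family_history"]) * int(family_history)
    return 1.0 / (1.0 + math.exp(-log_odds))

low = crc_risk(age=45, bmi=22, wc=80, healthy_lifestyle=True, family_history=False)
high = crc_risk(age=70, bmi=30, wc=100, healthy_lifestyle=False, family_history=True)
print(low < high)  # family history and age dominate the comparison
```

With an actual fitted intercept, the same structure yields the absolute risk probabilities that the nomogram reads off graphically.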
Flow unit (FU) rock typing is a common technique for characterizing reservoir flow behavior, producing reliable porosity and permeability estimation even in complex geological settings. However, the lateral extrapolation of FUs away from the well into the whole reservoir grid is commonly a difficult task, and using seismic data as a constraint is rarely a subject of study. This paper proposes a workflow to generate numerous possible 3D volumes of flow units, porosity, and permeability below the seismic resolution limit, respecting the available seismic data at larger scales. The methodology is applied to the Mero Field, a Brazilian presalt carbonate reservoir located in the Santos Basin, which presents a complex and heterogeneous geological setting with different sedimentological processes and diagenetic history. We generated metric flow units using conventional core analysis and transposed them to the well log data. Then, given a Markov chain Monte Carlo algorithm, the seismic data, and the well log statistics, we simulated acoustic impedance, decametric flow unit (DFU), metric flow unit (MFU), porosity, and permeability volumes at the metric scale. The aim is to estimate the minimum number of MFUs able to produce realistic porosity and permeability scenarios without losing the seismic lateral control. In other words, every simulated porosity and permeability volume produces a synthetic seismic that matches the real seismic of the area, even at the metric scale. The achieved 3D results represent a high-resolution fluid flow reservoir model that retains the lateral control of the seismic during the process and can be directly incorporated into the dynamic characterization workflow.
Short-term (up to 30 days) predictions of Earth Rotation Parameters (ERPs), such as Polar Motion (PM: PMX and PMY), play an essential role in real-time applications related to high-precision reference frame conversion. Currently, the least squares (LS) + auto-regressive (AR) hybrid method is one of the main techniques of PM prediction, and the weighted LS + AR hybrid method performs well for PM short-term prediction. However, the corresponding covariance information of the LS fitting residuals deserves further exploration in the AR model. In this study, we have derived a modified stochastic model for the LS + AR hybrid method, namely the weighted LS + weighted AR hybrid method. Using the PM data products of IERS EOP 14 C04, the numerical results indicate that for PM short-term forecasting, the proposed weighted LS + weighted AR hybrid method shows an advantage over both the LS + AR hybrid method and the weighted LS + AR hybrid method. Compared to the mean absolute errors (MAEs) of PMX/PMY short-term prediction of the LS + AR hybrid method and the weighted LS + AR hybrid method, the weighted LS + weighted AR hybrid method shows average improvements of 6.61%/12.08% and 0.24%/11.65%, respectively. Besides, for the slopes of the linear regression lines fitted to the errors of each method, the growth of the prediction error of the proposed method is slower than that of the other two methods.
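The two-step LS + AR structure can be sketched as follows: a deterministic part is fitted by least squares, an AR model is fitted to the LS residuals, and the forecast is the extrapolated deterministic part plus the AR residual recursion. This sketch is unweighted and keeps only a bias, a linear trend, and an annual harmonic; the operational method also includes the Chandler term, the weighting matrices derived from residual covariances, and real IERS data, all omitted here. The series below is synthetic.

```python
import numpy as np

def ls_ar_forecast(t, y, horizon, ar_order=2, period=365.25):
    """Unweighted LS + AR sketch for a polar-motion-like daily series."""
    # 1) deterministic part: bias + trend + annual harmonic, by least squares
    w = 2 * np.pi / period
    A = np.column_stack([np.ones_like(t, dtype=float), t,
                         np.cos(w * t), np.sin(w * t)])
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)
    resid = y - A @ coef

    # 2) AR(ar_order) on the residuals, fitted by least squares
    n = len(resid)
    X = np.column_stack([resid[ar_order - k: n - k] for k in range(1, ar_order + 1)])
    phi, *_ = np.linalg.lstsq(X, resid[ar_order:], rcond=None)

    # 3) forecast: AR recursion on residuals plus the extrapolated LS part
    r = list(resid)
    for _ in range(horizon):
        r.append(sum(phi[k - 1] * r[-k] for k in range(1, ar_order + 1)))
    t_fut = np.arange(t[-1] + 1, t[-1] + horizon + 1)
    A_fut = np.column_stack([np.ones_like(t_fut, dtype=float), t_fut,
                             np.cos(w * t_fut), np.sin(w * t_fut)])
    return A_fut @ coef + np.array(r[-horizon:])

# synthetic series: trend + annual signal + small AR(1) noise
rng = np.random.default_rng(0)
t = np.arange(1000.0)
noise = np.zeros(1000)
for i in range(1, 1000):
    noise[i] = 0.8 * noise[i - 1] + 0.01 * rng.standard_normal()
y = 0.05 + 1e-4 * t + 0.2 * np.sin(2 * np.pi * t / 365.25) + noise
pred = ls_ar_forecast(t, y, horizon=30)
print(pred.shape)  # (30,)
```

The weighted variants replace the plain least-squares fits in steps 1 and 2 with weighted fits, which is precisely where the proposed method introduces the residual covariance information into the AR stage.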
Funding: supported in part by NIH grant R01CA241134; NSF grant CMMI-1552764; NSF grants DMS-1349724 and DMS-2052465; NSF grant CCF-1740761; the U.S.-Norway Fulbright Foundation and the Research Council of Norway R&D Grant 309273; and the Norwegian Centennial Chair grant and the Doctoral Dissertation Fellowship from the University of Minnesota.
Abstract: The spread of an advantageous mutation through a population is of fundamental interest in population genetics. While the classical Moran model is formulated for a well-mixed population, it has long been recognized that in real-world applications, the population usually has an explicit spatial structure which can significantly influence the dynamics. In the context of cancer initiation in epithelial tissue, several recent works have analyzed the dynamics of advantageous mutant spread on integer lattices, using the biased voter model from particle systems theory. In this spatial version of the Moran model, individuals first reproduce according to their fitness and then replace a neighboring individual. From a biological standpoint, the opposite dynamics, where individuals first die and are then replaced by a neighboring individual according to its fitness, are equally relevant. Here, we investigate this death-birth analogue of the biased voter model. We construct the process mathematically, derive the associated dual process, establish bounds on the survival probability of a single mutant, and prove that the process has an asymptotic shape. We also briefly discuss alternative birth-death and death-birth dynamics, depending on how the mutant fitness advantage affects the dynamics. We show that birth-death and death-birth formulations of the biased voter model are equivalent when fitness affects the former event of each update of the model, whereas the birth-death model is fundamentally different from the death-birth model when fitness affects the latter event.
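The death-birth update described in this abstract can be sketched in a few lines. Below is a minimal, illustrative simulation on a one-dimensional ring (the works discussed consider integer lattices in general dimension); the fitness values, lattice size, and step count are hypothetical, chosen only to show the mechanics: a uniformly chosen individual dies, and a neighbor recolonizes the vacated site with probability proportional to its fitness.

```python
import random

def death_birth_step(state, fitness, rng):
    """One death-birth update on a 1D ring: a uniformly chosen site dies,
    then is recolonized by a neighbor chosen proportionally to fitness."""
    n = len(state)
    i = rng.randrange(n)                      # site that dies
    left, right = state[(i - 1) % n], state[(i + 1) % n]
    w_left, w_right = fitness[left], fitness[right]
    p_left = w_left / (w_left + w_right)      # fitness-biased recolonization
    state[i] = left if rng.random() < p_left else right
    return state

rng = random.Random(0)
fitness = {0: 1.0, 1: 1.5}                    # mutant type 1 has an advantage
state = [0] * 20
state[10] = 1                                 # a single mutant
for _ in range(2000):
    death_birth_step(state, fitness, rng)
# The chain eventually absorbs at all-0 (extinction) or all-1 (fixation).
```

Running many independent copies of this loop and counting the fraction that fix at all-1 gives a Monte Carlo estimate of the survival probability that the paper bounds analytically.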
Funding: supported by the Chinese–Norwegian Collaboration Projects within Climate Systems jointly funded by the National Key Research and Development Program of China (Grant No. 2022YFE0106800); the Research Council of Norway funded project MAPARC (Grant No. 328943); the Research Council of Norway funded project COMBINED (Grant No. 328935); the National Natural Science Foundation of China (Grant No. 42075030); and the Postgraduate Research and Practice Innovation Program of Jiangsu Province (KYCX23_1314).
Abstract: Precipitous Arctic sea-ice decline and the corresponding increase in Arctic open-water areas in summer months give more space for sea-ice growth in the subsequent cold seasons. Compared to the decline of the entire Arctic multiyear sea ice, changes in newly formed sea ice indicate more thermodynamic and dynamic information on Arctic atmosphere–ocean–ice interaction and northern mid–high latitude atmospheric teleconnections. Here, we use a large multimodel ensemble from phase 6 of the Coupled Model Intercomparison Project (CMIP6) to investigate future changes in wintertime newly formed Arctic sea ice. The commonly used model-democracy approach that gives equal weight to each model essentially assumes that all models are independent and equally plausible, which contradicts the fact that there are large interdependencies in the ensemble and discrepancies in the models' performance in reproducing observations. Therefore, instead of using the arithmetic mean of well-performing models or of all available models for projections, as in previous studies, we employ a newly developed model weighting scheme that weights all models in the ensemble according to their performance and independence to provide more reliable projections. Model democracy leads to evident bias and large intermodel spread in CMIP6 projections of newly formed Arctic sea ice. However, we show that both the bias and the intermodel spread can be effectively reduced by the weighting scheme. Projections from the weighted models indicate that wintertime newly formed Arctic sea ice is likely to increase dramatically until the middle of this century regardless of the emissions scenario. Thereafter, it may decrease (or remain stable) if the Arctic warming crosses a threshold (or is extensively constrained).
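The performance-and-independence weighting described above can be illustrated with a small sketch. This is not the authors' exact scheme; it follows the general form of climate-model weighting by independence and performance (ClimWIP-style), where a model's weight grows with its closeness to observations and shrinks with its similarity to other models. The distance values and the shape parameters `sigma_d` and `sigma_s` below are hypothetical.

```python
import numpy as np

def climate_model_weights(dist_obs, dist_models, sigma_d, sigma_s):
    """Performance-and-independence weights:
    dist_obs[i]       = distance of model i from observations (performance),
    dist_models[i, j] = distance between models i and j (independence)."""
    perf = np.exp(-(dist_obs / sigma_d) ** 2)
    # similarity to every *other* model inflates the denominator,
    # down-weighting near-duplicate models
    sim = np.exp(-(dist_models / sigma_s) ** 2)
    indep = 1.0 + (sim.sum(axis=1) - 1.0)     # drop the self-similarity term
    w = perf / indep
    return w / w.sum()                        # normalize to sum to 1

d_obs = np.array([0.1, 0.1, 0.8])             # models 0 and 1 perform well
d_mod = np.array([[0.0, 0.05, 1.0],
                  [0.05, 0.0, 1.0],
                  [1.0, 1.0, 0.0]])           # models 0 and 1 are near-duplicates
w = climate_model_weights(d_obs, d_mod, sigma_d=0.5, sigma_s=0.5)
```

In this toy setup, the poorly performing model 2 receives the smallest weight, and the two near-duplicate models share the weight that a single well-performing model would otherwise get.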
Funding: Supported by the National Natural Science Foundation of China (72088101, 42372175) and the PetroChina Science and Technology Innovation Fund Program (2021DQ02-0904).
Abstract: This article elucidates the concept of large model technology, summarizes the research status of large model technology both domestically and internationally, provides an overview of the application status of large models in vertical industries, outlines the challenges confronted in applying large models in the oil and gas sector, and offers prospects for the application of large models in the oil and gas industry. Existing large models can be broadly divided into three categories: large language models, visual large models, and multimodal large models. The application of large models in the oil and gas industry is still in its infancy. Based on open-source large language models, some oil and gas enterprises have released large language model products using methods such as fine-tuning and retrieval-augmented generation. Scholars have attempted to develop scenario-specific models for oil and gas operations using visual/multimodal foundation models. A few researchers have constructed pre-trained foundation models for seismic data processing and interpretation, as well as core analysis. The application of large models in the oil and gas industry faces several challenges: the current quantity and quality of data are insufficient to support the training of large models, research and development costs are high, and algorithmic autonomy and controllability are limited. The application of large models should be guided by the needs of the oil and gas business, taking it as an opportunity to improve data lifecycle management, enhance data governance capabilities, promote the construction of computing power, strengthen the building of "artificial intelligence + energy" composite teams, and boost the autonomy and controllability of large model technology.
Funding: We acknowledge funding from NSFC Grant 62306283.
Abstract: Since the 1950s, when the Turing Test was introduced, there has been notable progress in machine language intelligence. Language modeling, crucial for AI development, has evolved from statistical to neural models over the last two decades. Recently, transformer-based Pre-trained Language Models (PLMs) have excelled in Natural Language Processing (NLP) tasks by leveraging large-scale training corpora. Increasing the scale of these models enhances performance significantly, introducing abilities like in-context learning that smaller models lack. The advancement in Large Language Models, exemplified by the development of ChatGPT, has made significant impacts both academically and industrially, capturing widespread societal interest. This survey provides an overview of the development and prospects from Large Language Models (LLMs) to Large Multimodal Models (LMMs). It first discusses the contributions and technological advancements of LLMs in the field of natural language processing, especially in text generation and language understanding. Then, it turns to LMMs, which integrate various data modalities such as text, images, and sound, demonstrating advanced capabilities in understanding and generating cross-modal content and paving new pathways for the adaptability and flexibility of AI systems. Finally, the survey highlights the prospects of LMMs in terms of technological development and application potential, while also pointing out challenges in data integration and cross-modal understanding accuracy, providing a comprehensive perspective on the latest developments in this field.
Abstract: In this paper, we present an improved circuit model for single-photon avalanche diodes (SPADs) without any convergence problems. The device simulation is based on OrCAD PSpice, and all the employed components are available in the standard library of the software. In particular, an intuitive and simple voltage-controlled current source is adopted to characterize the static behavior, which can represent the voltage-current relationship better than the traditional model and reduce the computational complexity of the simulation. The derived model can implement the self-sustaining, self-quenching, and recovery processes of the SPAD, and the simulation shows a reasonable result: the model can well emulate the avalanche process of the SPAD.
Funding: support from the National Key R&D Plan (Grant No. 2022YFC3004303); the National Natural Science Foundation of China (Grant No. 42107161); the State Key Laboratory of Hydroscience and Hydraulic Engineering (Grant No. 2021-KY-04); the Open Research Fund Program of the State Key Laboratory of Hydroscience and Engineering (sklhse-2023-C-01); the Open Research Fund Program of the Key Laboratory of the Hydrosphere of the Ministry of Water Resources (mklhs-2023-04); and the China Three Gorges Corporation (XLD/2117).
Abstract: Rock fragmentation plays a critical role in rock avalanches, yet conventional approaches such as classical granular flow models or the bonded particle model have limitations in accurately characterizing the progressive disintegration and kinematics of multi-deformable rock blocks during rockslides. The present study proposes a discrete-continuous numerical model, based on a cohesive zone model, to explicitly incorporate the progressive fragmentation and intricate interparticle interactions inherent in rockslides. Breakable rock granular assemblies are released along an inclined plane and flow onto a horizontal plane. The numerical scenarios are established to incorporate variations in slope angle, initial height, friction coefficient, and particle number. The evolutions of fragmentation, kinematic, runout, and depositional characteristics are quantitatively analyzed and compared with experimental and field data. A positive linear relationship between the equivalent friction coefficient and the apparent friction coefficient is identified. In general, the granular mass predominantly exhibits characteristics of a dense granular flow, with the Savage number exhibiting a decreasing trend as the volume of mass increases. The process of particle breakage gradually occurs in a bottom-up manner, leading to a significant increase in the angular velocities of the rock blocks with increasing depth. The simulation results reproduce the field observations of inverse grading and source stratigraphy preservation in the deposit. We propose a disintegration index that incorporates factors such as drop height, rock mass volume, and rock strength. Our findings demonstrate a consistent linear relationship between this index and the fragmentation degree in all tested scenarios.
Funding: the financial support provided by the National Natural Science Foundation of China (Grant No. 42272310).
Abstract: Cyclic loads generated by environmental factors, such as winds, waves, and trains, will likely lead to performance degradation in pile foundations, resulting in issues like permanent displacement accumulation and bearing capacity attenuation. This paper presents a semi-analytical solution for predicting the axial cyclic behavior of piles in sands. The solution relies on two enhanced nonlinear load-transfer models considering stress-strain hysteresis and cyclic degradation in the pile-soil interaction. Model parameters are calibrated through cyclic shear tests of the sand-steel interface and laboratory geotechnical testing of sands. A novel aspect involves the meticulous formulation of the shaft load-transfer function using an interface constitutive model, which inherits the interface model's advantages, such as capturing hysteresis, hardening, degradation, and particle breakage. The semi-analytical solution is computed numerically using the matrix displacement method, and the calculated values are validated through model tests performed on non-displacement and displacement piles in sands. The results demonstrate that the predicted values show excellent agreement with the measured values for both the static and cyclic responses of piles in sands. The displacement pile response, including factors such as bearing capacity, mobilized shaft resistance, and convergence rate of permanent settlement, exhibits improvements compared to non-displacement piles, attributed to the soil squeezing effect. This methodology presents an innovative analytical framework, allowing cyclic interface models to be integrated into the theoretical investigation of pile responses.
Funding: Supported by the Sichuan Science and Technology Program (2021YFQ0003, 2023YFSY0026, 2023YFH0004).
Abstract: In the field of natural language processing (NLP), various pre-trained language models have appeared in recent years, with question answering systems gaining significant attention. However, as algorithms, data, and computing power advance, the issue of increasingly larger models and a growing number of parameters has surfaced. Consequently, model training has become more costly and less efficient. To enhance the efficiency and accuracy of the training process while reducing the model volume, this paper proposes a first-order pruning model, PAL-BERT, based on the ALBERT model, according to the characteristics of question-answering (QA) systems and language models. Firstly, a first-order network pruning method based on the ALBERT model is designed, forming the PAL-BERT model. Then, the parameter optimization strategy of the PAL-BERT model is formulated, and the Mish function is used as the activation function instead of ReLU to improve performance. Finally, comparison experiments with the traditional deep learning models TextCNN and BiLSTM confirm that PAL-BERT is a pruning-based model compression method that can significantly reduce training time and optimize training efficiency. Compared with traditional models, PAL-BERT significantly improves NLP task performance.
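The Mish activation mentioned above has a simple closed form, x·tanh(softplus(x)). A numerically stable scalar sketch (the stable-softplus rewrite is a standard trick, not something specified in the paper):

```python
import math

def softplus(x):
    # stable softplus: log(1 + e^x) written to avoid overflow for large |x|
    return max(x, 0.0) + math.log1p(math.exp(-abs(x)))

def mish(x):
    """Mish activation: x * tanh(softplus(x)); smooth and non-monotonic,
    used in PAL-BERT in place of ReLU."""
    return x * math.tanh(softplus(x))
```

Unlike ReLU, Mish passes a small negative signal for negative inputs and is smooth at zero, which is the usual motivation for swapping it in.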
Funding: supported by the National Natural Science Foundation of China (Grant Nos. 82173620 to Yang Zhao and 82041024 to Feng Chen); partially supported by the Bill & Melinda Gates Foundation (Grant No. INV-006371 to Feng Chen); and the Priority Academic Program Development of Jiangsu Higher Education Institutions.
Abstract: Deterministic compartment models (CMs) and stochastic models, including stochastic CMs and agent-based models, are widely utilized in epidemic modeling. However, the relationship between CMs and their corresponding stochastic models is not well understood. The present study aimed to address this gap by conducting a comparative study using the susceptible, exposed, infectious, and recovered (SEIR) model and its extended CMs from the coronavirus disease 2019 modeling literature. We demonstrated the equivalence of the numerical solution of CMs using the Euler scheme and their stochastic counterparts through theoretical analysis and simulations. Based on this equivalence, we proposed an efficient model calibration method that can replicate the exact solution of CMs in the corresponding stochastic models through parameter adjustment. This advancement in calibration techniques enhances the accuracy of stochastic modeling in capturing the dynamics of epidemics. However, it should be noted that discrete-time stochastic models cannot perfectly reproduce the exact solution of continuous-time CMs. Additionally, we proposed a new stochastic compartment-and-agent mixed model as an alternative to agent-based models for large-scale population simulations with a limited number of agents. This model offers a balance between computational efficiency and accuracy. The results of this research contribute to the comparison and unification of deterministic CMs and stochastic models in epidemic modeling, and they have implications for the development of hybrid models that integrate the strengths of both frameworks. Overall, the present study provides valuable epidemic modeling techniques and practical applications for understanding and controlling the spread of infectious diseases.
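The Euler-scheme equivalence discussed above can be stated concretely: each per-step flow of the deterministic SEIR model is the expected value of a binomial draw in a discrete-time stochastic counterpart (e.g. new exposures ~ Binomial(S, 1 − exp(−β·I/N·Δt)), whose mean approaches the Euler flow for small Δt). A minimal sketch of the deterministic side, with hypothetical parameter values:

```python
import numpy as np

def seir_euler(beta, sigma, gamma, s0, e0, i0, r0, dt, steps):
    """Deterministic SEIR integrated with the forward Euler scheme.
    beta: transmission rate, sigma: incubation rate, gamma: recovery rate."""
    s, e, i, r = s0, e0, i0, r0
    n = s + e + i + r
    traj = [(s, e, i, r)]
    for _ in range(steps):
        new_e = dt * beta * s * i / n     # S -> E flow
        new_i = dt * sigma * e            # E -> I flow
        new_r = dt * gamma * i            # I -> R flow
        s, e, i, r = s - new_e, e + new_e - new_i, i + new_i - new_r, r + new_r
        traj.append((s, e, i, r))
    return np.array(traj)

traj = seir_euler(beta=0.5, sigma=0.2, gamma=0.1,
                  s0=990.0, e0=0.0, i0=10.0, r0=0.0, dt=0.1, steps=1000)
```

The flows conserve the total population at every step, which is one quick sanity check on any discretization of a closed compartment model.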
Funding: Project supported by the National Natural Science Foundation of China (Nos. 12272211, 12072181, 12121002).
Abstract: Interval model updating (IMU) methods have been widely used in uncertain model updating due to their low requirements for sample data. However, the surrogate model in IMU methods mostly adopts a one-time construction method. This makes the accuracy of the surrogate model highly dependent on the experience of users and affects the accuracy of IMU methods. Therefore, an improved IMU method via adaptive Kriging models is proposed. This method transforms the objective function of the IMU problem into two deterministic global optimization problems concerning the upper bound and the interval diameter through universal grey numbers. These optimization problems are addressed through the adaptive Kriging models and the particle swarm optimization (PSO) method to quantify the uncertain parameters, and the IMU is accomplished. During the construction of the adaptive Kriging models, the sample space is gridded according to sensitivity information. Local sampling is then performed in key subspaces based on the maximum mean square error (MMSE) criterion. The interval division coefficient and random sampling coefficient are adaptively adjusted without human interference until the model meets the accuracy requirements. The effectiveness of the proposed method is demonstrated by a numerical example of a three-degree-of-freedom mass-spring system and an experimental example of a butted cylindrical shell. The results show that the updated results of the interval model are in good agreement with the experimental results.
Funding: funding from the Paul Scherrer Institute, Switzerland, through the NES/GFA-ABE Cross Project.
Abstract: To ensure agreement between theoretical calculations and experimental data, parameters of selected nuclear physics models are perturbed and fine-tuned in nuclear data evaluations. This approach assumes that the chosen set of models accurately represents the 'true' distribution of the considered observables. Furthermore, the models are chosen globally, indicating their applicability across the entire energy range of interest. However, this approach overlooks uncertainties inherent in the models themselves. In this work, we propose that instead of globally selecting a winning model set and proceeding with it as if it were the 'true' model set, we instead take a weighted average over multiple models within a Bayesian model averaging (BMA) framework, each weighted by its posterior probability. The method involves executing a set of TALYS calculations by randomly varying multiple nuclear physics models and their parameters to yield a vector of calculated observables. Next, the likelihood function values computed at each incident energy point are combined with the prior distributions to obtain updated posterior distributions for selected cross sections and elastic angular distributions. As the cross sections and elastic angular distributions are updated locally on a per-energy-point basis, the approach typically results in discontinuities or "kinks" in the cross-section curves; these are addressed using spline interpolation. The proposed BMA method was applied to the evaluation of proton-induced reactions on ^(58)Ni between 1 and 100 MeV. The results demonstrated a favorable comparison with experimental data as well as with the TENDL-2023 evaluation.
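The core of the BMA step above is a posterior-probability-weighted average of per-model predictions. A generic sketch follows; the model predictions, likelihood values, and uniform prior are made up for illustration (in the paper the likelihoods come from comparing TALYS outputs to experimental data):

```python
import numpy as np

def bma_average(predictions, log_likelihoods, log_priors):
    """Bayesian model averaging: posterior-weight each model's prediction.
    predictions: (n_models, n_points) array; weights come from
    log prior + log likelihood, normalized in log space for stability."""
    log_post = log_priors + log_likelihoods
    log_post = log_post - log_post.max()      # stabilize before exponentiating
    w = np.exp(log_post)
    w = w / w.sum()                           # posterior model probabilities
    return w, predictions.T @ w               # weights, averaged prediction

preds = np.array([[1.0, 2.0],
                  [1.2, 2.2],
                  [5.0, 9.0]])                # 3 models, 2 energy points
ll = np.array([-1.0, -1.1, -10.0])            # third model fits data poorly
lp = np.log(np.ones(3) / 3)                   # uniform prior over models
w, avg = bma_average(preds, ll, lp)
```

Applying this per energy point is what produces the local "kinks" the abstract mentions, since the winning weights can change between adjacent points.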
Abstract: The inflection point is an important feature of sigmoidal height-diameter (H-D) models. It is often cited as one of the properties favoring sigmoidal model forms. However, there are very few studies analyzing the inflection points of H-D models. The goals of this study were to theoretically and empirically examine the behaviors of the inflection points of six common H-D models with a regional dataset. The six models were the Wykoff (WYK), Schumacher (SCH), Curtis (CUR), Hossfeld IV (HOS), von Bertalanffy-Richards (VBR), and Gompertz (GPZ) models. The models were first fitted in their base forms with tree species as random effects and were then expanded to include functional traits and spatial distribution. The distributions of the estimated inflection points were similar between the two-parameter models WYK, SCH, and CUR, but were different between the three-parameter models HOS, VBR, and GPZ. GPZ produced some of the largest inflection points. HOS and VBR produced concave H-D curves without inflection points for 12.7% and 39.7% of the tree species, respectively. Evergreen species or decreasing shade tolerance resulted in larger inflection points. The trends in the estimated inflection points of HOS and VBR were entirely opposite across the landscape. Furthermore, HOS could produce concave H-D curves for portions of the landscape. Based on the studied behaviors, the choice between two-parameter models may not matter. We recommend comparing several three-parameter model forms for consistency in estimated inflection points before deciding on one. Believing sigmoidal models to have inflection points does not necessarily mean that they will produce fitted curves with one. Our study highlights the need to integrate analysis of inflection points into modeling H-D relationships.
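As an illustration of how an inflection point follows analytically from a three-parameter sigmoidal form, take the Gompertz model written as $h(d) = a\,e^{-b e^{-cd}}$ (a generic parameterization with asymptote $a$ and shape parameters $b, c > 0$; the paper's exact model forms may differ). Differentiating twice:

```latex
h'(d)  = a b c\, e^{-cd} e^{-b e^{-cd}},
\qquad
h''(d) = a b c^{2}\, e^{-cd} e^{-b e^{-cd}} \left(b e^{-cd} - 1\right),
```

so $h''(d^{*}) = 0$ exactly when $b e^{-cd^{*}} = 1$, giving $d^{*} = \ln b / c$ with height $h(d^{*}) = a/e$. In this parameterization the inflection exists for any $b > 1$; by contrast, the three-parameter forms that the study found to yield concave curves without inflection points do so when the analogous condition has no solution in the admissible parameter range.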
Funding: funded by the National Natural Science Foundation of China (Grant No. 12272247); the National Key Project (Grant No. GJXM92579); and the Major Research and Development Project of Metallurgical Corporation of China Ltd. in the Non-Steel Field (Grant No. 2021-5).
Abstract: The tensile-shear interactive damage (TSID) model is a novel and powerful constitutive model for rock-like materials. This study proposes a methodology to calibrate the TSID model parameters to simulate sandstone. The basic parameters of sandstone are determined through a series of static and dynamic tests, including uniaxial compression, Brazilian disc, triaxial compression under varying confining pressures, hydrostatic compression, and dynamic compression and tensile tests with a split Hopkinson pressure bar. Based on the sandstone test results from this study and previous research, a step-by-step procedure for parameter calibration is outlined, which accounts for the categories of the strength surface, equation of state (EOS), strain rate effect, and damage. The calibrated parameters are verified through numerical tests that correspond to the experimental loading conditions. Consistency between the numerical results and experimental data indicates the precision and reliability of the calibrated parameters. The methodology presented in this study is scientifically sound, straightforward, and essential for improving the TSID model. Furthermore, it has the potential to contribute to other rock constitutive models, particularly new user-defined models.
Abstract: With the continuous evolution and expanding applications of Large Language Models (LLMs), there has been a noticeable surge in the size of emerging models. It is not solely the growth in model size, primarily measured by the number of parameters, but also the subsequent escalation in computational demands and hardware and software prerequisites for training, all culminating in a substantial financial investment as well. In this paper, we present techniques like supervision, parallelization, and scoring functions to get better results out of chains of smaller language models, rather than relying solely on scaling up model size. Firstly, we propose an approach to quantify the performance of a Smaller Language Model (SLM) by introducing a corresponding supervisor model that incrementally corrects the encountered errors. Secondly, we propose an approach to utilize two smaller language models (in a network) performing the same task, retrieving the best relevant output from the two to ensure peak performance for a specific task. Experimental evaluations establish the quantitative accuracy improvements on financial reasoning and arithmetic calculation tasks from utilizing techniques like supervisor models (in a network-of-models scenario), threshold scoring, and parallel processing over a baseline study.
Abstract: In the R&D phase of Gravity-1 (YL-1), a multi-domain modeling and simulation technology based on the Modelica language was introduced, a recent attempt in the practice of modeling and simulation methods for launch vehicles in China. It realizes complex coupling across different domains within a unified model, so that technologists can work on one model. It ensured the success of YL-1's first launch mission and supports rapid iteration, full validation, and tight design collaboration.
Funding: funded by the National Natural Science Foundation of China (Grant Nos. U22A20166 and 12172230) and the Guangdong Basic and Applied Basic Research Foundation (Grant No. 2023A1515012654).
Abstract: Understanding the anisotropic creep behavior of shale under direct shear is a challenging issue. In this context, we conducted shear-creep and steady-creep tests on shale with five bedding orientations (i.e. 0°, 30°, 45°, 60°, and 90°) under multiple levels of direct shearing for the first time. The results show that the anisotropic creep of shale exhibits a significant stress-dependent behavior. Under a low shear stress, the creep compliance of shale increases linearly with the logarithm of time at all bedding orientations, and the increase depends on the bedding orientation and creep time. Under high shear stress conditions, the creep compliance of shale is minimal when the bedding orientation is 0°, and the steady-creep rate of shale increases significantly with increasing bedding orientations of 30°, 45°, 60°, and 90°. The stress-strain values corresponding to the inception of the accelerated creep stage show an increasing and then decreasing trend with the bedding orientation. A semilogarithmic model that reflects the stress dependence of the steady-creep rate while considering the hardening and damage process is proposed. The model minimizes the deviation of the calculated steady-state creep rate from the observed value and reveals the influence of the bedding orientation on the steady-creep rate. The applicability of five classical empirical creep models is quantitatively evaluated, showing that the logarithmic model can well explain the experimental creep strain and creep rate and can accurately predict long-term shear creep deformation. Based on an improved logarithmic model, the variations in creep parameters with shear stress and bedding orientation are discussed. With the above findings, a mathematical method for constructing an anisotropic shear creep model of shale is proposed, which can characterize the nonlinear dependence of the anisotropic shear creep behavior of shale on the bedding orientation.
Funding: Supported by the Project of Ningbo Leading Medical Health Discipline (No. 2022-B11); the Ningbo Natural Science Foundation (No. 202003N4206); and the Public Welfare Foundation of Ningbo (No. 2021S108).
Abstract: BACKGROUND: Colorectal cancer (CRC) is a serious threat worldwide. Although early screening is suggested to be the most effective method to prevent and control CRC, the current situation of early screening for CRC is still not optimistic. In China, the incidence of CRC in the Yangtze River Delta region is increasing dramatically, but few studies have been conducted. Therefore, it is necessary to develop a simple and efficient early screening model for CRC. AIM: To develop and validate an early-screening nomogram model to identify individuals at high risk of CRC. METHODS: Data of 64448 participants obtained from Ningbo Hospital, China between 2014 and 2017 were retrospectively analyzed. Of these, 530 were excluded due to missing or incorrect data. Of the remaining 63918 individuals, 7607 (11.9%) were considered to be at high risk for CRC, and 56311 (88.1%) were not. The participants were randomly allocated to a training set (n = 44743) or a validation set (n = 19175). The discriminatory ability, predictive accuracy, and clinical utility of the model were evaluated by constructing and analyzing receiver operating characteristic (ROC) curves and calibration curves and by decision curve analysis. Finally, the model was validated internally using a bootstrap resampling technique. RESULTS: Seven variables, including demographic, lifestyle, and family history information, were examined. Multifactorial logistic regression analysis revealed that age [odds ratio (OR): 1.03, 95% confidence interval (CI): 1.02-1.03, P < 0.001], body mass index (BMI) (OR: 1.07, 95% CI: 1.06-1.08, P < 0.001), waist circumference (WC) (OR: 1.03, 95% CI: 1.02-1.03, P < 0.001), lifestyle (OR: 0.45, 95% CI: 0.42-0.48, P < 0.001), and family history (OR: 4.28, 95% CI: 4.04-4.54, P < 0.001) were the most significant predictors of high CRC risk. A healthy lifestyle was a protective factor, whereas family history was the most significant risk factor. The area under the curve was 0.734 (95% CI: 0.723-0.745) for the validation set ROC curve and 0.735 (95% CI: 0.728-0.742) for the training set ROC curve. The calibration curve demonstrated a high correlation between the CRC high-risk population predicted by the nomogram model and the actual CRC high-risk population. CONCLUSION: The early-screening nomogram model for CRC prediction in high-risk populations developed in this study based on age, BMI, WC, lifestyle, and family history exhibited high accuracy.
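A nomogram built from a multivariable logistic regression is, underneath, just a linear predictor on the log-odds scale whose coefficients are the logs of the reported odds ratios. The sketch below shows that mechanics only; the intercept and the example covariate values are hypothetical (the abstract does not report an intercept), so the returned probabilities are illustrative, not the paper's calibrated risks.

```python
import math

def crc_risk_probability(age, bmi, wc, healthy_lifestyle, family_history,
                         intercept=-8.0):
    """Logistic risk score assembled from the reported odds ratios:
    log-odds = intercept + sum(log(OR_k) * x_k); intercept is hypothetical."""
    log_odds = (intercept
                + math.log(1.03) * age                      # OR 1.03 per year
                + math.log(1.07) * bmi                      # OR 1.07 per unit
                + math.log(1.03) * wc                       # OR 1.03 per cm
                + math.log(0.45) * int(healthy_lifestyle)   # protective, OR < 1
                + math.log(4.28) * int(family_history))     # strongest risk factor
    return 1.0 / (1.0 + math.exp(-log_odds))

p_with_history = crc_risk_probability(60, 25.0, 90.0, True, True)
p_no_history = crc_risk_probability(60, 25.0, 90.0, True, False)
```

On a nomogram, each term's contribution `log(OR_k) * x_k` is what gets rescaled into the familiar 0-100 "points" axis.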
Abstract: Flow unit (FU) rock typing is a common technique for characterizing reservoir flow behavior, producing reliable porosity and permeability estimates even in complex geological settings. However, the lateral extrapolation of FU away from the well into the whole reservoir grid is commonly a difficult task, and using seismic data as a constraint is rarely a subject of study. This paper proposes a workflow to generate numerous possible 3D volumes of flow units, porosity, and permeability below the seismic resolution limit, respecting the available seismic data at larger scales. The methodology is applied to the Mero Field, a Brazilian presalt carbonate reservoir located in the Santos Basin, which presents a complex and heterogeneous geological setting with different sedimentological processes and diagenetic history. We generated metric flow units using conventional core analysis and transposed them to the well log data. Then, given a Markov chain Monte Carlo algorithm, the seismic data, and the well log statistics, we simulated acoustic impedance, decametric flow units (DFU), metric flow units (MFU), porosity, and permeability volumes at the metric scale. The aim is to estimate the minimum number of MFU able to produce realistic porosity and permeability scenarios without losing the lateral seismic control. In other words, every simulated porosity and permeability volume produces a synthetic seismic that matches the real seismic of the area, even at the metric scale. The achieved 3D results represent high-resolution fluid flow reservoir modelling that considers the lateral control of the seismic during the process and can be directly incorporated into the dynamic characterization workflow.
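The Markov chain Monte Carlo machinery mentioned above can be illustrated with a generic random-walk Metropolis sampler. This is a minimal sketch of the algorithm class only, not the paper's geostatistical implementation; the toy one-parameter Gaussian target stands in for a real posterior over impedance/FU/porosity volumes conditioned on seismic data.

```python
import numpy as np

def metropolis_hastings(log_post, x0, n_samples, step=0.1, seed=0):
    """Random-walk Metropolis: propose x' = x + step * noise, accept with
    probability min(1, post(x') / post(x)), computed in log space."""
    rng = np.random.default_rng(seed)
    x = np.asarray(x0, dtype=float)
    lp = log_post(x)
    samples = []
    for _ in range(n_samples):
        prop = x + step * rng.standard_normal(x.shape)
        lp_prop = log_post(prop)
        if np.log(rng.random()) < lp_prop - lp:   # accept/reject step
            x, lp = prop, lp_prop
        samples.append(x.copy())                  # keep current state either way
    return np.array(samples)

# toy target: standard normal posterior over one scalar parameter
samples = metropolis_hastings(lambda x: -0.5 * float(x @ x),
                              np.zeros(1), 5000, step=1.0)
```

In the workflow above, each "sample" would instead be a full candidate property volume, and the log-posterior would reward agreement between its synthetic seismic and the observed seismic.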
Funding: supported by the National Natural Science Foundation of China (No. 42004016); the Hubei Natural Science Fund (No. 2020CFB329); the Hunan Natural Science Fund (Nos. 2023JJ60559 and 2023JJ60560); and the State Key Laboratory of Geodesy and Earth's Dynamics self-deployment project (No. S21L6101).
Abstract: Short-term (up to 30 days) predictions of Earth Rotation Parameters (ERPs), such as polar motion (PM: PMX and PMY), play an essential role in real-time applications related to high-precision reference frame conversion. Currently, the least squares (LS) + auto-regressive (AR) hybrid method is one of the main techniques for PM prediction, and the weighted LS + AR hybrid method performs well for PM short-term prediction. However, the corresponding covariance information of the LS fitting residuals deserves further exploration in the AR model. In this study, we have derived a modified stochastic model for the LS + AR hybrid method, namely the weighted LS + weighted AR hybrid method. Using the PM data products of IERS EOP 14 C04, the numerical results indicate that for PM short-term forecasting, the proposed weighted LS + weighted AR hybrid method shows an advantage over both the LS + AR hybrid method and the weighted LS + AR hybrid method. Compared to the mean absolute errors (MAEs) of PMX/PMY short-term prediction of the LS + AR hybrid method and the weighted LS + AR hybrid method, the weighted LS + weighted AR hybrid method shows average improvements of 6.61%/12.08% and 0.24%/11.65%, respectively. Besides, for the slopes of the linear regression lines fitted to the errors of each method, the growth of the prediction error of the proposed method is slower than that of the other two methods.
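The LS + AR pipeline described above amounts to: fit a deterministic trend-plus-harmonic model by least squares, model the fitting residuals autoregressively, and add the two extrapolations. The sketch below is deliberately simplified relative to operational PM prediction (a linear trend plus a single annual harmonic, AR(1) instead of a higher-order AR model, and no weighting), so it shows the structure of the method rather than its accuracy.

```python
import numpy as np

def ls_ar_forecast(y, t, horizon, period=365.25):
    """Minimal LS + AR sketch: least-squares fit of trend + one harmonic,
    then an AR(1) model on the fitting residuals extends the series."""
    X = np.column_stack([np.ones_like(t), t,
                         np.cos(2 * np.pi * t / period),
                         np.sin(2 * np.pi * t / period)])
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)       # LS part
    resid = y - X @ coef
    den = float(np.dot(resid[:-1], resid[:-1]))
    phi = float(np.dot(resid[1:], resid[:-1])) / den if den > 0 else 0.0
    t_new = t[-1] + np.arange(1, horizon + 1)
    Xn = np.column_stack([np.ones_like(t_new), t_new,
                          np.cos(2 * np.pi * t_new / period),
                          np.sin(2 * np.pi * t_new / period)])
    preds, r = [], resid[-1]
    for _ in range(horizon):                           # AR(1) extrapolation
        r = phi * r
        preds.append(r)
    return Xn @ coef + np.array(preds)
```

The weighted variants in the paper replace the plain `lstsq` and AR fits with versions that use the covariance information of the observations and of the LS residuals, respectively.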