To effectively extract multi-scale information from observation data and improve computational efficiency, a multi-scale second-order autoregressive recursive filter (MSRF) method is designed. The second-order autoregressive filter used in this study replaces the traditional first-order recursive filter used in the spatial multi-scale recursive filter (SMRF) method. The experimental results indicate that the MSRF scheme successfully extracts the various scales of information resolved by observations. Moreover, compared with the SMRF scheme, the MSRF scheme improves computational accuracy and efficiency to some extent. The MSRF scheme not only propagates the innovation over a longer distance without attenuation, but also reduces the mean absolute deviation between the reconstructed sea ice concentration results and observations by about 3.2% compared to the SMRF scheme. On the other hand, compared with the traditional first-order recursive filters used in the SMRF scheme, in which multiple filter passes are executed, the MSRF scheme only needs to perform two filter passes in one iteration, greatly improving filtering efficiency. In the two-dimensional experiment on sea ice concentration, the calculation time of the MSRF scheme is only 1/7 of that of the SMRF scheme. This means that the MSRF scheme can achieve better performance at less computational cost, which is of great significance for further application in real-time ocean or sea ice data assimilation systems in the future.
Funding: The National Key Research and Development Program of China under contract No. 2023YFC3107701 and the National Natural Science Foundation of China under contract No. 42375143.
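As a rough illustration of the filtering idea (not the paper's MSRF implementation), the Python sketch below applies a generic second-order autoregressive recursive smoother to a one-dimensional innovation field, with a forward and a backward pass per iteration; the coefficients a1, a2 and the normalization are illustrative assumptions.

```python
import numpy as np

def second_order_recursive_filter(field, a1, a2, n_iter=1):
    """Apply a second-order autoregressive smoother along a 1-D field.

    One iteration is a forward pass followed by a backward pass, which
    together approximate a symmetric (Gaussian-like) correlation operator.
    Coefficients a1, a2 and the normalization b0 are illustrative.
    """
    b0 = 1.0 - a1 - a2          # normalization so a constant field is preserved
    x = np.asarray(field, dtype=float).copy()
    for _ in range(n_iter):
        # forward pass: y[i] = b0*x[i] + a1*y[i-1] + a2*y[i-2]
        y = np.zeros_like(x)
        for i in range(len(x)):
            y[i] = b0 * x[i]
            if i >= 1:
                y[i] += a1 * y[i - 1]
            if i >= 2:
                y[i] += a2 * y[i - 2]
        # backward pass with the same recursion for symmetry
        z = np.zeros_like(y)
        for i in range(len(y) - 1, -1, -1):
            z[i] = b0 * y[i]
            if i + 1 < len(y):
                z[i] += a1 * z[i + 1]
            if i + 2 < len(y):
                z[i] += a2 * z[i + 2]
        x = z
    return x

# toy example: an innovation spike spread by two filter iterations
innovation = np.zeros(101)
innovation[50] = 1.0
smoothed = second_order_recursive_filter(innovation, a1=0.9, a2=-0.2, n_iter=2)
```

With a1 = 0.9 and a2 = -0.2 the recursion is stable and a constant field passes through unchanged, which is the basic property a correlation-modeling recursive filter must have.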
To ensure agreement between theoretical calculations and experimental data, the parameters of selected nuclear physics models are perturbed and fine-tuned in nuclear data evaluations. This approach assumes that the chosen set of models accurately represents the 'true' distribution of the considered observables. Furthermore, the models are chosen globally, indicating their applicability across the entire energy range of interest. However, this approach overlooks uncertainties inherent in the models themselves. In this work, we propose that instead of globally selecting a winning model set and proceeding with it as if it were the 'true' model set, we instead take a weighted average over multiple models within a Bayesian model averaging (BMA) framework, each weighted by its posterior probability. The method involves executing a set of TALYS calculations in which multiple nuclear physics models and their parameters are randomly varied to yield a vector of calculated observables. Next, the computed likelihood function values at each incident energy point were combined with the prior distributions to obtain updated posterior distributions for selected cross sections and the elastic angular distributions. As the cross sections and elastic angular distributions were updated locally on a per-energy-point basis, the approach typically results in discontinuities or "kinks" in the cross-section curves, and these were addressed using spline interpolation. The proposed BMA method was applied to the evaluation of proton-induced reactions on ^(58)Ni between 1 and 100 MeV. The results compared favorably with experimental data as well as with the TENDL-2023 evaluation.
Funding: This work received funding from the Paul Scherrer Institute, Switzerland, through the NES/GFA-ABE Cross Project.
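The core BMA step — weighting candidate model predictions by their posterior probability at a single energy point — can be sketched as follows. The Gaussian likelihood, the uniform prior, and the numerical values are assumptions for illustration only, not the actual TALYS/^(58)Ni workflow.

```python
import numpy as np

def bma_posterior_weights(pred, y_obs, sigma_obs, prior=None):
    """Posterior model weights from Gaussian likelihoods at one energy point.

    pred      : (n_models,) cross sections predicted by each model variant
    y_obs     : experimental cross section at this energy
    sigma_obs : experimental standard deviation
    prior     : (n_models,) prior weights (uniform if None)
    """
    pred = np.asarray(pred, float)
    if prior is None:
        prior = np.full(pred.shape, 1.0 / len(pred))
    loglik = -0.5 * ((pred - y_obs) / sigma_obs) ** 2   # Gaussian log-likelihood (constant dropped)
    w = prior * np.exp(loglik - loglik.max())           # subtract max for numerical stability
    return w / w.sum()

pred = np.array([812.0, 845.0, 790.0])     # hypothetical model predictions (mb)
w = bma_posterior_weights(pred, y_obs=830.0, sigma_obs=25.0)
bma_estimate = np.dot(w, pred)              # posterior-weighted (BMA) cross section
```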
Xinjiang Uygur Autonomous Region is a typical inland arid area in China with a sparse and uneven distribution of meteorological stations, limited access to precipitation data, and significant water scarcity. Evaluating and integrating precipitation datasets from different sources to accurately characterize precipitation patterns has therefore become a challenge; meeting it would provide more accurate and alternative precipitation information for the region and could even improve the performance of hydrological modelling. This study evaluated the applicability of five widely used satellite-based precipitation products (Climate Hazards Group InfraRed Precipitation with Station (CHIRPS), China Meteorological Forcing Dataset (CMFD), Climate Prediction Center morphing method (CMORPH), Precipitation Estimation from Remotely Sensed Information using Artificial Neural Networks-Climate Data Record (PERSIANN-CDR), and Tropical Rainfall Measuring Mission Multi-satellite Precipitation Analysis (TMPA)) and a reanalysis precipitation dataset (ECMWF Reanalysis v5-Land Dataset (ERA5-Land)) in Xinjiang using ground-based observational precipitation data from a limited number of meteorological stations. Based on this assessment, we proposed a framework that integrated the different precipitation datasets with varying spatial resolutions using a dynamic Bayesian model averaging (DBMA) approach, the expectation-maximization method, and the ordinary Kriging interpolation method. The daily precipitation data merged using the DBMA approach exhibited distinct spatiotemporal variability, with an outstanding performance indicated by a low root mean square error (RMSE = 1.40 mm/d) and a high Pearson's correlation coefficient (CC = 0.67). Compared with traditional simple model averaging (SMA) and the individual product data, the overall performance of DBMA was more robust, although the DBMA-fused precipitation data were slightly inferior to the best individual precipitation product (CMFD). The error analysis between the DBMA-fused precipitation dataset and the more advanced Integrated Multi-satellite Retrievals for Global Precipitation Measurement Final (IMERG-F) precipitation product, as well as hydrological simulations in the Ebinur Lake Basin, further demonstrated the superior performance of the DBMA-fused precipitation dataset over the entire Xinjiang region. The proposed framework for fusing multi-source precipitation data with different spatial resolutions is feasible for application in inland arid areas, and it aids in obtaining more accurate regional hydrological information and in improving regional water resources management capabilities and meteorological research in these regions.
Funding: Supported by the Technology Innovation Team (Tianshan Innovation Team), Innovative Team for Efficient Utilization of Water Resources in Arid Regions (2022TSYCTD0001); the National Natural Science Foundation of China (42171269); and the Xinjiang Academician Workstation Cooperative Research Project (2020.B-001).
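A minimal sketch of the two headline evaluation metrics, RMSE and Pearson's CC, used to score a merged product against gauge observations; the precipitation values below are hypothetical.

```python
import numpy as np

def rmse(sim, obs):
    sim, obs = np.asarray(sim, float), np.asarray(obs, float)
    return np.sqrt(np.mean((sim - obs) ** 2))

def pearson_cc(sim, obs):
    sim, obs = np.asarray(sim, float), np.asarray(obs, float)
    return np.corrcoef(sim, obs)[0, 1]

# hypothetical daily precipitation (mm/d) at co-located gauge pixels
obs    = np.array([0.0, 2.1, 5.4, 0.3, 12.0, 0.0])
merged = np.array([0.2, 1.8, 6.0, 0.0, 10.5, 0.4])
print(f"RMSE = {rmse(merged, obs):.2f} mm/d, CC = {pearson_cc(merged, obs):.2f}")
```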
In this paper, a model averaging method is proposed for varying-coefficient models with response missing at random by establishing a weight selection criterion based on cross-validation. Under certain regularity conditions, it is proved that the proposed method is asymptotically optimal in the sense of achieving the minimum squared error.
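A hedged sketch of cross-validation-based weight selection in the general model-averaging spirit of this abstract (it does not reproduce the paper's estimator for varying-coefficient models with missing responses): weights on the unit simplex are chosen to minimize the out-of-fold squared error.

```python
import numpy as np
from scipy.optimize import minimize

def cv_model_averaging_weights(cv_preds, y):
    """Weights on the simplex minimizing cross-validated squared error.

    cv_preds : (n, M) out-of-fold predictions from M candidate models
    y        : (n,) observed responses
    """
    n, M = cv_preds.shape

    def cv_loss(w):
        return np.sum((y - cv_preds @ w) ** 2)

    cons = ({'type': 'eq', 'fun': lambda w: np.sum(w) - 1.0},)
    bounds = [(0.0, 1.0)] * M
    w0 = np.full(M, 1.0 / M)
    return minimize(cv_loss, w0, bounds=bounds, constraints=cons).x

# toy data: three candidate models with different noise levels
rng = np.random.default_rng(0)
y = rng.normal(size=50)
cv_preds = np.column_stack([y + rng.normal(scale=s, size=50) for s in (0.5, 1.0, 2.0)])
weights = cv_model_averaging_weights(cv_preds, y)   # most weight goes to the least noisy model
```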
In this article, we consider the asymptotic behavior of the extreme value distribution with extreme value index γ > 0. The rates of uniform convergence to the Fréchet distribution are established under the second-order regular variation condition.
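For reference, the standard Fréchet limit law with extreme value index γ > 0 (shape parameter 1/γ) can be written as below; the paper's exact normalization is not stated in the abstract.

```latex
\Phi_{1/\gamma}(x) = \exp\!\left(-x^{-1/\gamma}\right), \qquad x > 0,\ \gamma > 0 .
```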
Background Rumen bacterial groups can affect growth performance, such as average daily gain (ADG), feed intake, and efficiency. The study aimed to investigate the inter-relationship of rumen bacterial composition, rumen fermentation indicators, serum indicators, and growth performance of Holstein heifer calves with different ADG. Twelve calves were chosen from a trial with 60 calves and divided into higher-ADG (HADG, high pre- and post-weaning ADG, n = 6) and lower-ADG (LADG, low pre- and post-weaning ADG, n = 6) groups to investigate differences in bacterial composition and functions and host phenotype. Results During the preweaning period, the relative abundances of propionate producers, including g_norank_f_Butyricicoccaceae, g_Pyramidobacter, and g_norank_f_norank_o_Clostridia_vadin BB60_group, were higher in HADG calves (LDA > 2, P < 0.05). Enrichment of these bacteria resulted in increased levels of propionate, a gluconeogenic precursor, in preweaning HADG calves (adjusted P < 0.05), which consequently raised serum glucose concentrations (adjusted P < 0.05). In contrast, the relative abundances of rumen bacteria in post-weaning HADG calves did not exert this effect. Moreover, no significant differences were observed in rumen fermentation parameters and serum indices between the two groups. Conclusions The findings of this study revealed that the preweaning period is the window of opportunity for rumen bacteria to regulate the ADG of calves.
Funding: Funded by the National Key R&D Program of China (2022YFA1304204), the Agricultural Science and Technology Innovation Program (CAAS-ASTIP-2017-FRI-04), and the Beijing Innovation Consortium of Livestock Research System (BAIC05-2023).
Second-order axially moving systems are common models in the field of dynamics, such as axially moving strings, cables, and belts. In the traditional research work, it is difficult to obtain closed-form solutions for the forced vibration when the damping effect and the coupling effect of multiple second-order models are considered. In this paper, Green's function method based on the Laplace transform is used to obtain closed-form solutions for the forced vibration of second-order axially moving systems. By taking the axially moving damped string system and the multi-string system connected by springs as examples, the detailed solution methods and the analytical Green's functions of these second-order systems are given. The mode functions and frequency equations are also obtained from the derived Green's functions. The reliability and convenience of the results are verified by several examples. This paper provides a systematic analytical method for the dynamic analysis of second-order axially moving systems, and the obtained Green's functions are applicable to different second-order systems rather than just string systems. In addition, the work of this paper also has positive significance for the study of the forced vibration of higher-order systems.
Funding: Project supported by the National Natural Science Foundation of China (No. 12272323).
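For orientation, one commonly used form of the governing equation of a viscously damped axially moving string is shown below, with transverse displacement u(x,t), axial speed v, tension T, mass per unit length ρA, damping coefficient c, and distributed load f; the paper's exact model and damping terms may differ.

```latex
\rho A\left(\frac{\partial^{2}u}{\partial t^{2}} + 2v\,\frac{\partial^{2}u}{\partial x\,\partial t} + v^{2}\frac{\partial^{2}u}{\partial x^{2}}\right) + c\,\frac{\partial u}{\partial t} - T\,\frac{\partial^{2}u}{\partial x^{2}} = f(x,t).
```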
Periodic components are of great significance for the fault diagnosis and health monitoring of rotating machinery. Time synchronous averaging is an effective and convenient technique for extracting those components. However, the performance of time synchronous averaging is seriously limited when the separate segments are poorly synchronized. To solve this problem, this paper proposes a new averaging method capable of extracting periodic components without an external reference or an accurate period. With this approach, phase detection and compensation eliminate the phase differences among all segments, which enables the segments to be well synchronized. The effectiveness of the proposed method is validated on numerical and experimental signals.
Funding: Supported by the National Postdoctoral Program for Innovative Talent of China (Grant No. BX20180031).
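A minimal Python sketch of the idea of phase detection and compensation before averaging: each segment is aligned to a reference segment by the lag that maximizes their circular cross-correlation, then the aligned segments are averaged. The segmentation by a known period and the toy signal are simplifying assumptions; the paper's method notably does not require an accurate period.

```python
import numpy as np

def phase_aligned_average(signal, period, n_segments):
    """Split a signal into fixed-length segments, align each segment to the
    first one by the lag of maximum circular cross-correlation, then average."""
    segs = [signal[i * period:(i + 1) * period] for i in range(n_segments)]
    ref = segs[0]
    aligned = [ref]
    for seg in segs[1:]:
        # circular cross-correlation via FFT to estimate the phase shift
        xc = np.fft.ifft(np.fft.fft(ref) * np.conj(np.fft.fft(seg))).real
        lag = int(np.argmax(xc))
        aligned.append(np.roll(seg, lag))
    return np.mean(aligned, axis=0)

# toy example: noisy periodic signal with small random phase jitter per segment
fs, period, n = 1000, 200, 20
t = np.arange(period) / fs
rng = np.random.default_rng(1)
segments = [np.sin(2 * np.pi * 25 * t + rng.normal(scale=0.2))
            + rng.normal(scale=0.5, size=period) for _ in range(n)]
signal = np.concatenate(segments)
averaged = phase_aligned_average(signal, period, n)
```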
This study is concerned with the three-dimensional (3D) stagnation-point mixed convection flow past a vertical surface, considering first-order and second-order velocity slips. To the authors' knowledge, this is the first study presenting this very interesting analysis. Nonlinear partial differential equations for the flow problem are transformed into nonlinear ordinary differential equations (ODEs) by using an appropriate similarity transformation. These ODEs with the corresponding boundary conditions are numerically solved by utilizing the bvp4c solver in the MATLAB programming language. The effects of the governing parameters on the non-dimensional velocity profiles, temperature profiles, skin friction coefficients, and the local Nusselt number are presented in detail through a series of graphs and tables. Interestingly, it is reported that the reduced skin friction coefficient decreases for the assisting flow situation and increases for the opposing flow situation. The numerical computations of the present work are compared with those from other research available in specific situations, and an excellent consensus is observed. Another exciting feature of this work is the existence of dual solutions. An important remark is that the dual solutions exist for both assisting and opposing flows. A linear stability analysis is performed, showing that one solution is stable and the other solution is not stable. We notice that the mixed convection and velocity slip parameters have strong effects on the flow characteristics. These effects are depicted in graphs and discussed in this paper. The obtained results show that the first-order and second-order slip parameters have a considerable effect on the flow, as well as on the heat transfer characteristics.
Funding: Project supported by the Executive Agency for Higher Education, Research, Development and Innovation Funding of Romania (No. PN-III-P4-PCE-2021-0993).
In this paper, we define some new sets of non-elementary functions in a group of solutions x(t) that are sine and cosine to the upper limit of integration in a non-elementary integral that can be arbitrary. We are using Abel's methods, described by Armitage and Eberlein. The key is to start with a non-elementary integral function, differentiating and inverting, and then define a set of three functions that belong together. Differentiating these functions twice gives second-order nonlinear ODEs that have the defined set of functions as solutions. We will study some of the second-order nonlinear ODEs, especially those that exhibit limit cycles. Using the methods described in this paper, it is possible to define many other sets of non-elementary functions that give solutions to some second-order nonlinear autonomous ODEs.
A multi-objective linear programming problem is constructed from a fuzzy linear programming problem; this is because the fuzzy programming method is used during the solution. The multi-objective linear programming problem can be converted into a single objective function by various methods, such as Chandra Sen's method, the weighted sum method, the ranking function method, and the statistical averaging method. In this paper, both Chandra Sen's method and the statistical averaging method are used to form a single objective function from the multi-objective functions. Two multi-objective programming problems are solved to verify the results: one is a numerical example and the other is a real-life example. The problems are then solved by the ordinary simplex method and by the fuzzy programming method. It can be seen that the fuzzy programming method gives better optimal values than the ordinary simplex method.
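The sketch below, using scipy.optimize.linprog, shows one Chandra-Sen-style scalarization in which each objective is scaled by its individual optimum before summing into a single objective; the coefficients are hypothetical and the paper's statistical averaging formulas may differ.

```python
import numpy as np
from scipy.optimize import linprog

# Hypothetical maximization problem with two objectives, shared constraints Ax <= b, x >= 0.
C = np.array([[3.0, 2.0],     # objective 1 coefficients
              [1.0, 4.0]])    # objective 2 coefficients
A = np.array([[1.0, 1.0],
              [2.0, 1.0]])
b = np.array([10.0, 15.0])

# Step 1: optimize each objective separately (linprog minimizes, so negate for maximization).
z_opt = []
for c in C:
    res = linprog(-c, A_ub=A, b_ub=b, bounds=[(0, None)] * 2)
    z_opt.append(-res.fun)

# Step 2: scale each objective by its individual optimum, sum them,
# and solve the resulting single-objective LP.
weights = 1.0 / np.array(z_opt)
combined = weights @ C
res = linprog(-combined, A_ub=A, b_ub=b, bounds=[(0, None)] * 2)
x_star = res.x   # compromise solution of the scalarized problem
```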
The study of average convection in a rotating cavity subjected to modulated rotation is an interesting area for the development of both fundamental and applied science. This phenomenon finds application in the field of mass transfer and fluid flow control, relevant examples being crystal growth under reduced gravity and fluid mixing in microfluidic devices for cell cultures. In this study, the averaged flow generated by the oscillating motion of a fluid in a planar layer rotating about a horizontal axis is experimentally investigated. The boundaries of the layer are maintained at constant temperatures, while the lateral cylindrical wall is thermally insulated. It is demonstrated that libration results in intense oscillatory fluid motion, which in turn produces a time-averaged flow. For the first time, quantitative measures of the instantaneous velocity field are obtained using the Particle Image Velocimetry technique. It is revealed that the flow has the form of counter-rotating vortices. The sense of the vortex circulation changes during a libration cycle. An increase in the rotation rate and amplitude of the cavity libration results in an increase in the flow intensity. The heat transfer and time-averaged velocity are examined as a function of the dimensionless oscillation frequency, and resonant excitation of the heat transfer and the average oscillation velocity is revealed. The threshold curve for the onset of the averaged convection is identified in the plane of control parameters (dimensionless rotational velocity and pulsation Reynolds number). It is found that an increase in the dimensionless rotational velocity has a stabilizing effect on the onset of convection.
Funding: Supported by the Russian Science Foundation (Grant No. 22-71-00086).
This work presents a comprehensive second-order predictive modeling (PM) methodology designated by the acronym 2nd-BERRU-PMD. The attribute "2nd" indicates that this methodology incorporates second-order uncertainties (means and covariances) and second-order sensitivities of computed model responses to model parameters. The acronym BERRU stands for "Best-Estimate Results with Reduced Uncertainties" and the last letter ("D") in the acronym indicates "deterministic," referring to the deterministic inclusion of the computational model responses. The 2nd-BERRU-PMD methodology is fundamentally based on the maximum entropy (MaxEnt) principle. This principle is in contradistinction to the fundamental principle that underlies the extant data assimilation and/or adjustment procedures, which minimize in a least-square sense a subjective user-defined functional which is meant to represent the discrepancies between measured and computed model responses. It is shown that the 2nd-BERRU-PMD methodology generalizes and extends current data assimilation and/or data adjustment procedures while overcoming the fundamental limitations of these procedures. In the accompanying work (Part II), the alternative framework for developing the "second-order MaxEnt predictive modelling methodology" is presented by incorporating probabilistically (as opposed to "deterministically") the computed model responses.
This work presents a comprehensive second-order predictive modeling (PM) methodology based on the maximum entropy (MaxEnt) principle for obtaining best-estimate mean values and correlations for model responses and parameters. This methodology is designated by the acronym 2nd-BERRU-PMP, where the attribute "2nd" indicates that this methodology incorporates second-order uncertainties (means and covariances) and second (and higher) order sensitivities of computed model responses to model parameters. The acronym BERRU stands for "Best-Estimate Results with Reduced Uncertainties" and the last letter ("P") in the acronym indicates "probabilistic," referring to the MaxEnt probabilistic inclusion of the computational model responses. This is in contradistinction to the 2nd-BERRU-PMD methodology, which deterministically combines the computed model responses with the experimental information, as presented in the accompanying work (Part I). Although both the 2nd-BERRU-PMP and the 2nd-BERRU-PMD methodologies yield expressions that include second (and higher) order sensitivities of responses to model parameters, the respective expressions for the predicted responses, for the calibrated predicted parameters and for their predicted uncertainties (covariances), are not identical to each other. Nevertheless, the results predicted by both the 2nd-BERRU-PMP and the 2nd-BERRU-PMD methodologies encompass, as particular cases, the results produced by the extant data assimilation and data adjustment procedures, which rely on the minimization, in a least-square sense, of a user-defined functional meant to represent the discrepancies between measured and computed model responses.
By integrating deep neural networks with reinforcement learning, the Double Deep Q Network (DDQN) algorithm overcomes the limitations of Q-learning in handling continuous spaces and is widely applied in the path planning of mobile robots. However, the traditional DDQN algorithm suffers from sparse rewards and inefficient utilization of high-quality data. Targeting those problems, an improved DDQN algorithm based on average Q-value estimation and reward redistribution is proposed. First, to enhance the precision of the target Q-value, the average of multiple previously learned Q-values from the target Q network is used to replace the single Q-value from the current target Q network. Next, a reward redistribution mechanism is designed to overcome the sparse reward problem by adjusting the final reward of each action using the round reward from trajectory information. Additionally, a reward-prioritized experience selection method is introduced, which ranks experience samples according to reward values to ensure frequent utilization of high-quality data. Finally, simulation experiments are conducted to verify the effectiveness of the proposed algorithm in a fixed-position scenario and in random environments. The experimental results show that, compared to the traditional DDQN algorithm, the proposed algorithm achieves shorter average running time, higher average return and fewer average steps. The performance of the proposed algorithm is improved by 11.43% in the fixed scenario and 8.33% in random environments. It not only plans economical and safe paths but also significantly improves efficiency and generalization in path planning, making it suitable for widespread application in autonomous navigation and industrial automation.
Funding: Funded by the National Natural Science Foundation of China (No. 62063006); the Guangxi Science and Technology Major Program (No. 2022AA05002); the Key Laboratory of AI and Information Processing (Hechi University), Education Department of Guangxi Zhuang Autonomous Region (No. 2022GXZDSY003); and the Central Leading Local Science and Technology Development Fund Project of Wuzhou (No. 202201001).
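A sketch of the average Q-value idea in isolation (reward redistribution and prioritized experience selection are omitted, and the array shapes are assumptions): the Double-DQN target evaluates the greedy action under an average of several recent target-network snapshots instead of a single one.

```python
import numpy as np

def averaged_double_dqn_target(q_online_next, q_target_next_history,
                               rewards, dones, gamma=0.99):
    """Double-DQN target with the target Q-values averaged over the last K
    snapshots of the target network (average Q-value estimation sketch).

    q_online_next         : (batch, n_actions) online-network Q-values for next states
    q_target_next_history : list of K arrays, each (batch, n_actions), from past target networks
    rewards, dones        : (batch,) arrays
    """
    a_star = np.argmax(q_online_next, axis=1)                         # action selection by online net
    q_avg = np.mean(np.stack(q_target_next_history, axis=0), axis=0)  # average over K snapshots
    q_eval = q_avg[np.arange(len(a_star)), a_star]                    # evaluation by averaged target
    return rewards + gamma * (1.0 - dones) * q_eval

# toy usage with random Q-values
rng = np.random.default_rng(0)
q_online_next = rng.normal(size=(4, 5))
history = [rng.normal(size=(4, 5)) for _ in range(3)]
targets = averaged_double_dqn_target(q_online_next, history,
                                     rewards=np.ones(4), dones=np.zeros(4))
```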
This work illustrates the innovative results obtained by applying the recently developed 2nd-order predictive modeling methodology called "2nd-BERRU-PM", where the acronym BERRU denotes "best-estimate results with reduced uncertainties" and "PM" denotes "predictive modeling." The physical system selected for this illustrative application is a polyethylene-reflected plutonium (acronym: PERP) OECD/NEA reactor physics benchmark. This benchmark is modeled using the neutron transport Boltzmann equation (involving 21,976 uncertain parameters), the solution of which is representative of "large-scale computations." The results obtained in this work confirm the fact that the 2nd-BERRU-PM methodology predicts best-estimate results that fall in between the corresponding computed and measured values, while reducing the predicted standard deviations of the predicted results to values smaller than either the experimentally measured or the computed values of the respective standard deviations. The obtained results also indicate that 2nd-order response sensitivities must always be included to quantify the need for including (or not) the 3rd- and/or 4th-order sensitivities. When the parameters are known with high precision, the contributions of the higher-order sensitivities diminish with increasing order, so that the inclusion of the 1st- and 2nd-order sensitivities may suffice for obtaining accurate predicted best-estimate response values and best-estimate standard deviations. On the other hand, when the parameters' standard deviations are sufficiently large to approach (or be outside of) the radius of convergence of the multivariate Taylor series which represents the response in the phase-space of model parameters, the contributions stemming from the 3rd- and even 4th-order sensitivities are necessary to ensure consistency between the computed and measured response. In such cases, the use of only the 1st-order sensitivities erroneously indicates that the computed results are inconsistent with the respective measured response. Ongoing research aims at extending the 2nd-BERRU-PM methodology to fourth order, thus enabling the computation of third-order response correlations (skewness) and fourth-order response correlations (kurtosis).
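As a hedged illustration of why second-order sensitivities enter the predicted moments, the sketch below propagates a parameter covariance matrix through a scalar response using the standard second-order Taylor formulas for normally distributed parameters; the numbers are hypothetical and this is not the 2nd-BERRU-PM formalism itself.

```python
import numpy as np

def second_order_moments(r0, grad, hess, cov):
    """Second-order propagation of parameter uncertainties to a scalar response,
    assuming normally distributed parameters with covariance `cov`:

        E[r]   ~= r0 + 0.5 * tr(H C)
        Var[r] ~= g^T C g + 0.5 * tr((H C)^2)
    """
    hc = hess @ cov
    mean = r0 + 0.5 * np.trace(hc)
    var = grad @ cov @ grad + 0.5 * np.trace(hc @ hc)
    return mean, var

# toy illustration with 3 parameters (all values are hypothetical)
grad = np.array([0.8, -0.3, 0.1])                  # first-order sensitivities
hess = np.array([[0.2, 0.0, 0.0],
                 [0.0, -0.1, 0.05],
                 [0.0, 0.05, 0.3]])                # second-order sensitivities
cov = np.diag([0.01, 0.04, 0.02])                  # parameter covariance matrix
mean, var = second_order_moments(r0=1.0, grad=grad, hess=hess, cov=cov)
```

The second trace term is what the first-order ("sandwich") propagation formula omits, which is the kind of contribution the benchmark results above show can matter when parameter uncertainties are large.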
In recent times, lithium-ion batteries have been widely used owing to their high energy density, extended cycle lifespan, and minimal self-discharge rate. The design of high-speed rechargeable lithium-ion batteries faces a significant challenge owing to the need to increase the average electric power during charging. This challenge results from the direct influence of the power level on the rate of the chemical reactions occurring in the battery electrodes. In this study, the Taguchi optimization method was used to enhance the average electric power during the charging process of lithium-ion batteries. The Taguchi technique is a statistical strategy that facilitates the systematic and efficient evaluation of numerous experimental variables. The proposed method involved varying seven input factors: positive electrode thickness, positive electrode material, positive electrode active material volume fraction, negative electrode active material volume fraction, separator thickness, positive current collector thickness, and negative current collector thickness. Three levels were assigned to each control factor to identify the optimal conditions and maximize the average electric power during charging. Moreover, a variance assessment analysis was conducted to validate the results obtained from the Taguchi analysis. The results revealed that the Taguchi method was an effective approach for optimizing the average electric power during the charging of lithium-ion batteries. They also indicate that the positive electrode material, followed by the separator thickness and the negative electrode active material volume fraction, were the key factors significantly influencing the average electric power response during charging. The identification of optimal conditions resulted in the improved performance of lithium-ion batteries, extending their potential in various applications. In particular, lithium-ion batteries with average electric power of 16 W and 17 W during charging were designed and simulated in the range of 0-12000 s using COMSOL Multiphysics software. This study efficiently employs the Taguchi optimization technique to develop lithium-ion batteries capable of storing a predetermined average electric power during the charging phase. Therefore, this method enables the battery to achieve complete charging within a specific timeframe tailored to a specific application. The implementation of this method can save costs, time, and materials compared with other alternative methods, such as the trial-and-error approach.
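A minimal sketch of the Taguchi "larger is better" signal-to-noise ratio and a main-effects comparison for a single factor; the run layout and power values are hypothetical, not the paper's orthogonal array.

```python
import numpy as np

def sn_larger_is_better(y):
    """Taguchi signal-to-noise ratio for a 'larger is better' response."""
    y = np.asarray(y, float)
    return -10.0 * np.log10(np.mean(1.0 / y ** 2))

# hypothetical results: 9 runs, levels of one control factor and measured average power (W)
levels = np.array([1, 1, 1, 2, 2, 2, 3, 3, 3])
power  = np.array([12.1, 13.4, 12.8, 14.9, 15.2, 14.5, 16.1, 16.8, 15.9])
sn = np.array([sn_larger_is_better([p]) for p in power])

# main effect of this factor: mean S/N at each level; the best level maximizes it
main_effect = {lvl: sn[levels == lvl].mean() for lvl in (1, 2, 3)}
best_level = max(main_effect, key=main_effect.get)
```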
Integrated satellite unmanned aerial vehicle relay networks (ISUAVRNs) have become a prominent topic in recent years. This paper investigates the average secrecy capacity (ASC) for reconfigurable intelligent surface (RIS)-enabled ISUAVRNs. In particular, an eavesdropper is considered that intercepts the legitimate information from the considered secrecy system. Detailed expressions for the ASC of the regarded secrecy system are derived with the aid of the reconfigurable intelligent surface. Furthermore, to gain insight into the effects of the major parameters on the ASC in the high signal-to-noise ratio regime, approximate expressions are also obtained, which provide an efficient way to evaluate the secrecy performance. Finally, some representative computer results are presented to verify the theoretical findings.
Funding: Supported by the National Natural Science Foundation of China under Grants 62001517 and 61971474, and the Beijing Nova Program under Grant Z201100006820121.
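The ASC definition E[max(C_B − C_E, 0)] can be estimated by Monte Carlo as in the hedged sketch below, which assumes simple Rayleigh-faded legitimate and eavesdropper links rather than the full RIS-enabled satellite-UAV channel model.

```python
import numpy as np

def average_secrecy_capacity(snr_legit_db, snr_eve_db, n_samples=100_000, seed=0):
    """Monte Carlo estimate of the average secrecy capacity (bits/s/Hz),
    assuming Rayleigh fading on both the legitimate and eavesdropper links."""
    rng = np.random.default_rng(seed)
    g_b = 10 ** (snr_legit_db / 10) * rng.exponential(size=n_samples)  # legitimate instantaneous SNR
    g_e = 10 ** (snr_eve_db / 10) * rng.exponential(size=n_samples)    # eavesdropper instantaneous SNR
    c_s = np.maximum(np.log2(1 + g_b) - np.log2(1 + g_e), 0.0)         # instantaneous secrecy capacity
    return c_s.mean()

asc = average_secrecy_capacity(snr_legit_db=15.0, snr_eve_db=5.0)
```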
In order to reveal the complex network characteristics and evolution principle of the China aviation network, the relationship between the average degree and the average path length of edge vertices of the China aviation network in 1988, 1994, 2001, 2008 and 2015 was studied. According to the theory and methods of complex networks, the network system was constructed with the city where the airport is located as the network node and the airline as the edge of the network. On the basis of the statistical data, the average degree and average path length of edge vertices of the China aviation network in 1988, 1994, 2001, 2008 and 2015 were calculated. Through regression analysis, it was found that the average degree had a logarithmic relationship with the average path length of edge vertices, and the two parameters of the logarithmic relationship had a linear evolutionary trace.
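A toy illustration with networkx of the two quantities being related, average degree and average shortest path length, plus a logarithmic fit across several hypothetical yearly values; the random graph is only a stand-in for the real aviation network.

```python
import numpy as np
import networkx as nx

# toy stand-in for one year's aviation network: nodes are cities, edges are routes
G = nx.erdos_renyi_graph(n=120, p=0.05, seed=42)
G = G.subgraph(max(nx.connected_components(G), key=len)).copy()  # largest connected component

avg_degree = 2 * G.number_of_edges() / G.number_of_nodes()
avg_path_length = nx.average_shortest_path_length(G)

# logarithmic fit L = a * ln(k) + b across several years' (k, L) pairs (hypothetical values)
k = np.array([6.2, 7.8, 9.5, 12.1, 15.4])
L = np.array([3.1, 2.9, 2.7, 2.5, 2.3])
a, b = np.polyfit(np.log(k), L, deg=1)
```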
In order to reveal the complex network characteristics and evolution principle of the China aviation network, the probability distribution and evolution trace of the arithmetic average of the edge vertices' nearest-neighbor average degree values of the China aviation network were studied based on the statistical data of the China civil aviation network in 1988, 1994, 2001, 2008 and 2015. According to the theory and methods of complex networks, the network system was constructed with the city where the airport is located as the network node and the route between cities as the edge of the network. Based on the statistical data, the arithmetic averages of the edge vertices' nearest-neighbor average degree values of the China aviation network in 1988, 1994, 2001, 2008 and 2015 were calculated. Using the probability statistical analysis method, it was found that the arithmetic average of the edge vertices' nearest-neighbor average degree values followed a normal probability distribution, and the location and scale parameters of the probability distribution had a linear evolution trace.
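A hedged networkx sketch of one reading of the quantity studied here: the nearest-neighbor average degree of each vertex, averaged arithmetically over the two end vertices of each edge, with a normal distribution fitted to obtain location and scale parameters; the graph is a toy stand-in for the real network.

```python
import numpy as np
import networkx as nx
from scipy import stats

G = nx.barabasi_albert_graph(n=150, m=2, seed=7)   # toy stand-in for an aviation network

# nearest-neighbor average degree of each vertex
knn = nx.average_neighbor_degree(G)

# arithmetic average of the two end vertices' values for each edge
edge_avg = np.array([(knn[u] + knn[v]) / 2 for u, v in G.edges()])

# fit a normal distribution: location (mu) and scale (sigma) parameters
mu, sigma = stats.norm.fit(edge_avg)
```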