Funding: Supported by the Guangdong Basic and Applied Basic Research Foundation (2023A1515011244).
Abstract: The state of in situ stress is a crucial parameter in subsurface engineering, especially for critical projects like nuclear waste repositories. As one of the two ISRM suggested methods, the overcoring (OC) method is widely used to estimate the full stress tensors in rocks by independent regression analysis of the data from each OC test. However, such customary independent analysis of individual OC tests, known as no pooling, is liable to yield unreliable test-specific stress estimates because of the various sources of uncertainty involved in the OC method. To address this problem, a practical and no-cost solution is to incorporate into the OC data analysis the additional information implied by adjacent OC tests, which are usually available in OC measurement campaigns. Hence, this paper presents a Bayesian partial pooling (hierarchical) model for the combined analysis of adjacent OC tests. We performed five case studies using OC test data collected at a nuclear waste repository research site in Sweden. The results demonstrate that partial pooling of adjacent OC tests indeed allows information to be borrowed across adjacent tests, and yields stress tensor estimates with reduced uncertainties for all individual tests simultaneously, compared with independent (no pooling) analysis, particularly for the unreliable no-pooling stress estimates. A further model comparison shows that the partial pooling model also gives better predictive performance, confirming that the information borrowed across adjacent OC tests is relevant and effective.
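The contrast between no pooling and partial pooling described in this abstract can be illustrated with a minimal normal-normal shrinkage sketch (not the paper's full stress-tensor model); all test values, variances, and the moment-based estimate of the between-test variance below are hypothetical.

```python
import numpy as np

# Hypothetical no-pooling estimates from 5 adjacent tests (one stress
# component, MPa) and their within-test variances -- illustrative numbers only.
y = np.array([12.0, 15.5, 9.8, 14.2, 30.0])   # the last test looks unreliable
s2 = np.array([1.0, 1.2, 0.9, 1.1, 25.0])     # large variance for the outlier

# Empirical-Bayes style partial pooling for a normal-normal hierarchy:
# y_i ~ N(theta_i, s2_i), theta_i ~ N(mu, tau2).
mu = np.average(y, weights=1.0 / s2)                 # pooled group mean
tau2 = max(np.var(y, ddof=1) - s2.mean(), 1e-6)      # crude between-test variance

# Shrinkage factor B_i: how far each test is pulled toward the group mean.
B = s2 / (s2 + tau2)
theta_partial = B * mu + (1.0 - B) * y

print("group mean        :", round(mu, 2))
print("shrinkage factors :", np.round(B, 2))
print("partially pooled  :", np.round(theta_partial, 2))
```

The unreliable test (large within-test variance) is shrunk strongly toward the group mean, while the well-constrained tests move little, which is the qualitative behaviour the abstract reports.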
Abstract: Research on zero-failure data is a relatively new field, but it is urgently required in practical projects, so this work has both theoretical and practical value. In this paper, for zero-failure data (t_i, n_i) at time t_i, if the prior distribution of the failure probability p_i = P{T < t_i} is a quasi-exponential distribution, the author gives the Bayesian estimation and hierarchical Bayesian estimation of p_i, and the reliability under the zero-failure data condition is also obtained.
Funding: Supported by the Natural Science Foundation of Chongqing (No. cstc2019jcyj-msxmX0017).
Abstract: Since orthogonal time-frequency space (OTFS) modulation can effectively handle the problems caused by the Doppler effect in high-mobility environments, it has gradually become a promising candidate modulation scheme for the next generation of mobile communication. However, the inter-Doppler interference (IDI) problem caused by fractional Doppler poses great challenges to channel estimation. To avoid this problem, this paper proposes a joint time and delay-Doppler (DD) domain channel estimation algorithm based on sparse Bayesian learning (SBL). First, we derive the original channel response (OCR) from the time-domain channel impulse response (CIR), which can reflect the channel variation during one OTFS symbol. Compared with the traditional channel model, the OCR can avoid the IDI problem. After that, the dimension of the OCR is reduced by using the basis expansion model (BEM) and the relationship between the time and DD domain channel models, so that the underdetermined problem is turned into an overdetermined problem. Finally, exploiting the sparsity of the channel in the delay domain, the SBL algorithm is used to estimate the basis coefficients in the BEM without any prior information about the channel. The simulation results show the effectiveness and superiority of the proposed channel estimation algorithm.
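The SBL step at the core of such an algorithm can be sketched for a generic real-valued linear model y = Φh + n with known noise variance (the paper's complex-valued joint time/DD-domain BEM formulation is more involved); the EM update of the per-coefficient precisions below is the standard one, and the dictionary and data are synthetic.

```python
import numpy as np

def sbl(Phi, y, noise_var=0.01, n_iter=50):
    """Minimal sparse Bayesian learning for y = Phi @ h + n with Gaussian
    priors h_i ~ N(0, 1/alpha_i) and EM updates of the precisions alpha_i."""
    n, m = Phi.shape
    alpha = np.ones(m)                      # per-coefficient prior precisions
    for _ in range(n_iter):
        # Posterior covariance and mean of the coefficients.
        Sigma = np.linalg.inv(Phi.T @ Phi / noise_var + np.diag(alpha))
        mu = Sigma @ Phi.T @ y / noise_var
        # EM update: alpha_i = 1 / E[h_i^2] = 1 / (mu_i^2 + Sigma_ii).
        alpha = 1.0 / (mu**2 + np.diag(Sigma))
    return mu

# Toy example: recover a sparse coefficient vector from noisy projections.
rng = np.random.default_rng(0)
Phi = rng.standard_normal((40, 20))
h_true = np.zeros(20)
h_true[[3, 7]] = [1.0, -0.8]
y = Phi @ h_true + 0.1 * rng.standard_normal(40)
print(np.round(sbl(Phi, y), 2))             # large values only at indices 3 and 7
```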
Abstract: Decision-theoretic interval estimation requires the use of loss functions that, typically, take into account the size and the coverage of the sets. We here consider the class of monotone loss functions that, under quite general conditions, guarantee Bayesian optimality of highest posterior probability sets. We focus on three specific families of monotone losses, namely the linear, the exponential and the rational losses, which differ in the way the sizes of the sets are penalized. Within the standard yet important setup of a normal model we propose: 1) an optimality analysis, to compare the solutions yielded by the alternative classes of losses; 2) a regret analysis, to evaluate the additional loss of standard non-optimal intervals of fixed credibility. The article uses an application to a clinical trial as an illustrative example.
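For a normal posterior and a linear loss that charges a constant rate a per unit of interval length, the Bayes-optimal set is the HPD interval whose boundary density equals a. The sketch below computes that interval numerically and the regret of a fixed 95% interval; the posterior parameters and the value of a are illustrative.

```python
import numpy as np
from scipy.stats import norm
from scipy.optimize import brentq

# Normal posterior N(m, s^2) -- illustrative values.
m, s = 1.5, 0.8
a = 0.2          # linear loss: a * length(C) + indicator(theta not in C)

# The optimal set is the HPD region whose boundary density equals a:
# solve norm.pdf(m + h, m, s) = a for the half-width h.
peak = norm.pdf(m, m, s)
assert a < peak, "penalty too large: the optimal set is empty"
h = brentq(lambda t: norm.pdf(m + t, m, s) - a, 0.0, 10 * s)
lo, hi = m - h, m + h

coverage = norm.cdf(hi, m, s) - norm.cdf(lo, m, s)
exp_loss = a * (hi - lo) + (1.0 - coverage)
print(f"optimal interval [{lo:.3f}, {hi:.3f}], coverage {coverage:.3f}, "
      f"expected loss {exp_loss:.3f}")

# Regret of a standard fixed-credibility (95%) interval under the same loss.
z = norm.ppf(0.975)
exp_loss_95 = a * (2 * z * s) + 0.05
print(f"regret of the 95% interval: {exp_loss_95 - exp_loss:.3f}")
```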
Funding: This work was supported and funded by the Deanship of Scientific Research at Imam Mohammad Ibn Saud Islamic University (IMSIU) (Grant Number IMSIU-RG23142).
Abstract: This article introduces a novel variant of the generalized linear exponential (GLE) distribution, known as the sine generalized linear exponential (SGLE) distribution. The SGLE distribution utilizes the sine transformation to enhance its capabilities. The updated distribution is very adaptable and may be efficiently used in the modeling of survival data and reliability problems. The suggested model incorporates a hazard rate function (HRF) that may display an increasing, J-shaped, or bathtub form, depending on its parameter values. This model includes many well-known lifetime distributions as sub-models. The suggested model is accompanied by a range of statistical properties. The model parameters are examined using the techniques of maximum likelihood and Bayesian estimation under progressively censored data. In order to evaluate the effectiveness of these techniques, we provide a set of simulated data for testing purposes. The relevance of the newly presented model is shown via two real-world dataset applications, highlighting its superiority over other respected similar models.
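The sine transformation underlying the SGLE family maps a baseline CDF F(x) to sin((π/2)F(x)), with the density following from the chain rule. The sketch below applies it to a two-parameter linear-exponential baseline as a stand-in; the paper's exact GLE parameterization may differ, so the baseline functions here are assumptions for illustration.

```python
import numpy as np

def base_cdf(x, a=1.0, b=0.5):
    """Placeholder baseline: linear-exponential CDF 1 - exp(-(a*x + b*x^2/2)).
    The paper's GLE baseline adds a shape parameter; this is a stand-in."""
    return 1.0 - np.exp(-(a * x + 0.5 * b * x**2))

def base_pdf(x, a=1.0, b=0.5):
    return (a + b * x) * np.exp(-(a * x + 0.5 * b * x**2))

def sine_g_cdf(x, **kw):
    """Sine-G transform: F_sin(x) = sin(pi/2 * F(x))."""
    return np.sin(0.5 * np.pi * base_cdf(x, **kw))

def sine_g_pdf(x, **kw):
    """Chain rule: f_sin(x) = (pi/2) * cos(pi/2 * F(x)) * f(x)."""
    return 0.5 * np.pi * np.cos(0.5 * np.pi * base_cdf(x, **kw)) * base_pdf(x, **kw)

x = np.linspace(0.01, 3.0, 5)
print("baseline CDF:", np.round(base_cdf(x), 3))
print("sine-G  CDF :", np.round(sine_g_cdf(x), 3))
print("sine-G  pdf :", np.round(sine_g_pdf(x), 3))
```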
Funding: Supported by the National Key R&D Program of China (Grant Nos. 2020YFB1709901, 2020YFB1709904), the National Natural Science Foundation of China (Grant Nos. 51975495, 51905460), the Guangdong Provincial Basic and Applied Basic Research Foundation of China (Grant No. 2021-A1515012286), and the Science and Technology Plan Project of Fuzhou City of China (Grant No. 2022-P-022).
Abstract: The accurate estimation of parameters is the premise for establishing a high-fidelity simulation model of a valve-controlled cylinder system. Bench test data are easily obtained, but it is challenging to emulate actual loads in research on parameter estimation of valve-controlled cylinder systems. Although the operating data of the control valve contain actual load information, acquiring that information remains challenging. This paper proposes a method that fuses bench test and operating data for parameter estimation to address these problems. The proposed method is based on Bayesian theory, and its core is a pool fusion of prior information from bench test and operating data. First, a system model is established and the parameters in the model are analysed. Second, the bench and operating data of the system are collected. Then, the model parameters and weight coefficients are estimated using the data fusion method. Finally, the estimation results of the data fusion method, the Bayesian method, and the particle swarm optimisation (PSO) algorithm for the system model parameters are compared. The research shows that the weight coefficient represents the contribution of different prior information to the parameter estimation result. Parameter estimation based on the data fusion method performs better than the Bayesian method and the PSO algorithm. Increasing load complexity leads to a decrease in model accuracy, highlighting the crucial role of the data fusion method in parameter estimation studies.
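The abstract does not spell out the pool fusion itself; as a generic illustration of weighting two prior sources, the sketch below combines two hypothetical normal priors (bench test and operating data) with a linear opinion pool of weight w and updates the result with a few measurements on a grid. All numbers and the pooling rule are assumptions, not the paper's formulation.

```python
import numpy as np
from scipy.stats import norm

# Grid over a single model parameter theta (illustrative).
theta = np.linspace(0.0, 4.0, 2001)

# Priors elicited from two sources, combined with a linear opinion pool.
prior_bench = norm.pdf(theta, 2.0, 0.3)      # bench-test prior (hypothetical)
prior_oper = norm.pdf(theta, 1.5, 0.6)       # operating-data prior (hypothetical)
w = 0.7                                      # weight of the bench-test prior
prior_pool = w * prior_bench + (1 - w) * prior_oper

# Likelihood of a few new measurements y_i ~ N(theta, 0.4^2).
y = np.array([1.8, 1.9, 2.1])
lik = np.prod([norm.pdf(yi, theta, 0.4) for yi in y], axis=0)

post = prior_pool * lik
post /= np.trapz(post, theta)                # normalize on the grid
print("posterior mean:", round(np.trapz(theta * post, theta), 3))
```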
Funding: Supported by the Technology Innovation Team (Tianshan Innovation Team), Innovative Team for Efficient Utilization of Water Resources in Arid Regions (2022TSYCTD0001), the National Natural Science Foundation of China (42171269), and the Xinjiang Academician Workstation Cooperative Research Project (2020.B-001).
Abstract: Xinjiang Uygur Autonomous Region is a typical inland arid area in China with a sparse and uneven distribution of meteorological stations, limited access to precipitation data, and significant water scarcity. Evaluating and integrating precipitation datasets from different sources to accurately characterize precipitation patterns has become a challenge in providing more accurate and alternative precipitation information for the region, which can even improve the performance of hydrological modelling. This study evaluated the applicability in Xinjiang of five widely used satellite-based precipitation products (Climate Hazards Group InfraRed Precipitation with Station (CHIRPS), China Meteorological Forcing Dataset (CMFD), Climate Prediction Center morphing method (CMORPH), Precipitation Estimation from Remotely Sensed Information using Artificial Neural Networks-Climate Data Record (PERSIANN-CDR), and Tropical Rainfall Measuring Mission Multi-satellite Precipitation Analysis (TMPA)) and a reanalysis precipitation dataset (ECMWF Reanalysis v5-Land Dataset (ERA5-Land)), using ground-based observational precipitation data from a limited number of meteorological stations. Based on this assessment, we proposed a framework that integrated the different precipitation datasets, with their varying spatial resolutions, using a dynamic Bayesian model averaging (DBMA) approach, the expectation-maximization method, and the ordinary Kriging interpolation method. The daily precipitation data merged using the DBMA approach exhibited distinct spatiotemporal variability with outstanding performance, as indicated by a low root mean square error (RMSE = 1.40 mm/d) and a high Pearson's correlation coefficient (CC = 0.67). Compared with traditional simple model averaging (SMA) and the individual products, although the DBMA-fused precipitation data were slightly inferior to the best precipitation product (CMFD), the overall performance of DBMA was more robust. The error analysis between the DBMA-fused precipitation dataset and the more advanced Integrated Multi-satellite Retrievals for Global Precipitation Measurement Final (IMERG-F) precipitation product, as well as hydrological simulations in the Ebinur Lake Basin, further demonstrated the superior performance of the DBMA-fused precipitation dataset over the entire Xinjiang region. The proposed framework for solving the fusion problem of multi-source precipitation data with different spatial resolutions is feasible for application in inland arid areas, and aids in obtaining more accurate regional hydrological information and improving regional water resources management capabilities and meteorological research in these regions.
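The weight-updating mechanics of dynamic model averaging can be sketched with Gaussian predictive densities and a forgetting factor, as below. The error scale, forgetting factor, and synthetic data are assumptions for illustration; the paper's full DBMA framework additionally involves the expectation-maximization step and ordinary Kriging.

```python
import numpy as np
from scipy.stats import norm

def dbma_weights(obs, products, alpha=0.95, sigma=2.0):
    """Track dynamic model-averaging weights over time.
    obs: (T,) gauge observations; products: (T, K) candidate precipitation
    products; alpha: forgetting factor; sigma: assumed predictive std (mm/d)."""
    T, K = products.shape
    w = np.full(K, 1.0 / K)
    fused = np.empty(T)
    for t in range(T):
        # Prediction step: flatten weights toward uniform (forgetting).
        w = w**alpha
        w /= w.sum()
        fused[t] = w @ products[t]                 # weighted (fused) estimate
        # Update step: reweight by each product's predictive density.
        w *= norm.pdf(obs[t], loc=products[t], scale=sigma)
        w /= w.sum()
    return w, fused

rng = np.random.default_rng(1)
truth = rng.gamma(2.0, 2.0, size=100)              # synthetic daily rainfall
prods = np.column_stack([truth + rng.normal(0, s, 100) for s in (1.0, 3.0, 5.0)])
w, fused = dbma_weights(truth, prods)
print("final weights:", np.round(w, 3))            # most accurate product dominates
```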
Abstract: Small area estimation (SAE) tackles the problem of providing reliable estimates for small areas, i.e., subsets of the population for which the sample information is not sufficient to warrant the use of a direct estimator. The hierarchical Bayesian approach to SAE problems offers several advantages over traditional SAE models, including the ability to account appropriately for the type of surveyed variable. In this paper, a number of model specifications for estimating small area counts are discussed and their relative merits are illustrated. We conducted a simulation study by reproducing in a simplified form the Italian Labour Force Survey and taking the Local Labor Markets as target areas. Simulated data were generated by assuming the population characteristics of interest as well as the survey sampling design as known. In one set of experiments, the numbers of employed/unemployed from census data were utilized; in others, the population characteristics were varied. The results show persistent model failures for some standard Fay-Herriot specifications and for generalized linear Poisson models with a (log-)normal sampling stage, whilst either unmatched or non-normal sampling stage models achieve the best performance in terms of bias, accuracy and reliability. However, the study also found that any model noticeably improves its performance by letting the sampling variances be stochastically determined rather than assumed as known, as is the general practice. Moreover, we address the issue of model determination to point out the limits and possible deceptions of commonly used criteria for model selection and checking in the SAE context.
Abstract: This article presents an up-to-date tutorial review of nonlinear Bayesian estimation. State estimation for nonlinear systems has been a challenge encountered in a wide range of engineering fields, attracting decades of research effort. To date, one of the most promising and popular approaches is to view and address the problem from a Bayesian probabilistic perspective, which enables estimation of the unknown state variables by tracking their probabilistic distribution or statistics (e.g., mean and covariance) conditioned on a system's measurement data. This article offers a systematic introduction to the Bayesian state estimation framework and reviews various Kalman filtering (KF) techniques, progressively from the standard KF for linear systems to the extended KF, unscented KF and ensemble KF for nonlinear systems. It also overviews other prominent or emerging Bayesian estimation methods, including Gaussian filtering, Gaussian-sum filtering, particle filtering and moving horizon estimation, and extends the discussion of state estimation to more complicated problems such as simultaneous state and parameter/input estimation.
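As a concrete anchor for the review, a single predict/update cycle of the standard linear Kalman filter is sketched below for a constant-velocity tracking toy problem; all matrices and measurements are illustrative.

```python
import numpy as np

def kalman_step(x, P, z, F, H, Q, R):
    """One predict/update cycle of the standard linear Kalman filter."""
    # Predict: propagate the state estimate and its covariance.
    x_pred = F @ x
    P_pred = F @ P @ F.T + Q
    # Update: correct with the measurement z via the Kalman gain.
    S = H @ P_pred @ H.T + R                       # innovation covariance
    K = P_pred @ H.T @ np.linalg.inv(S)            # Kalman gain
    x_new = x_pred + K @ (z - H @ x_pred)
    P_new = (np.eye(len(x)) - K @ H) @ P_pred
    return x_new, P_new

# Constant-velocity tracking example (illustrative matrices).
dt = 1.0
F = np.array([[1.0, dt], [0.0, 1.0]])              # state transition
H = np.array([[1.0, 0.0]])                         # position-only measurement
Q = 0.01 * np.eye(2)                               # process noise covariance
R = np.array([[0.5]])                              # measurement noise covariance
x, P = np.zeros(2), np.eye(2)
for z in [1.1, 2.0, 2.9, 4.2]:                     # noisy position measurements
    x, P = kalman_step(x, P, np.array([z]), F, H, Q, R)
print("state estimate:", np.round(x, 2))           # roughly position 4, velocity 1
```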
Funding: Supported by the National Hi-tech Research and Development Program of China (863 Program, Grant No. 2008AA04Z114).
Abstract: The measurement error of the unbalance's vibration response plays a crucial role in the calibration and on-line updating of the influence coefficient (IC). Two problems arise in practice: the moment estimator of the data used in the calibration process cannot fulfill the accuracy requirement under small samples, and the disturbance of measurement error cannot be effectively suppressed in the updating process. To address them, an IC calibration and on-line updating method based on a hierarchical Bayesian approach was proposed for automatic dynamic balancing machines. During the calibration process, for the repeatedly measured data obtained from experiments with different trial weights, and based on the fact that the measurement error of each sensor has the same statistical characteristics, a joint posterior distribution model was established for the true values of the vibration response under all trial weights and the measurement error. During the updating process, the information obtained from calibration was regarded as prior information and combined with real-time reference information to update the posterior distribution of the IC online. Moreover, the Gibbs sampling method of Markov chain Monte Carlo (MCMC) was adopted to obtain the maximum posterior estimates of the parameters. On an independently developed dynamic balancing testbed, predictions were carried out for multiple groups of data with the proposed method and the traditional method. The results indicate that the IC estimator obtained with the proposed method has higher accuracy, and that the proposed updating method more effectively guarantees the measurement accuracy during the whole production process while striking a more reasonable compromise between sensitivity to IC changes and suppression of the randomness of the vibration response.
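The Gibbs-sampling idea can be reduced to a textbook-sized analogue of the calibration step: repeated measurements with a common unknown true value and an unknown error variance, sampled alternately from their conjugate full conditionals. The data and priors below are illustrative, not the paper's.

```python
import numpy as np

rng = np.random.default_rng(42)

# Repeated measurements of one vibration response (illustrative data).
y = np.array([4.9, 5.2, 5.0, 5.3, 4.8, 5.1])
n, ybar = len(y), y.mean()

# Priors: mu ~ N(m0, v0), sigma2 ~ Inv-Gamma(a0, b0)  (weakly informative).
m0, v0, a0, b0 = 0.0, 100.0, 2.0, 1.0

mu, sigma2 = ybar, y.var()
mu_draws, s2_draws = [], []
for it in range(5000):
    # 1) mu | sigma2, y  -- conjugate normal update.
    prec = 1.0 / v0 + n / sigma2
    mean = (m0 / v0 + n * ybar / sigma2) / prec
    mu = rng.normal(mean, np.sqrt(1.0 / prec))
    # 2) sigma2 | mu, y  -- conjugate inverse-gamma update.
    a_post = a0 + 0.5 * n
    b_post = b0 + 0.5 * np.sum((y - mu) ** 2)
    sigma2 = 1.0 / rng.gamma(a_post, 1.0 / b_post)
    if it >= 1000:                                  # discard burn-in
        mu_draws.append(mu)
        s2_draws.append(sigma2)

print("posterior mean of mu     :", round(np.mean(mu_draws), 3))
print("posterior mean of sigma^2:", round(np.mean(s2_draws), 3))
```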
Abstract: A Bayesian estimation method to separate multicomponent signals from a single-channel observation is presented in this paper. By using basis function projection, the component separation becomes a problem of estimating a limited number of parameters. Then, a Bayesian model for estimating the parameters is set up. The reversible jump MCMC (Markov chain Monte Carlo) algorithm is adopted to perform the Bayesian computation. The method can jointly estimate the parameters of each component and the number of components. Simulation results demonstrate that the method has a low SNR threshold and better performance.
Funding: Supported by the Postdoctoral Science Foundation of China (J63104020156) and the National Defence Foundation of China.
Abstract: An efficient despeckling algorithm is proposed based on the stationary wavelet transform (SWT) for synthetic aperture radar (SAR) images. The statistical model of the wavelet coefficients is analyzed, and they are modeled with a mixture density of two zero-mean Gaussian distributions. A fuzzy shrinkage factor is derived based on the minimum mean square error (MMSE) criterion with Bayesian estimation. On this basis, the ideas of region division and fuzzy shrinkage are adopted according to the interscale dependencies among wavelet coefficients, and the noise-free wavelet coefficients are estimated accurately. Experimental results show that the proposed algorithm is superior to the refined Lee filter, wavelet soft-thresholding shrinkage and SWT shrinkage algorithms in terms of smoothing effects and edge preservation.
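In its simplest Gaussian form, MMSE Bayesian shrinkage attenuates each noisy coefficient by the ratio of estimated signal variance to total variance; the paper's two-component mixture and fuzzy, region-based weighting refine this basic rule. A minimal sketch with synthetic coefficients:

```python
import numpy as np

def mmse_shrink(w_noisy, noise_var):
    """Elementary MMSE shrinkage under a zero-mean Gaussian signal prior:
    E[w | w_noisy] = s^2 / (s^2 + noise_var) * w_noisy, with the signal
    variance s^2 estimated from the band and floored at zero."""
    sig_var = np.maximum(np.mean(w_noisy**2) - noise_var, 0.0)
    gain = sig_var / (sig_var + noise_var)
    return gain * w_noisy

rng = np.random.default_rng(3)
clean = rng.laplace(0.0, 1.0, size=1000)            # sparse-ish "wavelet" band
noisy = clean + rng.normal(0.0, 0.5, size=1000)
denoised = mmse_shrink(noisy, noise_var=0.25)
print("MSE before:", round(np.mean((noisy - clean) ** 2), 3))
print("MSE after :", round(np.mean((denoised - clean) ** 2), 3))
```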
Funding: Supported by the Fujian Province NSFC (2009J01001).
Abstract: This paper develops a new method, named the E-Bayesian estimation method, to estimate reliability parameters. The E-Bayesian estimate of the reliability is derived for zero-failure data from a product with a binomial distribution. First, the definition of the E-Bayesian estimate of the product reliability is given, and on this basis, expressions for the E-Bayesian estimate and the hierarchical Bayesian estimate of the product reliability are derived. Second, properties of the E-Bayesian estimate are discussed. Finally, the new method is applied to a real zero-failure data set, and it is shown to be both efficient and easy to operate.
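The mechanics can be made concrete for zero-failure binomial data: with a Beta(1, b) prior on the failure probability p and zero failures in n trials, the Bayes estimate of p is 1/(n + b + 1), and the E-Bayesian estimate averages this over a hyper-prior on b. The uniform hyper-prior on (0, c) and the numbers below are assumptions for illustration; the paper gives its own definitions and properties.

```python
import numpy as np
from scipy.integrate import quad

def p_bayes(n, b):
    """Bayes estimate (posterior mean) of the failure probability p for
    zero failures in n trials under a Beta(1, b) prior: 1 / (n + b + 1)."""
    return 1.0 / (n + b + 1.0)

def p_e_bayes(n, c):
    """E-Bayesian estimate: average the Bayes estimate over the hyper-prior
    b ~ Uniform(0, c)."""
    val, _ = quad(lambda b: p_bayes(n, b), 0.0, c)
    return val / c

n, c = 50, 5.0                     # 50 units tested, none failed (illustrative)
p_eb = p_e_bayes(n, c)
print("E-Bayesian failure probability:", round(p_eb, 5))
print("E-Bayesian reliability        :", round(1.0 - p_eb, 5))
# Closed-form check: (1/c) * ln((n + c + 1) / (n + 1)).
print("closed form                   :", round(np.log((n + c + 1) / (n + 1)) / c, 5))
```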
Funding: Supported by the National Science Foundation of China under Grant Nos. 71361015, 71340010 and 71371074, the Jiangxi Provincial Natural Science Foundation under Grant No. 20142BAB201013, the China Postdoctoral Science Foundation under Grant No. 2013M540534, the China Postdoctoral Fund Special Project under Grant No. 2014T70615, and the Jiangxi Postdoctoral Science Foundation under Grant No. 2013KY53.
Abstract: In the hierarchical random effect linear model, the Bayes estimator of the random parameters not only depends on the specific prior distribution but is also difficult to calculate in most cases. This paper derives the distribution-free optimal linear estimator of the random parameters in the model by means of the credibility theory method. The estimators the authors derive can be applied in a wider range of practical scenarios since they depend only on the first two moments of the prior parameters rather than on the specific prior distribution. Finally, the results are compared with some classical models, and a numerical example is given to show the effectiveness of the estimators.
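A familiar instance of such a distribution-free linear estimator is the classical Bühlmann credibility formula, which needs only the first two moments: each group mean is blended with the overall mean through a credibility factor Z = n/(n + k). The sketch below estimates the moments from toy data; it is an illustration of the credibility idea, not the paper's hierarchical model.

```python
import numpy as np

# Responses for several groups, n observations each -- toy data.
X = np.array([[3.0, 4.0, 5.0, 4.5],
              [8.0, 7.5, 9.0, 8.5],
              [5.0, 5.5, 4.8, 5.2]])
r, n = X.shape

xbar_i = X.mean(axis=1)                       # group means
xbar = xbar_i.mean()                          # overall mean
s2 = X.var(axis=1, ddof=1).mean()             # expected process variance
a = max(xbar_i.var(ddof=1) - s2 / n, 1e-9)    # variance of hypothetical means

Z = n / (n + s2 / a)                          # credibility factor
cred = Z * xbar_i + (1 - Z) * xbar            # linear Bayes (credibility) estimates
print("credibility factor Z :", round(Z, 3))
print("credibility estimates:", np.round(cred, 3))
```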
Funding: Supported by the Fundamental Scientific Research Business Expenses for Central Universities (3072021CFJ0803) and the Advanced Marine Communication and Information Technology, Ministry of Industry and Information Technology Key Laboratory Project (AMCIT21V3).
Abstract: The accuracy of target threat estimation has a great impact on command decision-making. The Bayesian network, as an effective way to deal with uncertainty, can be used to track changes in the target threat level. Unfortunately, the traditional discrete dynamic Bayesian network (DDBN) suffers from poor parameter learning and poor reasoning accuracy in a small-sample environment with partial prior information missing. Considering the finiteness and discreteness of DDBN parameters, a fuzzy k-nearest neighbor (KNN) algorithm based on the correlation of feature quantities (CF-FKNN) is proposed for DDBN parameter learning. First, the correlation between feature quantities is calculated, and then the KNN algorithm with fuzzy weights is introduced to fill in the missing data. On this basis, a reasonable DDBN structure is constructed by using expert experience to complete DDBN parameter learning and reasoning. Simulation results show that the CF-FKNN algorithm can accurately fill in the data when samples are seriously missing, and improves the effect of DDBN parameter learning in that case. With the proposed method, the final target threat assessment results are reasonable, which meets the needs of engineering applications.
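The fuzzy-weighted KNN imputation step can be sketched generically: a missing entry is filled with a distance-weighted average over the k nearest complete samples, with weights proportional to 1/d^(2/(m-1)). The correlation-based feature weighting that the paper adds is omitted here, and the data are toy values.

```python
import numpy as np

def fuzzy_knn_impute(X, row, col, k=3, m=2.0):
    """Fill X[row, col] with a fuzzy-weighted average over the k nearest rows
    (distances computed on the remaining columns), weights ~ 1/d^(2/(m-1))."""
    other = [c for c in range(X.shape[1]) if c != col]
    cand = [r for r in range(X.shape[0]) if r != row and not np.isnan(X[r, col])]
    d = np.array([np.linalg.norm(X[r, other] - X[row, other]) for r in cand])
    nearest = np.argsort(d)[:k]
    w = 1.0 / np.maximum(d[nearest], 1e-9) ** (2.0 / (m - 1.0))
    w /= w.sum()
    return float(w @ X[np.array(cand)[nearest], col])

X = np.array([[0.90, 0.20, 0.7],
              [0.80, 0.30, 0.6],
              [0.10, 0.90, 0.2],
              [0.85, 0.25, np.nan]])      # toy sample with one missing value
X[3, 2] = fuzzy_knn_impute(X, 3, 2)
print(np.round(X[3], 3))                  # missing entry filled near 0.6-0.7
```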
Funding: Supported by the National Natural Science Foundation of China (71171164, 71401134, 71571144) and the Natural Science Basic Research Program of Shaanxi Province (2015JM1003).
Abstract: This paper considers the Bayesian and expected Bayesian (E-Bayesian) estimation of the parameter and the reliability function for a competing risk model from the Gompertz distribution under a Type-I progressive hybrid censoring scheme (PHCS). The estimates are obtained based on a Gamma conjugate prior for the parameter under the squared error (SE) and LINEX loss functions. Simulation results are provided for comparison purposes, and one data set is analyzed.
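The two loss functions yield different point estimates from the same Gamma posterior: under squared error the Bayes estimate is the posterior mean α/β, while under the LINEX loss L(Δ) = e^{cΔ} − cΔ − 1 it is −(1/c) ln E[e^{−cθ}] = (α/c) ln(1 + c/β), using the Gamma moment generating function. The posterior parameters and the values of c below are hypothetical.

```python
import numpy as np

def bayes_se(alpha, beta):
    """Squared-error loss: posterior mean of a Gamma(alpha, rate=beta) posterior."""
    return alpha / beta

def bayes_linex(alpha, beta, c):
    """LINEX loss L(d - theta) = exp(c*(d-theta)) - c*(d-theta) - 1:
    estimator = -(1/c) * ln E[exp(-c*theta)], valid for c > -beta."""
    return (alpha / c) * np.log(1.0 + c / beta)

alpha, beta = 12.0, 8.0          # hypothetical Gamma posterior for the parameter
for c in (-1.0, 0.5, 2.0):       # c > 0 penalizes over-estimation more heavily
    print(f"c = {c:4.1f}: SE = {bayes_se(alpha, beta):.3f}, "
          f"LINEX = {bayes_linex(alpha, beta, c):.3f}")
```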
Funding: Supported by the Ministry of Education, Science, Sports and Culture, Grant-in-Aid for Scientific Research under Grant No. 22240021, and Grant-in-Aid for Challenging Exploratory Research under Grant No. 21650030.
Abstract: In this paper, we provide a Word Emotion Topic (WET) model to predict complex word emotion information from text, and to discover the distribution of emotions among different topics. A complex emotion is defined as the combination of one or more singular emotions from the following 8 basic emotion categories: joy, love, expectation, surprise, anxiety, sorrow, anger and hate. We use a hierarchical Bayesian network to model the emotions and topics in the text. Both the complex emotions and the topics are drawn from raw texts, without considering any complicated language features. Our experiment shows promising results for word emotion prediction, which outperforms traditional parsing methods such as the Hidden Markov Model and Conditional Random Fields (CRFs) on raw text. We also explore the topic distribution by examining the emotion topic variation in an emotion topic diagram.
Funding: Supported by the National Natural Science Foundation of China (Grant Nos. 51779074 and 41371052), the Special Fund for the Public Welfare Industry of the Ministry of Water Resources of China (Grant No. 201501059), the National Key Research and Development Program of China (Grant No. 2017YFC0404304), the Jiangsu Water Conservancy Science and Technology Project (Grant No. 2017027), the Program for Outstanding Young Talents in Colleges and Universities of Anhui Province (Grant No. gxyq2018143), and the Natural Science Foundation of Wanjiang University of Technology (Grant No. WG18030).
Abstract: This study developed a hierarchical Bayesian (HB) model for local and regional flood frequency analysis in the Dongting Lake Basin, China. The annual maximum daily flows from 15 streamflow-gauged sites in the study area were analyzed with the HB model. The generalized extreme value (GEV) distribution was selected as the extreme flood distribution, and the GEV location and scale parameters were spatially modeled through a regression approach with the drainage area as a covariate. The Markov chain Monte Carlo (MCMC) method with Gibbs sampling was employed to calculate the posterior distribution in the HB model. The results showed that the proposed HB model provided satisfactory Bayesian credible intervals for flood quantiles, while the traditional delta method could not provide reliable uncertainty estimates for large flood quantiles, because the lower confidence bounds tended to decrease as the return periods increased. Furthermore, the HB model for regional analysis allowed some restrictive assumptions of the traditional index flood method to be relaxed, such as the homogeneous region assumption and the scale invariance assumption. The HB model can also provide an uncertainty band for flood quantile prediction at a poorly gauged or ungauged site, whereas the index flood method with L-moments does not provide this uncertainty directly. Therefore, the HB model is an effective method for implementing a flexible local and regional frequency analysis scheme, and for quantifying the associated predictive uncertainty.
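The at-site ingredient of such an analysis can be sketched with a random-walk Metropolis sampler over the GEV parameters of one gauge's annual maxima; synthetic data, weak priors, and scipy's genextreme shape convention are assumptions here, and the paper's full model additionally regresses location and scale on drainage area and uses Gibbs sampling. Credible intervals for flood quantiles then follow directly from the posterior draws.

```python
import numpy as np
from scipy.stats import genextreme

# Synthetic annual maximum series for one gauge (illustrative parameters).
ams = genextreme.rvs(c=-0.1, loc=1000.0, scale=300.0, size=60, random_state=7)
rng = np.random.default_rng(7)

def log_post(theta):
    """Log-posterior: GEV log-likelihood plus weak normal priors (illustrative)."""
    c, loc, log_scale = theta
    ll = genextreme.logpdf(ams, c=c, loc=loc, scale=np.exp(log_scale)).sum()
    lp = (-0.5 * (c / 0.5) ** 2
          - 0.5 * ((loc - 1000.0) / 1000.0) ** 2
          - 0.5 * ((log_scale - 6.0) / 3.0) ** 2)
    return ll + lp if np.isfinite(ll) else -np.inf

theta = np.array([0.0, np.median(ams), np.log(ams.std())])
step = np.array([0.05, 20.0, 0.05])
samples, lp_cur = [], log_post(theta)
for it in range(20000):
    prop = theta + step * rng.standard_normal(3)   # random-walk proposal
    lp_prop = log_post(prop)
    if np.log(rng.uniform()) < lp_prop - lp_cur:   # Metropolis accept/reject
        theta, lp_cur = prop, lp_prop
    if it >= 5000 and it % 10 == 0:                # thin after burn-in
        samples.append(theta.copy())

samples = np.array(samples)
# Posterior draws of the 100-year flood quantile and a 90% credible interval.
q100 = genextreme.ppf(0.99, c=samples[:, 0], loc=samples[:, 1],
                      scale=np.exp(samples[:, 2]))
print("100-yr flood, 90% credible interval:",
      np.round(np.percentile(q100, [5, 95]), 1))
```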
Abstract: In this paper, we consider the problem of determining the order of the INAR(Q) model on the basis of Bayesian estimation theory. The Bayesian estimator of the order is given with respect to a squared-error loss function. The consistency of the estimator is discussed, and the results of a simulation study of the estimation method are presented.
Abstract: Hydrocracking is a catalytic reaction process in petroleum refineries for converting the higher-boiling-temperature residue of crude oil into lighter fractions of hydrocarbons such as gasoline and diesel. In this study, a modified continuous lumping kinetic approach is applied to model the hydrocracking of vacuum gas oil. The model is modified to take into consideration the effect of reactor temperature on the reaction yield distribution. The model is calibrated by maximizing the likelihood function between the modeled and measured data at four different reactor temperatures. Bayesian parameter estimation is also applied to obtain the confidence intervals of the model parameters by considering the uncertainty associated with the measurement errors and the model structural errors. Then Monte Carlo simulation is applied to the posterior range of the model parameters to obtain the 95% confidence interval of the model outputs for each individual fraction of the hydrocracking products. A good agreement is observed between the output of the calibrated model and the measured data points. The Bayesian approach based on Markov chain Monte Carlo simulation is shown to be efficient for quantifying the uncertainty associated with the parameter values of the continuous lumping model.