This article introduces a novel variant of the generalized linear exponential (GLE) distribution, known as the sine generalized linear exponential (SGLE) distribution. The SGLE distribution uses the sine transformation to enhance the flexibility of the baseline model. The resulting distribution is highly adaptable and can be used effectively to model survival data and reliability problems. The proposed model has a hazard rate function (HRF) that may be increasing, J-shaped, or bathtub-shaped, depending on its parameters, and it includes many well-known lifetime distributions as sub-models. A range of statistical properties of the proposed model is derived. The model parameters are estimated by maximum likelihood and Bayesian techniques using progressively censored data. A simulation study is conducted to evaluate the performance of these techniques, and two real-world data applications demonstrate the relevance of the new model and its superiority over other well-regarded competing models.
Propensity score (PS) adjustment can control confounding effects and reduce bias when estimating treatment effects in non-randomized trials or observational studies. PS methods are increasingly used to estimate causal effects, including when the sample size is small relative to the number of confounders. With numerous confounders, quasi-complete separation can easily occur in the logistic regression used to estimate the PS, but this issue has not been addressed. We focus on a Bayesian PS method to address the limitations of quasi-complete separation faced by small trials. Bayesian methods are useful because they estimate the PS and the causal effect simultaneously while accounting for the uncertainty of the PS by modelling it as a latent variable. In this study, we conducted simulations to evaluate the performance of Bayesian simultaneous PS estimation by varying the specification of the prior distributions for model comparison, and we propose a method to improve predictive performance with discrete outcomes in small trials. We found that the specification of the prior distributions assigned to the logistic regression coefficients was more important in the second step than in the first step, even when quasi-complete separation occurred in the first step. Assigning Cauchy(0, 2.5) priors to the coefficients improved the predictive performance for estimating causal effects and improved the balancing properties of the confounders.
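As an illustration of the Cauchy(0, 2.5) prior recommendation above, the sketch below (not the authors' code) fits a propensity-score logistic regression by maximum a posteriori estimation under independent Cauchy priors, on a toy dataset with deliberate quasi-complete separation; the prior keeps the coefficients finite where plain maximum likelihood would diverge. All names and data are hypothetical.

```python
import numpy as np
from scipy.optimize import minimize

def cauchy_logpdf(b, scale=2.5):
    """Log-density of a Cauchy(0, scale) prior."""
    return -np.log(np.pi * scale * (1.0 + (b / scale) ** 2))

def fit_ps_map(X, t, scale=2.5):
    """MAP estimate of logistic PS coefficients under Cauchy(0, scale) priors."""
    n, p = X.shape
    def neg_log_post(beta):
        eta = X @ beta
        # log-likelihood of the treatment indicator t given covariates X
        ll = np.sum(t * eta - np.logaddexp(0.0, eta))
        lp = np.sum(cauchy_logpdf(beta, scale))
        return -(ll + lp)
    return minimize(neg_log_post, np.zeros(p), method="BFGS").x

rng = np.random.default_rng(0)
n = 40
X = np.column_stack([np.ones(n), rng.normal(size=n)])
# quasi-complete separation: treatment fully determined by the sign of x
t = (X[:, 1] > 0).astype(float)
beta = fit_ps_map(X, t)
ps = 1.0 / (1.0 + np.exp(-X @ beta))
print(np.all(np.isfinite(beta)), np.all((ps > 0) & (ps < 1)))
```

The same penalized-likelihood trick is what Gelman-style weakly informative priors buy in separated small-sample logistic fits: a finite, usable PS for every subject.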
Zero-failure data research is a relatively new field, but it is urgently needed in practical engineering projects, so this work has both theoretical and practical value. In this paper, for zero-failure data (t_i, n_i) at moment t_i, assuming the prior distribution of the failure probability p_i = P{T < t_i} is a quasi-exponential distribution, the Bayesian estimate and the hierarchical Bayesian estimate of p_i are given, and the reliability under the zero-failure data condition is also obtained.
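A minimal numerical sketch of the idea, using a conjugate Beta prior as an illustrative stand-in for the paper's quasi-exponential prior (an assumption for illustration only): with zero failures in n trials, the Bayes estimate of p has a closed form, and a hierarchical-style estimate averages it over a hyperprior on the prior's shape.

```python
def bayes_p_zero_failure(n, a=1.0, b=1.0):
    """Posterior mean of the failure probability p given 0 failures in n trials,
    under a conjugate Beta(a, b) prior (stand-in for the quasi-exponential prior)."""
    return a / (a + b + n)

def hierarchical_bayes_p(n, b_grid):
    """Hierarchical-style estimate: average the Bayes estimate over a discrete
    uniform hyperprior on b (a fixed at 1), mimicking the two-level construction."""
    return sum(bayes_p_zero_failure(n, 1.0, b) for b in b_grid) / len(b_grid)

# 10 units on test, zero failures observed
print(bayes_p_zero_failure(10))   # → 0.08333333333333333 (= 1/12)
```

The hierarchical estimate is always smaller here, since larger b pulls the prior toward reliability; that monotone behaviour is what makes such estimates robust for zero-failure samples.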
This paper deals with the Bayesian estimation of Shannon entropy for the generalized inverse exponential distribution, assuming that the observed samples are taken under the upper record ranked set sampling (URRSS) and upper record values (URV) schemes. Formulas for the Bayesian estimators are derived under a gamma prior distribution considering the squared error, linear exponential, and precautionary loss functions; in addition, Bayesian credible intervals are obtained. The random-walk Metropolis-Hastings algorithm is used to generate Markov chain Monte Carlo samples from the posterior distribution. The behavior of the estimates is then examined at various record values. The study shows that the entropy Bayesian estimates under URRSS are preferable to those under URV in the majority of situations, and that the entropy Bayesian estimates perform well as the number of records increases. The obtained results validate the usefulness and efficiency of the URV method. Real data are analyzed for further clarification, validating the theoretical results.
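The random-walk Metropolis-Hastings step described above can be sketched on a simpler target: a plain exponential likelihood with a gamma prior, used here as an illustrative stand-in for the generalized inverse exponential (and for complete samples rather than records). Posterior draws of the rate are transformed into Shannon entropy draws, since the entropy of Exp(rate) is 1 - ln(rate).

```python
import numpy as np

rng = np.random.default_rng(1)
data = rng.exponential(scale=2.0, size=50)   # true rate 0.5

a, b = 1.0, 1.0                              # Gamma(a, b) prior on the rate
def log_post(lam):
    """Log posterior kernel: Gamma(a + n, b + sum(data))."""
    if lam <= 0:
        return -np.inf
    return (a - 1 + len(data)) * np.log(lam) - lam * (b + data.sum())

# random-walk Metropolis-Hastings
lam, chain = 1.0, []
for _ in range(5000):
    prop = lam + rng.normal(scale=0.1)
    if np.log(rng.uniform()) < log_post(prop) - log_post(lam):
        lam = prop
    chain.append(lam)

post = np.array(chain[1000:])                # drop burn-in
entropy_draws = 1.0 - np.log(post)           # entropy of Exp(rate) is 1 - ln(rate)
print(round(post.mean(), 2), round(entropy_draws.mean(), 2))
```

Averaging `entropy_draws` gives the squared-error-loss Bayes estimate of the entropy; the Linex and precautionary estimates in the paper would be other functionals of the same chain.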
The longitudinal dispersion of the projectile in shooting tests of two-dimensional trajectory correction fuses with fixed canards is so large that it sometimes exceeds the correction ability of the fuse actuator. The impact point easily deviates from the target, so the correction result cannot be readily evaluated; at the same time, the cost of shooting tests is too high to conduct many tests for data collection. To address this issue, this study proposes an aiming method for shooting tests based on a small sample size. The proposed method uses the Bootstrap method to expand the test data; repeatedly iterates and corrects the position of the simulated theoretical impact points through an improved compatibility test method; and dynamically adjusts the weight of the prior distribution of simulation results based on the Kullback-Leibler divergence, which to some extent avoids the real data being "submerged" by the simulation data and achieves fusion Bayesian estimation of the dispersion center. The experimental results show that when the simulation accuracy is sufficiently high, the proposed method yields a smaller mean-square deviation in estimating the dispersion center and higher shooting accuracy than the three comparison methods, which better reflects the effect of the control algorithm and helps test personnel iterate their proposed structures and algorithms. In addition, this study provides a knowledge base for further comprehensive studies in the future.
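The Bootstrap expansion step can be sketched as follows; the impact-point data, scales, and sample size are hypothetical, and the paper's compatibility test and KL-weighted prior fusion are omitted so only the resampling idea remains.

```python
import numpy as np

rng = np.random.default_rng(7)
# small sample of impact points (x, z) from a shooting test, in metres (made up)
impacts = rng.normal(loc=(120.0, -15.0), scale=(30.0, 8.0), size=(8, 2))

def bootstrap_center(points, n_boot=2000, rng=rng):
    """Bootstrap the mean impact point to expand a small test sample:
    resample the 8 shots with replacement, one dispersion center per resample."""
    n = len(points)
    idx = rng.integers(0, n, size=(n_boot, n))
    means = points[idx].mean(axis=1)          # one center per bootstrap resample
    return means.mean(axis=0), means.std(axis=0)

center, spread = bootstrap_center(impacts)
print(np.allclose(center, impacts.mean(axis=0), atol=1.0))
```

The bootstrap distribution of centers is what a fusion scheme would then weigh against the simulated prior; `spread` quantifies how much the tiny real sample alone can pin down the dispersion center.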
The Atlantic tripletail (Lobotes surinamensis) is a high-revenue fish species predominantly caught by the mechanized artisanal fishing community and the most abundant member of its family in Bangladesh. This is a groundwork fish stock assessment study in the Bay of Bengal region exploring the life history parameters and associated biomass of this species using three length-based approaches: TropFishR, length-based Bayesian biomass estimation (LBB), and Froese's length-based indicators (LBIs). An almost homogeneous body growth pattern (b = 3.07; R^2 = 0.98) was observed in the length-weight relationship of the tripletail. The life history parameters of the tripletail, as determined by the von Bertalanffy Growth Function (VBGF) model, were L∞ = 113.36 cm and k = 0.51/a. The length-converted catch curve (LCCC) yielded an estimate of total mortality (Z = 1.77/a), with natural mortality estimated at M = 0.53/a and fishing mortality at F = 1.24/a. However, the mortality ratio (F/M = 0.15) from LBB captured the non-fully exploited status of the biomass (B/B_MSY = 2.1). LBI analysis indicated that the spawning stock biomass of the tripletail fishery is greater than the target and limit reference points, indicating a healthy state of the biomass.
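The VBGF parameters reported above can be plugged into the growth curve directly. In the sketch below, t0 = 0 is an assumption, since the abstract does not report the age-at-zero-length parameter.

```python
from math import exp

def vbgf(t, L_inf=113.36, k=0.51, t0=0.0):
    """von Bertalanffy Growth Function: predicted length (cm) at age t (years).
    L_inf and k are the reported tripletail estimates; t0 = 0 is assumed."""
    return L_inf * (1.0 - exp(-k * (t - t0)))

# length rises quickly at first, then approaches the asymptote L_inf
print(round(vbgf(1), 1), round(vbgf(5), 1))   # → 45.3 104.5
```

With k = 0.51/a, the fish covers most of the gap to L∞ within roughly the first five years, which is consistent with the fast-growth reading of the stock assessment.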
This article explores the comparison between the probability method and the least squares method in the design of linear predictive models. It points out that these two approaches have distinct theoretical foundations and can lead to varied or similar results in terms of precision and performance under certain assumptions. The article underlines the importance of comparing the two approaches in order to choose the one best suited to the context, the available data, and the modeling objectives.
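A least squares linear predictor of the kind discussed can be fitted in a few lines; under the usual Gaussian-noise assumption, this coincides with the maximum-likelihood (probability-based) fit, which is one reason the two approaches can give similar results in practice. The data below are synthetic.

```python
import numpy as np

rng = np.random.default_rng(3)
x = rng.uniform(0, 10, size=100)
y = 2.0 * x + 1.0 + rng.normal(scale=0.5, size=100)   # true model y = 2x + 1

# least squares fit of a linear predictor y ≈ a*x + b
A = np.column_stack([x, np.ones_like(x)])
(a, b), *_ = np.linalg.lstsq(A, y, rcond=None)
print(round(a, 1), round(b, 1))   # ≈ 2.0 1.0
```

Under heavier-tailed noise the two approaches part ways: the Gaussian likelihood still yields this least squares solution, while the correct probability model would weight outliers differently, which is exactly the kind of context-dependent choice the article argues for.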
A modified unscented particle filtering scheme for nonlinear tracking is proposed, in view of the potential drawbacks (such as particle impoverishment and numerical sensitivity in calculating the prior) that the conventional unscented particle filter (UPF) faces in practice. Specifically, a different derivation of the importance weight is presented in detail. The proposed method avoids the calculation of the prior and reduces the effects of the impoverishment problem caused by sampling from the proposal distribution. Simulations have been performed using two illustrative examples, and the results demonstrate the validity of the modified UPF as well as its improved performance over the conventional one.
Seismic inversion performed in the time or frequency domain cannot always recover the long-wavelength background of subsurface parameters due to the lack of low-frequency seismic records. Since the low-frequency response becomes much richer in the Laplace mixed domains, a novel Bayesian impedance inversion approach in the complex Laplace mixed domains is established in this study to solve the model dependency problem. The first step of our work is the derivation of a Laplace mixed-domain formula of the Robinson convolution, which joins together the Laplace seismic spectrum, the wavelet spectrum, and the time-domain reflectivity. Next, to improve inversion stability, an objective inversion function with an initial constraint from a linearly increasing model is formulated under a Bayesian framework. The likelihood function and the prior probability distribution are combined via Bayes' formula to calculate the posterior probability distribution of the subsurface parameters. By finding the optimal solution corresponding to the maximum of the posterior probability distribution, the low-frequency background of the subsurface parameters can be obtained successfully. Then, with the regularization constraint of the estimated low frequency in the Laplace mixed domains, multi-scale Bayesian inversion in the pure frequency domain is exploited to obtain the absolute model parameters. The effectiveness, anti-noise capability, and lateral continuity of Laplace mixed-domain inversion are illustrated by synthetic tests. Furthermore, one field case from eastern China is discussed carefully with different input frequency components and different inversion algorithms.
This provides adequate proof of the improved reliability of the low-frequency estimation and the enhanced resolution of the subsurface parameters, in comparison with conventional Bayesian inversion in the frequency domain.
A Bayesian estimation method to separate multicomponent signals from a single-channel observation is presented in this paper. By using basis function projection, the component separation becomes a problem of finite parameter estimation. A Bayesian model for estimating the parameters is then set up, and the reversible jump MCMC (Markov chain Monte Carlo) algorithm is adopted to perform the Bayesian computation. The method can jointly estimate the parameters of each component and the number of components. Simulation results demonstrate that the method has a low SNR threshold and good performance.
An efficient despeckling algorithm based on the stationary wavelet transform (SWT) is proposed for synthetic aperture radar (SAR) images. The statistical model of the wavelet coefficients is analyzed, and their distribution is modeled with a mixture density of two zero-mean Gaussian distributions. A fuzzy shrinkage factor is derived based on the minimum mean square error (MMSE) criterion with Bayesian estimation. In doing so, the ideas of region division and fuzzy shrinkage are adopted according to the interscale dependencies among wavelet coefficients, and the noise-free wavelet coefficients are estimated accurately. Experimental results show that the proposed algorithm is superior to the refined Lee filter, wavelet soft-thresholding shrinkage, and SWT shrinkage algorithms in terms of smoothing effects and edge preservation.
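The MMSE shrinkage idea can be sketched with a single Wiener-style factor (the paper's fuzzy, region-dependent factor is more elaborate): each noisy coefficient is scaled by sigma_s^2 / (sigma_s^2 + sigma_n^2), the Bayesian posterior-mean gain for a Gaussian signal in Gaussian noise. The data below are synthetic stand-ins for wavelet coefficients.

```python
import numpy as np

def mmse_shrink(coeffs, noise_var):
    """Wiener-style MMSE shrinkage of noisy coefficients: scale everything by
    sigma_s^2 / (sigma_s^2 + sigma_n^2), with sigma_s^2 estimated by moments."""
    signal_var = max(coeffs.var() - noise_var, 0.0)   # moment estimate of sigma_s^2
    factor = signal_var / (signal_var + noise_var)
    return factor * coeffs, factor

rng = np.random.default_rng(2)
clean = rng.normal(scale=2.0, size=10000)             # "signal" coefficients
noisy = clean + rng.normal(scale=1.0, size=10000)     # add unit-variance noise
denoised, factor = mmse_shrink(noisy, noise_var=1.0)
print(round(factor, 1))   # → 0.8, since 4 / (4 + 1)
```

A mixture-of-Gaussians model, as in the paper, would replace the single global factor with a coefficient-dependent (fuzzy) one, but the shrink-toward-zero mechanism is the same.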
The accuracy of target threat estimation has a great impact on command decision-making. The Bayesian network, as an effective way to deal with uncertainty, can be used to track changes in the target threat level. Unfortunately, the traditional discrete dynamic Bayesian network (DDBN) suffers from poor parameter learning and poor reasoning accuracy in a small-sample environment with partial prior information missing. Considering the finiteness and discreteness of DDBN parameters, a fuzzy k-nearest neighbor (KNN) algorithm based on the correlation of feature quantities (CF-FKNN) is proposed for DDBN parameter learning. First, the correlation between feature quantities is calculated, and then a KNN algorithm with fuzzy weights is introduced to fill in the missing data. On this basis, a reasonable DDBN structure is constructed using expert experience to complete DDBN parameter learning and reasoning. Simulation results show that the CF-FKNN algorithm can accurately fill in the data when samples are severely missing and improves the effect of DDBN parameter learning in this case. With the proposed method, the final target threat assessment results are reasonable and meet the needs of engineering applications.
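The distance-weighted filling step of a fuzzy KNN imputation can be sketched as below; this omits the paper's feature-correlation (CF) weighting and uses a tiny made-up matrix, so it is only the core "closer neighbours count more" idea.

```python
import numpy as np

def knn_impute(X, row, col, k=3):
    """Fill X[row, col] with a distance-weighted (fuzzy-style) average of the
    k nearest complete rows, measuring distance on the remaining features."""
    feats = [c for c in range(X.shape[1]) if c != col]
    complete = [r for r in range(X.shape[0])
                if r != row and not np.isnan(X[r]).any()]
    d = np.array([np.linalg.norm(X[r, feats] - X[row, feats]) for r in complete])
    order = np.argsort(d)[:k]
    w = 1.0 / (d[order] + 1e-9)               # fuzzy weight: closer rows count more
    vals = np.array([X[complete[i], col] for i in order])
    return float(np.sum(w * vals) / np.sum(w))

X = np.array([[1.00, 2.0],
              [1.10, 2.1],
              [5.00, 9.0],
              [1.05, np.nan]])
v = knn_impute(X, 3, 1, k=2)
print(2.0 < v < 2.1)   # neighbours are rows 0 and 1, so v lies between 2.0 and 2.1
```

The full CF-FKNN method would additionally scale each feature's contribution to the distance by its correlation with the target feature before picking neighbours.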
This paper considers the Bayesian and expected Bayesian (E-Bayesian) estimation of the parameter and reliability function of a competing risk model from the Gompertz distribution under a Type-I progressively hybrid censoring scheme (PHCS). The estimates are obtained based on a gamma conjugate prior for the parameter under the squared error (SE) and Linex loss functions. Simulation results are provided for comparison purposes, and one data set is analyzed.
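For a gamma conjugate prior, the Bayes estimates under the two loss functions mentioned above have closed forms. The sketch uses an exponential likelihood as a simple stand-in for the Gompertz competing-risk setup: the posterior is Gamma(alpha + n, beta + S), the SE-loss estimate is the posterior mean, and the Linex-loss estimate is -(1/c) ln E[exp(-c*theta)].

```python
from math import log

def bayes_gamma(alpha, beta, n, S, c=0.5):
    """Bayes estimates of a rate with a Gamma(alpha, beta) prior and an
    exponential likelihood (posterior Gamma(alpha + n, beta + S), where
    S is the total time on test). Returns (SE-loss, Linex-loss) estimates."""
    a, b = alpha + n, beta + S
    se = a / b                                # posterior mean: squared-error loss
    linex = (a / c) * log(1.0 + c / b)        # closed-form Linex-loss estimate
    return se, linex

se, linex = bayes_gamma(alpha=2.0, beta=1.0, n=10, S=20.0, c=0.5)
print(round(se, 3), linex < se)   # → 0.571 True
```

With c > 0 the Linex estimate sits below the posterior mean, penalizing overestimation more heavily; that asymmetry is the point of using Linex loss for reliability work.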
In this paper, we consider the problem of determining the order of an INAR(Q) model on the basis of Bayesian estimation theory. The Bayesian estimator for the order is given with respect to a squared-error loss function, and the consistency of the estimator is discussed. The results of a simulation study of the estimation method are presented.
This paper develops a new method, named the E-Bayesian estimation method, to estimate reliability parameters. E-Bayesian estimates of reliability are derived for zero-failure data from a product with a binomial distribution. First, the definition of the E-Bayesian estimate of product reliability is given, and on this basis, expressions for the E-Bayesian estimate and the hierarchical Bayesian estimate of product reliability are derived. Second, properties of the E-Bayesian estimate are discussed. Finally, the new method is applied to a real zero-failure data set and is shown to be both efficient and easy to operate.
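For the binomial zero-failure case, the E-Bayesian construction can be written in closed form under a Beta(1, b) prior with a uniform hyperprior on b over (1, c); the hyperparameter choices below are illustrative, not taken from the paper.

```python
from math import log

def bayes_est(n, b):
    """Posterior mean of the failure probability for zero failures in n trials:
    Beta(1, b) prior -> posterior Beta(1, b + n), mean 1 / (1 + b + n)."""
    return 1.0 / (1.0 + b + n)

def e_bayes_est(n, c):
    """E-Bayesian estimate: expectation of bayes_est over a Uniform(1, c)
    hyperprior on b, which integrates to a closed form."""
    return log((1.0 + c + n) / (2.0 + n)) / (c - 1.0)

n, c = 20, 5.0
print(round(e_bayes_est(n, c), 4))   # → 0.0418
```

Averaging the ordinary Bayes estimate over the hyperprior is exactly what "expected Bayesian" means; the closed form here makes the method as cheap as a single Bayes estimate.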
Bayesian estimation theory provides a general approach to state estimation for linear or nonlinear and Gaussian or non-Gaussian systems. In this study, we first explore two Bayesian-based methods, the ensemble adjustment Kalman filter (EAKF) and the sequential importance resampling particle filter (SIR-PF), using a well-known nonlinear and non-Gaussian model (the Lorenz '63 model). The EAKF, a deterministic scheme of the ensemble Kalman filter (EnKF), performs better than the classical (stochastic) EnKF in a general framework. Comparison between the SIR-PF and the EAKF reveals that the former outperforms the latter if the ensemble size is large enough to avoid filter degeneracy, and vice versa. The impact of the probability density functions and effective ensemble sizes on assimilation performance is also explored. On the basis of the comparisons between the SIR-PF and the EAKF, a mixture filter, called the ensemble adjustment Kalman particle filter (EAKPF), is proposed to combine their merits. Similar to the ensemble Kalman particle filter, which combines the stochastic EnKF and SIR-PF analysis schemes with a tuning parameter, the new mixture filter essentially provides a continuous interpolation between the EAKF and the SIR-PF. The same Lorenz '63 model is used as a testbed, showing that the EAKPF is able to overcome filter degeneracy while maintaining the non-Gaussian nature of the problem, and performs better than the EAKF for a limited ensemble size.
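A bare-bones SIR particle filter of the kind compared above can be written in a few lines; a scalar linear-Gaussian toy model is used here only to keep the sketch short (the paper's testbed is the Lorenz '63 model), and all noise scales are made up.

```python
import numpy as np

rng = np.random.default_rng(4)
# toy scalar model: x_t = 0.9 x_{t-1} + w (sd 1),  y_t = x_t + v (sd 0.5)
T, N = 50, 500
x = np.zeros(T); y = np.zeros(T)
for t in range(1, T):
    x[t] = 0.9 * x[t - 1] + rng.normal(scale=1.0)
    y[t] = x[t] + rng.normal(scale=0.5)

# sequential importance resampling (SIR) particle filter
parts = rng.normal(size=N)
est = np.zeros(T)
for t in range(1, T):
    parts = 0.9 * parts + rng.normal(scale=1.0, size=N)   # propagate particles
    w = np.exp(-0.5 * ((y[t] - parts) / 0.5) ** 2)        # likelihood weights
    w /= w.sum()
    est[t] = np.sum(w * parts)                            # posterior-mean estimate
    parts = parts[rng.choice(N, size=N, p=w)]             # resample step
print(np.mean((est[1:] - x[1:]) ** 2) < 0.5)              # tracks the hidden state
```

The resampling line is exactly where degeneracy bites when N is small relative to the state dimension, which is why the paper blends the SIR-PF with the EAKF.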
In order to apply speech recognition systems to actual circumstances, such as inspection and maintenance operations in industrial factories or recording and reporting routines at construction sites where handwriting is difficult, countermeasures against surrounding noise are indispensable. In this study, a signal detection method that removes the noise from actual speech signals is proposed, using Bayesian estimation with the aid of bone-conducted speech. More specifically, by introducing Bayes' theorem based on the observation of air-conducted speech contaminated by surrounding background noise, a new type of noise-removal algorithm is theoretically derived. In the proposed speech detection method, bone-conducted speech is utilized in order to obtain a precise estimate of the speech signal. The effectiveness of the proposed method is experimentally confirmed by applying it to air- and bone-conducted speech measured in a real environment in the presence of surrounding background noise.
A Bayesian approach using Markov chain Monte Carlo algorithms has been developed to analyze Smith's discretized version of the discovery process model. It avoids the problems of the maximum likelihood method by effectively combining the information from the prior distribution with that from the discovery sequence through posterior probabilities. All statistical inferences about the parameters of the model and the total resources can be quantified by drawing samples directly from the joint posterior distribution. In addition, the statistical errors of the samples can be easily assessed, and the convergence properties can be monitored during sampling. Because the information contained in a discovery sequence is not enough to estimate all parameters, especially the number of fields, geologically justified prior information is crucial to the estimation. The Bayesian approach allows the analyst to specify subjective estimates of the required parameters, and the degree of uncertainty about those estimates, in a clearly identified fashion throughout the analysis. As an example, this approach is applied to the same North Sea data on which Smith demonstrated his maximum likelihood method. For this case, the Bayesian approach substantially improves on the overly pessimistic results and downward bias of the maximum likelihood procedure.
The finite strip controlling equation of a pinned curve box was deduced on the basis of Novozhilov theory and the flexibility method, and the problem of the continuous curve box was resolved. The dynamic Bayesian error function of the displacement parameters of the continuous curve box was found, and the corresponding formulas for the dynamic Bayesian expectation and variance were derived. After a method for automatically searching the step length was put forward, the optimization estimation formulas were obtained by adopting the conjugate gradient method, and the steps of dynamic Bayesian estimation were given in detail. Through the analysis of a classic example, a criterion for judging the precision of the known information is obtained, along with other important conclusions about the dynamic Bayesian stochastic estimation of the displacement parameters of a continuous curve box.
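The conjugate gradient engine referred to above, with its exact step-length computation, can be sketched generically; this is the textbook method on a random symmetric positive-definite system, not the paper's Bayesian-specific variant.

```python
import numpy as np

def conjugate_gradient(A, b, tol=1e-10, max_iter=200):
    """Plain conjugate-gradient solve of A x = b for symmetric positive-definite A.
    The step length alpha is computed exactly at every iteration (exact line search),
    which is the automatic step-length idea in optimization estimation."""
    x = np.zeros_like(b)
    r = b - A @ x
    p = r.copy()
    rs = r @ r
    for _ in range(max_iter):
        Ap = A @ p
        alpha = rs / (p @ Ap)            # exact line-search step length
        x += alpha * p
        r -= alpha * Ap
        rs_new = r @ r
        if np.sqrt(rs_new) < tol:
            break
        p = r + (rs_new / rs) * p        # next conjugate search direction
        rs = rs_new
    return x

rng = np.random.default_rng(6)
M = rng.normal(size=(20, 20))
A = M @ M.T + 20 * np.eye(20)            # build a well-conditioned SPD matrix
b = rng.normal(size=20)
x = conjugate_gradient(A, b)
print(np.allclose(A @ x, b))
```

For a quadratic error function like the dynamic Bayesian one, the exact alpha makes each iteration optimal along its search direction, so convergence is guaranteed in at most n steps in exact arithmetic.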
The Bayesian approach is considered the most general formulation of state estimation for dynamic systems. However, most existing Bayesian estimators of stochastic hybrid systems focus only on Markov jump systems; little literature addresses the estimation problem of nonlinear stochastic hybrid systems with state-dependent transitions. To address this problem, a new methodology is proposed that relaxes the quite restrictive assumption that the mode transition process must satisfy the Markov property. In this method, a general approach is presented to model the state-dependent transitions, and the state and output spaces are discretized into a cell space that handles the nonlinearities and the computationally intensive problem offline. A maximum a posteriori estimate is then obtained using Bayesian theory. The efficacy of the estimator is illustrated by a simulated example.
Funding: This work was supported and funded by the Deanship of Scientific Research at Imam Mohammad Ibn Saud Islamic University (IMSIU) (Grant Number IMSIU-RG23142).
Funding: A.R.A. Alanzi would like to thank the Deanship of Scientific Research at Majmaah University for financial support and encouragement.
Funding: Supported by the National Natural Science Foundation of China (Grant No. 61973033) and Preliminary Research of Equipment (Grant No. 9090102010305) for funding the experiments.
Funding: Supported by the special research fund of Ocean University of China (No. 201562030).
文摘The Atlantic tripletail(Lobotes surinamensis)is a high revenue-generating fish species predominantly caught by mechanized artisanal fishers community and the most available member of its family in Bangladesh.This is a ground work of fish stock assessment study in the Bay of Bengal region to explore the life history parameters and associated biomass of this species,using three length-based approaches of TropFishR,the length-based Bayesian biomass estimation(LBB),and Froese’s length based indicators(LBIs).An almost homogenous body growth pattern(b=3.07;R^(2)=0.98)was observed in the length-weight relationship of tripletail.The life history parameters for tripletail,as determined by the von Bertalanffy Growth Function(VBGF)model,were L_(∞)=113.36 cm and k=0.51/a.The length converted catch curve(LCCC)yielded an estimation of the total mortality(Z=1.77/a),with the natural mortality estimated at(M=0.53/a)and the fishing mortality estimated at(F=1.24/a).But,the ratio of mortality(F/M=0.15)by LBB captured the non-fully exploited status of biomass(B/B_(MSY)=2.1).LBI analysis indicated that the tripletail fishery’s spawning stock biomass is greater than the target and limit reference points,indicating a healthy state of biomass.
文摘This article explores the comparison between the probability method and the least squares method in the design of linear predictive models. It points out that these two approaches have distinct theoretical foundations and can lead to varied or similar results in terms of precision and performance under certain assumptions. The article underlines the importance of comparing these two approaches to choose the one best suited to the context, available data and modeling objectives.
Abstract: A modified unscented particle filtering scheme for nonlinear tracking is proposed, in view of the potential drawbacks (such as particle impoverishment and numerical sensitivity in calculating the prior) that the conventional unscented particle filter (UPF) faces in practice. Specifically, a different derivation of the importance weight is presented in detail. The proposed method avoids the calculation of the prior and reduces the effects of the impoverishment problem caused by sampling from the proposal distribution. Simulations have been performed on two illustrative examples, and the results demonstrate the validity of the modified UPF as well as its improved performance over the conventional one.
Funding: Sponsored by the National Natural Science Foundation of China (U1562215, 41604101); the National Grand Project for Science and Technology (2016ZX05024-004, 2017ZX05032-003); the Postgraduate Innovation Program of China University of Petroleum (YCX2017005); the Science Foundation of the SINOPEC Key Laboratory of Geophysics (wtyjy-wx2016-04-10); and the Fundamental Research Funds for the Central Universities.
Abstract: Seismic inversion performed in the time or frequency domain cannot always recover the long-wavelength background of subsurface parameters because low-frequency seismic records are lacking. Since the low-frequency response is much richer in the Laplace mixed domains, a novel Bayesian impedance inversion approach in the complex Laplace mixed domains is established in this study to solve the model-dependency problem. The first step is to derive a Laplace mixed-domain formula of the Robinson convolution, which joins the Laplace seismic spectrum, the wavelet spectrum, and the time-domain reflectivity. Next, to improve inversion stability, the objective function, accompanied by an initial constraint from a linear-increment model, is formulated under a Bayesian framework. The likelihood function and the prior probability distribution are combined through Bayes' formula to compute the posterior probability distribution of the subsurface parameters. By finding the optimal solution corresponding to the maximum of the posterior probability distribution, the low-frequency background of the subsurface parameters can be obtained successfully. Then, with the regularization constraint of the low frequency estimated in the Laplace mixed domains, multi-scale Bayesian inversion in the pure frequency domain is exploited to obtain the absolute model parameters. The effectiveness, anti-noise capability, and lateral continuity of Laplace mixed-domain inversion are illustrated by synthetic tests. Furthermore, one field case from eastern China is discussed carefully with different input frequency components and different inversion algorithms. This provides adequate proof of the improved reliability of low-frequency estimation and the enhanced resolution of subsurface parameters, in comparison with conventional Bayesian inversion in the frequency domain.
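The maximum-a-posteriori step described above, finding the model that maximizes the posterior under a Gaussian likelihood and a Gaussian prior centred on a background model, can be sketched on a generic linear forward operator. All matrices and variances below are illustrative stand-ins, not the paper's Laplace-domain operators.

```python
import numpy as np

# MAP estimate for d = G m + noise with prior m ~ N(m0, sigma_m2 * I):
# maximizing the Gaussian posterior reduces to regularized normal equations.
rng = np.random.default_rng(1)
n, p = 50, 10
G = rng.normal(size=(n, p))                    # illustrative forward operator
m_true = np.linspace(1.0, 2.0, p)              # "true" model (for the demo only)
d = G @ m_true + rng.normal(scale=0.1, size=n) # noisy synthetic data
m0 = np.full(p, 1.5)                           # low-frequency background model
sigma_d2, sigma_m2 = 0.1 ** 2, 0.5 ** 2        # noise / prior variances

A = G.T @ G / sigma_d2 + np.eye(p) / sigma_m2
b = G.T @ d / sigma_d2 + m0 / sigma_m2
m_map = np.linalg.solve(A, b)                  # maximum a posteriori solution
```

With abundant data the MAP solution is pulled toward the least-squares fit; with sparse data the prior background dominates, which is exactly the mechanism that supplies the missing low-frequency trend.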
Abstract: A Bayesian estimation method for separating multicomponent signals from a single-channel observation is presented in this paper. By using basis-function projection, the component separation becomes a problem of finite parameter estimation. A Bayesian model for estimating the parameters is then set up, and the reversible-jump MCMC (Markov chain Monte Carlo) algorithm is adopted to perform the Bayesian computation. The method can jointly estimate the parameters of each component and the number of components. Simulation results demonstrate that the method has a low SNR threshold and good performance.
Funding: Supported by the Postdoctoral Science Foundation of China (J63104020156) and the National Defence Foundation of China.
Abstract: An efficient despeckling algorithm based on the stationary wavelet transform (SWT) is proposed for synthetic aperture radar (SAR) images. The statistical model of the wavelet coefficients is analyzed, and their distribution is modeled as a mixture of two zero-mean Gaussian densities. A fuzzy shrinkage factor is derived under the minimum mean square error (MMSE) criterion with Bayesian estimation. The ideas of region division and fuzzy shrinkage are adopted according to the interscale dependencies among wavelet coefficients, so the noise-free wavelet coefficients are estimated accurately. Experimental results show that the proposed algorithm is superior to the refined Lee filter, wavelet soft-thresholding shrinkage, and SWT shrinkage algorithms in terms of smoothing effect and edge preservation.
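The Bayesian MMSE shrinkage idea can be sketched in its simplest single-Gaussian form. The paper's two-component mixture, region division, and fuzzy weighting are simplified away here, and the signal/noise variances are illustrative.

```python
import numpy as np

# MMSE (Wiener-style) shrinkage of noisy wavelet coefficients:
# y = x + n with x ~ N(0, sig_s2) and n ~ N(0, sig_n2) gives
# E[x | y] = y * sig_s2 / (sig_s2 + sig_n2), i.e. a multiplicative shrinkage factor.
def mmse_shrink(y, sig_s2, sig_n2):
    return y * sig_s2 / (sig_s2 + sig_n2)

rng = np.random.default_rng(2)
x = rng.normal(scale=1.0, size=10000)          # "clean" coefficients
y = x + rng.normal(scale=0.5, size=10000)      # speckle-like additive noise
x_hat = mmse_shrink(y, sig_s2=1.0, sig_n2=0.25)
```

The shrunken coefficients have strictly lower mean-square error than the raw noisy ones; the mixture model in the paper refines this by letting the shrinkage factor adapt to edge versus homogeneous regions.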
Funding: Supported by the Fundamental Scientific Research Business Expenses for Central Universities (3072021CFJ0803) and the Advanced Marine Communication and Information Technology Ministry of Industry and Information Technology Key Laboratory Project (AMCIT21V3).
Abstract: The accuracy of target threat estimation has a great impact on command decision-making. The Bayesian network, as an effective way to deal with uncertainty, can be used to track changes in the target threat level. Unfortunately, the traditional discrete dynamic Bayesian network (DDBN) suffers from poor parameter learning and poor reasoning accuracy in small-sample environments with partial prior information missing. Considering the finiteness and discreteness of DDBN parameters, a fuzzy k-nearest neighbor (KNN) algorithm based on the correlation of feature quantities (CF-FKNN) is proposed for DDBN parameter learning. Firstly, the correlation between feature quantities is calculated; then the KNN algorithm with fuzzy weights is introduced to fill in the missing data. On this basis, a reasonable DDBN structure is constructed using expert experience to complete DDBN parameter learning and reasoning. Simulation results show that the CF-FKNN algorithm can accurately fill in the data when samples are severely missing, improving DDBN parameter learning in that case. With the proposed method, the final target threat assessment results are reasonable and meet the needs of engineering applications.
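The missing-data filling step can be sketched with a plain distance-weighted KNN imputer. The correlation-based feature weighting that distinguishes CF-FKNN is omitted, and the data and function names are illustrative.

```python
import numpy as np

def knn_impute(X, row, col, k=3):
    """Fill X[row, col] from the k nearest complete rows, weighted by 1/(distance+eps)."""
    other = [c for c in range(X.shape[1]) if c != col]
    complete = [r for r in range(X.shape[0])
                if r != row and not np.isnan(X[r]).any()]
    d = np.array([np.linalg.norm(X[r, other] - X[row, other]) for r in complete])
    idx = np.argsort(d)[:k]                    # k nearest complete rows
    w = 1.0 / (d[idx] + 1e-9)                  # fuzzy (inverse-distance) weights
    return float(np.dot(w, X[np.array(complete)[idx], col]) / w.sum())

# Toy data: the last row is missing its second feature.
X = np.array([[1.0, 2.0], [1.1, 2.1], [0.9, 1.9], [5.0, np.nan]])
X[3, 1] = knn_impute(X, row=3, col=1)
```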
Funding: Supported by the National Natural Science Foundation of China (71171164, 71401134, 71571144) and the Natural Science Basic Research Program of Shaanxi Province (2015JM1003).
Abstract: This paper considers the Bayesian and expected Bayesian (E-Bayesian) estimation of the parameter and the reliability function for a competing-risk model from the Gompertz distribution under a Type-I progressively hybrid censoring scheme (PHCS). The estimates are obtained based on a Gamma conjugate prior for the parameter under the squared error (SE) and Linex loss functions. Simulation results are provided for comparison, and one data set is analyzed.
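Under squared-error loss with a Gamma conjugate prior, the Bayes estimator is the posterior mean. Below is a minimal sketch for a plain exponential lifetime model rather than the paper's Gompertz competing-risk likelihood with progressive censoring; prior hyperparameters are illustrative.

```python
import numpy as np

# For an exponential rate lam with Gamma(a, b) prior and n observed failures with
# total time on test T, the posterior is Gamma(a + n, b + T), so the SE-loss
# Bayes estimate (the posterior mean) is (a + n) / (b + T).
def bayes_rate_se(n_failures, total_time, a=1.0, b=1.0):
    return (a + n_failures) / (b + total_time)

rng = np.random.default_rng(3)
lam_true = 0.5
t = rng.exponential(1.0 / lam_true, size=200)   # simulated complete lifetimes
lam_hat = bayes_rate_se(t.size, t.sum())
```

With no data the estimate falls back to the prior mean a/b; as the sample grows it converges to the maximum likelihood estimate n/T.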
Abstract: In this paper, we consider the problem of determining the order of the INAR(Q) model on the basis of Bayesian estimation theory. The Bayesian estimator for the order is given with respect to a squared-error loss function, and the consistency of the estimator is discussed. The results of a simulation study of the estimation method are presented.
Funding: Supported by the Fujian Province NSFC (2009J01001).
Abstract: This paper develops a new method, named the E-Bayesian estimation method, to estimate reliability parameters. E-Bayesian estimates of the reliability are derived for zero-failure data from a product with a Binomial distribution. Firstly, the definition of the E-Bayesian estimate of product reliability is given; on this basis, expressions for the E-Bayesian and hierarchical Bayesian estimates of product reliability are derived. Secondly, properties of the E-Bayesian estimate are discussed. Finally, the new method is applied to a real zero-failure data set and is shown to be both efficient and easy to operate.
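For zero-failure binomial data the E-Bayesian estimate has a closed form. A minimal sketch, assuming a Beta(1, b) prior with hyperprior b ~ Uniform(1, c), one common setup for this problem; the value of c is an assumption, not taken from the paper.

```python
import math

# With x = 0 failures in n trials and a Beta(1, b) prior, the SE-loss Bayes estimate
# of the failure probability is 1 / (n + b + 1). E-Bayesian estimation averages this
# over b ~ Uniform(1, c):
#   p_EB = (1 / (c - 1)) * integral_1^c db / (n + 1 + b)
#        = ln((n + 1 + c) / (n + 2)) / (c - 1).
def e_bayes_zero_failure(n, c=4.0):
    return math.log((n + 1 + c) / (n + 2)) / (c - 1)

p_hat = e_bayes_zero_failure(n=50)
```

Because the integrand lies between 1/(n+1+c) and 1/(n+2), the E-Bayesian estimate is always sandwiched between the Bayes estimates at the two hyperparameter extremes.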
Funding: Supported by the National Natural Science Foundation of China under contract Nos 41276029 and 41321004; the Project of the State Key Laboratory of Satellite Ocean Environment Dynamics, Second Institute of Oceanography, under contract No. SOEDZZ1404; and the National Basic Research Program (973 Program) of China under contract No. 2013CB430302.
Abstract: Bayesian estimation theory provides a general approach to state estimation for linear or nonlinear and Gaussian or non-Gaussian systems. In this study, we first explore two Bayesian methods, the ensemble adjustment Kalman filter (EAKF) and the sequential importance resampling particle filter (SIR-PF), using the well-known nonlinear and non-Gaussian Lorenz '63 model. The EAKF, a deterministic scheme of the ensemble Kalman filter (EnKF), performs better than the classical (stochastic) EnKF in a general framework. Comparison between the SIR-PF and the EAKF reveals that the former outperforms the latter if the ensemble size is large enough to avoid filter degeneracy, and vice versa. The impact of the probability density functions and the effective ensemble size on assimilation performance is also explored. On the basis of these comparisons, a mixture filter, called the ensemble adjustment Kalman particle filter (EAKPF), is proposed to combine the merits of both. Similar to the ensemble Kalman particle filter, which combines the stochastic EnKF and SIR-PF analysis schemes with a tuning parameter, the new mixture filter essentially provides a continuous interpolation between the EAKF and the SIR-PF. With the same Lorenz '63 model as a testbed, the EAKPF is shown to overcome filter degeneracy while maintaining the non-Gaussian nature of the problem, and it performs better than the EAKF for a limited ensemble size.
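The SIR-PF analysis cycle (propagate, weight by the likelihood, resample) can be sketched on a scalar linear-Gaussian model rather than the Lorenz '63 testbed used in the paper; all model settings below are illustrative.

```python
import numpy as np

# Toy state-space model: x_t = 0.9 x_{t-1} + w_t,  y_t = x_t + v_t,
# with w ~ N(0, q^2) and v ~ N(0, r^2).
rng = np.random.default_rng(4)
T, N = 50, 500                      # time steps, particles
q, r = 0.5, 0.5                     # process / observation noise std

x = np.zeros(T)
for t in range(1, T):
    x[t] = 0.9 * x[t - 1] + rng.normal(scale=q)
y = x + rng.normal(scale=r, size=T)             # noisy observations

parts = rng.normal(scale=1.0, size=N)           # initial particle cloud
est = np.zeros(T)
for t in range(T):
    parts = 0.9 * parts + rng.normal(scale=q, size=N)   # propagate
    w = np.exp(-0.5 * ((y[t] - parts) / r) ** 2)        # likelihood weights
    w /= w.sum()
    est[t] = np.dot(w, parts)                           # posterior-mean estimate
    parts = parts[rng.choice(N, size=N, p=w)]           # resample (the SIR step)
```

The resampling step is what staves off weight degeneracy, but with too few particles it also collapses diversity, which is the trade-off the EAKPF is designed to soften.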
Abstract: To apply speech recognition systems in real circumstances, from inspection and maintenance operations in industrial factories to recording and reporting routines at construction sites where handwriting is difficult, countermeasures against surrounding noise are indispensable. In this study, a signal detection method that removes noise from actual speech signals is proposed, using Bayesian estimation with the aid of bone-conducted speech. More specifically, by introducing Bayes' theorem based on the observation of air-conducted speech contaminated by surrounding background noise, a new type of noise-removal algorithm is theoretically derived. In the proposed speech detection method, bone-conducted speech is utilized to obtain a precise estimate of the speech signal. The effectiveness of the proposed method is experimentally confirmed by applying it to air- and bone-conducted speech measured in a real environment with surrounding background noise.
Abstract: A Bayesian approach using Markov chain Monte Carlo algorithms has been developed to analyze Smith's discretized version of the discovery process model. It avoids the problems of the maximum likelihood method by effectively making use of the information from the prior distribution and from the discovery sequence according to posterior probabilities. All statistical inferences about the parameters of the model and the total resources can be quantified by drawing samples directly from the joint posterior distribution. In addition, the statistical errors of the samples can easily be assessed and the convergence properties monitored during sampling. Because the information contained in a discovery sequence is not enough to estimate all parameters, especially the number of fields, geologically justified prior information is crucial to the estimation. The Bayesian approach allows the analyst to specify subjective estimates of the required parameters, and the degree of uncertainty about those estimates, in a clearly identified fashion throughout the analysis. As an example, the approach is applied to the same North Sea data on which Smith demonstrated his maximum likelihood method. For this case, the Bayesian approach substantially improves on the overly pessimistic results and downward bias of the maximum likelihood procedure.
Abstract: The finite strip governing equation of a pinned curved box girder was deduced on the basis of Novozhilov theory, and the problem of the continuous curved box girder was resolved with the flexibility method. A dynamic Bayesian error function of the displacement parameters of the continuous curved box girder was constructed, and the corresponding formulas for the dynamic Bayesian expectation and variance were derived. After a method for the automatic search of the step length was put forward, the optimal estimation formulas were obtained by adopting the conjugate gradient method, and the steps of dynamic Bayesian estimation were given in detail. Through the analysis of a classic example, a criterion for judging the precision of the known information is obtained, along with other important conclusions about the dynamic Bayesian stochastic estimation of the displacement parameters of a continuous curved box girder.
Funding: Supported by the National Natural Science Foundation of China (60974001, 61104121) and the Fundamental Research Funds for the Central Universities (JUDCF11039).
Abstract: The Bayesian approach is considered the most general formulation of state estimation for dynamic systems. However, most existing Bayesian estimators for stochastic hybrid systems focus only on Markov jump systems; little work addresses the estimation problem of nonlinear stochastic hybrid systems with state-dependent transitions. For this problem, a new methodology is proposed that relaxes the quite restrictive assumption that the mode transition process must satisfy the Markov property. In this method, a general approach is presented to model the state-dependent transitions, and the state and output spaces are discretized into a cell space, which handles the nonlinearities and the computationally intensive part of the problem offline. A maximum a posteriori estimate is then obtained using Bayesian theory. The efficacy of the estimator is illustrated by a simulated example.