Journal articles
7 articles found
Convergence Diagnostics for Gibbs Sampler via Maximum Likelihood Estimation
1
Authors: 程杞元 (Cheng Qiyuan), 林秀光 (Lin Xiuguang) · Journal of Beijing Institute of Technology (EI, CAS), 2003, Issue 2, pp. 212-215 (4 pages)
A diagnostic procedure based on maximum likelihood estimation is presented for studying the convergence of the Markov chain produced by the Gibbs sampler. Unbiasedness, consistency, and asymptotic normality are established for the parameter estimators produced by the procedure. An example is provided to illustrate the procedure, and the numerical results are consistent with the theoretical ones. (An illustrative Gibbs-sampling sketch follows this entry.)
Keywords: Markov chain Monte Carlo; Gibbs sampler; maximum likelihood estimation
Download PDF
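The abstract does not reproduce the paper's MLE-based diagnostic, so the sketch below is only a generic illustration of what is being diagnosed: a two-block Gibbs sampler for a bivariate normal, with a naive stabilization check that compares estimates from the two halves of the chain. All parameter values are assumptions for the example, not the paper's procedure.

```python
# A minimal sketch (not the paper's diagnostic): a two-block Gibbs sampler for a
# bivariate normal with correlation rho, plus a naive convergence check that
# compares estimates (here, sample means) computed from the two chain halves.
import numpy as np

def gibbs_bivariate_normal(n_iter=20_000, rho=0.8, seed=0):
    rng = np.random.default_rng(seed)
    x, y = 0.0, 0.0
    draws = np.empty((n_iter, 2))
    sd = np.sqrt(1.0 - rho**2)            # conditional standard deviation
    for t in range(n_iter):
        x = rng.normal(rho * y, sd)       # x | y ~ N(rho*y, 1 - rho^2)
        y = rng.normal(rho * x, sd)       # y | x ~ N(rho*x, 1 - rho^2)
        draws[t] = (x, y)
    return draws

draws = gibbs_bivariate_normal()
burned = draws[5_000:]                    # discard burn-in
first, second = np.array_split(burned, 2)
# If the chain has converged, estimates from both halves should agree closely.
print("first-half means :", first.mean(axis=0))
print("second-half means:", second.mean(axis=0))
```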
Comparison of methods for deriving phenotypes from incomplete observation data with an application to age at puberty in dairy cattle
2
Authors: Melissa A. Stephen, Chris R. Burke, Jennie E. Pryce, Nicole M. Steele, Peter R. Amer, Susanne Meier, Claire V. C. Phyn, Dorian J. Garrick · Journal of Animal Science and Biotechnology (SCIE, CAS, CSCD), 2024, Issue 2, pp. 535-545 (11 pages)
Background: Many phenotypes in animal breeding are derived from incomplete measures, especially if they are challenging or expensive to measure precisely. Examples include time-dependent traits such as reproductive status or lifespan. Incomplete measures for these traits result in phenotypes that are subject to left-, interval-, and right-censoring, where phenotypes are only known to fall below an upper bound, between a lower and upper bound, or above a lower bound, respectively. Here we compare three methods for deriving phenotypes from incomplete data, using age at first elevation (>1 ng/mL) in blood plasma progesterone (AGEP4), which generally coincides with onset of puberty, as an example trait. Methods: We produced AGEP4 phenotypes from three blood samples collected at about 30-day intervals from approximately 5,000 Holstein-Friesian or Holstein-Friesian × Jersey cross-bred dairy heifers managed in 54 seasonal-calving, pasture-based herds in New Zealand. We used these actual data to simulate 7 different visit scenarios, increasing the extent of censoring by disregarding data from one or two of the three visits. Three methods for deriving phenotypes from these data were explored: 1) ordinal categorical variables, analysed using categorical threshold analysis; 2) continuous variables, with a penalty of 31 d assigned to right-censored phenotypes; and 3) continuous variables, sampled from within a lower and upper bound using a data augmentation approach (see the sketch after this entry). Results: Credibility intervals for heritability estimates overlapped across all methods and visit scenarios, but estimated heritabilities tended to be higher when left censoring was reduced. For sires with at least 5 daughters, the correlations between estimated breeding values (EBVs) from our three-visit scenario and each reduced-data scenario varied by method, ranging from 0.65 to 0.95. The estimated breed effects also varied by method, but breed differences were smaller as phenotype censoring increased. Conclusion: Our results indicate that, for some methods, phenotypes derived from one observation per offspring for a time-dependent trait such as AGEP4 may provide sire rankings comparable to three observations per offspring. This has implications for the design of large-scale phenotyping initiatives in which animal breeders aim to estimate variance parameters and EBVs for phenotypes that are challenging to measure or prohibitively expensive.
Keywords: Cattle; Gibbs sampler; Markov chain Monte Carlo (MCMC); Puberty
Download PDF
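A minimal sketch of the data-augmentation idea in method 3, under assumptions of my own (a normal latent distribution and hypothetical visit ages, none taken from the paper): an interval-censored AGEP4 record, known only to lie between two visit ages, is replaced at each MCMC iteration by a draw from the current model distribution truncated to that interval.

```python
# Illustrative data-augmentation step for an interval-censored phenotype.
# The mean, standard deviation, and bounds below are hypothetical values.
import numpy as np
from scipy.stats import truncnorm

def augment_censored(lower, upper, mu, sigma, rng):
    """Draw a latent phenotype from N(mu, sigma^2) truncated to [lower, upper]."""
    a, b = (lower - mu) / sigma, (upper - mu) / sigma   # standardised bounds
    return truncnorm.rvs(a, b, loc=mu, scale=sigma, random_state=rng)

rng = np.random.default_rng(1)
# Hypothetical heifer: puberty occurred between the visit at 330 d and the visit at 360 d.
latent_agep4 = augment_censored(lower=330.0, upper=360.0, mu=350.0, sigma=25.0, rng=rng)
print(round(float(latent_agep4), 1))
```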
Estimating posterior inference quality of the relational infinite latent feature model for overlapping community detection (Cited by: 1)
3
Authors: Qianchen YU, Zhiwen YU, Zhu WANG, Xiaofeng WANG, Yongzhi WANG · Frontiers of Computer Science (SCIE, EI, CSCD), 2020, Issue 6, pp. 55-69 (15 pages)
Overlapping community detection has become a very active research topic in recent decades, and a plethora of methods have been proposed. However, a common challenge in many existing overlapping community detection approaches is that the number of communities K must be predefined manually. We propose a flexible nonparametric Bayesian generative model for count-valued networks, which allows K to increase as more data are encountered instead of being fixed in advance. The Indian buffet process is used to model the community assignment matrix Z, and an uncollapsed Gibbs sampler is derived (the Indian buffet process prior is illustrated in the sketch after this entry). However, since the community assignment matrix Z is a structured multi-variable parameter, how to summarize the posterior inference results and estimate the inference quality of Z remains a considerable challenge in the literature. In this paper, a graph convolutional neural network based graph classifier is utilized to help summarize the results and to estimate the inference quality of Z. We conduct extensive experiments on synthetic and real data, and find that, empirically, the traditional posterior summarization strategy is reliable.
Keywords: graph convolutional neural network; graph classification; overlapping community detection; nonparametric Bayesian generative model; relational infinite latent feature model; Indian buffet process; uncollapsed Gibbs sampler; posterior inference quality estimation
Full-text delivery
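The Indian buffet process (IBP) is the standard prior for a binary feature/community-assignment matrix whose number of columns is not fixed in advance. The sketch below draws a matrix Z from that prior so the number of communities K is random; the function and parameter names are mine, and this is only the prior, not the paper's full relational model or its uncollapsed Gibbs sampler.

```python
# A minimal draw from the Indian buffet process prior on the assignment matrix Z.
import numpy as np

def sample_ibp(n_nodes, alpha, seed=0):
    rng = np.random.default_rng(seed)
    dishes = []                             # dishes[k] = how many nodes joined community k so far
    rows = []
    for i in range(1, n_nodes + 1):
        # Node i joins an existing community k with probability m_k / i.
        row = [1 if rng.uniform() < m / i else 0 for m in dishes]
        for k, z in enumerate(row):
            dishes[k] += z
        n_new = rng.poisson(alpha / i)      # brand-new communities opened by this node
        row.extend([1] * n_new)
        dishes.extend([1] * n_new)
        rows.append(row)
    K = len(dishes)
    Z = np.zeros((n_nodes, K), dtype=int)   # pad earlier rows to the final width K
    for i, row in enumerate(rows):
        Z[i, :len(row)] = row
    return Z

Z = sample_ibp(n_nodes=10, alpha=2.0)
print(Z.shape)      # the number of communities K is random, not predefined
```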
Bayesian Regularized Regression Based on Composite Quantile Method (Cited by: 1)
4
Authors: Wei-hua ZHAO, Ri-quan ZHANG, Ya-zhao LU, Ji-cai LIU · Acta Mathematicae Applicatae Sinica (SCIE, CSCD), 2016, Issue 2, pp. 495-512 (18 pages)
Recently, variable selection based on penalized regression methods has received a great deal of attention, mostly through frequentist models. This paper investigates regularized regression from a Bayesian perspective. Our new method extends Bayesian Lasso regression (Park and Casella, 2008) by replacing the least-squares loss and Lasso penalty with a composite quantile loss function and an adaptive Lasso penalty, which allows different penalization parameters for different regression coefficients (the corresponding penalized objective is sketched after this entry). Based on the Bayesian hierarchical model framework, an efficient Gibbs sampler is derived to simulate the parameters from their posterior distributions. Furthermore, we study Bayesian composite quantile regression with an adaptive group Lasso penalty. The distinguishing characteristic of the newly proposed method is that it is completely data adaptive, without requiring prior knowledge of the error distribution. Extensive simulations and two real data examples are used to examine the performance of the proposed method. All results confirm that our method is both robust and highly efficient, and it often outperforms other approaches.
Keywords: composite quantile regression; variable selection; Lasso; adaptive Lasso; Gibbs sampler
Full-text delivery
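For reference, the sketch below evaluates the frequentist objective that the Bayesian model mirrors: a composite quantile check loss summed over several quantile levels plus an adaptive Lasso penalty with coefficient-specific weights. The data, quantile levels, and weights are illustrative assumptions, not values from the paper.

```python
# Composite quantile loss with an adaptive Lasso penalty (illustrative objective only).
import numpy as np

def check_loss(u, tau):
    """Quantile check loss rho_tau(u) = u * (tau - I(u < 0))."""
    return u * (tau - (u < 0))

def composite_quantile_objective(beta, b, X, y, taus, weights, lam):
    """Sum of check losses over K quantile levels plus an adaptive Lasso penalty.
    b[k] is the intercept for quantile level taus[k]; weights are coefficient-specific
    penalty weights (e.g. 1/|initial estimate|)."""
    resid = y[:, None] - X @ beta[:, None] - b[None, :]      # n x K residual matrix
    loss = sum(check_loss(resid[:, k], taus[k]).sum() for k in range(len(taus)))
    penalty = lam * np.sum(weights * np.abs(beta))
    return loss + penalty

# Tiny synthetic example with hypothetical values.
rng = np.random.default_rng(2)
X = rng.standard_normal((50, 3))
y = X @ np.array([1.5, 0.0, -2.0]) + rng.standard_normal(50)
taus = np.array([0.25, 0.5, 0.75])
val = composite_quantile_objective(np.array([1.5, 0.0, -2.0]), np.zeros(3), X, y,
                                   taus, weights=np.ones(3), lam=1.0)
print(round(val, 2))
```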
Efficient Algorithms for Generating Truncated Multivariate Normal Distributions
5
Authors: Jun-wu YU, Guo-liang TIAN · Acta Mathematicae Applicatae Sinica (SCIE, CSCD), 2011, Issue 4, pp. 601-612 (12 pages)
Sampling from a truncated multivariate normal distribution (TMVND) constitutes the core computational module in fitting many statistical and econometric models. We propose two efficient methods, an iterative data augmentation (DA) algorithm and a non-iterative inverse Bayes formulae (IBF) sampler, to simulate the TMVND, and we generalize them to multivariate normal distributions with linear inequality constraints. By creating a Bayesian incomplete-data structure, the posterior step of the DA algorithm directly generates random vector draws as opposed to single-element draws, resulting in an obvious computational advantage and easy coding with common statistical software packages such as S-PLUS, MATLAB, and GAUSS. Furthermore, the DA algorithm provides a ready structure for implementing a fast EM algorithm to identify the mode of the TMVND, which has many potential applications in statistical inference for constrained-parameter problems. In addition, utilizing this mode as an intermediate result, the IBF sampler provides a novel alternative to Gibbs sampling and eliminates problems with convergence, and possible slow convergence, due to the high correlation between components of a TMVND. The DA algorithm is applied to a linear regression model with constrained parameters and is illustrated with a published data set. Numerical comparisons show that the proposed DA algorithm and IBF sampler are more efficient than the Gibbs sampler and the accept-reject algorithm (the baseline component-wise Gibbs sampler is sketched after this entry).
Keywords: data augmentation; EM algorithm; Gibbs sampler; IBF sampler; linear inequality constraints; truncated multivariate normal distribution
Full-text delivery
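The baseline the paper compares against is the standard component-wise Gibbs sampler for a box-truncated multivariate normal, in which each coordinate is redrawn from its univariate truncated-normal full conditional. A minimal sketch under assumed dimensions and bounds is given below; this is the standard Gibbs approach, not the paper's DA or IBF samplers.

```python
# Component-wise Gibbs sampler for a multivariate normal truncated to a box.
import numpy as np
from scipy.stats import truncnorm

def gibbs_tmvn(mu, Sigma, lower, upper, n_iter=5_000, seed=0):
    rng = np.random.default_rng(seed)
    d = len(mu)
    P = np.linalg.inv(Sigma)                  # precision matrix
    x = np.clip(mu, lower, upper)             # feasible starting point
    draws = np.empty((n_iter, d))
    for t in range(n_iter):
        for i in range(d):
            # Full conditional of x_i given the rest: N(cond_mu, 1/P_ii), truncated to [lower_i, upper_i].
            others = np.delete(np.arange(d), i)
            cond_var = 1.0 / P[i, i]
            cond_mu = mu[i] - cond_var * P[i, others] @ (x[others] - mu[others])
            sd = np.sqrt(cond_var)
            a, b = (lower[i] - cond_mu) / sd, (upper[i] - cond_mu) / sd
            x[i] = truncnorm.rvs(a, b, loc=cond_mu, scale=sd, random_state=rng)
        draws[t] = x
    return draws

mu = np.zeros(2)
Sigma = np.array([[1.0, 0.9], [0.9, 1.0]])    # high correlation => slow Gibbs mixing
draws = gibbs_tmvn(mu, Sigma, lower=np.array([0.0, 0.0]), upper=np.array([2.0, 2.0]))
print(draws[1_000:].mean(axis=0))
```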
Dirichlet process and its developments: a survey
6
Authors: Yemao XIA, Yingan LIU, Jianwei GOU · Frontiers of Mathematics in China (SCIE, CSCD), 2022, Issue 1, pp. 79-115 (37 pages)
The core of nonparametric/semiparametric Bayesian analysis is to relax particular parametric assumptions by treating the distributions of interest as unknown and random, and to assign them a prior. Selecting a suitable prior is therefore especially critical in nonparametric Bayesian fitting. As a distribution over distributions, the Dirichlet process (DP) is the most widely used nonparametric prior owing to its nice theoretical properties, modeling flexibility, and computational feasibility. In this paper, we review and summarize developments of the DP over the past decades. Our focus is mainly on its theoretical properties, various extensions, statistical modeling, and applications to latent variable models. (The stick-breaking representation is sketched after this entry.)
Keywords: Nonparametric Bayes; Dirichlet process; Polya urn prediction; Sethuraman representation; stick-breaking procedure; Chinese restaurant rule; mixture of Dirichlet processes; dependent Dirichlet process; Markov chain Monte Carlo; blocked Gibbs sampler; latent variable models
Full-text delivery
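Among the constructions the survey covers, Sethuraman's stick-breaking representation is the one most directly tied to the blocked Gibbs sampler named in the keywords. The sketch below draws a single random measure G ~ DP(alpha, G0), truncated at K atoms; the truncation level, concentration parameter, and base measure are illustrative choices, not values from the paper.

```python
# Truncated stick-breaking draw of G ~ DP(alpha, G0): pi_k = v_k * prod_{j<k}(1 - v_j).
import numpy as np

def stick_breaking_dp(alpha, base_sampler, K=100, seed=0):
    rng = np.random.default_rng(seed)
    v = rng.beta(1.0, alpha, size=K)                          # stick-breaking proportions v_k ~ Beta(1, alpha)
    remaining = np.concatenate(([1.0], np.cumprod(1.0 - v[:-1])))
    weights = v * remaining                                   # mixture weights pi_k
    atoms = base_sampler(rng, K)                              # atom locations theta_k ~ G0, i.i.d.
    return weights, atoms

# Base measure G0 = N(0, 1) as an illustrative choice.
weights, atoms = stick_breaking_dp(alpha=2.0, base_sampler=lambda rng, k: rng.standard_normal(k))
print(weights.sum())        # close to 1 for large K; the mass beyond K is discarded by the truncation
```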
Bayesian penalized model for classification and selection of functional predictors using longitudinal MRI data from ADNI
7
Authors: Asish Banik, Taps Maiti, Andrew Bender · Statistical Theory and Related Fields, 2022, Issue 4, pp. 327-343 (17 pages)
The main goal of this paper is to employ longitudinal trajectories in a significant number of sub-regional brain volumetric MRI measures as statistical predictors for Alzheimer's disease (AD) classification. We use logistic regression in a Bayesian framework that includes many functional predictors. Direct sampling of regression coefficients from the Bayesian logistic model is difficult because of its complicated likelihood function. In high-dimensional scenarios, the selection of predictors is paramount, with the introduction of spike-and-slab priors, non-local priors, or horseshoe priors. We seek to avoid the complicated Metropolis-Hastings approach and to develop an easily implementable Gibbs sampler (a Polya-gamma-augmented Gibbs sketch follows this entry). In addition, the Bayesian estimation provides proper estimates of the model parameters, which are also useful for building inference. Another advantage of working with logistic regression is that it calculates the log odds of relative risk for AD compared with normal controls based on the selected longitudinal predictors, rather than simply classifying patients based on cross-sectional estimates. Ultimately, however, we combine approaches and use a probability threshold to classify individual patients. We employ 49 functional predictors consisting of volumetric estimates of brain sub-regions, chosen for their established clinical significance. Moreover, the use of spike-and-slab priors ensures that many redundant predictors are dropped from the model.
Keywords: Alzheimer's disease; basis spline; Polya-gamma augmentation; Bayesian group lasso; spike-and-slab prior; Gibbs sampler; volumetric MRI; ADNI
Full-text delivery
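A minimal sketch of a Polya-gamma-augmented Gibbs sampler for Bayesian logistic regression with a plain normal prior on the coefficients, not the paper's full spike-and-slab, functional-predictor model. PG(1, c) draws are approximated here by truncating the infinite gamma-series representation so the example stays dependency-free; all data and hyperparameters below are hypothetical.

```python
# Polya-gamma augmented Gibbs for Bayesian logistic regression (simplified sketch).
import numpy as np

def draw_pg1(c, rng, n_terms=200):
    """Approximate draw from PG(1, c) by truncating the series
    (1 / (2*pi^2)) * sum_k g_k / ((k - 1/2)^2 + c^2 / (4*pi^2)), g_k ~ Exp(1)."""
    k = np.arange(1, n_terms + 1)
    g = rng.exponential(1.0, size=n_terms)
    return np.sum(g / ((k - 0.5) ** 2 + (c / (2.0 * np.pi)) ** 2)) / (2.0 * np.pi ** 2)

def gibbs_logistic(X, y, n_iter=1_000, prior_var=10.0, seed=0):
    rng = np.random.default_rng(seed)
    n, p = X.shape
    beta = np.zeros(p)
    B_inv = np.eye(p) / prior_var                    # prior beta ~ N(0, prior_var * I)
    kappa = y - 0.5
    draws = np.empty((n_iter, p))
    for t in range(n_iter):
        omega = np.array([draw_pg1(xi @ beta, rng) for xi in X])   # omega_i | beta ~ PG(1, x_i'beta)
        V = np.linalg.inv(X.T @ (X * omega[:, None]) + B_inv)      # posterior covariance of beta
        m = V @ (X.T @ kappa)                                      # posterior mean of beta
        beta = rng.multivariate_normal(m, V)                       # beta | omega, y
        draws[t] = beta
    return draws

# Tiny synthetic example with hypothetical data.
rng = np.random.default_rng(3)
X = np.column_stack([np.ones(200), rng.standard_normal(200)])
true_beta = np.array([-0.5, 1.2])
y = rng.binomial(1, 1.0 / (1.0 + np.exp(-X @ true_beta)))
draws = gibbs_logistic(X, y)
print(draws[500:].mean(axis=0))      # posterior means, roughly near true_beta
```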