Funding: The National Science Foundation by Changjiang Scholarship of Ministry of Education of China (No. BCS-0527508), the Joint Research Fund for Overseas Natural Science of China (No. 51250110075), the Natural Science Foundation of Jiangsu Province (No. SBK200910046), and the Postdoctoral Science Foundation of Jiangsu Province (No. 0901005C)
Abstract: In order to improve crash occurrence models to account for the influence of various contributing factors, a conditional autoregressive negative binomial (CAR-NB) model is employed to allow for overdispersion (tackled by the NB component) and for unobserved heterogeneity and spatial autocorrelation (captured by the CAR process), using Markov chain Monte Carlo methods and the Gibbs sampler. Statistical tests suggest that the CAR-NB model is preferred over the CAR-Poisson, NB, zero-inflated Poisson, and zero-inflated NB models, due to its lower prediction errors and more robust parameter inference. The study results show that crash frequency and fatalities are positively associated with the number of lanes, curve length, annual average daily traffic (AADT) per lane, and rainfall. Speed limit and the distance to the nearest hospital have negative associations with segment-based crash counts but positive associations with fatality counts, presumably as a result of worsened collision impacts at higher speeds and time lost in transporting crash victims.
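The overdispersion handled by the NB component can be illustrated with a short sketch: the negative binomial arises as a gamma-Poisson mixture, so its variance exceeds its mean. The parameters below are illustrative, not values from the study:

```python
import math
import random
import statistics

def poisson(lam, rng):
    # Knuth's method: multiply uniforms until the product drops below exp(-lam)
    limit, k, p = math.exp(-lam), 0, 1.0
    while True:
        p *= rng.random()
        if p <= limit:
            return k
        k += 1

def neg_binomial(r, mean, rng):
    # NB count as a gamma-Poisson mixture: lambda ~ Gamma(r, mean/r), N ~ Poisson(lambda)
    return poisson(rng.gammavariate(r, mean / r), rng)

rng = random.Random(0)
counts = [neg_binomial(2.0, 5.0, rng) for _ in range(20000)]
m, v = statistics.mean(counts), statistics.variance(counts)
# Theory: Var = mean + mean**2 / r = 17.5 while the mean is 5,
# i.e. clearly overdispersed relative to a Poisson (Var = mean)
```

A Poisson model would force the sample variance toward the mean; the mixture construction is what lets the NB absorb the extra variation.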
Abstract: To improve the accuracy and speed of cycle-accurate power estimation, this paper uses multi-dimensional coefficients to build a Bayesian inference dynamic power model. By analyzing the power distribution and internal node states, we identify the deficiency of using port information alone. We then define a gate-level-number computing method and the concept of a slice, and propose using slice analysis to extract switching density in a specific circuit stage as coefficients that participate in Bayesian inference together with port information. Experiments show that this method reduces the power-per-cycle estimation error by 21.9% and the root mean square error by 25.0% compared with the original model, while maintaining a more than 700-fold speedup over the existing gate-level power analysis technique.
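The core of such a model — inferring coefficients for several switching-activity predictors from training data — can be sketched as a conjugate Bayesian linear regression with a Gaussian prior and known noise variance. The two-coefficient setup and data below are illustrative assumptions, not the paper's actual model:

```python
def bayes_linreg_posterior(X, y, prior_var, noise_var):
    # Posterior precision A = I/prior_var + X^T X / noise_var,
    # posterior mean = A^{-1} X^T y / noise_var.
    # For a 2-coefficient model the 2x2 matrix is inverted by hand.
    a11 = 1.0 / prior_var + sum(x[0] * x[0] for x in X) / noise_var
    a12 = sum(x[0] * x[1] for x in X) / noise_var
    a22 = 1.0 / prior_var + sum(x[1] * x[1] for x in X) / noise_var
    b1 = sum(x[0] * t for x, t in zip(X, y)) / noise_var
    b2 = sum(x[1] * t for x, t in zip(X, y)) / noise_var
    det = a11 * a22 - a12 * a12
    return ((a22 * b1 - a12 * b2) / det, (a11 * b2 - a12 * b1) / det)

# Hypothetical training rows: two activity predictors and the observed power
X = [(1.0, 0.0), (0.0, 1.0), (1.0, 1.0), (2.0, 1.0), (1.0, 2.0)]
y = [2.0, 3.0, 5.0, 7.0, 8.0]            # generated from y = 2*x0 + 3*x1
w0, w1 = bayes_linreg_posterior(X, y, prior_var=100.0, noise_var=0.1)
```

With a weak prior the posterior mean approaches the least-squares solution, recovering the underlying coefficients.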
Funding: supported by the National Natural Science Foundation of China (Grant Nos. 51779074 and 41371052), the Special Fund for the Public Welfare Industry of the Ministry of Water Resources of China (Grant No. 201501059), the National Key Research and Development Program of China (Grant No. 2017YFC0404304), the Jiangsu Water Conservancy Science and Technology Project (Grant No. 2017027), the Program for Outstanding Young Talents in Colleges and Universities of Anhui Province (Grant No. gxyq2018143), and the Natural Science Foundation of Wanjiang University of Technology (Grant No. WG18030)
Abstract: This study developed a hierarchical Bayesian (HB) model for local and regional flood frequency analysis in the Dongting Lake Basin, China. The annual maximum daily flows from 15 streamflow-gauged sites in the study area were analyzed with the HB model. The generalized extreme value (GEV) distribution was selected as the extreme flood distribution, and the GEV location and scale parameters were spatially modeled through a regression approach with the drainage area as a covariate. The Markov chain Monte Carlo (MCMC) method with Gibbs sampling was employed to calculate the posterior distribution in the HB model. The results showed that the proposed HB model provided satisfactory Bayesian credible intervals for flood quantiles, while the traditional delta method could not provide reliable uncertainty estimates for large flood quantiles because the lower confidence bounds tended to decrease as the return periods increased. Furthermore, the HB model for regional analysis allowed for a relaxation of some restrictive assumptions of the traditional index flood method, such as the homogeneous region assumption and the scale invariance assumption. The HB model can also provide an uncertainty band for flood quantile prediction at a poorly gauged or ungauged site, whereas the index flood method with L-moments does not express this uncertainty directly. Therefore, the HB model is an effective method of implementing a flexible local and regional frequency analysis scheme and of quantifying the associated predictive uncertainty.
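The flood quantiles discussed above follow from the GEV return-level formula, z_T = mu + (sigma/xi) * ((-log(1 - 1/T))^(-xi) - 1), with the Gumbel form as the xi -> 0 limit. A minimal sketch with hypothetical parameter values:

```python
import math

def gev_return_level(mu, sigma, xi, T):
    # Return level z_T satisfying P(annual max > z_T) = 1/T under GEV(mu, sigma, xi)
    y = -math.log(1.0 - 1.0 / T)          # reduced variate
    if abs(xi) < 1e-9:
        return mu - sigma * math.log(y)   # Gumbel limit as xi -> 0
    return mu + sigma / xi * (y ** (-xi) - 1.0)

# Hypothetical GEV parameters for a gauged site (not values from the study)
z10 = gev_return_level(100.0, 30.0, 0.1, 10)     # 10-year flood
z100 = gev_return_level(100.0, 30.0, 0.1, 100)   # 100-year flood
```

In a Bayesian treatment this function would be evaluated over posterior draws of (mu, sigma, xi) to obtain the credible intervals for each quantile.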
Abstract: A Bayesian analysis of the minimal model was proposed in which glucose and insulin were analyzed simultaneously under the insulin-modified intravenous glucose tolerance test (IVGTT). The resulting model was implemented in a nonlinear mixed-effects modeling setup using ordinary differential equations (ODEs), which leads to precise estimation of population parameters by separating the inter- and intra-individual variability. The results indicated that the Bayesian method applied to the glucose-insulin minimal model provided a satisfactory solution with accurate, numerically stable parameter estimates, since the Bayesian method does not require approximation by linearization.
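A minimal sketch of the ODE core of the glucose part of Bergman's minimal model, integrated with a simple forward-Euler step; all parameter values are hypothetical and chosen only for plausible magnitude:

```python
def minimal_model_step(G, X, I, p1, p2, p3, Gb, Ib, dt):
    # One forward-Euler step of the minimal model:
    #   dG/dt = -(p1 + X) * G + p1 * Gb     (plasma glucose)
    #   dX/dt = -p2 * X + p3 * (I - Ib)     (remote insulin action)
    dG = -(p1 + X) * G + p1 * Gb
    dX = -p2 * X + p3 * (I - Ib)
    return G + dt * dG, X + dt * dX

# Hypothetical per-minute rate parameters and basal levels (illustrative only)
p1, p2, p3, Gb, Ib = 0.03, 0.02, 1e-5, 90.0, 10.0
G, X = 290.0, 0.0                 # glucose elevated by the IV bolus
for _ in range(600):              # 600 minutes at dt = 1, insulin held at basal
    G, X = minimal_model_step(G, X, Ib, p1, p2, p3, Gb, Ib, 1.0)
```

With insulin at basal, X stays at zero and glucose relaxes exponentially back to Gb; in the full mixed-effects analysis these equations are solved per subject inside the likelihood.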
Abstract: A Bayesian network (BN) model was developed to predict susceptibility to pine wilt disease (PWD). The distribution of PWD was identified using QuickBird and unmanned aerial vehicle (UAV) images taken at different times. Seven factors that influence the distribution of PWD were extracted from the QuickBird images and used as the independent variables. The results showed that the BN model predicted PWD with high accuracy. In a sensitivity analysis, elevation (EL), the normalized difference vegetation index (NDVI), the distance to settlements (DS), and the distance to roads (DR) were strongly associated with PWD prevalence, while slope (SL) exhibited the weakest association. The study showed that a BN is an effective tool for modeling PWD prevalence and quantifying the impact of various factors.
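The kind of inference a BN performs can be sketched with a toy two-parent network, computing a posterior by enumeration over the unobserved node. The structure and all probabilities below are illustrative, not the study's fitted network:

```python
# Toy network: EL (high elevation) and DR (close to road) are parents of PWD.
# All probabilities are made-up illustrative values.
p_el = 0.4                        # P(EL = 1)
p_dr = 0.3                        # P(DR = 1)
cpt = {                           # P(PWD = 1 | EL, DR)
    (1, 1): 0.70, (1, 0): 0.40,
    (0, 1): 0.35, (0, 0): 0.10,
}

def posterior_el_given_pwd():
    # P(EL = 1 | PWD = 1): sum the joint over the unobserved DR node,
    # then normalize (Bayes' rule by enumeration)
    num = sum(p_el * (p_dr if dr else 1 - p_dr) * cpt[(1, dr)] for dr in (0, 1))
    den = num + sum((1 - p_el) * (p_dr if dr else 1 - p_dr) * cpt[(0, dr)]
                    for dr in (0, 1))
    return num / den

p = posterior_el_given_pwd()
```

Observing disease raises the belief that the site is at high elevation, mirroring the sensitivity-analysis reading that EL is strongly associated with prevalence.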
Abstract: Indirect approaches to the estimation of biomass factors are often applied to measure carbon flux in the forestry sector. An assumption underlying a country-level carbon stock estimate is the representativeness of these factors. Although intensive studies have been conducted to quantify biomass factors, each study typically covers a limited geographic area. The goal of this study was to employ a meta-analysis approach to develop regional biomass factors for Quercus mongolica forests in South Korea. The biomass factors of interest were the biomass conversion and expansion factor (BCEF), the biomass expansion factor (BEF), and the root-to-shoot ratio (RSR). Our objectives were to select probability density functions (PDFs) that best fitted the three biomass factors and to quantify their means and uncertainties. A total of 12 scientific publications were selected as data sources based on a set of criteria. From these publications we chose 52 study sites spread across South Korea. The statistical model for the meta-analysis was a multilevel model, with publication (data source) as the nesting factor, specified under the Bayesian framework. Gamma, log-normal, and Weibull PDFs were evaluated. The log-normal PDF yielded the best quantitative and qualitative fit for the three biomass factors, although a poor fit to the long right tail of the observed BEF and RSR distributions was apparent. The median posterior estimates of the means and 95% credible intervals for BCEF, BEF, and RSR across all 12 publications were 1.016 (0.800-1.299), 1.414 (1.304-1.560), and 0.260 (0.200-0.335), respectively. The log-normal PDF proved useful for estimating the carbon stock of Q. mongolica forests on a regional scale and for uncertainty analysis based on Monte Carlo simulation.
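Fitting a log-normal PDF, as done above, has a convenient closed-form MLE: fit a normal distribution to the log-scale values. A sketch on synthetic data (the target median of 1.4 is only loosely inspired by the reported BEF estimate, not the study's data):

```python
import math
import random
import statistics

def fit_lognormal(xs):
    # Closed-form MLE for a log-normal: a normal fitted to log-scale values.
    logs = [math.log(x) for x in xs]
    mu = statistics.fmean(logs)
    sigma = statistics.pstdev(logs)
    return mu, sigma, math.exp(mu)      # exp(mu) is the distribution median

rng = random.Random(42)
# Synthetic BEF-like positive values with median ~1.4 (illustrative only)
data = [math.exp(rng.gauss(math.log(1.4), 0.1)) for _ in range(5000)]
mu, sigma, median = fit_lognormal(data)
```

The same log-scale trick is what makes the log-normal convenient inside a Bayesian multilevel model: the nesting structure becomes a hierarchy of normal distributions on the log values.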
Funding: The National Natural Science Foundation of China (Nos. 81230034, 81271739, and 81501453), the Special Program of Medical Science of Jiangsu Province (No. BL2013029), and the Natural Science Foundation of Jiangsu Province (No. BK20141342)
Abstract: In order to distinguish minimal hepatic encephalopathy (MHE) patients from healthy controls, independent component analysis (ICA) is used to generate the default mode network (DMN) from resting-state functional magnetic resonance imaging (fMRI). A Bayesian voxel-wise method, graphical-model-based multivariate analysis (GAMMA), is then used to explore the associations between abnormal functional integration within the DMN and clinical variables. Without any prior knowledge, five machine learning methods, namely support vector machines (SVMs), classification and regression trees (CART), logistic regression, the Bayesian network, and C4.5, are applied to the classification. The functional integration patterns within the DMN were altered and had the power to predict MHE with an accuracy of 98%. The GAMMA method, which generates functional integration patterns within the DMN, can become a simple, objective, and common imaging biomarker for detecting MHE and can serve as a supplement to existing diagnostic methods.
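One of the simpler classifiers compared above, logistic regression, can be sketched with plain gradient descent on a toy two-feature problem. The data are synthetic stand-ins, not fMRI features, and this is not the GAMMA method itself:

```python
import math

def train_logistic(data, labels, lr=0.5, epochs=300):
    # Stochastic gradient descent on the logistic loss; w = [bias, w1, w2]
    w = [0.0, 0.0, 0.0]
    for _ in range(epochs):
        for (x1, x2), t in zip(data, labels):
            p = 1.0 / (1.0 + math.exp(-(w[0] + w[1] * x1 + w[2] * x2)))
            g = p - t                    # gradient of the log-loss wrt the logit
            w[0] -= lr * g
            w[1] -= lr * g * x1
            w[2] -= lr * g * x2
    return w

# Two well-separated synthetic clusters standing in for patients vs controls
data = [(0.10, 0.20), (0.20, 0.10), (0.15, 0.25),
        (0.90, 0.80), (0.80, 0.95), (0.85, 0.70)]
labels = [0, 0, 0, 1, 1, 1]
w = train_logistic(data, labels)
preds = [1 if w[0] + w[1] * x1 + w[2] * x2 > 0 else 0 for x1, x2 in data]
```

In the study, comparable decision rules are learned on DMN integration patterns rather than raw coordinates, and accuracy is assessed against the clinical labels.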
Abstract: Survival of HIV/AIDS patients depends crucially on comprehensive and targeted medical interventions such as the supply of antiretroviral therapy and the monitoring of disease progression with CD4 T-cell counts. Statistical modelling approaches are helpful towards this goal. This study aims at developing Bayesian joint models with an assumed generalized error distribution (GED) for the longitudinal CD4 data and two accelerated failure time (AFT) distributions, log-normal and log-logistic, for the survival time of HIV/AIDS patients. Data are obtained from patients under antiretroviral therapy follow-up at Shashemene referral hospital during January 2006-January 2012 and at Bale Robe general hospital during January 2008-March 2015. The Bayesian joint models are defined through latent variables and association parameters, with non-informative prior distributions specified for the model parameters. Simulations are conducted using the Gibbs sampler algorithm implemented in the WinBUGS software. The results of the analyses of the two data sets show that the distributions of the measurement errors of the longitudinal CD4 variable follow the generalized error distribution, with fatter tails than the normal distribution. The Bayesian joint GED log-logistic models fit the data sets better than the log-normal cases. Findings reveal that patients' health can improve over time. Compared to males, female patients gain more CD4 counts. Survival time of a patient is negatively affected by TB infection, and an increase in the number of opportunistic infections implies a decline in CD4 counts. Patients' age negatively affects the disease marker but has no effect on survival time, while improving weight may improve survival time. Bayesian joint models with GED and AFT distributions are found to be useful in modelling the longitudinal and survival processes. We therefore recommend generalized error distributions for the measurement errors of longitudinal data under Bayesian joint modelling. Further studies may investigate models with various types of shared random effects and more covariates, together with predictions.
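The fatter-than-normal tails of the GED can be verified directly from its density, f(x) = beta / (2*alpha*Gamma(1/beta)) * exp(-(|x|/alpha)^beta), where beta = 2 recovers the normal and beta < 2 gives heavier tails. A sketch with the scale alpha chosen for unit variance:

```python
import math

def ged_pdf(x, beta):
    # Generalized error distribution density, scaled to unit variance:
    # Var = alpha**2 * Gamma(3/beta) / Gamma(1/beta), so solve for alpha with Var = 1.
    alpha = math.sqrt(math.gamma(1.0 / beta) / math.gamma(3.0 / beta))
    return (beta / (2.0 * alpha * math.gamma(1.0 / beta))
            * math.exp(-((abs(x) / alpha) ** beta)))

# beta = 2 is the standard normal; beta = 1 is the (heavier-tailed) Laplace
center_normal = ged_pdf(0.0, 2.0)
tail_laplace, tail_normal = ged_pdf(4.0, 1.0), ged_pdf(4.0, 2.0)
```

At four standard deviations the beta = 1 density is orders of magnitude larger than the normal density, which is exactly the property that lets the GED absorb outlying CD4 measurement errors.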
Abstract: Computations involved in the Bayesian approach to practical model selection problems are usually very difficult. Computational simplifications are sometimes possible, but are not generally applicable. There is a large literature available on a methodology based on information theory called Minimum Description Length (MDL). It is described here how many of these techniques are either directly Bayesian in nature or are very good objective approximations to Bayesian solutions. First, connections between the Bayesian approach and MDL are explored theoretically; thereafter, a few illustrations are provided to describe how MDL can give useful computational simplifications.
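A crude two-part MDL criterion illustrates the kind of computational simplification meant here: code the k model parameters at (k/2) log n nats and the residuals with a Gaussian code, then pick the model with the shorter total description. This simplified form coincides with BIC/2 and is only a sketch of the MDL machinery:

```python
import math

def rss_constant(ys):
    m = sum(ys) / len(ys)
    return sum((y - m) ** 2 for y in ys)

def rss_linear(xs, ys):
    # Ordinary least squares for y = a + b*x, returning the residual sum of squares
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    b = sxy / sxx
    a = my - b * mx
    return sum((y - (a + b * x)) ** 2 for x, y in zip(xs, ys))

def description_length(rss, k, n):
    # Two-part code length (nats): parameters + Gaussian code for residuals
    return 0.5 * k * math.log(n) + 0.5 * n * math.log(rss / n)

xs = list(range(1, 41))
ys = [2.0 * x + 1.0 + ((-1) ** x) * 0.5 for x in xs]   # linear trend + small wiggle
n = len(xs)
dl_const = description_length(rss_constant(ys), 1, n)
dl_linear = description_length(rss_linear(xs, ys), 2, n)
```

The linear model pays one extra parameter's worth of code length but compresses the residuals far more, so MDL selects it, mirroring the Bayesian preference for the model with higher marginal likelihood.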
Abstract: Nonparametric and parametric subset selection procedures are used in the analysis of state homicide rates (SHRs), for the year 2005 and the years 2014-2020, to identify subsets of states that contain the “best” (lowest SHR) and “worst” (highest SHR) rates with a prescribed probability. A new Bayesian model is developed and applied to the SHR data, and the results are contrasted with those obtained with the subset selection procedures. All analyses are applied within the context of a two-way block design.
Abstract: In the investigation of disease dynamics, the effect of covariates on the hazard function is a major topic. Some recent smoothed estimation methods, both frequentist and Bayesian, have been proposed based on the relationship between penalized splines and mixed models theory. These approaches are also motivated by the possibility of using automatic procedures for determining the optimal amount of smoothing. However, the estimation algorithms involve an analytically intractable hazard function, and thus require ad hoc software routines. We propose a more user-friendly alternative, consisting of regularized estimation of piecewise exponential (PE) models by Bayesian P-splines. A further facilitation is that widespread Bayesian software, such as WinBUGS, can be used. The aim is to assess the robustness of this approach with respect to different prior functions and penalties. A large dataset from breast cancer patients, for which results from validated clinical studies are available, is used as a benchmark to evaluate the reliability of the estimates. A second dataset, from a small case series of sarcoma patients, is used to evaluate the performance of the PE model as a tool for exploratory analysis. For the breast cancer data, the estimates are robust with respect to priors and penalties, and consistent with clinical knowledge. For the soft tissue sarcoma data, the estimates of the hazard function are sensitive to the prior for the smoothing parameter, whereas the estimates of the regression coefficients are robust. In conclusion, Gibbs sampling proves to be an efficient computational strategy. The issue of sensitivity with respect to the priors concerns only the estimates of the hazard function, and seems more likely to occur when small case series are investigated, calling for tailored solutions.
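The building block of the approach, the piecewise exponential model, has a simple unpenalized MLE: within each interval, the constant hazard estimate is the event count divided by the person-time at risk (the P-spline penalty then smooths these estimates across intervals). A sketch on a toy cohort with illustrative data:

```python
def pe_hazard_mle(times, events, cuts):
    # times: follow-up times; events: 1 = event, 0 = censored
    # cuts: interval boundaries [0, t1, ..., tmax]
    k = len(cuts) - 1
    deaths = [0.0] * k
    exposure = [0.0] * k
    for t, e in zip(times, events):
        for j in range(k):
            lo, hi = cuts[j], cuts[j + 1]
            if t <= lo:
                break
            exposure[j] += min(t, hi) - lo     # person-time spent in interval j
            if e and lo < t <= hi:
                deaths[j] += 1                 # event occurred in interval j
    return [d / x if x > 0 else 0.0 for d, x in zip(deaths, exposure)]

# Toy follow-up data (illustrative, not from either study dataset)
times = [2.0, 5.0, 7.0, 1.0, 9.0, 3.0, 6.0, 8.0, 4.0, 10.0]
events = [1, 1, 0, 1, 1, 0, 1, 1, 1, 0]
hazards = pe_hazard_mle(times, events, [0.0, 5.0, 10.0])
```

With many narrow intervals these raw occurrence/exposure ratios become noisy, which is precisely what motivates the Bayesian P-spline regularization described above.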
Funding: supported by the National High-Technology Research and Development Program of China (Grant Nos. 2006AA01Z187 and 2007AA040605)
Abstract: In traditional Bayesian software reliability models, it was assumed that all probabilities are precise. In practical applications, the parameters of the probability distributions are often uncertain due to their strong dependence on subjective expert judgments based on sparse statistical data. In this paper, a quasi-Bayesian software reliability model is presented that uses interval-valued probabilities to clearly quantify experts' prior beliefs about possible intervals of the parameters of the probability distributions. The model integrates experts' judgments with statistical data to obtain more convincing assessments of software reliability from small samples. For several actual data sets, the presented model yields better predictions than the Jelinski-Moranda (JM) model using maximum likelihood (ML) estimation.
Funding: supported by a grant from the Education Department of Zhejiang Province (No. Y200803235)
Abstract: Objective: The progression of human cancer is characterized by the accumulation of genetic instability. An increasing number of experimental genetic molecular techniques have been used to detect chromosome aberrations. Previous studies on chromosome abnormalities often focused on identifying the frequent loci of chromosome alterations, but rarely addressed the interrelationship of chromosomal abnormalities. In the last few years, several mathematical models have been employed to construct models of carcinogenesis, in an attempt to identify the time order and cause-and-effect relationships of chromosome aberrations. The principles and applications of these models are reviewed and compared in this paper. Mathematical modeling of carcinogenesis can contribute to our understanding of the molecular genetics of tumor development and the identification of cancer-related genes, thus leading to improved clinical practice in cancer care.
Funding: The work is supported by the Humanities and Social Sciences Foundation of the Ministry of Education, China (Grant No. 17YJC910003).
Abstract: The Wiener process as a degradation model plays an important role in degradation analysis. In this paper, we propose an objective Bayesian analysis for an accelerated degradation Wiener model which is subject to measurement errors. The Jeffreys prior and reference priors under different group orderings are first derived, and the propriety of the posteriors is then validated. It is shown that two of the reference priors yield proper posteriors while the others do not. A simulation study is carried out to investigate the frequentist performance of the approach compared to the maximum likelihood method. Finally, the approach is applied to analyse a real data set.
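The Wiener degradation process underlying such a model can be sketched by simulation: a drifted Brownian path is run until it crosses a failure threshold, and the first-passage times follow an inverse Gaussian law with mean threshold/drift. All parameters below are illustrative:

```python
import random

def first_passage_time(drift, sigma, threshold, dt, rng):
    # Euler simulation of X(t) = drift*t + sigma*B(t) until it crosses threshold
    x, t = 0.0, 0.0
    while x < threshold:
        x += drift * dt + sigma * (dt ** 0.5) * rng.gauss(0.0, 1.0)
        t += dt
    return t

rng = random.Random(7)
drift, sigma, threshold = 0.5, 0.2, 10.0
fpts = [first_passage_time(drift, sigma, threshold, 0.01, rng) for _ in range(400)]
mean_fpt = sum(fpts) / len(fpts)
# Inverse Gaussian theory: E[T] = threshold / drift = 20
```

In the accelerated-degradation setting the drift is tied to the stress level, and measurement error adds an extra noise layer on top of the observed path, which is what the objective priors in the paper are derived for.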