It is well known that the power of Cochran’s Q test to assess the presence of heterogeneity among treatment effects in a clinical meta-analysis is low due to the small number of studies combined. Two modified tests (PL1, PL2) were proposed by substituting the profile maximum likelihood estimator (PMLE) into the variance formula of the logarithm of the risk ratio in the standard chi-square test statistic for testing a common risk ratio across all k studies (i = 1, ..., k). A simple naive test (SIM) was also considered as a comparative candidate. The performance of the tests, in terms of type I error rate under the null hypothesis and power under the random-effects alternative, was evaluated via a simulation plan with various combinations of significance levels, numbers of studies, sample sizes in the treatment and control arms, and true risk ratios as the effect sizes of interest. The results indicated that for moderate to large numbers of studies (k ≥ 16) combined with moderate to large sample sizes (≥ 50), three tests (PL1, PL2, and Q) controlled the type I error rate in almost all situations. The two proposed tests (PL1, PL2) performed best, with the highest power, when k ≥ 16 and sample sizes were moderate (= 50, 100), which supports recommending them in such practical situations. Meanwhile, the standard Q test performed best when k ≥ 16 and sample sizes were large (≥ 500). Moreover, no test was reasonable for small sample sizes (≤ 10), regardless of the number of studies k. The simple naive test (SIM) is recommended, with high performance, when k = 4 in combination with large sample sizes (≥ 500).
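For readers wanting to reproduce the baseline being compared against, Cochran’s Q statistic is simple to compute from per-study effect estimates and variances. The sketch below is a generic illustration (the function name and the toy study values are ours, not from the paper): it pools log risk ratios by inverse-variance weighting and refers Q to a chi-squared distribution with k − 1 degrees of freedom.

```python
import numpy as np
from scipy import stats

def cochran_q(effects, variances):
    """Cochran's Q statistic for heterogeneity across k study effects.

    effects: per-study log risk ratios; variances: their estimated variances.
    Returns Q and the chi-squared p-value with k - 1 degrees of freedom.
    """
    effects = np.asarray(effects, dtype=float)
    w = 1.0 / np.asarray(variances, dtype=float)   # inverse-variance weights
    pooled = np.sum(w * effects) / np.sum(w)       # fixed-effect pooled estimate
    q = np.sum(w * (effects - pooled) ** 2)
    pval = stats.chi2.sf(q, df=len(effects) - 1)
    return q, pval

# toy example: 4 studies with made-up log risk ratios and variances
q, p = cochran_q([0.1, 0.3, -0.2, 0.25], [0.04, 0.05, 0.06, 0.05])
```

With only four studies the test is weakly powered, which is exactly the motivation stated in the abstract.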
Count data are almost always over-dispersed, with the variance exceeding the mean. Several count data models have been proposed, but the problem of over-dispersion remains unresolved, more so in the context of change point analysis. This study develops a likelihood-based algorithm that detects and estimates multiple change points in a set of count data assumed to follow the Negative Binomial distribution. Discrete change point procedures discussed in the literature work well for equi-dispersed data. The new algorithm produces reliable estimates of change points for both equi-dispersed and over-dispersed count data, hence its advantage over other count data change point techniques. The Negative Binomial Multiple Change Point Algorithm was tested on simulated data for different sample sizes and varying positions of change. Changes in the distribution parameters were detected and estimated by conducting a likelihood ratio test on several partitions of the data obtained through step-wise recursive binary segmentation. Critical values for the likelihood ratio test were developed and used to check the significance of the maximum likelihood estimates of the change points. The change point algorithm was found to work best for large datasets, though it also works well for small and medium-sized datasets, with little to no error in the location of change points. The algorithm correctly detects changes when they are present and does not flag changes when none exist. Power analysis of the likelihood ratio test for change was performed through Monte Carlo simulation in the single change point setting. Sensitivity analysis of the test power showed that the likelihood ratio test is most powerful when the simulated change points are located midway through the sample data, as opposed to in the periphery. Further, the test is more powerful when the change is located three-quarters of the way through the sample data than when the change point is closer (a quarter of the way) to the first observation.
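A minimal sketch of the core likelihood-ratio scan, under simplifying assumptions not made by the paper: a single change point and a known dispersion parameter r. The paper’s algorithm also estimates dispersion, recurses through binary segmentation for multiple changes, and uses purpose-built critical values.

```python
import numpy as np
from scipy import stats

def nb_loglik(x, r):
    """Negative Binomial log-likelihood at the MLE of p for fixed dispersion r.

    With mean m, the MLE of the success probability is p = r / (r + m).
    """
    m = np.mean(x)
    p = r / (r + m)
    return np.sum(stats.nbinom.logpmf(x, r, p))

def single_change_lrt(x, r=2.0):
    """Scan all interior split points; return the best split and 2*log LR."""
    n = len(x)
    full = nb_loglik(x, r)                     # no-change (null) fit
    best_t, best_stat = None, -np.inf
    for t in range(2, n - 1):                  # keep >= 2 observations per segment
        stat = 2 * (nb_loglik(x[:t], r) + nb_loglik(x[t:], r) - full)
        if stat > best_stat:
            best_stat, best_t = stat, t
    return best_t, best_stat

# toy example: the mean jumps from 2 to 8 at observation 100
rng = np.random.default_rng(0)
x = np.concatenate([rng.negative_binomial(2, 0.5, 100),
                    rng.negative_binomial(2, 0.2, 100)])
t_hat, stat = single_change_lrt(x, r=2.0)
```

Significance would then be assessed against critical values such as those the paper develops; the chi-squared approximation is not reliable here because the change location is maximized over.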
This paper considers tests for regression coefficients in high-dimensional partially linear models. The authors first use the B-spline method to estimate the unknown smooth function so that it can be expressed linearly. Then the authors propose an empirical likelihood method to test the regression coefficients and derive the asymptotic chi-squared distribution, with two degrees of freedom, of the proposed test statistics under the null hypothesis. In addition, the method is extended to tests with nuisance parameters. Simulations show that the proposed method performs well in terms of control of the type I error rate and power. The method is also employed to analyze a Skin Cutaneous Melanoma (SKCM) dataset.
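The first step above, representing the unknown smooth function linearly in a B-spline basis, can be sketched as follows. The basis size, the cubic degree, and the [0, 1] design interval are choices of this illustration, not of the paper.

```python
import numpy as np
from scipy.interpolate import BSpline

def bspline_design(x, n_basis=8, degree=3):
    """Design matrix B with B[i, j] = B_j(x_i), so that a smooth g(x)
    is approximated linearly as B @ coef."""
    interior = np.linspace(0.0, 1.0, n_basis - degree + 1)
    t = np.r_[[0.0] * degree, interior, [1.0] * degree]   # clamped knot vector
    eye = np.eye(n_basis)
    return np.column_stack(
        [BSpline(t, eye[j], degree)(x) for j in range(n_basis)]
    )

# approximate g(x) = sin(2*pi*x) by least squares in the basis
x = np.linspace(0.0, 1.0, 200)
y = np.sin(2 * np.pi * x)
B = bspline_design(x)
coef, *_ = np.linalg.lstsq(B, y, rcond=None)
```

Once g is written as B @ coef, the partially linear model becomes an ordinary linear model in the parametric coefficients and the spline coefficients, which is what makes the empirical likelihood step tractable.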
Testing the equality of covariance matrices is important in many areas of statistical analysis, such as microarray analysis and quality control. Conventional tests for finite-dimensional covariances do not apply to high-dimensional data in general, and tests for high-dimensional covariances in the literature usually depend on some special structure of the matrix and on whether the dimension diverges. In this paper, we propose a jackknife empirical likelihood method to test the equality of covariance matrices. The asymptotic distribution of the new test is the same whether the dimension is fixed or divergent. Simulation studies show that the new test has a very stable size with respect to the dimension and that it is more powerful than the test proposed by Schott (2007) and studied by Srivastava and Yanagihara (2010). Furthermore, we illustrate the method using a breast cancer dataset.
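The jackknife empirical likelihood construction itself is involved; as a simple baseline for the same null hypothesis, a permutation test on the Frobenius distance between the two sample covariance matrices can be sketched. This is emphatically not the paper’s method, just a naive reference point.

```python
import numpy as np

def cov_perm_test(X, Y, n_perm=200, seed=0):
    """Permutation test of H0: Cov(X) = Cov(Y), using the Frobenius norm
    of the difference of sample covariance matrices as the statistic."""
    rng = np.random.default_rng(seed)
    stat = np.linalg.norm(np.cov(X.T) - np.cov(Y.T))
    Z = np.vstack([X, Y])
    n = len(X)
    count = 0
    for _ in range(n_perm):
        idx = rng.permutation(len(Z))          # reshuffle group labels
        s = np.linalg.norm(np.cov(Z[idx[:n]].T) - np.cov(Z[idx[n:]].T))
        count += s >= stat
    return (count + 1) / (n_perm + 1)          # permutation p-value

# toy example: covariance I versus 4*I in 3 dimensions
rng = np.random.default_rng(1)
X = rng.normal(size=(100, 3))
Y = 2.0 * rng.normal(size=(100, 3))
p_unequal = cov_perm_test(X, Y, n_perm=200, seed=2)
```

Such a baseline scales poorly and has no distributional theory, which is precisely the gap the jackknife empirical likelihood test is designed to fill.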
In this paper, asymptotic expansions of the distribution of the likelihood ratio statistic for testing sphericity in a growth curve model are derived in the null and nonnull cases when the alternatives are close to the null hypothesis. These expansions are given as series of beta distributions.
This paper investigates the modified likelihood ratio test (LRT) for homogeneity in normal mixtures of two samples with unknown mixing proportions. It is proved that the limit distribution of the modified likelihood ratio test is χ²(1).
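For intuition, the ordinary (unmodified) LRT for homogeneity can be sketched with a basic EM fit of a two-component normal mixture; the modified LRT studied in the paper adds a penalty term to restore regularity, which this toy single-sample version omits. All tuning choices below (initialization, iteration count, variance floor) are ours.

```python
import numpy as np
from scipy import stats

def mixture_loglik(x, n_iter=200):
    """Log-likelihood of a two-component normal mixture fitted by plain EM."""
    mu = np.quantile(x, [0.25, 0.75])          # crude initialization
    sd = np.std(x) * np.ones(2)
    w = np.array([0.5, 0.5])
    for _ in range(n_iter):
        dens = w * stats.norm.pdf(x[:, None], mu, sd)
        resp = dens / dens.sum(axis=1, keepdims=True)          # E-step
        nk = resp.sum(axis=0)
        w = nk / len(x)                                        # M-step
        mu = (resp * x[:, None]).sum(axis=0) / nk
        sd = np.sqrt((resp * (x[:, None] - mu) ** 2).sum(axis=0) / nk)
        sd = np.maximum(sd, 1e-3)              # floor to avoid degeneracy
    dens = w * stats.norm.pdf(x[:, None], mu, sd)
    return np.sum(np.log(dens.sum(axis=1)))

def homogeneity_lrt(x):
    """2 * (mixture loglik - single-normal loglik); large values reject H0."""
    l0 = np.sum(stats.norm.logpdf(x, np.mean(x), np.std(x)))
    return 2 * (mixture_loglik(x) - l0)

rng = np.random.default_rng(0)
x_mix = np.concatenate([rng.normal(0, 1, 150), rng.normal(5, 1, 150)])
x_one = rng.normal(0, 1, 300)
stat_mix = homogeneity_lrt(x_mix)
stat_one = homogeneity_lrt(x_one)
```

The unpenalized statistic does not follow a standard chi-squared law under the null; the χ²(1) limit in the abstract is a property of the modified test.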
In this paper, we extend the generalized likelihood ratio test to varying-coefficient models with censored data. We investigate the asymptotic behavior of the proposed test and demonstrate that its limiting null distribution follows a chi-squared distribution, with the scale constant and the number of degrees of freedom independent of the nuisance parameters or functions; this is known as the Wilks phenomenon. Both simulated and real data examples are given to illustrate the performance of the testing approach.
This paper studies the direction detection problem for range-spread targets in a background of subspace interference plus Gaussian clutter. The clutter is Gaussian with zero mean and an unknown covariance matrix that has a persymmetric structure; the target and the interference are described by a persymmetric target subspace and a persymmetric interference subspace, respectively. For this direction detection problem, exploiting the persymmetry, one-step and two-step direction detectors for range-spread targets are designed following the one-step and two-step design procedures of the generalized likelihood ratio test (GLRT) criterion. Theoretical derivation proves that both detectors maintain a constant false alarm rate with respect to the unknown clutter covariance matrix. Compared with existing detectors in the same background, particularly in scenarios with limited training data, the two proposed detectors exhibit superior detection performance.
Background: Bivariate count data are commonly encountered in medicine, biology, engineering, epidemiology, and many other applications. The Poisson distribution has been the model of choice for analyzing such data. In most cases mutual independence among the variables is assumed, but this fails to take into account the correlation between the outcomes of interest. A special bivariate form of the multivariate Lagrange family of distributions, named the Generalized Bivariate Poisson Distribution, is considered in this paper. Objectives: We estimate the model parameters using the method of maximum likelihood and show that the model fits the count variables representing components of metabolic syndrome in spousal pairs. We use the local likelihood score to test the significance of the correlation between the counts. We also construct confidence intervals on the ratio of the two correlated Poisson means. Methods: Based on a random sample of pairs of count data, we show that the score test of independence is locally most powerful. We also provide a formula for sample size estimation for a given significance level and power. The confidence intervals on the ratio of correlated Poisson means are constructed using the delta method, Fieller’s theorem, and the nonparametric bootstrap. We illustrate the methodologies on metabolic syndrome data collected from 4000 spousal pairs. Results: The bivariate Poisson model fitted the metabolic syndrome data quite satisfactorily. Moreover, the three methods of confidence interval estimation were almost identical, giving nearly the same interval widths.
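Of the three interval methods listed, the delta method is the simplest to illustrate. The sketch below handles the simplified case of two independent Poisson counts (the paper’s construction accounts for the correlation between the pair); the counts 120 and 90 are made-up values.

```python
import numpy as np
from scipy import stats

def poisson_ratio_ci(x1, x2, alpha=0.05):
    """Delta-method confidence interval for the ratio of two Poisson means.

    Uses the approximate normality of log(x1/x2) with variance 1/x1 + 1/x2
    (independent-counts simplification of the paper's correlated setting).
    """
    z = stats.norm.isf(alpha / 2)
    log_ratio = np.log(x1 / x2)
    se = np.sqrt(1.0 / x1 + 1.0 / x2)
    return np.exp(log_ratio - z * se), np.exp(log_ratio + z * se)

lo, hi = poisson_ratio_ci(120, 90)   # toy counts; true ratio estimate 4/3
```

Working on the log scale and exponentiating keeps the interval inside the positive half-line, which a direct normal approximation to the ratio would not guarantee.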
Hypothesis testing analysis and unknown parameter estimation for both intermediate frequency (IF) and baseband GPS signal detection are given using the generalized likelihood ratio test (GLRT) approach, applying a model of the GPS signal in white Gaussian noise. It is proved that the test statistic follows a central or noncentral F distribution, and it is pointed out that the test statistic is nearly identical to a central or noncentral chi-squared variate, because the number of processing samples in the GPS acquisition problem is large enough to be considered infinite. It is also proved that the probability of false alarm, the probability of detection, and the threshold are affected considerably when the hypothesis test refers to the full pseudorandom noise (PRN) code phase and Doppler frequency search space, rather than to each individual cell. The performance of the test statistic combined with noncoherent integration is also given.
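The practical consequence of the chi-squared approximation above is that thresholds and detection probabilities can be read off the central and noncentral chi-squared distributions. A hedged sketch for a single search cell with two degrees of freedom, where the noncentrality parameter stands in for the post-correlation SNR (the specific numbers are illustrative, not from the paper):

```python
from scipy import stats

def threshold_and_pd(pfa, snr, df=2):
    """Noncoherent single-cell detection under the chi-squared approximation.

    Threshold comes from the central chi-squared null distribution so that
    P(false alarm) = pfa; detection probability comes from the noncentral
    chi-squared with noncentrality equal to the post-correlation SNR.
    """
    thr = stats.chi2.isf(pfa, df)          # P(chi2_df > thr) = pfa
    pd = stats.ncx2.sf(thr, df, snr)       # P(ncx2_df(snr) > thr)
    return thr, pd

thr, pd = threshold_and_pd(pfa=1e-3, snr=20.0)
thr2, pd2 = threshold_and_pd(pfa=1e-3, snr=30.0)
```

Searching the full code-phase/Doppler grid rather than one cell inflates the effective false alarm rate, which is why the abstract stresses that the full search space must be accounted for when setting the threshold.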
Varying-coefficient models are a useful extension of the classical linear model. They are widely applied in economics, biomedicine, epidemiology, and other fields, and they have been studied extensively over the last three decades. In this paper, many models related to varying-coefficient models are gathered together, and the estimation procedures and hypothesis-testing theory for the varying-coefficient model are summarized. In the author's opinion, some aspects remain to be studied, and these are proposed.
In this paper, we give an approach for detecting one or more outliers in a randomized linear model. The likelihood ratio test statistic and its distributions under the null and alternative hypotheses are given. Furthermore, the robustness of the test statistic in a certain sense is proved. Finally, optimality properties of the test are derived.
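In the familiar mean-shift outlier model for a linear model with i.i.d. normal errors, the single-outlier LRT is a monotone function of the largest externally studentized residual, so it can be sketched as follows. This is the textbook fixed-design version, not necessarily the randomized-model statistic derived in the paper; the data in the example are simulated.

```python
import numpy as np
from scipy import stats

def max_studentized_residual(X, y):
    """Locate the most outlying observation in y = X beta + error.

    Returns the index, its externally studentized residual, and a
    Bonferroni-adjusted p-value against the t distribution.
    """
    n, p = X.shape
    H = X @ np.linalg.solve(X.T @ X, X.T)       # hat matrix
    resid = y - H @ y
    s2 = resid @ resid / (n - p)
    h = np.diag(H)
    r = resid / np.sqrt(s2 * (1 - h))           # internally studentized
    t = r * np.sqrt((n - p - 1) / (n - p - r ** 2))  # externally studentized
    i = int(np.argmax(np.abs(t)))
    pval = min(1.0, n * 2 * stats.t.sf(abs(t[i]), n - p - 1))
    return i, t[i], pval

# simulated simple regression with one planted outlier at index 7
rng = np.random.default_rng(0)
n = 30
x = np.linspace(0, 1, n)
X = np.column_stack([np.ones(n), x])
y = 2 + 3 * x + rng.normal(0, 0.1, n)
y[7] += 5.0
idx_out, t_out, pval = max_studentized_residual(X, y)
```

Using the externally studentized residual (leave-one-out error variance) keeps the suspect point from inflating its own denominator, which is what makes the LRT reduction work.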
Funding (high-dimensional partially linear models paper): supported by the University of Chinese Academy of Sciences under Grant No. Y95401TXX2, the Beijing Natural Science Foundation under Grant No. Z190004, and the Key Program of Joint Funds of the National Natural Science Foundation of China under Grant No. U19B2040.
Funding (covariance equality testing paper): supported by the Simons Foundation; the National Natural Science Foundation of China (Grant Nos. 11771390 and 11371318); the Zhejiang Provincial Natural Science Foundation of China (Grant No. LR16A010001); the University of Sydney and Zhejiang University Partnership Collaboration Awards; and the Fundamental Research Funds for the Central Universities.
Funding (normal mixtures homogeneity paper): supported by the National Natural Science Foundation of China (10661003), the SRF for ROCS, SEM ([2004]527), and the NSF of Guangxi (0728092).
Funding (varying-coefficient models survey): supported by the National Natural Science Foundation of China (10501053). Acknowledgement: I would like to thank the Henan Society of Applied Statistics for giving me the chance to state my opinion about the varying-coefficient model.