Abstract: Today, Linear Mixed Models (LMMs) are mostly fitted by assuming that the random effects and the errors have Gaussian distributions, and hence by Maximum Likelihood (ML) or REML estimation. However, for many data sets this double assumption is unlikely to hold, particularly for the random effects, a crucial component of such models whose magnitude must be assessed. Alternative fitting methods that do not rely on this assumption (such as ANOVA methods and Rao's MINQUE) quite often apply only to the very constrained class of variance components models. In this paper, a new computationally feasible estimation methodology is designed, first for the widely used class of 2-level (or longitudinal) LMMs, with the only assumption (beyond the usual basic ones) that the residual errors are uncorrelated and homoscedastic; no distributional assumption is imposed on the random effects. A major asset of this new approach is that it yields nonnegative variance estimates and covariance matrix estimates that are symmetric and at least positive semi-definite. Furthermore, it is shown that when the LMM is indeed Gaussian, the new methodology differs from ML only through a slight variation in the denominator of the residual variance estimate. The new methodology actually generalizes to LMMs a well-known nonparametric fitting procedure for standard Linear Models. Finally, the methodology is also extended to ANOVA LMMs, generalizing an old method by Henderson for ML estimation in such models under normality.
Funding: the National Natural Science Foundation of China under Grant Nos. 10171094, 10571001, and 30572285; the Foundation of Nanjing Normal University under Grant No. 2005101XGQ2B84; the Natural Science Foundation of the Jiangsu Higher Education Institutions of China under Grant No. 07KJD110093; the Foundation of Anhui University under Grant No. 02203105.
Abstract: In generalized linear models with fixed design, under the assumption λ_n → ∞ and other regularity conditions, the asymptotic normality of the maximum quasi-likelihood estimator β̂_n, which is the root of the quasi-likelihood equation with natural link function ∑_{i=1}^{n} X_i(y_i − μ(X_i′β)) = 0, is obtained, where λ_n denotes the minimum eigenvalue of ∑_{i=1}^{n} X_i X_i′, the X_i are bounded p × q regressors, and the y_i are q × 1 responses.
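A quasi-likelihood equation of this form is typically solved by Newton (Fisher scoring) iteration. Below is a minimal illustrative sketch for the scalar-response case (q = 1) with the logistic mean function μ(t) = 1/(1 + e^{−t}); the data, function names, and tolerances are assumptions for illustration, not taken from the paper.

```python
import numpy as np

def quasi_likelihood_fit(X, y, mu, dmu, n_iter=50, tol=1e-10):
    """Solve sum_i X_i (y_i - mu(X_i' beta)) = 0 by Newton iteration (q = 1 case)."""
    beta = np.zeros(X.shape[1])
    for _ in range(n_iter):
        eta = X @ beta
        score = X.T @ (y - mu(eta))        # left-hand side of the estimating equation
        W = dmu(eta)                       # Newton weights mu'(eta_i)
        J = X.T @ (X * W[:, None])         # negative Jacobian of the score
        step = np.linalg.solve(J, score)
        beta = beta + step
        if np.max(np.abs(step)) < tol:
            break
    return beta

# Illustrative data with the canonical (natural) logistic link.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))
beta_true = np.array([1.0, -1.0, 0.5])
y = rng.binomial(1, 1.0 / (1.0 + np.exp(-(X @ beta_true)))).astype(float)

sigmoid = lambda t: 1.0 / (1.0 + np.exp(-t))
beta_hat = quasi_likelihood_fit(X, y, sigmoid, lambda t: sigmoid(t) * (1 - sigmoid(t)))
final_score = X.T @ (y - sigmoid(X @ beta_hat))   # should be near zero at the root
```

At convergence the score vector vanishes, so β̂_n is a root of the estimating equation; the asymptotic normality result then describes the fluctuation of β̂_n around the true β as λ_n → ∞.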
Funding: Supported by the National Natural Science Foundation of China (Nos. 11271088, 11361011, 11201088); the Guangxi "Bagui Scholar" Special Project Foundation; the Natural Science Foundation of Guangxi (Nos. 2013GXNSFAA019004, 2013GXNSFAA019007, 2013GXNSFBA019001).
Abstract: Suppose that we have a partially linear model Y_i = x_iβ + g(t_i) + ε_i with independent zero-mean errors ε_i, where {x_i, t_i, i = 1, …, n} are non-random and observed completely and {Y_i, i = 1, …, n} are missing at random (MAR). Two types of estimators of β and of g(t) for fixed t are investigated: estimators based on semiparametric regression and on inverse probability weighted imputation. Asymptotic normality of the estimators is established and used to construct normal-approximation-based confidence intervals for β and g(t). Results of a simulation study on the finite-sample performance of the proposed estimators and confidence intervals are reported.
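The combination of semiparametric regression and inverse probability weighting can be sketched as follows. This is a hypothetical illustration only: it uses a Robinson-type partialling-out estimator with a Nadaraya–Watson smoother and assumes the selection probabilities π_i are known; the paper's exact construction may differ.

```python
import numpy as np

def nw_smooth(t0, t, v, w, h=0.1):
    """Weighted Nadaraya-Watson smoother of v against t, evaluated at points t0."""
    K = np.exp(-0.5 * ((np.atleast_1d(t0)[:, None] - t[None, :]) / h) ** 2)
    Kw = K * w[None, :]
    return (Kw @ v) / Kw.sum(axis=1)

def plm_ipw_fit(x, t, y, delta, pi, h=0.1):
    """Estimate (beta, g) in Y = x*beta + g(t) + eps with Y missing at random.

    delta : 1 if Y observed, 0 if missing; pi : known P(delta = 1 | x, t).
    Complete cases are reweighted by 1/pi (inverse probability weighting)."""
    w = delta / pi                          # IPW weights; zero for missing responses
    y0 = np.where(delta == 1, y, 0.0)       # placeholder where Y is missing (weight 0)
    xt = x - nw_smooth(t, t, x, w, h)       # partial the nonparametric part out of x
    yt = y0 - nw_smooth(t, t, y0, w, h)     # ... and out of Y
    beta_hat = np.sum(w * xt * yt) / np.sum(w * xt ** 2)
    ghat = lambda t0: nw_smooth(t0, t, y0 - x * beta_hat, w, h)
    return beta_hat, ghat

# Illustrative data: g(t) = sin(2*pi*t), beta = 2, about 30% of responses missing.
rng = np.random.default_rng(1)
n = 400
t = rng.uniform(size=n)
x = rng.normal(size=n)
y = 2.0 * x + np.sin(2 * np.pi * t) + 0.2 * rng.normal(size=n)
pi = np.full(n, 0.7)                        # known selection probability (MAR)
delta = rng.binomial(1, pi)
beta_hat, ghat = plm_ipw_fit(x, t, y, delta, pi)
```

The weighting by 1/π_i restores unbiasedness of the complete-case moments under MAR, which is the mechanism behind the inverse probability weighted imputation estimators studied in the abstract.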
Funding: Supported by the National Natural Science Foundation of China (No. 11701571).
Abstract: This paper concerns computational problems of the concave penalized linear regression model. We propose a fixed-point iterative algorithm to solve the computational problem, based on the fact that the penalized estimator satisfies a fixed-point equation. The convergence property of the proposed algorithm is established, and numerical studies are conducted to evaluate the finite-sample performance of the proposed algorithm.
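A fixed-point iteration of the kind described can be sketched generically: the penalized estimator is a fixed point of a map that composes a gradient step on the least-squares loss with the thresholding (proximal) operator of the concave penalty. The sketch below uses the MCP penalty; the operator, step size, and tuning constants are illustrative assumptions, not the paper's algorithm.

```python
import numpy as np

def mcp_threshold(z, lam, gamma=4.0):
    """Elementwise thresholding operator of the concave MCP penalty:
    0 for |z| <= lam, a rescaled soft threshold up to gamma*lam, identity beyond."""
    a = np.abs(z)
    inner = np.sign(z) * np.maximum(a - lam, 0.0) / (1.0 - 1.0 / gamma)
    return np.where(a > gamma * lam, z, inner)

def fixed_point_fit(X, y, lam, gamma=4.0, n_iter=1000, tol=1e-10):
    """Iterate beta <- T(beta), where T = threshold(gradient step on 0.5*||y - X beta||^2/n)."""
    n, p = X.shape
    L = np.linalg.eigvalsh(X.T @ X / n).max()     # step-size control (Lipschitz constant)
    beta = np.zeros(p)
    for _ in range(n_iter):
        z = beta + X.T @ (y - X @ beta) / (n * L)  # gradient step
        beta_new = mcp_threshold(z, lam / L, gamma)
        if np.max(np.abs(beta_new - beta)) < tol:  # converged to a fixed point
            beta = beta_new
            break
        beta = beta_new
    return beta

# Illustrative sparse regression problem.
rng = np.random.default_rng(2)
n, p = 100, 8
X = rng.normal(size=(n, p))
beta_true = np.array([3.0, 0.0, -2.0, 0.0, 0.0, 1.5, 0.0, 0.0])
y = X @ beta_true + 0.1 * rng.normal(size=n)
beta_hat = fixed_point_fit(X, y, lam=0.2)
```

On convergence, beta_hat satisfies beta_hat = T(beta_hat): zero coefficients are killed exactly by the threshold, while large coefficients sit in the identity zone of MCP and are therefore essentially unbiased.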