In this paper, we explore some weakly consistent properties of quasi-maximum likelihood estimates (QMLE) obtained from the quasi-likelihood equation $\sum_{i=1}^{n} X_i\,(y_i - \mu(X_i'\beta)) = 0$ for the univariate generalized linear model $E(y \mid X) = \mu(X'\beta)$. Given uncorrelated residuals $\{e_i = y_i - \mu(X_i'\beta_0),\ 1 \le i \le n\}$ and other conditions, we prove that $\hat\beta_n - \beta_0 = O_p(\underline{\lambda}_n^{-1/2})$ holds, where $\hat\beta_n$ is a root of the above equation, $\beta_0$ is the true value of the parameter $\beta$, and $\underline{\lambda}_n$ denotes the smallest eigenvalue of the matrix $S_n = \sum_{i=1}^{n} X_i X_i'$. We also show that this convergence rate is sharp, provided the residual sequence is independent and non-asymptotically degenerate, together with other conditions. Moreover, paralleling the elegant result of Drygas (1976) for classical linear regression models, we point out that the necessary condition guaranteeing the weak consistency of the QMLE is $S_n^{-1} \to 0$ as the sample size $n \to \infty$.
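To make the setting above concrete, here is a minimal, purely illustrative Python sketch (not from the paper): it simulates a Poisson-type GLM with $\mu(t) = e^t$, solves the quasi-likelihood equation $\sum_{i=1}^{n} X_i(y_i - \mu(X_i'\beta)) = 0$ by Newton iteration, and prints $\|\hat\beta_n - \beta_0\|$ alongside $\underline{\lambda}_n^{-1/2}$ so the claimed rate can be inspected; the function names and simulation design are my own assumptions.

```python
# Illustrative sketch only; assumes a Poisson-type GLM with mu(t) = exp(t).
import numpy as np

rng = np.random.default_rng(0)

def mu(t):
    # Inverse link (canonical Poisson link assumed for this illustration).
    return np.exp(t)

def qmle(X, y, n_iter=50):
    """Solve sum_i X_i (y_i - mu(X_i' beta)) = 0 by Newton's method."""
    beta = np.zeros(X.shape[1])
    for _ in range(n_iter):
        eta = X @ beta
        score = X.T @ (y - mu(eta))              # quasi-score vector
        info = X.T @ (mu(eta)[:, None] * X)      # derivative of the quasi-score
        beta += np.linalg.solve(info, score)
    return beta

beta0 = np.array([0.5, -0.3])
for n in (200, 2000, 20000):
    X = rng.normal(size=(n, 2))
    y = rng.poisson(mu(X @ beta0))               # residuals e_i = y_i - mu(X_i' beta0)
    beta_hat = qmle(X, y)
    lam_min = np.linalg.eigvalsh(X.T @ X).min()  # smallest eigenvalue of S_n
    print(n, np.linalg.norm(beta_hat - beta0), lam_min ** -0.5)
```

With i.i.d. standard normal covariates, $\underline{\lambda}_n$ grows roughly linearly in $n$, so the two printed quantities should shrink at a comparable $n^{-1/2}$ rate, in line with the rate stated in the abstract.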
We study the law of the iterated logarithm (LIL) for the maximum likelihood estimation of the parameters (formulated as a convex optimization problem) in generalized linear models with independent or weakly dependent (ρ-mixing) responses under mild conditions. The LIL is useful for deriving asymptotic bounds on the discrepancy between the empirical process of the log-likelihood function and the true log-likelihood. As an application of the LIL, the strong consistency of some penalized likelihood-based model selection criteria can be established. Under some regularity conditions, the model selection criterion selects the simplest correct model almost surely when the penalty term increases with the model dimension and has an order higher than O(log log n) but lower than O(n). Simulation studies are implemented to verify the selection consistency of the Bayesian information criterion.
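The following is a minimal, hypothetical Python sketch (not from the paper) of the penalty-order condition: for nested Gaussian linear sub-models, it compares selection by a penalized criterion $-2\,\mathrm{loglik} + c_n \cdot \dim$ with $c_n$ of order $\log\log n$, $\log n$ (the BIC penalty, which lies strictly between the $O(\log\log n)$ and $O(n)$ orders above), and $n$; the Gaussian design and all function names are my own assumptions.

```python
# Illustrative sketch only; assumes nested Gaussian linear sub-models.
import numpy as np

rng = np.random.default_rng(1)

def gaussian_loglik(X, y, beta_hat):
    """Profile Gaussian log-likelihood at the least-squares fit."""
    n = len(y)
    rss = np.sum((y - X @ beta_hat) ** 2)
    return -0.5 * n * (np.log(2 * np.pi * rss / n) + 1)

def select_dim(X, y, penalty):
    """Pick the nested sub-model minimizing -2*loglik + penalty * dimension."""
    scores = []
    for k in range(1, X.shape[1] + 1):
        Xk = X[:, :k]
        beta_hat = np.linalg.lstsq(Xk, y, rcond=None)[0]
        scores.append(-2 * gaussian_loglik(Xk, y, beta_hat) + penalty * k)
    return int(np.argmin(scores)) + 1

n, p = 2000, 6
beta0 = np.array([1.0, -0.8, 0.5, 0.0, 0.0, 0.0])   # true dimension is 3
X = rng.normal(size=(n, p))
y = X @ beta0 + rng.normal(size=n)

for name, pen in (("log log n", np.log(np.log(n))),
                  ("log n (BIC)", np.log(n)),
                  ("n", float(n))):
    print(name, "-> selected dimension", select_dim(X, y, pen))
```

A penalty as heavy as order $n$ tends to underselect, while a penalty of order $\log\log n$ can retain superfluous terms; the BIC-type penalty in between illustrates the order condition described in the abstract.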
Funding: supported by the President Foundation (Grant No. Y1050) and the Scientific Research Foundation (Grant No. KYQD200502) of GUCAS.