The solution of the grey model (GM(1,1) model) generally involves equal-precision observations, and the (co)variance matrix is established from the prior information. In reality, however, the data are generally available as unequal-precision measurements. To deal with the errors of all observations in the GM(1,1) model with an errors-in-variables (EIV) structure, we exploit the total least-squares (TLS) algorithm to estimate the parameters of the GM(1,1) model in this paper. Since an improper prior stochastic model and homologous observations may degrade the accuracy of parameter estimation, we further present a nonlinear total least-squares variance component estimation approach for the GM(1,1) model, which resorts to the minimum norm quadratic unbiased estimation (MINQUE). Practical and simulated experiments indicate that the presented approach has significant merits in improving predictive accuracy in comparison with the benchmark methods.
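As background for the abstract above, the sketch below shows the classical equal-weight least-squares GM(1,1) fit that the paper takes as its starting point; the TLS and MINQUE refinements are not reproduced here, and the function names are illustrative.

```python
import numpy as np

def gm11_fit(x0):
    """Fit a GM(1,1) model to a positive series x0 by ordinary least squares.

    This is the classical equal-precision LS solution; the paper replaces it
    with TLS plus variance component estimation, which is not sketched here.
    """
    x0 = np.asarray(x0, dtype=float)
    x1 = np.cumsum(x0)                       # accumulated generating operation (AGO)
    z1 = 0.5 * (x1[1:] + x1[:-1])            # background values
    B = np.column_stack([-z1, np.ones_like(z1)])
    y = x0[1:]
    (a, b), *_ = np.linalg.lstsq(B, y, rcond=None)  # development coeff., grey input
    return a, b

def gm11_predict(x0, a, b, n_ahead=1):
    """Fitted values plus n_ahead forecasts of the original series."""
    x0 = np.asarray(x0, dtype=float)
    k = np.arange(len(x0) + n_ahead)
    x1_hat = (x0[0] - b / a) * np.exp(-a * k) + b / a   # AGO-domain solution
    x0_hat = np.empty_like(x1_hat)
    x0_hat[0] = x1_hat[0]
    x0_hat[1:] = np.diff(x1_hat)             # inverse AGO
    return x0_hat

# Exponential-growth data are reproduced almost exactly by GM(1,1).
data = 10.0 * 1.05 ** np.arange(8)
a, b = gm11_fit(data)
forecast = gm11_predict(data, a, b, n_ahead=2)
```

For a geometric series the background-value relation is exact, so the fitted values track the data to within the small bias of the exponential whitening solution.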
A mixed distribution of empirical variances, composed of two distributions, a basic one and a contaminating one, and referred to as the PERG mixed distribution of empirical variances, is considered. The paper gives a robust solution of the inverse problem, namely a new robust method for estimating the variances of both distributions, the PEROBVC method, together with estimates of the numbers of observations in each distribution and, thereby, of the degree of contamination.
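The PEROBVC method itself is not reproduced here; as a hedged stand-in, the sketch below shows why a robust scale estimate is needed on such a mixture: the classical variance is inflated by the contaminating component, while a MAD-based scale recovers the basic distribution. The 10% contamination rate and both standard deviations are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
n_basic, n_contam = 9000, 1000               # 10% contamination (assumed)
basic = rng.normal(0.0, 1.0, n_basic)        # basic distribution, sigma = 1
contam = rng.normal(0.0, 5.0, n_contam)      # contaminating distribution, sigma = 5
x = np.concatenate([basic, contam])

classical_var = x.var(ddof=1)                # pulled far above 1 by contamination
mad = np.median(np.abs(x - np.median(x)))    # median absolute deviation
robust_sigma = 1.4826 * mad                  # consistent for the Gaussian core
```

The mixture variance is about 0.9·1 + 0.1·25 = 3.4, while the robust scale stays near the basic sigma of 1.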
Tensor data have been widely used in many fields, e.g., modern biomedical imaging, chemometrics, and economics, but often suffer from the common issues of high-dimensional statistics. How to find their low-dimensional latent structure has been of great interest to statisticians. To this end, we develop two efficient tensor sufficient dimension reduction methods based on sliced average variance estimation (SAVE) to estimate the corresponding dimension reduction subspaces. The first, tensor sliced average variance estimation (TSAVE), works well when the response is discrete or takes finitely many values, but is not √n-consistent for a continuous response; the second, bias-corrected tensor sliced average variance estimation (CTSAVE), is a de-biased version of TSAVE. The asymptotic properties of both methods are derived under mild conditions. Simulations and real data examples are provided to show the superior efficiency of the developed methods.
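The tensor versions TSAVE/CTSAVE generalize classical vector-valued SAVE, which the sketch below implements under the usual assumptions (standardize, slice on the response, accumulate (I − Var(Z|slice))²); the function name and test model are illustrative, not the paper's.

```python
import numpy as np

def save_directions(X, y, n_slices=5, n_dirs=1):
    """Classical vector SAVE estimate of the dimension reduction subspace."""
    n, p = X.shape
    mu = X.mean(axis=0)
    cov = np.cov(X, rowvar=False)
    evals, evecs = np.linalg.eigh(cov)
    inv_sqrt = evecs @ np.diag(evals ** -0.5) @ evecs.T   # Sigma^{-1/2}
    Z = (X - mu) @ inv_sqrt                               # standardized predictors
    order = np.argsort(y)
    M = np.zeros((p, p))
    for chunk in np.array_split(order, n_slices):         # slice on the response
        D = np.eye(p) - np.cov(Z[chunk], rowvar=False)
        M += (len(chunk) / n) * D @ D                     # SAVE kernel matrix
    _, V = np.linalg.eigh(M)
    dirs = inv_sqrt @ V[:, ::-1][:, :n_dirs]              # back to X-scale
    return dirs / np.linalg.norm(dirs, axis=0)

# Canonical SAVE example: Y depends on X only through X1^2 (SIR would fail here).
rng = np.random.default_rng(2)
X = rng.normal(size=(2000, 4))
y = X[:, 0] ** 2 + 0.1 * rng.normal(size=2000)
beta = save_directions(X, y).ravel()
```

The estimated direction should align closely with the first coordinate axis.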
Variance is one of the most important measures of descriptive statistics and is commonly used in statistical analysis. The traditional variance estimator based on the second-order central moment is widely used, but it is highly affected by the presence of extreme values. This paper first proposes two classes of calibration estimators adapted from the estimators recently proposed by Koyuncu, and then presents a new class of L-moments-based calibration variance estimators utilizing L-moment characteristics (L-location, L-scale, L-CV) and auxiliary information. It is demonstrated that the proposed L-moments-based calibration variance estimators are more efficient than the adapted ones. Artificial data are considered for assessing the performance of the proposed estimators, and an application to apple fruit data is also presented. Using the artificial and real data sets, the percentage relative efficiency (PRE) of the proposed class of estimators with respect to the adapted ones is calculated. The PRE results indicate the superiority of the proposed class over the adapted ones in the presence of extreme values. The proposed class of estimators can therefore be applied in a wide range of survey sampling settings whenever auxiliary information is available in the presence of extreme values.
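The L-moment characteristics named above (L-location, L-scale) have simple unbiased sample versions; the sketch below computes the first two sample L-moments from order statistics. The calibration machinery of the paper is not reproduced, and the function name is illustrative.

```python
import numpy as np

def sample_l_moments(x):
    """First two sample L-moments: l1 (L-location) and l2 (L-scale)."""
    x = np.sort(np.asarray(x, dtype=float))
    n = len(x)
    b0 = x.mean()
    # b1 = (1/n) * sum over order statistics of [(i-1)/(n-1)] * x_(i), 1-based i
    b1 = sum((i / (n - 1)) * x[i] for i in range(n)) / n
    l1 = b0
    l2 = 2.0 * b1 - b0            # equals half the Gini mean difference
    return l1, l2

l1, l2 = sample_l_moments([1.0, 2.0, 3.0, 4.0])
```

For the sample {1, 2, 3, 4} the mean pairwise gap is 10/6, so l2 = 5/6, which the test below checks.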
Taking into account the whole system structure and the uncertainty of component reliability estimation, a system reliability estimation method based on probability and statistical theory for distributed monitoring systems is presented. The variance and confidence intervals of the system reliability estimate are obtained by expressing system reliability as a linear sum of products of higher-order moments of the component reliability estimates when the number of component or system survivals obeys a binomial distribution. The characteristic function of the binomial distribution is used to determine the moments of the component reliability estimates, and a symbolic matrix which facilitates the search for explicit system reliability estimates is proposed. Furthermore, a case of application is used to illustrate the procedure, and with the help of this example, issues such as the applicability of the estimation model and measures to improve the system reliability of monitoring systems are discussed.
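A minimal instance of the moment-product idea above: for a series system with independent binomial component estimates p̂ᵢ = kᵢ/nᵢ, the variance of the product estimator follows from E[p̂ᵢ²] = Var(p̂ᵢ) + p̂ᵢ². This is a plug-in sketch for the series case only, not the paper's general symbolic-matrix method.

```python
import numpy as np

def series_system_reliability(successes, trials):
    """Plug-in estimate and variance of a series-system reliability estimate.

    For independent components, R = prod(p_i) and
    Var(R_hat) = prod(Var(p_hat_i) + p_i^2) - prod(p_i)^2,
    evaluated here at the binomial estimates p_hat_i = k_i / n_i.
    """
    trials = np.asarray(trials, dtype=float)
    p = np.asarray(successes, dtype=float) / trials
    var_p = p * (1.0 - p) / trials            # binomial variance of each p_hat
    r_hat = np.prod(p)
    var_r = np.prod(var_p + p ** 2) - np.prod(p) ** 2
    return r_hat, var_r

r1, v1 = series_system_reliability([9], [10])        # single component
r2, v2 = series_system_reliability([9, 8], [10, 10]) # two components in series
```

With one component this reduces to the familiar p(1−p)/n; with two, the product-moment identity Var(XY) = E[X²]E[Y²] − (E[X]E[Y])² applies.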
Variance is one of the most vital measures of dispersion and is widely employed in practice. A commonly used approach to variance estimation is the traditional method of moments, which is strongly influenced by the presence of extreme values, so its results cannot be relied on. Motivated by Koyuncu's recent work, the present paper first proposes two classes of variance estimators based on linear moments (L-moments), and then employs them with auxiliary data under double stratified sampling to introduce a new class of calibration variance estimators using important properties of L-moments (L-location, L-CV, L-variance). Three populations are used to assess the efficiency of the new estimators: the first and second use artificial data, and the third uses real data. The percentage relative efficiency of the proposed estimators over existing ones is evaluated. In the presence of extreme values, our findings show the superiority and high efficiency of the proposed classes over the traditional ones. Hence, when auxiliary data are available along with extreme values, the proposed classes of estimators may be implemented in a wide variety of sampling surveys.
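The calibration step used by estimators of this kind can be sketched in its simplest chi-square-distance form: adjust the design weights minimally so that they reproduce a known auxiliary total. This is the generic single-auxiliary calibration device, not the paper's specific L-moments calibration; names are illustrative.

```python
import numpy as np

def calibrate_weights(d, x, x_total):
    """Chi-square-distance calibration: w_i = d_i * (1 + lam * x_i), with lam
    chosen so that sum(w_i * x_i) equals the known auxiliary total x_total."""
    d = np.asarray(d, dtype=float)
    x = np.asarray(x, dtype=float)
    lam = (x_total - np.sum(d * x)) / np.sum(d * x * x)
    return d * (1.0 + lam * x)

# Four units with equal design weights; force the x-total from 10 up to 12.
x = np.array([1.0, 2.0, 3.0, 4.0])
w = calibrate_weights(np.ones(4), x, 12.0)
```

The calibrated weights stay close to the design weights while hitting the benchmark exactly.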
Our purpose is twofold: to present a prototypical example of the conditioning technique used to obtain the best estimator of a parameter, and to show that this technique resides in the structure of an inner product space. The technique conditions an unbiased estimator on a sufficient statistic. This procedure is founded upon the conditional variance formula, which leads to an inner product space and a geometric interpretation. The example clearly illustrates the dependence on the sampling methodology. These advantages show the power and centrality of this process.
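The conditioning technique above can be seen in the textbook Bernoulli case: X₁ is unbiased for p, and conditioning it on the sufficient statistic ΣXᵢ yields the sample mean, with variance reduced exactly as the conditional variance formula predicts. This simulation is a generic illustration, assuming Bernoulli sampling, and may differ from the paper's own example.

```python
import numpy as np

rng = np.random.default_rng(1)
p, n, reps = 0.3, 20, 20000
X = rng.binomial(1, p, size=(reps, n))   # reps independent samples of size n

crude = X[:, 0]            # unbiased for p, variance p(1-p)
rb = X.mean(axis=1)        # E[X1 | sum X] = sample mean (Rao-Blackwellized)
```

Var(X₁) = p(1−p) ≈ 0.21 while Var(X̄) = p(1−p)/n ≈ 0.0105, so conditioning cuts the variance by a factor of n.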
Linear mixed model (LMM) approaches have been widely applied in many areas of research data analysis because they offer great flexibility for different data structures and linear model systems. In this study, emphasis is placed on comparing the properties of two LMM approaches, restricted maximum likelihood (REML) and minimum norm quadratic unbiased estimation (MINQUE), with and without resampling techniques. Bias, testing power, Type I error, and computing time were compared between the REML and MINQUE approaches with and without the jackknife technique, based on 500 simulated data sets. Results showed that the MINQUE and REML methods performed equally well regarding bias, Type I error, and power. Jackknife-based MINQUE and REML greatly improved power compared to the non-jackknife-based linear mixed model approaches. Results also showed that MINQUE is faster than REML, especially when resampling techniques are used and large data sets are analyzed. Results from the actual cotton data analysis were in agreement with our simulated results. Therefore, jackknife-based MINQUE approaches can be recommended to achieve desirable power with reduced time for large data analyses and model simulations.
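Full MINQUE and REML implementations are beyond a sketch, but the quantity both estimate, the variance components of a mixed model, can be illustrated with the balanced one-way random-effects layout, where the simple ANOVA (method-of-moments) estimators below are a reasonable stand-in. The simulation parameters are assumptions for the demo.

```python
import numpy as np

rng = np.random.default_rng(3)
a_groups, n_per = 200, 10
sigma_a2, sigma_e2 = 2.0, 1.0                 # true variance components (assumed)
u = rng.normal(0.0, np.sqrt(sigma_a2), size=(a_groups, 1))   # random group effects
y = 5.0 + u + rng.normal(0.0, np.sqrt(sigma_e2), size=(a_groups, n_per))

group_means = y.mean(axis=1)
msw = ((y - group_means[:, None]) ** 2).sum() / (a_groups * (n_per - 1))
msb = n_per * ((group_means - group_means.mean()) ** 2).sum() / (a_groups - 1)

sigma_e2_hat = msw                            # within-group component
sigma_a2_hat = (msb - msw) / n_per            # between-group component
```

E[MSW] = σ²ₑ and E[MSB] = σ²ₑ + n·σ²ₐ, so solving the two moment equations recovers both components.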
Many operations carried out by official statistical institutes use large-scale surveys obtained by stratified random sampling without replacement. Variables commonly examined in this type of survey are binary, categorical, and continuous, and hence the estimates of interest involve estimates of proportions, totals, and means. The problem of approximating the sampling relative error of such estimates is studied in this paper. Some new jackknife methods are proposed and compared with plug-in and bootstrap methods. An extensive simulation study is carried out to compare the behavior of all the methods considered.
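The basic delete-one jackknife that such methods build on can be sketched generically: recompute the statistic with each observation removed, then combine the leave-one-out values into a bias correction and a variance estimate. The stratified refinements of the paper are not reproduced.

```python
import numpy as np

def jackknife(stat, x):
    """Delete-one jackknife bias-corrected estimate and variance of stat(x)."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    theta = stat(x)
    loo = np.array([stat(np.delete(x, i)) for i in range(n)])  # leave-one-out
    bias = (n - 1) * (loo.mean() - theta)
    var = (n - 1) / n * np.sum((loo - loo.mean()) ** 2)
    return theta - bias, var

est, var = jackknife(np.mean, [1.0, 2.0, 3.0, 4.0, 5.0])
```

For the sample mean the jackknife variance reduces exactly to s²/n (here 2.5/5 = 0.5), a standard sanity check.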
Geodetic functional models, stochastic models, and model parameter estimation theory are fundamental for geodetic data processing. In the past five years, through unremitting effort driven by the applications and practice of geodesy, Chinese scholars in the field of geodetic data processing have made significant contributions to hypothesis testing theory, un-modeled errors, outlier detection and robust estimation, variance component estimation, complex least squares, and the treatment of ill-posed problems. Many functional models, such as the nonlinear adjustment model, the EIV model, and the mixed additive and multiplicative random error model, have also been constructed and improved. Geodetic data inversion is an important part of geodetic data processing, and Chinese scholars have done much work on it in the past five years, for example on seismic slip distribution inversion, intelligent inversion algorithms, multi-source data joint inversion, and water storage change and satellite gravity inversion. This paper introduces the achievements of Chinese scholars in geodetic data processing over the past five years, analyzes the methods they used and the problems they solved, and looks forward to the unsolved problems in geodetic data processing and the directions that need further research in the future.
Stochastic models play an important role in achieving high positioning accuracy: the ideal least-squares (LS) estimator can be obtained only with a suitable stochastic model. This study investigates the role of variance component estimation (VCE) in the LS method for Precise Point Positioning (PPP). The estimation is performed with the ionospheric-free (IF) functional model for the code and phase observations of the Global Positioning System (GPS). The strategy for estimating the accuracy of these observations was evaluated to check the effect of the stochastic model in four respects: a) antenna type, b) receiver type, c) the tropospheric effect, and d) the ionospheric effect. The results show that using empirical variances for the code and phase observations in some cases causes erroneous estimation of the unknown components in the PPP model, because a constant empirical variance may not be suitable for various receivers and antennas under different conditions. Coordinates were compared in two cases, using the stochastic model with nominal weights and with weights estimated by LS-VCE. The position error differences for the east-west, north-south, and height components were 1.5 cm, 4 mm, and 1.8 cm, respectively; weight estimation with LS-VCE can therefore provide more appropriate results. Finally, the convergence time was evaluated for four elevation-dependent models using the nominal weights and the LS-VCE weights. According to the results, LS-VCE yields a higher convergence rate than the nominal weights, improving the convergence time in the four elevation-dependent models by 11, 13, 12, and 9 min, respectively.
This paper shows that general multisensor unbiased linearly weighted estimation fusion is essentially the linear minimum variance (LMV) estimation with a linear equality constraint, and the general estimation fusion formula is developed by extending the Gauss-Markov estimation to the random parameter under estimation. First, we formulate the problem of distributed estimation fusion in the LMV setting, where the fused estimator is a weighted sum of the local estimates with matrix weights. We show that the set of weights is optimal if and only if it solves a matrix quadratic optimization problem subject to a convex linear equality constraint. Second, we present a unique solution to this optimization problem, which depends only on the covariance matrix Ck. Third, when the prior information (the expectation and covariance) of the estimated quantity is unknown, we present a necessary and sufficient condition for the above LMV fusion to become the best unbiased LMV estimation with known prior information. We also discuss the generality and usefulness of the LMV fusion formulas developed. Finally, we provide an off-line recursion of Ck for a class of multisensor linear systems with coupled measurement noises.
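The scalar special case of the constrained LMV fusion above is easy to write down: for unbiased local estimates of a common scalar with joint covariance C, the minimum-variance weights summing to one are w = C⁻¹1 / (1ᵀC⁻¹1). This sketch covers only that scalar case, not the paper's matrix-weight formulation.

```python
import numpy as np

def lmv_fuse(estimates, C):
    """Minimum-variance linearly weighted fusion of unbiased local estimates
    of a common scalar, subject to the unbiasedness constraint sum(w) = 1."""
    estimates = np.asarray(estimates, dtype=float)
    ones = np.ones(len(estimates))
    Cinv_1 = np.linalg.solve(C, ones)
    w = Cinv_1 / (ones @ Cinv_1)              # optimal weights
    fused = w @ estimates
    fused_var = 1.0 / (ones @ Cinv_1)         # achieved minimum variance
    return fused, fused_var, w

# Two independent sensors with variances 1 and 4.
fused, fused_var, w = lmv_fuse([10.0, 12.0], np.diag([1.0, 4.0]))
```

For independent sensors this reduces to inverse-variance weighting: w = (0.8, 0.2), fused variance 0.8, below the best single sensor's variance of 1.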
For multisensor systems in which the model parameters and noise variances are unknown, consistent fused estimators of the model parameters and noise variances are obtained based on a system identification algorithm, the correlation method, and the least-squares fusion criterion. Substituting these consistent estimators into the optimal weighted measurement fusion Kalman filter yields a self-tuning weighted measurement fusion Kalman filter. Using the dynamic error system analysis (DESA) method, the convergence of the self-tuning weighted measurement fusion Kalman filter is proved, i.e., the self-tuning Kalman filter converges to the corresponding optimal Kalman filter in a realization. Therefore, the self-tuning weighted measurement fusion Kalman filter has asymptotic global optimality. A simulation example of a 4-sensor target tracking system verifies its effectiveness.
In this article, we study variable selection for the partially linear single-index model (PLSIM). Based on minimized average variance estimation, variable selection for the PLSIM is done by minimizing the average variance with an adaptive l1 penalty. An implementation algorithm is given. Under some regularity conditions, we demonstrate the oracle properties of the aLASSO procedure for the PLSIM. Simulations are used to investigate the effectiveness of the proposed method for variable selection in the PLSIM.
Opting to follow the computing-design philosophy that the best way to reduce power consumption and increase energy efficiency is to reduce waste, we propose an architecture with a very simple ready implementation using an NComputing device that allows multiple users while needing only one computer. This intuitively saves energy, space, and cost. In this paper, we propose a simple and realistic NComputing architecture to study the energy- and power-efficient consumption of desktop computer systems using the NComputing device. We also propose new approaches to estimate the reliability of k-out-of-n systems based on the delta method. A k-out-of-n system consisting of n subsystems works if and only if at least k of the n subsystems work. More specifically, we develop approaches to obtain the reliability estimate for k-out-of-n systems composed of n independent and identically distributed subsystems, where each subsystem (or energy-efficient usage application) can be assumed to follow a two-parameter exponential lifetime distribution. The detailed derivations of the reliability estimates of k-out-of-n systems based on the bias-corrected estimator known as the delta method, the uniformly minimum variance unbiased estimate (UMVUE), and the maximum likelihood estimate (MLE) are discussed. An energy-management NComputing application is discussed to illustrate the reliability results in terms of the energy consumption of a computer system with a quad-core CPU, 8 GB of RAM, and a GeForce 9800GX-2 graphics card performing various complex applications. The estimated reliability values of systems based on the UMVUE and the delta method differ only slightly. Often the UMVUE of reliability for a complex system is much more difficult to obtain, if not impossible; the delta method is a simple and better approach for obtaining the reliability estimate of complex systems. The results of this study also show that, in practice, the NComputing architecture improves both energy cost savings and energy-efficient living spaces.
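The k-out-of-n structure and the delta method mentioned above can be sketched generically: the system reliability is a binomial tail in the common subsystem reliability p, and a first-order delta method propagates the variance of p̂ through that function. The exponential-lifetime UMVUE/MLE derivations of the paper are not reproduced; the derivative is taken numerically to keep the sketch self-checking.

```python
import numpy as np
from math import comb

def k_out_of_n_reliability(k, n, p):
    """System works iff at least k of n i.i.d. subsystems work (reliability p)."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

def delta_var(k, n, p, var_p, h=1e-6):
    """First-order delta-method variance of R(p_hat): (dR/dp)^2 * Var(p_hat),
    with dR/dp approximated by a central difference."""
    dRdp = (k_out_of_n_reliability(k, n, p + h) -
            k_out_of_n_reliability(k, n, p - h)) / (2 * h)
    return dRdp**2 * var_p

r_parallel = k_out_of_n_reliability(1, 2, 0.9)   # 1-out-of-2: 1 - 0.1^2
r_majority = k_out_of_n_reliability(2, 3, 0.5)   # 2-out-of-3 at p = 0.5
```

For k = n = 1 the system reliability is p itself, so the delta-method variance must equal Var(p̂), which the test verifies.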
Aiming at the robustness issue in the operation control of high-speed trains (HSTs), this article proposes a model-free adaptive control (MFAC) scheme to suppress disturbances. First, the dynamic linearization data model of the train system under measurement disturbance is given, and a Kalman filter (KF) based on this model is derived under the minimum variance estimation criterion. Then, an anti-interference MFAC scheme is designed based on the KF. This scheme needs only the input and output data of the controlled system to realize MFAC of the train under strong disturbance. Finally, a simulation experiment on CRH380A HSTs is carried out and compared with the traditional MFAC and the MFAC with an attenuation factor. The proposed control algorithm effectively suppresses the measurement disturbance and obtains a smaller tracking error and a larger signal-to-noise ratio with better applicability.
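The minimum-variance filtering ingredient above can be shown in its simplest form: a scalar Kalman filter for a slowly drifting state observed in heavy measurement noise. This is a generic textbook KF, not the paper's dynamic-linearization model, and the noise levels are assumptions for the demo.

```python
import numpy as np

def kalman_1d(z, q, r, x0=0.0, p0=1.0):
    """Scalar Kalman filter for a random walk observed in white noise:
    x_k = x_{k-1} + w_k (variance q),  z_k = x_k + v_k (variance r)."""
    x, P = x0, p0
    out = []
    for zk in z:
        P = P + q                      # predict: propagate uncertainty
        K = P / (P + r)                # Kalman gain (minimum-variance blend)
        x = x + K * (zk - x)           # update with the innovation
        P = (1 - K) * P
        out.append(x)
    return np.array(out)

rng = np.random.default_rng(4)
truth = np.cumsum(rng.normal(0.0, 0.05, 300))   # slowly drifting state
z = truth + rng.normal(0.0, 1.0, 300)           # heavily disturbed measurements
xhat = kalman_1d(z, q=0.05**2, r=1.0)
```

Because the state moves much more slowly than the measurement noise, the filtered track has far smaller error than the raw measurements.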
Non-random missing data pose serious problems in longitudinal studies. The binomial distribution parameter becomes unidentifiable without auxiliary information or assumptions when the data suffer from non-ignorable missingness. Existing methods are mostly based on the log-linear regression model. In this article, a model is proposed for longitudinal data with non-ignorable non-response, using the pre-test baseline data to improve the identifiability of the post-test parameter. Furthermore, we derive the identified estimation (IE), the maximum likelihood estimation (MLE), and its associated variance for the post-test parameter. A simulation study based on the proposed model shows that the approach gives promising results.
Official monthly U.S. labour force estimation at the sub-State level (mostly counties) is based on what is known as the 'Handbook' (HB) method, one of the earliest uses of administrative data for small area estimation. The administrative data, however, are poor in coverage and have conceptual deficiencies. Past attempts to correct for the resulting bias of the HB estimates by informal (implicit) modelling have not been successful, owing to the absence of regular direct monthly survey estimates at the sub-State level. Benchmarking the sub-State HB estimates each month to the State model-dependent estimates helps to correct for an overall bias, but not in individual areas. In this article we propose benchmarking additionally to the annual model-dependent area estimates. The annual models include known administrative data as covariates and are used to define corresponding monthly sub-State models, which in turn enable producing monthly synthetic estimates as possible substitutes for the HB estimates in real-time production. Variance estimates, which account for the sampling errors and the errors of the model-dependent estimators, are developed. Data for sub-State areas in the State of Arizona are used for illustration. Although the methodology developed in this article stems from a particular (but very important) application, it is general and applicable to other similar problems.
The application of the Tikhonov regularization method to the ill-conditioned problems arising in regional gravity field modeling by Poisson wavelets is studied. In particular, the choices of the regularization matrices and the approaches for estimating the regularization parameters are investigated in detail. The numerical results show that the regularized solutions derived from first-order regularization are better than those obtained from zero-order regularization. For cross-validation, the optimal regularization parameters are estimated by the L-curve method, variance component estimation (VCE), and the minimum standard deviation (MSTD) approach, and the regularization parameters derived from the different methods are consistent with each other. Together with the first-order Tikhonov regularization and the VCE method, the optimal network of Poisson wavelets is derived, from which the local gravimetric geoid is computed. The accuracy of the corresponding gravimetric geoid reaches 1.1 cm in the Netherlands, which validates the reliability of the Tikhonov regularization method in tackling the ill-conditioned problem of regional gravity field modeling.
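The zero- versus first-order distinction above refers to the choice of regularization matrix R in min ‖Ax − b‖² + λ‖Rx‖². The toy sketch below shows the mechanism on a deliberately ill-conditioned 2×2 system; the gravity-field setting, Poisson wavelets, and parameter-choice rules (L-curve, VCE, MSTD) are not reproduced.

```python
import numpy as np

def tikhonov(A, b, lam, R=None):
    """Tikhonov-regularized least squares: min ||Ax - b||^2 + lam * ||R x||^2.
    R = I gives zero-order regularization; a first-difference matrix gives
    first-order regularization."""
    n = A.shape[1]
    if R is None:
        R = np.eye(n)                                  # zero-order default
    return np.linalg.solve(A.T @ A + lam * (R.T @ R), A.T @ b)

# Nearly collinear columns: condition number ~ 4e4, exact solution [1, 1].
A = np.array([[1.0, 1.0], [1.0, 1.0001]])
x_exact = np.array([1.0, 1.0])
b_noisy = A @ x_exact + np.array([1e-4, -1e-4])        # tiny data noise

x_naive = np.linalg.solve(A, b_noisy)                  # noise amplified hugely
x_reg = tikhonov(A, b_noisy, lam=1e-4)                 # stabilized solution
```

The unregularized solve turns a 1e-4 data perturbation into an O(1) error in x, while the regularized solution stays near the exact one.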
In practical survey sampling, nonresponse is unavoidable, and how to impute missing data is an important problem. Several imputation methods exist in the literature. In this paper, the mean-of-ratios imputation method for missing data under uniform response is applied to the estimation of a finite population mean when PPSWR sampling is used. The imputed estimator is valid under the corresponding response mechanism regardless of the model, as well as under the ratio model regardless of the response mechanism. An approximately unbiased jackknife variance estimator is also presented. All of these results are extended to the case of non-uniform response. Simulation studies show the good performance of the proposed estimators.
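The mean-of-ratios imputation named above can be sketched directly: estimate the ratio R as the respondents' mean of yᵢ/xᵢ, then fill each missing y with R·x. The PPSWR estimation and jackknife variance steps of the paper are not reproduced; the function name is illustrative.

```python
import numpy as np

def ratio_impute(y, x, responded):
    """Mean-of-ratios imputation: each missing y_i is replaced by r_hat * x_i,
    where r_hat is the respondents' mean of y_i / x_i."""
    y = np.asarray(y, dtype=float)
    x = np.asarray(x, dtype=float)
    responded = np.asarray(responded, dtype=bool)
    r_hat = np.mean(y[responded] / x[responded])   # mean of ratios
    y_imp = y.copy()
    y_imp[~responded] = r_hat * x[~responded]      # fill in the nonrespondents
    return y_imp

# Two respondents with y/x = 2 exactly; two nonrespondents to impute.
y_imp = ratio_impute([2.0, 4.0, 0.0, 0.0], [1.0, 2.0, 3.0, 4.0],
                     [True, True, False, False])
```

Under an exact ratio model the imputed values reproduce the model line, which the test verifies.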
Funding (GM(1,1)/TLS study): supported by the National Natural Science Foundation of China (Nos. 41874001 and 41664001), the Support Program for Outstanding Youth Talents in Jiangxi Province (No. 20162BCB23050), and the National Key Research and Development Program (No. 2016YFB0501405).
Funding (tensor SAVE study): supported by the National Natural Science Foundation of China (Grant Nos. 12301377, 11971208, and 92358303), the National Social Science Foundation of China (Grant No. 21&ZD152), the Outstanding Youth Fund Project of the Science and Technology Department of Jiangxi Province (Grant No. 20224ACB211003), the Jiangxi Provincial Natural Science Foundation (Grant No. 20232BAB211014), the Science and Technology Research Project of the Education Department of Jiangxi Province (Grant No. GJJ210535), the opening funding of the Key Laboratory of Data Science in Finance and Economics, and the innovation team funding of Digital Economy and Industrial Development, Jiangxi University of Finance and Economics.
Funding (calibration variance estimation study): the authors are grateful to the Deanship of Scientific Research at King Khalid University, Kingdom of Saudi Arabia, for funding this study through the research groups program under project number R.G.P.2/67/41. Ibrahim Mufrah Almanjahie received the grant.
Funding (system reliability estimation study): supported by the National Natural Science Foundation of China (Nos. 50335020 and 50205009) and the Laboratory of Intelligence Manufacturing Technology of the Ministry of Education of China (No. J100301).
基金Funding: The authors thank the Deanship of Scientific Research at King Khalid University, Kingdom of Saudi Arabia, for funding this study through the research groups program under Project Number R.G.P.1/64/42. Ishfaq Ahmad and Ibrahim Mufrah Almanjahie received the grant.
文摘Abstract: Variance is one of the most vital measures of dispersion, widely employed in practice. A commonly used approach to variance estimation is the traditional method of moments, which is strongly influenced by the presence of extreme values, so its results cannot be relied on. Building on Koyuncu's recent work, the present paper first proposes two classes of variance estimators based on linear moments (L-moments), and then employs them with auxiliary data under double stratified sampling to introduce a new class of calibration variance estimators using important properties of L-moments (L-location, L-CV, L-variance). Three populations are used to assess the efficiency of the new estimators: the first and second involve artificial data, and the third involves real data. The percentage relative efficiency of the proposed estimators over the existing ones is evaluated. In the presence of extreme values, our findings show the superiority and high efficiency of the proposed classes over the traditional ones. Hence, when auxiliary data are available along with extreme values, the proposed classes of estimators may be implemented in a wide variety of sampling surveys.
文摘Abstract: Our purpose is twofold: to present a prototypical example of the conditioning technique for obtaining the best estimator of a parameter, and to show that this technique resides in the structure of an inner product space. The technique conditions an unbiased estimator on a sufficient statistic. This procedure is founded upon the conditional variance formula, which leads to an inner product space and a geometric interpretation. The example clearly illustrates the dependence on the sampling methodology. These advantages show the power and centrality of this process.
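The Bernoulli case makes the conditioning technique concrete. Assuming an i.i.d. Bernoulli(p) sample of size n (a standard textbook instance, not necessarily the paper's own example), X1 is unbiased for p, T = X1 + ... + Xn is sufficient, and conditioning gives E[X1 | T] = T/n, the sample mean:

```python
# Conditional variance formula: Var(X1) = E[Var(X1|T)] + Var(E[X1|T]),
# so the conditioned (Rao-Blackwellized) estimator E[X1|T] = T/n can only
# have smaller variance than the crude unbiased estimator X1.
p, n = 0.3, 10

var_crude = p * (1 - p)            # Var(X1) for a single Bernoulli draw
var_conditioned = p * (1 - p) / n  # Var(T/n), the variance after conditioning
reduction_factor = var_crude / var_conditioned  # equals n
```

The variance shrinks by exactly the factor n, and the gap E[Var(X1|T)] is the squared length of the component orthogonal to the sufficient statistic in the inner-product-space picture.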
文摘Abstract: Linear mixed model (LMM) approaches have been widely applied in many areas of research data analysis because they offer great flexibility for different data structures and linear model systems. In this study, emphasis is placed on comparing the properties of two LMM approaches, restricted maximum likelihood (REML) and minimum norm quadratic unbiased estimation (MINQUE), with and without resampling techniques. Bias, testing power, Type I error, and computing time were compared between REML and MINQUE, with and without the jackknife technique, based on 500 simulated data sets. Results showed that MINQUE and REML performed equally with regard to bias, Type I error, and power. Jackknife-based MINQUE and REML greatly improved power compared to the non-jackknife-based linear mixed model approaches. Results also showed that MINQUE is more time-saving than REML, especially when resampling techniques are used and large data sets are analyzed. Results from an actual cotton data analysis were in agreement with our simulated results. Therefore, jackknife-based MINQUE approaches can be recommended to achieve desirable power with reduced time for large data analyses and model simulations.
基金supported by the Galician Official Statistical Institute(IGE)and by Grants 10DPI105003PRCN2012/130 from Xunta de Galicia(Spain)by Grant number MTM2011-22392 from Ministerio de Ciencia e Innovacion(Spain).
文摘Abstract: Many operations carried out by official statistical institutes use large-scale surveys obtained by stratified random sampling without replacement. Variables commonly examined in this type of survey are binary, categorical, and continuous, and hence the estimates of interest involve estimates of proportions, totals, and means. The problem of approximating the sampling relative error of such estimates is studied in this paper. Some new jackknife methods are proposed and compared with plug-in and bootstrap methods. An extensive simulation study is carried out to compare the behavior of all the methods considered.
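A delete-one jackknife variance estimator, one of the standard building blocks behind such methods, can be sketched as follows (a generic version, not the paper's stratified-design-specific proposals):

```python
def jackknife_variance(data, estimator):
    """Delete-one jackknife variance of estimator(data).

    Recomputes the estimator on each leave-one-out subsample, then returns
    (n - 1)/n * sum((theta_i - theta_bar)^2), the classical jackknife
    variance formula.
    """
    n = len(data)
    theta = [estimator(data[:i] + data[i + 1:]) for i in range(n)]
    theta_bar = sum(theta) / n
    return (n - 1) / n * sum((t - theta_bar) ** 2 for t in theta)
```

For the sample mean, this reproduces the textbook result s²/n exactly; its value lies in applying unchanged to nonlinear estimators (ratios, relative errors) where no closed-form variance exists.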
基金Funding: National Natural Science Foundation of China (No. 42174011).
文摘Abstract: Geodetic functional models, stochastic models, and parameter estimation theory are fundamental to geodetic data processing. In the past five years, through the unremitting efforts of Chinese scholars in this field, and driven by the applications and practice of geodesy, significant contributions have been made in hypothesis testing theory, un-modeled errors, outlier detection and robust estimation, variance component estimation, complex least squares, and the treatment of ill-posed problems. Many functional models, such as the nonlinear adjustment model, the EIV model, and the mixed additive and multiplicative random error model, have also been constructed and improved. Geodetic data inversion is an important part of geodetic data processing, and Chinese scholars have done much work on it in the past five years, including seismic slip distribution inversion, intelligent inversion algorithms, multi-source joint inversion, and inversion of water-reserve changes from satellite gravity. This paper reviews the achievements of Chinese scholars in geodetic data processing over the past five years, analyzes the methods they used and the problems they solved, and looks ahead to the open problems and the directions that need further research.
文摘Abstract: Stochastic models play an important role in achieving high positioning accuracy: the ideal least-squares (LS) estimator can be obtained only with a suitable stochastic model. This study investigates the role of variance component estimation (VCE) in the LS method for Precise Point Positioning (PPP). The estimation is performed using the ionospheric-free (IF) functional model for code and phase observations of the Global Positioning System (GPS). The strategy for estimating the accuracy of these observations was evaluated to check the effect of the stochastic model in four cases: (a) antenna type, (b) receiver type, (c) the tropospheric effect, and (d) the ionospheric effect. The results show that using empirical variances for code and phase observations in some cases leads to erroneous estimation of unknown components in the PPP model, because a constant empirical variance may not be suitable for various receivers and antennas under different conditions. Coordinates were compared in two cases, using the stochastic model with nominal weights and with weights estimated by LS-VCE. The position error difference for the east-west, north-south, and height components was 1.5 cm, 4 mm, and 1.8 cm, respectively; weight estimation with LS-VCE can therefore provide more appropriate results. Finally, the convergence time of four elevation-dependent models was evaluated using the nominal weights and the LS-VCE weights. According to the results, LS-VCE converges faster than the nominal weighting, improving the convergence time of the four elevation-dependent models by 11, 13, 12, and 9 min, respectively.
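Elevation-dependent weighting of the kind evaluated here is commonly parameterized as sigma²(E) = a² + b²/sin²(E); the sketch below uses illustrative coefficients, not values estimated in the study:

```python
import math


def elevation_dependent_variance(elev_deg, a=0.003, b=0.003):
    """A common elevation-dependent GNSS stochastic model,
    sigma^2(E) = a^2 + b^2 / sin^2(E).  The defaults a = b = 3 mm are
    illustrative placeholders, not the paper's estimated components; in an
    LS-VCE setting a and b would themselves be estimated from residuals.
    """
    return a**2 + b**2 / math.sin(math.radians(elev_deg))**2
```

Observations at low elevation receive a larger variance and hence a smaller weight, which is why replacing fixed empirical variances with estimated ones changes the solution most for low-elevation-rich geometries.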
文摘Abstract: This paper shows that general multisensor unbiased linearly weighted estimation fusion is essentially linear minimum variance (LMV) estimation with a linear equality constraint, and the general estimation fusion formula is developed by extending Gauss-Markov estimation to a random parameter under estimation. First, we formulate the problem of distributed estimation fusion in the LMV setting, in which the fused estimator is a weighted sum of local estimates with matrix weights. We show that the set of weights is optimal if and only if it solves a matrix quadratic optimization problem subject to a convex linear equality constraint. Second, we present a unique solution to this optimization problem, which depends only on the covariance matrix C_k. Third, if the prior information (the expectation and covariance) of the estimated quantity is unknown, a necessary and sufficient condition is presented for the above LMV fusion to coincide with the best unbiased LMV estimation with known prior information. We also discuss the generality and usefulness of the LMV fusion formulas developed. Finally, we provide an off-line recursion of C_k for a class of multisensor linear systems with coupled measurement noises.
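For the special case of two local estimates with uncorrelated errors, the matrix-weighted LMV fusion reduces to the familiar information-weighted form. This is a simplified sketch; the paper's general formula handles coupled local errors through the full covariance matrix C_k:

```python
import numpy as np


def lmv_fuse(x1, P1, x2, P2):
    """Minimum-variance fusion of two unbiased local estimates x1, x2 with
    covariances P1, P2, assuming their errors are uncorrelated.  The fused
    estimate is the information-weighted sum, and the fused covariance is
    the inverse of the summed information matrices.
    """
    I1, I2 = np.linalg.inv(P1), np.linalg.inv(P2)
    P = np.linalg.inv(I1 + I2)
    x = P @ (I1 @ x1 + I2 @ x2)
    return x, P
```

With equal covariances the fused estimate is the plain average and the fused variance is halved; unequal covariances tilt the matrix weights toward the more accurate sensor.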
基金supported by the National Natural Science Foundation of China(No.60874063)the Innovation Scientific Research Foundation for Graduate Students of Heilongjiang Province(No.YJSCX2008-018HLJ),and the Automatic Control Key Laboratory of Heilongjiang University
文摘Abstract: For multisensor systems in which the model parameters and noise variances are unknown, consistent fused estimators of the model parameters and noise variances are obtained based on a system identification algorithm, the correlation method, and a least-squares fusion criterion. Substituting these consistent estimators into the optimal weighted measurement fusion Kalman filter yields a self-tuning weighted measurement fusion Kalman filter. Using the dynamic error system analysis (DESA) method, the convergence of the self-tuning weighted measurement fusion Kalman filter is proved: the self-tuning Kalman filter converges to the corresponding optimal Kalman filter in a realization, and therefore has asymptotic global optimality. A simulation example of a 4-sensor target tracking system verifies its effectiveness.
文摘Abstract: In this article, we study variable selection for the partially linear single-index model (PLSIM). Based on minimized average variance estimation, variable selection for the PLSIM is carried out by minimizing the average variance with an adaptive l1 penalty. An implementation algorithm is given. Under some regularity conditions, we demonstrate the oracle properties of the aLASSO procedure for the PLSIM. Simulations are used to investigate the effectiveness of the proposed method for variable selection in the PLSIM.
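The workhorse of adaptive-l1-penalized minimization is the soft-thresholding operator used in coordinate-wise updates; a minimal sketch (the generic operator, not the paper's full PLSIM algorithm):

```python
def soft_threshold(z, lam):
    """Soft-thresholding operator: argmin_b 0.5*(b - z)^2 + lam*|b|.

    This is the closed-form coordinate update behind LASSO-type penalties.
    Adaptive LASSO (aLASSO) uses the same operator with a per-coefficient
    penalty lam * w_j, where w_j = 1/|beta_init_j| is built from an initial
    consistent estimate, which is what yields the oracle property.
    """
    if z > lam:
        return z - lam
    if z < -lam:
        return z + lam
    return 0.0
```

Coefficients whose unpenalized update falls inside [-lam, lam] are set exactly to zero, which is how the procedure performs variable selection rather than mere shrinkage.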
基金supported by Rutgers CCC Green Computing Initiative
文摘Abstract: Following the computing-design philosophy that the best way to reduce power consumption and increase energy efficiency is to reduce waste, we propose an architecture with a very simple, ready-to-deploy implementation that uses an NComputing device to support multiple users while requiring only one computer. This intuitively saves energy and space as well as cost. In this paper, we propose a simple and realistic NComputing architecture to study the energy- and power-efficient consumption of desktop computer systems using the NComputing device. We also propose new approaches to estimating the reliability of k-out-of-n systems based on the delta method. A k-out-of-n system consisting of n subsystems works if and only if at least k of the n subsystems work. More specifically, we develop approaches to obtain reliability estimates for k-out-of-n systems composed of n independent and identically distributed subsystems, where each subsystem (or energy-efficient usage application) can be assumed to follow a two-parameter exponential lifetime distribution. The detailed derivations of the reliability estimates of k-out-of-n systems based on the bias-corrected estimator known as the delta method, the uniformly minimum variance unbiased estimate (UMVUE), and the maximum likelihood estimate (MLE) are discussed. An energy-management NComputing application is discussed to illustrate the reliability results in terms of the energy-consumption usage of a computer system with a quad-core CPU, 8 GB of RAM, and a GeForce 9800GX-2 graphics card performing various complex applications. The estimated reliability values based on the UMVUE and the delta method differ only slightly. Often the UMVUE of reliability for a complex system is much more difficult to obtain, if not impossible; the delta method appears to be a simpler and better approach for obtaining reliability estimates of complex systems.
The results of this study also show that, in practice, the NComputing architecture improves both energy-cost savings and the energy efficiency of living spaces.
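The k-out-of-n reliability structure described above can be sketched directly from the binomial formula, together with the two-parameter exponential subsystem reliability the paper assumes (parameter names here are illustrative):

```python
from math import comb, exp


def k_out_of_n_reliability(k, n, r):
    """System works iff at least k of n i.i.d. subsystems work;
    r is the per-subsystem reliability at the mission time.
    R_sys = sum_{i=k}^{n} C(n, i) * r^i * (1 - r)^(n - i).
    """
    return sum(comb(n, i) * r**i * (1 - r)**(n - i) for i in range(k, n + 1))


def two_param_exp_reliability(t, mu, sigma):
    """Subsystem reliability under a two-parameter exponential lifetime with
    location mu and scale sigma: R(t) = exp(-(t - mu)/sigma) for t >= mu,
    and 1 otherwise.  mu and sigma are generic symbols, not the paper's
    fitted values.
    """
    return 1.0 if t < mu else exp(-(t - mu) / sigma)
```

For example, a 1-out-of-2 (parallel) system of subsystems with reliability 0.5 has system reliability 0.75, while the 2-out-of-2 (series) case drops to 0.25.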
基金Funding: The authors thank the anonymous reviewers for their valuable suggestions. This work is supported by the National Natural Science Foundation of China (Grants No. 52162048, 61991404, and 62003138), the National Key Research and Development Program of China (Grant No. 2020YFB1713703), and the Jiangxi Graduate Innovation Fund Project (Grant No. YC2021-S446).
文摘Abstract: Aiming at the robustness issue in the operation control of high-speed trains (HSTs), this article proposes a model-free adaptive control (MFAC) scheme to suppress disturbances. First, the dynamic linearization data model of the train system under measurement disturbance is given, and a Kalman filter (KF) based on this model is derived under the minimum-variance estimation criterion. Then, based on the KF, an anti-interference MFAC scheme is designed; this scheme needs only the input and output data of the controlled system to realize MFAC of the train under strong disturbance. Finally, a simulation experiment on CRH380A HSTs is carried out and compared with the traditional MFAC and the MFAC with an attenuation factor. The proposed control algorithm effectively suppresses the measurement disturbance and obtains a smaller tracking error and a larger signal-to-noise ratio, with better applicability.
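A scalar predict/update cycle of a Kalman filter derived under the minimum-variance criterion can be sketched as follows (a generic textbook form, not the paper's dynamic-linearization KF):

```python
def kalman_step(x, P, z, a, q, r, h=1.0):
    """One predict/update cycle of a scalar Kalman filter for the model
    x_{k+1} = a * x_k + w_k  (process noise variance q),
    z_k     = h * x_k + v_k  (measurement noise variance r).
    The gain K is chosen to minimize the posterior error variance, which
    is the minimum-variance estimation criterion.
    """
    # predict
    x_pred = a * x
    P_pred = a * a * P + q
    # update
    K = P_pred * h / (h * h * P_pred + r)
    x_new = x_pred + K * (z - h * x_pred)
    P_new = (1 - K * h) * P_pred
    return x_new, P_new
```

When the measurement noise r is large the gain shrinks toward zero and the filter trusts its prediction, which is exactly the disturbance-suppression behavior the KF contributes to the MFAC scheme.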
基金Supported by the National Natural Science Foundation of China(No.10801019)the Fundamental ResearchFunds for the Central Universities(BUPT2012RC0708)
文摘Abstract: Non-random missing data pose serious problems in longitudinal studies. The binomial distribution parameter becomes unidentifiable, without any other auxiliary information or assumption, when it suffers from non-ignorable missing data. Existing methods are mostly based on the log-linear regression model. In this article, a model is proposed for longitudinal data with non-ignorable non-response, in which the pre-test baseline data are used to improve the identifiability of the post-test parameter. Furthermore, we derive the identified estimation (IE), the maximum likelihood estimation (MLE), and its associated variance for the post-test parameter. A simulation study based on the proposed model shows that the approach gives promising results.
文摘Abstract: Official monthly U.S. labour force estimation at the sub-State level (mostly counties) is based on what is known as the 'Handbook' (HB) method, one of the earliest uses of administrative data for small area estimation. The administrative data, however, are poor in coverage and have conceptual deficiencies. Past attempts to correct for the resulting bias of the HB estimates by informal (implicit) modelling have not been successful, owing to the absence of regular direct monthly survey estimates at the sub-State level. Benchmarking the sub-State HB estimates each month to the State model-dependent estimates helps to correct for an overall bias, but not in individual areas. In this article we propose benchmarking additionally to the annual model-dependent area estimates. The annual models include known administrative data as covariates and are used to define corresponding monthly sub-State models, which in turn enable producing monthly synthetic estimates as possible substitutes for the HB estimates in real-time production. Variance estimates that account for the sampling errors and for the errors of the model-dependent estimators are developed. Data for sub-State areas in the State of Arizona are used for illustration. Although the methodology developed in this article stems from a particular (but very important) application, it is general and applicable to other similar problems.
基金supported by the National Natural Science Foundation of China (Nos.41374023,41131067,41474019)the National 973 Project of China (No.2013CB733302)+2 种基金the China Postdoctoral Science Foundation (No.2016M602301)the Key Laboratory of Geospace Envi-ronment and Geodesy,Ministry of Education,Wuhan University (No.15-02-08)the State Scholarship Fund from Chinese Scholarship Council (No.201306270014)
文摘Abstract: The application of the Tikhonov regularization method to the ill-conditioned problems arising in regional gravity field modeling by Poisson wavelets is studied. In particular, the choices of the regularization matrices and the approaches for estimating the regularization parameters are investigated in detail. The numerical results show that the regularized solutions derived from first-order regularization are better than those obtained from zero-order regularization. For cross-validation, the optimal regularization parameters are estimated from the L-curve, variance component estimation (VCE), and the minimum standard deviation (MSTD) approach, respectively, and the regularization parameters derived from the different methods are consistent with one another. Together with first-order Tikhonov regularization and the VCE method, the optimal network of Poisson wavelets is derived, from which the local gravimetric geoid is computed. The accuracy of the corresponding gravimetric geoid reaches 1.1 cm in the Netherlands, which validates the reliability of using the Tikhonov regularization method to tackle the ill-conditioned problem in regional gravity field modeling.
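In normal-equation form, zero- and first-order Tikhonov regularization differ only in the regularization matrix R; a minimal sketch (generic, not the Poisson-wavelet design matrices of the study):

```python
import numpy as np


def tikhonov(A, b, lam, R=None):
    """Regularized solution x = argmin ||A x - b||^2 + lam * ||R x||^2,
    obtained from the normal equations (A'A + lam * R'R) x = A'b.

    R=None gives zero-order regularization (R = identity); passing a
    first-difference matrix gives the first-order variant the paper finds
    preferable for this problem.
    """
    n = A.shape[1]
    if R is None:
        R = np.eye(n)
    return np.linalg.solve(A.T @ A + lam * (R.T @ R), A.T @ b)


def first_difference_matrix(n):
    """(n-1) x n first-difference operator for first-order regularization."""
    return np.diff(np.eye(n), axis=0)
```

Zero-order regularization penalizes the size of the coefficients themselves, while the first-order matrix penalizes their roughness, which tends to suit smoothly varying gravity-field parameters; the regularization parameter lam would then be chosen by L-curve, VCE, or MSTD as in the study.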
基金Supported by NationalNatural Science Foundation of China (Grant Nos. 70625004, 10721101 and 70933003)
文摘Abstract: In practical survey sampling, the nonresponse phenomenon is unavoidable, and how to impute missing data is an important problem. Several imputation methods exist in the literature. In this paper, the mean-of-ratios imputation method for missing data under uniform response is applied to the estimation of a finite population mean when PPSWR sampling is used. The imputed estimator is valid under the corresponding response mechanism regardless of the model, as well as under the ratio model regardless of the response mechanism. The approximately unbiased jackknife variance estimator is also presented. All of these results are extended to the case of non-uniform response. Simulation studies show the good performance of the proposed estimators.
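The mean-of-ratios imputation itself can be sketched in a few lines (a simplified version assuming fully observed auxiliary values; the paper's estimator additionally accounts for the PPSWR design weights):

```python
def impute_mean_of_ratios(y, x):
    """Impute each missing y_i (marked None) as r_bar * x_i, where r_bar is
    the mean of the ratios y_j / x_j over the respondents and x is a fully
    observed auxiliary variable.
    """
    ratios = [yi / xi for yi, xi in zip(y, x) if yi is not None]
    r_bar = sum(ratios) / len(ratios)
    return [yi if yi is not None else r_bar * xi for yi, xi in zip(y, x)]
```

After imputation the completed sample can be fed to the usual design-based mean estimator, and the jackknife variance estimator mentioned above would be applied by re-imputing within each delete-one replicate.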