Funding: This work was supported by the Scientific Research Projects of Tianjin Educational Committee (No. 2020KJ029).
Abstract: The prediction process often runs with small samples and insufficient information. To address this problem, we propose a performance comparison study that combines prediction and optimization algorithms based on experimental data analysis. Through a large number of prediction and optimization experiments, the accuracy and stability of the prediction methods and the correction ability of the optimization methods are studied. First, five traditional single-item prediction methods are used to process small samples with insufficient information, and the standard deviation method is used to assign weights to the five methods for combined forecasting. The accuracy of the prediction results is ranked; the mean and variance of the rankings reflect the accuracy and stability of each prediction method. Second, the error elimination prediction optimization method is proposed. The prediction results are corrected by the error elimination optimization method (EEOM), Markov optimization, and two-layer optimization separately to obtain more accurate predictions; the degrees of improvement and decline reflect the correction ability of each optimization method. The results show that combined prediction has the best accuracy and stability among the prediction methods, and that error elimination optimization has the best correction ability among the optimization methods. The combination of the two can therefore solve the problem of prediction with small samples and insufficient information. Finally, the accuracy of combining combined prediction with error elimination optimization is verified by predicting the number of unsafe events in civil aviation in a given year.
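The abstract does not spell out the standard-deviation weighting formula, so the following minimal sketch uses a common convention in which each single-item method receives a weight inversely proportional to the standard deviation of its fitting errors; the function names and the toy data are illustrative, not taken from the paper.

```python
import numpy as np

def std_dev_weights(errors):
    """Weights inversely proportional to each method's error standard deviation.

    errors: (n_methods, n_obs) array of historical fitting errors for each
    single-item prediction method. Methods with a smaller error spread
    receive larger weights. (Assumed convention; the paper may differ.)
    """
    sigma = errors.std(axis=1, ddof=1)   # per-method error std dev
    inv = 1.0 / sigma
    return inv / inv.sum()               # normalize so weights sum to 1

def combined_forecast(predictions, weights):
    """Weighted combination of the individual forecasts for one period."""
    return float(np.dot(weights, predictions))

# Toy example: five methods with different error spreads and next forecasts.
rng = np.random.default_rng(0)
errors = rng.normal(0.0, [0.5, 1.0, 1.5, 2.0, 2.5], size=(8, 5)).T
preds = np.array([101.2, 99.8, 103.5, 97.4, 100.9])

w = std_dev_weights(errors)
print("weights:", np.round(w, 3))
print("combined forecast:", round(combined_forecast(preds, w), 2))
```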
Funding: Supported in part by the National Natural Science Foundation of China (51975236), the National Key Research and Development Program of China (2018YFA0703203), and the Innovation Project of Optics Valley Laboratory (OVL2021BG007).
Abstract: The sampling process is very inefficient for sampling-based motion planning algorithms, in which excess random samples are generated in the planning space. In this paper, we propose an adaptive space expansion (ASE) approach, belonging to the informed sampling category, to improve the sampling efficiency for quickly finding a feasible path. The ASE method enlarges the search space gradually and restricts the sampling process to a sequence of small hyper-ellipsoid ring subsets to avoid exploring unnecessary space. Specifically, for a constructed small hyper-ellipsoid ring subset, if the algorithm cannot find a feasible path within it, the subset is expanded. Thus, the ASE method alternates between space exploration and space expansion until the final path has been found. Besides, we present a particular construction of the hyper-ellipsoid ring in which uniform random samples can be generated directly. Finally, we present a feasible motion planner, BiASE, and an asymptotically optimal motion planner, BiASE*, using the bidirectional exploring method and the ASE strategy. Simulations demonstrate that the computation speed is much faster than that of state-of-the-art algorithms. The source code is available at https://github.com/shshlei/ompl.
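The paper's exact hyper-ellipsoid ring construction is not given in the abstract. The sketch below shows one standard way to draw uniform samples from the ring between two concentric, similarly scaled hyper-ellipsoids: a direction uniform on the sphere, a radius drawn with density proportional to r^(d-1) so the spherical shell is uniform, and a linear map (which preserves uniformity) into the ellipsoid. The function name, the map L, and the toy ellipse are illustrative assumptions.

```python
import numpy as np

def sample_ellipsoid_ring(center, L, r_in, r_out, rng):
    """Draw one uniform sample from the ring between two concentric,
    similarly scaled hyper-ellipsoids.

    center: (d,) ring centre.
    L:      (d, d) linear map taking the unit ball to the outer ellipsoid
            (e.g. rotation times axis-length diagonal, as in informed sampling).
    r_in, r_out: inner/outer scale factors, 0 <= r_in < r_out <= 1.
    """
    d = center.size
    # Uniform direction on the unit sphere.
    u = rng.normal(size=d)
    u /= np.linalg.norm(u)
    # Radius with density proportional to r^(d-1): uniform over the shell.
    t = rng.random()
    r = (t * (r_out**d - r_in**d) + r_in**d) ** (1.0 / d)
    # A linear map preserves uniformity, so the ellipsoid ring stays uniform.
    return center + L @ (r * u)

# Toy 2-D example: ellipse with semi-axes (2, 1), ring between 60% and 100%.
rng = np.random.default_rng(1)
L = np.diag([2.0, 1.0])
pts = np.array([sample_ellipsoid_ring(np.zeros(2), L, 0.6, 1.0, rng)
                for _ in range(5)])
print(pts.round(3))
```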
Abstract: The Maximum Likelihood method estimates the parameter values of a statistical model that maximize the corresponding likelihood function, given the sample information. This is the primal approach that, in this paper, is presented as a mathematical programming specification whose solution requires the formulation of a Lagrange problem. A result of this setup is that the Lagrange multipliers associated with the linear statistical model (where sample observations are regarded as a set of constraints) are equal to the vector of residuals scaled by the variance of those residuals. The novel contribution of this paper consists in deriving the dual model of the Maximum Likelihood method under normality assumptions. This model minimizes a function of the variance of the error terms subject to orthogonality conditions between the model residuals and the space of explanatory variables. An intuitive interpretation of the dual problem appeals to basic elements of information theory and an economic interpretation of Lagrange multipliers to establish that the dual maximizes the net value of the sample information. This paper presents the dual ML model for a single regression and provides a numerical example of how to obtain maximum likelihood estimates of the parameters of a linear statistical model using the dual specification.
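As a minimal numerical check of the multiplier and orthogonality results stated above, the sketch below fits a linear model by least squares (which coincides with ML for the coefficients under normality), forms the multipliers as residuals divided by the ML variance (one reading of "scaled by the variance", assumed here), and verifies that the residuals are orthogonal to the explanatory-variable space. The data are simulated for illustration and are not the paper's numerical example.

```python
import numpy as np

# Simulate a small linear model y = X beta + e with normal errors.
rng = np.random.default_rng(2)
n, k = 50, 3
X = np.column_stack([np.ones(n), rng.normal(size=(n, k - 1))])
beta_true = np.array([1.0, 2.0, -0.5])
y = X @ beta_true + rng.normal(scale=0.8, size=n)

# Under normality, ML coefficient estimates coincide with least squares.
beta_hat, *_ = np.linalg.lstsq(X, y, rcond=None)
resid = y - X @ beta_hat
sigma2_ml = resid @ resid / n        # ML variance estimate (divisor n, assumed)

# Multipliers on the sample-observation constraints: residuals over variance.
lam = resid / sigma2_ml

# Dual constraint: residuals orthogonal to the space of explanatory variables.
print("X'e ~ 0:", np.allclose(X.T @ resid, 0.0, atol=1e-8))
print("first multipliers:", lam[:3].round(3))
```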
Abstract: A standard assumption when modelling linked sample data is that the stochastic properties of the linking process and of the process underpinning the population values of the response variable are independent of one another. This is often referred to as non-informative linkage. But what if linkage errors are informative? In this paper, we provide results from two simulation experiments that explore two potential informative linking scenarios. The first is where the choice of sample record to link depends on the response; the second is where the probability of correct linkage depends on the response. We focus on the important and widely applicable problem of estimating domain means given linked data, and provide empirical evidence that while standard domain estimation methods can be substantially biased in the presence of informative linkage errors, an alternative estimation method, based on a Gaussian approximation to a maximum likelihood estimator that allows for non-informative linkage error, performs well.
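The following toy simulation does not reproduce the paper's experimental designs or its Gaussian-approximation estimator; it only illustrates the second scenario under stated assumptions. The probability of correct linkage is made to depend on the response through a logistic curve (an illustrative choice), mislinked records pick up the response of a random other record, and the naive domain means computed from the linked file drift away from the truth.

```python
import numpy as np

rng = np.random.default_rng(3)

# Two domains with different true means.
n = 2000
domain = rng.integers(0, 2, size=n)
y = rng.normal(loc=np.where(domain == 0, 10.0, 20.0), scale=2.0)

# Scenario 2: probability of CORRECT linkage depends on the response.
# Larger y -> more likely to link correctly (illustrative assumption).
p_correct = 1.0 / (1.0 + np.exp(-(y - 15.0) / 3.0))
correct = rng.random(n) < p_correct

# Incorrectly linked records pick up the response of a random other record.
y_linked = y.copy()
bad = ~correct
y_linked[bad] = y[rng.integers(0, n, size=bad.sum())]

# Naive domain means on the linked file versus the truth: the informative
# errors pull the low-response domain toward the overall mean.
for d in (0, 1):
    print(f"domain {d}: true {y[domain == d].mean():.2f}, "
          f"linked {y_linked[domain == d].mean():.2f}")
```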