Abstract: Since the formal introduction of computer experiments in 1989, substantial work has been done to make these experiments as efficient and effective as possible, and as a consequence more and more industrial studies are performed. In this direction, sequential strategies have been introduced with the aim of reducing the experimental effort while keeping the required accuracy of the meta-model. These strategies consist of building a fairly accurate meta-model from a small number of experimental points and then adding new points iteratively according to some criterion, such as improving the accuracy of the meta-model itself or finding the optimal design point in the design space. In this work, a hybrid of these two strategies is used, with the aim of achieving both meta-model accuracy and an optimal design solution while keeping the experimental effort low. The proposed methodology is applied to an industrial case study. The pragmatism of such a hybrid strategy, together with its simplicity of implementation, promotes the generalization of this approach to other industrial experiments.
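As a rough illustration of such a sequential strategy, the sketch below alternates between an accuracy-oriented infill point (largest predictive uncertainty of the meta-model) and an optimization-oriented infill point (best predicted objective). This is a minimal sketch, not the methodology of the paper: the toy simulator, the Gaussian-process settings, and the simple alternation rule are all assumptions.

```python
# Hypothetical sketch of a hybrid sequential design loop: alternate between an
# exploration point (largest predictive uncertainty) and an exploitation point
# (best predicted objective). Not the paper's method; all settings are assumptions.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import Matern

def simulator(x):
    # Toy stand-in for an expensive computer experiment (assumption).
    return np.sin(3 * x[:, 0]) + 0.5 * np.cos(5 * x[:, 1])

rng = np.random.default_rng(0)
X = rng.uniform(0, 1, size=(8, 2))               # small initial design
y = simulator(X)
candidates = rng.uniform(0, 1, size=(2000, 2))   # candidate pool for infill points

for it in range(20):
    gp = GaussianProcessRegressor(kernel=Matern(nu=2.5), normalize_y=True).fit(X, y)
    mean, std = gp.predict(candidates, return_std=True)
    if it % 2 == 0:
        new = candidates[np.argmax(std)]         # accuracy-oriented: reduce uncertainty
    else:
        new = candidates[np.argmin(mean)]        # optimization-oriented: chase the minimum
    X = np.vstack([X, new])
    y = np.append(y, simulator(new[None, :]))

print("best observed value:", y.min())
```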
Abstract: For an expensive-to-evaluate computer simulator, even estimating the overall response surface can be a challenging problem. In this paper, we focus on estimating the inverse solution, i.e., finding the set(s) of input combinations of the simulator that generate a pre-determined simulator output. Ranjan et al. [1] proposed an expected improvement criterion under a sequential design framework for the inverse problem with a scalar-valued simulator. In this paper, we focus on the inverse problem for a time-series-valued simulator. We use a few simulated examples and two real examples for performance comparison.
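For context, the scalar-valued inverse problem that this work builds on can be sketched as follows: sequentially add the candidate point most likely to produce the target output under the current Gaussian-process model. The acquisition used below is a simple probability-of-proximity score, not the expected improvement criterion of Ranjan et al.; the toy simulator, target, and tolerance are assumptions.

```python
# Hypothetical sketch of sequential design for the scalar inverse problem: find
# inputs x with f(x) close to a target value y0. The acquisition is a simple
# uncertainty-weighted proximity score, not the Ranjan et al. criterion.
import numpy as np
from scipy.stats import norm
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import Matern

def simulator(x):
    # Toy scalar-valued simulator (assumption).
    return np.sin(6 * x[:, 0]) * np.cos(4 * x[:, 1])

y0, tol = 0.3, 0.05                              # pre-determined target and tolerance
rng = np.random.default_rng(1)
X = rng.uniform(0, 1, size=(10, 2))
y = simulator(X)
candidates = rng.uniform(0, 1, size=(3000, 2))

for _ in range(15):
    gp = GaussianProcessRegressor(kernel=Matern(nu=2.5), normalize_y=True).fit(X, y)
    mean, std = gp.predict(candidates, return_std=True)
    std = np.maximum(std, 1e-12)                 # guard against zero predictive variance
    # Probability that the (Gaussian) prediction falls within y0 +/- tol.
    prob_near = norm.cdf(y0 + tol, mean, std) - norm.cdf(y0 - tol, mean, std)
    new = candidates[np.argmax(prob_near)]
    X = np.vstack([X, new])
    y = np.append(y, simulator(new[None, :]))

# Evaluated points whose output lies near the target approximate the inverse set.
close = np.abs(y - y0) < tol
print("recovered inverse-solution points:\n", X[close])
```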
Funding: Supported by the NNSF of China (10301015), the Tianjin Planning Program of Philosophy and Social Science of China (TJ05-TJ002), and the China Postdoctoral Science Foundation (20060390169).
Abstract: Orthogonal array-based uniform Latin hypercube designs (uniform OALHDs) are the class of orthogonal array-based Latin hypercube designs with the best uniformity. In this paper, we provide a less computationally intensive algorithm, based on Bundschuh and Zhu (1993), for constructing uniform OALHDs in 2-dimensional space, and some uniform OALHDs are constructed using our method.
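For background, the sketch below shows the standard way an orthogonal array is turned into a Latin hypercube design (stratified permutation within each column), with a uniform design then chosen by centered L2-discrepancy among random candidates. This only illustrates what a uniform OALHD is; it is not the construction algorithm of the paper, and the small OA(9, 2, 3, 2) example and random search are assumptions.

```python
# Hypothetical sketch: build OA-based Latin hypercube designs and keep the most
# uniform one by centered L2-discrepancy. Illustrative only, not the paper's algorithm.
import itertools
import numpy as np

def oa_based_lhd(oa, s, rng):
    """Turn an OA with levels 0..s-1 into a Latin hypercube design in (0, 1)^d."""
    n, d = oa.shape
    lhd = np.zeros((n, d))
    for k in range(d):
        for level in range(s):
            rows = np.where(oa[:, k] == level)[0]
            # Within each level stratum, assign a random permutation of its slots.
            slots = level * (n // s) + rng.permutation(n // s)
            lhd[rows, k] = slots
    return (lhd + 0.5) / n                       # center points of the n cells

def centered_l2_discrepancy(x):
    """Centered L2-discrepancy (Hickernell); smaller means more uniform."""
    n, d = x.shape
    z = np.abs(x - 0.5)
    term1 = (13.0 / 12.0) ** d
    term2 = (2.0 / n) * np.sum(np.prod(1 + 0.5 * z - 0.5 * z ** 2, axis=1))
    diff = np.abs(x[:, None, :] - x[None, :, :])
    prod = np.prod(1 + 0.5 * z[:, None, :] + 0.5 * z[None, :, :] - 0.5 * diff, axis=2)
    return np.sqrt(term1 - term2 + np.sum(prod) / n ** 2)

# OA(9, 2, 3, 2): the full 3 x 3 factorial written as 9 runs in 2 columns.
oa = np.array(list(itertools.product(range(3), repeat=2)))
rng = np.random.default_rng(0)
best = min((oa_based_lhd(oa, 3, rng) for _ in range(200)), key=centered_l2_discrepancy)
print("best centered L2-discrepancy:", centered_l2_discrepancy(best))
```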
Funding: Supported by the National Natural Science Foundation of China (72101066, 72131005, 72121001, 72171062, 91846301, and 71772053), the Heilongjiang Natural Science Excellent Youth Fund (YQ2022G004), and the Key Research and Development Projects of Heilongjiang Province (JD22A003).
Abstract: Numerical weather prediction (NWP) data possess internal inaccuracies, such as low NWP wind speed corresponding to high actual wind power generation. This study is intended to reduce the negative effects of such inaccuracies by proposing a pure data-selection framework (PDF) to choose useful data prior to modeling, thus improving the accuracy of day-ahead wind power forecasting. Briefly, we convert an entire NWP training dataset into many small subsets and then select the best subset combination via a validation set to build a forecasting model. Although small subsets increase selection flexibility, they can also produce billions of subset combinations, resulting in computational issues. To address this problem, we incorporated metamodeling and optimization steps into PDF. We then proposed a design-and-analysis-of-computer-experiments-based metamodeling algorithm and a heuristic-exhaustive search optimization algorithm, respectively. Experimental results demonstrate that (1) it is necessary to select data before constructing a forecasting model; (2) using a smaller subset will likely increase selection flexibility, leading to a more accurate forecasting model; (3) PDF can generate a better training dataset than similarity-based data selection methods (e.g., K-means and support vector classification); and (4) choosing data before building a forecasting model produces a more accurate forecasting model compared with using a machine learning method to construct a model directly.
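The core idea, selecting a combination of small training subsets by validation error before fitting the forecasting model, can be illustrated with a toy sketch. The greedy forward selection and the ridge forecasting model below are stand-ins chosen for brevity; they are not the paper's metamodeling or heuristic-exhaustive search algorithms, and the synthetic data merely mimics NWP records with some corrupted entries.

```python
# Hypothetical sketch of data selection before modeling: split the training set
# into small subsets, then greedily pick the subset combination that minimizes
# validation error of a simple forecasting model. Illustrative assumptions only.
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.metrics import mean_squared_error

rng = np.random.default_rng(0)
w = rng.normal(size=5)                               # "true" feature-to-power relation
X_train = rng.normal(size=(600, 5))                  # stand-in NWP features
y_train = X_train @ w + rng.normal(scale=0.3, size=600)
bad = rng.choice(600, size=150, replace=False)       # simulate inaccurate NWP records
y_train[bad] += rng.normal(scale=3.0, size=150)
X_val = rng.normal(size=(200, 5))
y_val = X_val @ w + rng.normal(scale=0.3, size=200)

subsets = np.array_split(np.arange(len(X_train)), 30)  # 30 small subsets

selected, best_err = [], np.inf
improved = True
while improved:                                      # greedy forward subset selection
    improved = False
    for s in range(len(subsets)):
        if s in selected:
            continue
        idx = np.concatenate([subsets[j] for j in selected + [s]])
        model = Ridge().fit(X_train[idx], y_train[idx])
        err = mean_squared_error(y_val, model.predict(X_val))
        if err < best_err:
            best_err, best_s, improved = err, s, True
    if improved:
        selected.append(best_s)

print("selected subsets:", sorted(selected), "validation MSE:", round(best_err, 4))
```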