High Frequency (HF) radar current data are assimilated into a shelf sea circulation model using an optimal interpolation (OI) method. The purpose of this work is to develop a real-time, computationally efficient assimilation method to improve forecasts of shelf currents. Since the true state of the ocean is not known, specifying the background error covariance is difficult. Usually it is assumed, or calculated from an ensemble of model states, and kept constant. In our method, the spatial covariances of model forecast errors are derived from differences between adjacent model forecast fields, which serve as forecast tendencies. The assumption behind this is that forecast errors resemble forecast tendencies, since variances are large when fields change quickly and small when they change slowly. Assimilating the HF radar data is found to yield useful information for the analyses; after assimilation, the root-mean-square error of the model decreases significantly. In addition, three assimilation runs with varying observation density are carried out. Their comparison indicates that the spatial pattern described by the observations matters much more than the number of observations: it is more useful to expand the coverage of the observations than to reduce their spacing. In our tests, the observation spacing can be five times larger than that of the model grid.
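The tendency-based covariance idea above can be sketched in a few lines. This is a hypothetical illustration, not the paper's implementation: the function names, array shapes, and the use of a plain sample covariance over field-to-field differences are all assumptions, and the analysis step is simply the standard OI/BLUE update x_a = x_b + BH^T(HBH^T + R)^{-1}(y - Hx_b).

```python
import numpy as np

def tendency_covariance(forecasts):
    """Estimate the background error covariance B from forecast tendencies.

    forecasts: array (n_times, n_state) of successive model forecast fields.
    The sample covariance of adjacent-field differences stands in for the
    (unknown) forecast error covariance, per the tendency assumption.
    """
    tendencies = np.diff(forecasts, axis=0)   # differences of adjacent fields
    return np.cov(tendencies, rowvar=False)   # (n_state, n_state) matrix

def oi_analysis(x_b, B, H, y, R):
    """Optimal-interpolation update of background x_b with observations y."""
    innovation = y - H @ x_b                  # observation-minus-background
    S = H @ B @ H.T + R                       # innovation covariance
    K = B @ H.T @ np.linalg.inv(S)            # gain K = B H^T S^{-1}
    return x_b + K @ innovation               # analysis state
```

With a small observation error R relative to B, the analysis is pulled strongly toward the HF radar observations at the observed points, while B spreads the correction to unobserved grid points.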
Multiple determination tasks of chemical properties are a classical problem in analytical chemistry. The main difficulty is to find the subset of variables that best represents the compounds. These variables are measured by a spectrophotometer, a device that records hundreds of correlated variables related to physicochemical properties, which can be used to estimate the component of interest. The problem is the selection of a subset of informative and uncorrelated variables that minimizes the prediction error. Classical algorithms select a separate subset of variables for each compound considered. In this work we propose the use of SPEA-II (strength Pareto evolutionary algorithm II). We aim to show that this variable selection algorithm can select a single subset usable for multiple determinations with multiple linear regression. As a case study we use wheat data obtained by NIR (near-infrared) spectroscopy, where the objective is to determine a variable subgroup carrying information about protein content (%), test weight (kg/hl), WKT (wheat kernel texture) (%), and farinograph water absorption (%). Results from traditional multivariate calibration techniques, namely SPA (successive projections algorithm), PLS (partial least squares), and a mono-objective genetic algorithm, are presented for comparison. For NIR spectral analysis of protein concentration in wheat, the number of selected variables was reduced from 775 spectral variables to just 10 by the SPEA-II algorithm, and the prediction error decreased from 0.2 with the classical methods to 0.09 with the proposed approach. The model using variables selected by SPEA-II had better prediction performance than the classical algorithms and full-spectrum partial least squares.
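The core of the shared-subset idea can be sketched as follows. This is a hypothetical illustration, with all names and shapes assumed rather than taken from the paper: it shows only how one candidate wavelength subset is scored against several properties at once with multiple linear regression, yielding one error per property. A SPEA-II loop would evolve such subsets using these errors as its multi-objective fitness.

```python
import numpy as np

def rmsep(X, y):
    """Root-mean-square error of a linear fit (with intercept) by least squares."""
    A = np.column_stack([np.ones(len(X)), X])       # add intercept column
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)    # multiple linear regression
    return float(np.sqrt(np.mean((A @ coef - y) ** 2)))

def subset_objectives(spectra, properties, subset):
    """One error per property, all computed from the SAME variable subset.

    spectra:    (n_samples, n_wavelengths) NIR measurements
    properties: (n_samples, n_properties) reference values (protein, etc.)
    subset:     indices of the selected spectral variables
    """
    X = spectra[:, subset]
    return [rmsep(X, properties[:, j]) for j in range(properties.shape[1])]
```

Because every property is regressed on the same selected columns, minimizing this vector of errors (in the Pareto sense) yields a single subset suited to all determinations, unlike classical per-compound selection.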
The application of support vector machines to forecasting problems has become popular lately. Several comparisons between neural networks trained with error backpropagation and support vector machines have shown an advantage for the latter in different application domains. However, some difficulties still degrade the performance of support vector machines; the main one is the setting of the hyperparameters involved in their training. Techniques based on meta-heuristics have been employed to determine appropriate values for those hyperparameters. However, because of the high nonconvexity of this estimation problem, which makes the search for a good solution very hard, an approach based on Bayesian inference, called the relevance vector machine, has been proposed more recently. The present paper investigates the suitability of this new approach to the short-term load forecasting problem.
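The sparse Bayesian learning idea behind the relevance vector machine can be sketched as follows. This is a minimal illustration under assumed settings, not the paper's model: the function names, the fixed noise precision beta, and the pruning threshold are all assumptions. Each basis function gets its own prior precision alpha_i; type-II maximum likelihood re-estimation drives most alphas toward infinity, pruning the corresponding basis functions so only a few "relevance vectors" remain, with no nonconvex hyperparameter search.

```python
import numpy as np

def rvm_fit(Phi, y, n_iter=50, beta=100.0, prune=1e6):
    """Sparse Bayesian linear regression in the RVM style.

    Phi: (n, m) design matrix (e.g. kernel evaluations); y: (n,) targets.
    Returns the posterior mean weights and the indices of surviving columns.
    """
    alpha = np.ones(Phi.shape[1])                        # one precision per basis
    keep = np.arange(Phi.shape[1])
    for _ in range(n_iter):
        Sigma = np.linalg.inv(beta * Phi.T @ Phi + np.diag(alpha))
        mu = beta * Sigma @ Phi.T @ y                    # posterior mean weights
        gamma = 1.0 - alpha * np.diag(Sigma)             # "well-determinedness"
        alpha = gamma / (mu ** 2 + 1e-12)                # type-II ML re-estimate
        mask = alpha < prune                             # drop huge precisions
        Phi, alpha, keep = Phi[:, mask], alpha[mask], keep[mask]
    Sigma = np.linalg.inv(beta * Phi.T @ Phi + np.diag(alpha))
    mu = beta * Sigma @ Phi.T @ y
    return mu, keep
```

For short-term load forecasting, Phi would hold lagged loads or kernel features; the pruning leaves a compact model whose sparsity falls out of the Bayesian inference rather than a meta-heuristic search.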
Funding: supported by the State Oceanic Administration Young Marine Science Foundation (No. 2013201), the Shandong Provincial Key Laboratory of Marine Ecology and Environment & Disaster Prevention and Mitigation Foundation (No. 2012007), the Marine Public Foundation (No. 201005018), and the North China Sea Branch Scientific Foundation (No. 2014B10).