Historically, landslides have been the primary type of geological disaster worldwide. Generally, the stability of reservoir banks is primarily affected by rainfall and reservoir water level fluctuations. Moreover, the stability of reservoir banks changes with the long-term dynamics of external disaster-causing factors. Thus, assessing the time-varying reliability of reservoir landslides remains a challenge. In this paper, a machine learning (ML) based approach is proposed to analyze the long-term reliability of reservoir bank landslides in spatially variable soils through time series prediction. This study systematically investigated the prediction performances of three ML algorithms, i.e. multilayer perceptron (MLP), convolutional neural network (CNN), and long short-term memory (LSTM). Additionally, the effects of the data quantity and data ratio on the predictive power of deep learning models are considered. The results show that all three ML models can accurately depict the changes in the time-varying failure probability of reservoir landslides. The CNN model outperforms both the MLP and LSTM models in predicting the failure probability. Furthermore, selecting the right data ratio can improve the prediction accuracy of the failure probability obtained by ML models.
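As an illustration of the workflow described above (not the authors' implementation), the time-varying failure probability can be estimated per time step from Monte Carlo factor-of-safety samples and then framed as a supervised time-series problem for an MLP, CNN or LSTM. The window length, sample counts and distributions below are purely hypothetical:

```python
import numpy as np

def failure_probability(fs_samples: np.ndarray) -> float:
    """Monte Carlo estimate of the failure probability: the fraction of
    factor-of-safety samples (one per random soil realization) below 1.0."""
    return float(np.mean(fs_samples < 1.0))

def sliding_windows(series: np.ndarray, lag: int):
    """Turn a failure-probability time series into (window, next value) pairs so
    any regressor (MLP, CNN, LSTM) can be trained to predict the next step."""
    X = np.array([series[i:i + lag] for i in range(len(series) - lag)])
    y = series[lag:]
    return X, y

# Hypothetical example: 60 monthly batches of 1000 factor-of-safety realizations
rng = np.random.default_rng(0)
fs_by_month = rng.normal(loc=1.2, scale=0.15, size=(60, 1000))
pf_series = np.array([failure_probability(month) for month in fs_by_month])
X, y = sliding_windows(pf_series, lag=12)
print(X.shape, y.shape)  # (48, 12) (48,)
```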
Accurate forecasting of time series is crucial across various domains. Many prediction tasks rely on effectively segmenting, matching, and aligning time series data. For instance, even for time series with the same granularity, segmenting them into events of different granularity can effectively mitigate the impact of varying time scales on prediction accuracy. However, these events of varying granularity frequently intersect with each other and may have unequal durations. Even minor differences can result in significant errors when matching time series with future trends. Moreover, directly using matched but unaligned events as state vectors in machine learning-based prediction models can lead to insufficient prediction accuracy. Therefore, this paper proposes a short-term forecasting method for time series based on multi-granularity events, MGE-SP (multi-granularity event-based short-term prediction). First, a methodological framework for MGE-SP is established to guide the implementation steps. The framework consists of three key steps: multi-granularity event matching based on the LTF (latest time first) strategy, multi-granularity event alignment using piecewise aggregate approximation based on the compression ratio, and a short-term prediction model based on XGBoost. Data from a nationwide online car-hailing service in China is used to verify the method's reliability. The average RMSE (root mean square error) and MAE (mean absolute error) of the proposed method are 3.204 and 2.360, lower than the respective values of 4.056 and 3.101 obtained using the ARIMA (autoregressive integrated moving average) method, as well as the values of 4.278 and 2.994 obtained using the k-means-SVR (support vector regression) method. Another experiment is conducted on stock data from a public data set. The proposed method achieved an average RMSE and MAE of 0.836 and 0.696, lower than the respective values of 1.019 and 0.844 obtained using the ARIMA method, as well as the values of 1.350 and 1.172 obtained using the k-means-SVR method.
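The alignment step in this framework uses piecewise aggregate approximation (PAA). A minimal sketch of how PAA maps events of unequal duration onto a common length is shown below; the segment count and the example events are assumptions for illustration, not values from the paper:

```python
import numpy as np

def paa(series: np.ndarray, n_segments: int) -> np.ndarray:
    """Piecewise aggregate approximation: split the series into n_segments
    roughly equal pieces and represent each piece by its mean."""
    return np.array([chunk.mean() for chunk in np.array_split(series, n_segments)])

# Two events of unequal duration, aligned to a common length of 8 segments
event_a = np.sin(np.linspace(0, 3, 50))   # 50 samples
event_b = np.sin(np.linspace(0, 3, 73))   # 73 samples
aligned_a, aligned_b = paa(event_a, 8), paa(event_b, 8)
print(aligned_a.shape, aligned_b.shape)   # (8,) (8,)
```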
Prediction for multivariate time series (MTS) explores the interrelationships among variables at historical moments, extracts their relevant characteristics, and is widely used in finance, weather, complex industries and other fields. Furthermore, it is important for constructing digital twin systems. However, existing methods do not take full advantage of the potential properties of variables, which results in poor prediction accuracy. In this paper, we propose the Adaptive Fused Spatial-Temporal Graph Convolutional Network (AFSTGCN). First, to address the problem of the unknown spatial-temporal structure, we construct the Adaptive Fused Spatial-Temporal Graph (AFSTG) layer. Specifically, we fuse the spatial-temporal graph based on the interrelationship of spatial graphs. Simultaneously, we construct the adaptive adjacency matrix of the spatial-temporal graph using node embedding methods. Subsequently, to overcome the insufficient extraction of disordered correlation features, we construct the Adaptive Fused Spatial-Temporal Graph Convolutional (AFSTGC) module. The module forces the reordering of disordered temporal, spatial and spatial-temporal dependencies into rule-like data. AFSTGCN dynamically and synchronously acquires potential temporal, spatial and spatial-temporal correlations, thereby fully extracting rich hierarchical feature information to enhance prediction accuracy. Experiments on different types of MTS datasets demonstrate that the model achieves state-of-the-art single-step and multi-step performance compared with eight other deep learning models.
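The paper's exact formulation of the adaptive adjacency matrix is not reproduced here, but a common node-embedding construction (assumed for illustration only) computes a rectified, row-normalized similarity between two learnable embedding tables:

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

# Hypothetical sizes: 10 variables (graph nodes), embedding dimension 16
rng = np.random.default_rng(1)
E1 = rng.normal(size=(10, 16))   # learnable source-node embeddings
E2 = rng.normal(size=(10, 16))   # learnable target-node embeddings

# Adaptive adjacency: similarity of node embeddings, rectified and row-normalized
A_adaptive = softmax(np.maximum(E1 @ E2.T, 0.0), axis=1)
print(A_adaptive.shape, A_adaptive.sum(axis=1))  # (10, 10), rows sum to 1
```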
Financial time series prediction, whether for classification or regression, has been a heated research topic over the last decade. While traditional machine learning algorithms have achieved mediocre results, deep learning has largely contributed to the elevation of prediction performance. Currently, an up-to-date review of advanced machine learning techniques for financial time series prediction is still lacking, making it challenging for finance domain experts and relevant practitioners to determine which model potentially performs better, what techniques and components are involved, and how the model can be designed and implemented. This review article provides an overview of techniques, components and frameworks for financial time series prediction, with an emphasis on state-of-the-art deep learning models in the literature from 2015 to 2023, including stand-alone models such as convolutional neural networks (CNN), which are capable of extracting spatial dependencies within data, and long short-term memory (LSTM), which is designed for handling temporal dependencies, as well as hybrid models integrating CNN, LSTM, the attention mechanism (AM) and other techniques. For illustration and comparison purposes, models proposed in recent studies are mapped to relevant elements of a generalized framework comprised of input, output, feature extraction, prediction, and related processes. Among the state-of-the-art models, hybrid models such as CNN-LSTM and CNN-LSTM-AM have in general been reported to outperform stand-alone models such as the CNN-only model. Some remaining challenges are discussed, including non-friendliness for finance domain experts, delayed prediction, domain knowledge negligence, lack of standards, and inability to make real-time and high-frequency predictions. The principal contributions of this paper are to provide a one-stop guide for both academia and industry to review, compare and summarize technologies and recent advances in this area, to facilitate smooth and informed implementation, and to highlight future research directions.
Ocean temperature is an important physical variable in marine ecosystems, and ocean temperature prediction is an important research objective in ocean-related fields. Currently, one of the commonly used approaches to ocean temperature prediction is data-driven, but research on this approach is mostly limited to the sea surface, with few studies on the prediction of internal ocean temperature. Existing graph neural network-based methods usually use predefined graphs or learned static graphs, which cannot capture the dynamic associations among data. In this study, we propose a novel dynamic spatiotemporal graph neural network (DSTGN) to predict three-dimensional ocean temperature (3D-OT), which combines static graph learning and dynamic graph learning to automatically mine two unknown dependencies between sequences based on the original 3D-OT data without prior knowledge. Temporal and spatial dependencies in the time series are then captured using temporal and graph convolutions. We also integrate dynamic graph learning, static graph learning, graph convolution, and temporal convolution into an end-to-end framework for 3D-OT prediction using time-series grid data. In this study, we conducted prediction experiments using high-resolution 3D-OT from the Copernicus global ocean physical reanalysis, with data covering the vertical variation of temperature from the sea surface to 1000 m below the sea surface. We compared five mainstream models that are commonly used for ocean temperature prediction, and the results show that the proposed method achieves the best prediction results at all prediction scales.
Due to the increasingly severe challenges brought by various epidemic diseases, people urgently need intelligent outbreak trend prediction. Predicting disease onset is very important to assist decision-making. Most existing work fails to make full use of the temporal and spatial characteristics of epidemics, and also relies on multivariate data for prediction. In this paper, we propose Multi-Scale Location Attention Graph Neural Networks (MSLAGNN), built on a large sequence source data set of patient electronic medical records from the Centers for Disease Control and Prevention (CDC). In order to understand the geography and timeliness of infectious diseases, specific neural networks are used to extract these characteristics. In the model framework, the features of different periods are extracted by a multi-scale convolution module. At the same time, the propagation effects between regions are simulated by graph convolution and attention mechanisms. We compare the proposed method with the most advanced statistical methods and deep learning models. Meanwhile, we conduct comparative experiments on data sets of different time lengths to observe the prediction performance of the model under different degrees of data collection. We conduct extensive experiments on real-world epidemic-related data sets. The method has strong prediction performance and can be readily used for epidemic prediction.
Target maneuver trajectory prediction is an important prerequisite for air combat situation awareness and maneuver decision-making. However, how to use the large amount of trajectory data generated by air combat confrontation training to achieve real-time and accurate prediction of target maneuver trajectories is an urgent problem to be solved. To solve this problem, this paper proposes a hybrid algorithm based on transfer learning, online learning, ensemble learning, regularization technology, a target maneuvering segmentation point recognition algorithm, and the Volterra series, abbreviated as AERTrOS-Volterra. Firstly, the model makes full use of the large number of trajectory samples generated by air combat confrontation training and constructs a Tr-Volterra algorithm framework suitable for air combat target maneuver trajectory prediction, which realizes the extraction of effective information from historical trajectory data. Secondly, in order to improve the real-time online prediction accuracy and robustness of the prediction model in complex electromagnetic environments, a robust regularized online sequential Volterra prediction model is proposed on the basis of the Tr-Volterra framework by integrating an online learning method, regularization technology, and an inverse weighting calculation method based on the prior error. Finally, inspired by the preferable performance of model ensembles, an ensemble learning scheme is also incorporated into the proposed algorithm, which adaptively updates the ensemble prediction model according to the performance of the models on real-time samples and the recognition results of target maneuvering segmentation points, including the adaptation of model weights, the adaptation of parameters, and the dynamic inclusion and removal of models. Compared with many existing time series prediction methods, the newly proposed target maneuver trajectory prediction algorithm can fully mine the prior knowledge contained in historical data to assist the current prediction. The rationality and effectiveness of the proposed algorithm are verified by simulations on three sets of chaotic time series data sets and a set of real target maneuver trajectory data sets.
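At the core of the framework is the Volterra series, which expands the next sample as a sum of linear and quadratic (and higher-order) terms of past samples. A minimal second-order, finite-memory Volterra predictor fitted by ordinary least squares is sketched below; the memory length and test signal are assumptions, and the paper's transfer, online and ensemble components are not reproduced:

```python
import numpy as np
from itertools import combinations_with_replacement

def volterra2_features(x, memory):
    """Feature vector of a 2nd-order Volterra model with finite memory:
    [1, x(n-1..n-m), all products x(n-i)*x(n-j) for i<=j]."""
    lin = x[::-1]  # most recent sample first
    quad = [lin[i] * lin[j] for i, j in combinations_with_replacement(range(memory), 2)]
    return np.concatenate(([1.0], lin, quad))

memory = 4
rng = np.random.default_rng(2)
series = np.sin(0.3 * np.arange(300)) + 0.05 * rng.normal(size=300)

# Regression problem: predict series[n] from the previous `memory` samples
X = np.array([volterra2_features(series[n - memory:n], memory)
              for n in range(memory, len(series))])
y = series[memory:]
kernels, *_ = np.linalg.lstsq(X, y, rcond=None)   # h0, h1(i), h2(i,j) stacked

one_step = volterra2_features(series[-memory:], memory) @ kernels
print("one-step prediction:", one_step)
```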
The price prediction task is a well-studied problem due to its impact on the business domain. Several research studies have been conducted to predict the future price of items by capturing the patterns of price change, but there is very limited work on the price prediction of seasonal goods (e.g., Christmas gifts). Seasonal items' prices have different patterns than normal items; this can be linked to the offers and discounted prices of seasonal items. This lack of research motivates the current work to investigate the problem of seasonal items' prices as a time series task. We propose utilizing two different approaches to address this problem, namely, 1) machine learning (ML)-based models and 2) deep learning (DL)-based models. Thus, this research tuned a set of well-known predictive models on a real-life dataset. Those models are ensemble learning-based models, random forest, Ridge, Lasso, and linear regression. Moreover, two new DL architectures based on the gated recurrent unit (GRU) and long short-term memory (LSTM) models are proposed. Then, the performance of the utilized ensemble learning and classic ML models is compared against the proposed two DL architectures on different accuracy metrics, where the evaluation includes both numerical and visual comparisons of the examined models. The obtained results show that the ensemble learning models outperformed the classic machine learning-based models (e.g., linear regression and random forest) and the DL-based models.
Methods to determine the time delays and embedding dimensions in the phase space delay reconstruction of multivariate chaotic time series are proposed. Three nonlinear prediction methods for multivariate chaotic time series, including local mean prediction, local linear prediction and BP neural network prediction, are considered. The simulation results obtained with the Lorenz system show that no matter which nonlinear prediction method is used, the prediction error of multivariate chaotic time series is much smaller than that of univariate time series, even if only half of the data of the univariate time series are used in the multivariate time series. The results also verify that the methods to determine the time delays and the embedding dimensions are correct from the viewpoint of minimizing the prediction error.
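A hedged sketch of the simplest of the three predictors, local mean prediction in a delay-reconstructed phase space, is given below for the univariate case; the embedding dimension, delay and neighbour count are illustrative choices, not the values derived in the paper:

```python
import numpy as np

def delay_embed(x, dim, tau):
    """Delay vectors [x(i), x(i+tau), ..., x(i+(dim-1)*tau)] for each start index i."""
    n = len(x) - (dim - 1) * tau
    return np.array([x[i:i + (dim - 1) * tau + 1:tau] for i in range(n)])

def local_mean_predict(x, dim=3, tau=2, k=5):
    """Predict the next value as the mean of the one-step futures of the k
    phase-space neighbours of the latest reconstructed state."""
    states = delay_embed(x, dim, tau)
    query, history = states[-1], states[:-1]       # the last state has no known future
    dists = np.linalg.norm(history - query, axis=1)
    neighbours = np.argsort(dists)[:k]              # indices into `history`
    futures = x[neighbours + (dim - 1) * tau + 1]   # value one step after each neighbour
    return float(futures.mean())

rng = np.random.default_rng(3)
x = np.sin(0.2 * np.arange(500)) + 0.02 * rng.normal(size=500)
print(local_mean_predict(x))
```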
Landslides are destructive natural disasters that cause catastrophic damage and loss of life worldwide. Accurately predicting landslide displacement enables effective early warning and risk management. However, the limited availability of on-site measurement data has been a substantial obstacle in developing data-driven models, such as state-of-the-art machine learning (ML) models. To address these challenges, this study proposes a data augmentation framework that uses generative adversarial networks (GANs), a recent advance in generative artificial intelligence (AI), to improve the accuracy of landslide displacement prediction. The framework provides effective data augmentation to enhance limited datasets. A recurrent GAN model, RGAN-LS, is proposed, specifically designed to generate realistic synthetic multivariate time series that mimic the characteristics of real landslide on-site measurement data. A customized moment-matching loss is incorporated in addition to the adversarial loss during the training of RGAN-LS to capture the temporal dynamics and correlations in real time series data. The synthetic data generated by RGAN-LS is then used to enhance the training of long short-term memory (LSTM) networks and particle swarm optimization-support vector machine (PSO-SVM) models for landslide displacement prediction tasks. Results on two landslides in the Three Gorges Reservoir (TGR) region show a significant improvement in LSTM model prediction performance when trained on augmented data. For instance, in the case of the Baishuihe landslide, the average root mean square error (RMSE) improves by 16.11% and the mean absolute error (MAE) by 17.59%. More importantly, the model's responsiveness during mutational stages is enhanced for early warning purposes. However, the results show that the static PSO-SVM model sees only marginal gains compared to recurrent models such as LSTM. Further analysis indicates that an optimal synthetic-to-real data ratio (50% in the illustrative cases) maximizes the improvement. This also demonstrates the robustness and effectiveness of supplementing training data for dynamic models to obtain better results. By using a powerful generative AI approach, RGAN-LS can generate high-fidelity synthetic landslide data. This is critical for improving the performance of advanced ML models in predicting landslide displacement, particularly when training data are limited. Additionally, this approach has the potential to expand the use of generative AI in geohazard risk management and other research areas.
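The exact moment-matching loss used in RGAN-LS is not given in the abstract; a common formulation (assumed here) penalizes mismatches of per-feature means and standard deviations between real and synthetic batches. The sketch below also shows the 50% synthetic-to-real mixing discussed above; array shapes are hypothetical:

```python
import numpy as np

def moment_matching_loss(real: np.ndarray, synthetic: np.ndarray) -> float:
    """Penalize mismatch of per-feature means and standard deviations between a
    batch of real series and a batch of synthetic series.
    Shapes: (batch, time steps, features)."""
    mean_gap = np.mean((real.mean(axis=(0, 1)) - synthetic.mean(axis=(0, 1))) ** 2)
    std_gap = np.mean((real.std(axis=(0, 1)) - synthetic.std(axis=(0, 1))) ** 2)
    return float(mean_gap + std_gap)

def mix_training_set(real: np.ndarray, synthetic: np.ndarray, ratio: float = 0.5):
    """Append synthetic samples amounting to `ratio` of the real sample count,
    e.g. ratio=0.5 reproduces the 50% synthetic-to-real setting discussed above."""
    n_syn = int(ratio * len(real))
    return np.concatenate([real, synthetic[:n_syn]], axis=0)

rng = np.random.default_rng(4)
real = rng.normal(size=(64, 30, 3))   # 64 real windows, 30 steps, 3 monitored variables
fake = rng.normal(size=(64, 30, 3))
print(moment_matching_loss(real, fake), mix_training_set(real, fake, 0.5).shape)
```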
Time series forecasting plays an important role in various fields, such as energy, finance, transport, and weather. Temporal convolutional networks (TCNs) based on dilated causal convolution have been widely used in time series forecasting. However, two problems weaken the performance of TCNs. One is that in dilated causal convolution, causal convolution leads to the receptive fields of outputs being concentrated in the earlier part of the input sequence, so recent input information is severely lost. The other is that the distribution shift problem in time series has not been adequately solved. To address the first problem, we propose a subsequence-based dilated convolution method (SDC). By using multiple convolutional filters to convolve elements of neighboring subsequences, the method extracts temporal features from a growing receptive field via a growing subsequence rather than a single element. Ultimately, the receptive field of each output element can cover the whole input sequence. To address the second problem, we propose a difference and compensation method (DCM). The method reduces the discrepancies between and within the input sequences by difference operations and then compensates the outputs for the information lost due to the difference operations. Based on SDC and DCM, we further construct a temporal subsequence-based convolutional network with difference (TSCND) for time series forecasting. The experimental results show that TSCND can reduce prediction mean squared error by 7.3% and save runtime, compared with state-of-the-art models and vanilla TCN.
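The network-level details of DCM are not reproduced here, but the basic difference/compensation round trip it relies on can be sketched as follows (the averaging "model" applied to the differenced series is only a placeholder for the actual network):

```python
import numpy as np

def difference(x: np.ndarray) -> np.ndarray:
    """First-order differencing: removes the level of the series so the model
    sees increments rather than absolute values (reducing distribution shift)."""
    return np.diff(x)

def compensate(diff_prediction: float, last_observed: float) -> float:
    """Add the last observed level back so the forecast is on the original scale."""
    return last_observed + diff_prediction

series = np.array([10.0, 10.4, 11.1, 11.0, 11.8, 12.3])
increments = difference(series)                 # [0.4, 0.7, -0.1, 0.8, 0.5]
naive_diff_forecast = increments[-3:].mean()    # stand-in for any model's output
print(compensate(naive_diff_forecast, series[-1]))  # forecast on the original scale
```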
In order to improve the accuracy of performance degradation prediction for proton exchange membrane fuel cells (PEMFC), a fusion prediction method (CKDG) based on complete ensemble empirical mode decomposition with adaptive noise (CEEMDAN), kernel principal component analysis (KPCA) and a dual attention mechanism gated recurrent unit neural network (DA-GRU) is proposed. CEEMDAN and KPCA are used to extract the input feature data sequence, reduce the influence of random factors, and capture essential feature components to reduce model complexity. The DA-GRU network helps to learn the feature mapping relationship of data in long time series and predict the changing trend of performance degradation data more accurately. Actual aging experimental data verify the performance of the CKDG method. The results show that, when predicting from 20% training data under the steady-state condition, the CKDG method reduces the root mean square error (RMSE) by 52.7% and 34.6%, respectively, compared with the traditional LSTM and GRU neural networks. Compared with the simple DA-GRU network, RMSE is reduced by 15% and the degree of over-fitting is reduced, giving higher accuracy. The method also shows excellent prediction performance on the dynamic-condition data set and has good universality.
Long-term urban traffic flow prediction is an important task in the field of intelligent transportation, as it can help optimize traffic management and improve travel efficiency. To improve prediction accuracy, a crucial issue is how to model spatiotemporal dependency in urban traffic data. In recent years, many studies have adopted spatiotemporal neural networks to extract key information from traffic data. However, most models ignore the semantic spatial similarity between long-distance areas when mining spatial dependency. They also ignore the impact of predicted time steps on the next unpredicted time step when making long-term predictions. Moreover, these models lack a comprehensive data embedding process to represent complex spatiotemporal dependency. This paper proposes a multi-scale persistent spatiotemporal transformer (MSPSTT) model to perform accurate long-term traffic flow prediction in cities. To address these issues, MSPSTT adopts an encoder-decoder structure and incorporates temporal, periodic, and spatial features to fully embed urban traffic data. The model consists of a spatiotemporal encoder and a spatiotemporal decoder, which rely on temporal, geospatial, and semantic space multi-head attention modules to dynamically extract temporal, geospatial, and semantic characteristics. The spatiotemporal decoder combines the context information provided by the encoder, integrates the predicted time step information, and is iteratively updated to learn the correlation between different time steps over a broader time range to improve the model's accuracy for long-term prediction. Experiments on four public transportation datasets demonstrate that MSPSTT outperforms existing models by up to 9.5% on three common metrics.
The growing global requirement for food and the need for sustainable farming in an era of a changing climate and scarce resources have inspired substantial crop yield prediction research. Deep learning (DL) and machine learning (ML) models effectively deal with such challenges. This research paper comprehensively analyses recent advancements in crop yield prediction from January 2016 to March 2024. In addition, it analyses the effectiveness of various input parameters considered in crop yield prediction models. We conducted an in-depth search and gathered studies that employed crop modeling and AI-based methods to predict crop yield. The total number of articles reviewed for crop yield prediction using ML, meta-modeling (crop models coupled with ML/DL), and DL-based prediction models and input parameter selection is 125. We conduct the research by setting up five objectives and discussing them after analyzing the selected research papers. Each study is assessed based on the crop type, the input parameters employed for prediction, the modeling techniques adopted, and the evaluation metrics used for estimating model performance. We also discuss the ethical and social impacts of AI on agriculture. Although various approaches presented in the scientific literature have delivered impressive predictions, they are complicated by the intricate, multifactorial influences on crop growth and the need for accurate data-driven models. Therefore, thorough research is required to deal with the challenges in predicting agricultural output.
Mill vibration is a common problem in rolling production, which directly affects the thickness accuracy of the strip and may even lead to strip fracture accidents in serious cases. Existing vibration prediction models do not consider the features contained in the data, resulting in limited improvement of model accuracy. To address these challenges, this paper proposes a multi-dimensional multi-modal cold rolling vibration time series prediction model (MDMMVPM) based on the deep fusion of multi-level networks. In the model, the long-term and short-term modal features of multi-dimensional data are considered, and appropriate prediction algorithms are selected for different data features. Based on the established prediction model, the effects of tension and rolling force on mill vibration are analyzed. Taking the 5th stand of a cold mill in a steel plant as the research object, the innovative model is applied to predict mill vibration for the first time. The experimental results show that the correlation coefficient (R²) of the model proposed in this paper is 92.5%, and the root-mean-square error (RMSE) is 0.0011, which significantly improves the modeling accuracy compared with existing models. The proposed model is also suitable for the hot rolling process, which provides a new method for the prediction of strip rolling vibration.
Traditional research believes that the filling body can effectively control stress concentration, while ignoring the problems of unknown stability and the complex and changeable stress distribution of the filling body–surrounding rock combination under high-stress conditions. Current monitoring data processing methods cannot fully consider the complexity of monitoring objects, the diversity of monitoring methods, and the dynamics of monitoring data. To solve this problem, this paper proposes a phase space reconstruction and stability prediction method to process heterogeneous information of backfill–surrounding rock combinations. A three-dimensional monitoring system for a large-area filling body–surrounding rock combination in Longshou Mine was constructed using drilling stress meters, multipoint displacement meters, and inclinometers. Varied information, such as the stress and displacement of the filling body–surrounding rock combination, was continuously obtained. Combined with the average mutual information method and the false nearest neighbor method, the phase space of the heterogeneous information of the filling body–surrounding rock combination was then constructed. In this paper, the distance between a phase point and its nearest point was used as the evaluation distance to assess the stability of the filling body–surrounding rock combination. The evaluated distances (ED) revealed a high sensitivity to the stability of the filling body–surrounding rock combination. The new method was then applied to calculate the historical ED time series for 12 measuring points located at Longshou Mine. The moments of mutation in these time series were at least 3 months ahead of the roadway return dates. In the ED prediction experiments, the autoregressive integrated moving average model showed higher prediction accuracy than the deep learning models (long short-term memory and Transformer). Furthermore, the root-mean-square error distribution of the prediction results peaked at 0.26, thus outperforming the no-prediction method in 70% of the cases.
This paper examines the effectiveness of the differential autoregressive integrated moving average (ARIMA) model in comparison to the long short-term memory (LSTM) neural network model for predicting Wordle user-reported scores. The ARIMA and LSTM models were trained on Wordle data collected from Twitter between 7th January 2022 and 31st December 2022. User-reported scores were predicted and evaluated using metrics such as MSE, RMSE, R², and MAE. Various regression models, including XGBoost and Random Forest, were used to conduct comparison experiments. The MSE, RMSE, R², and MAE values for the ARIMA(0,1,1) and LSTM models are 0.000, 0.010, 0.998, and 0.006, and 0.000, 0.024, 0.987, and 0.013, respectively. The results indicate that the ARIMA model is more suitable for predicting Wordle user scores than the LSTM model.
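A minimal sketch of fitting the reported ARIMA(0,1,1) specification and computing the four metrics is shown below; it assumes the statsmodels package and uses a synthetic stand-in for the Wordle score series rather than the actual Twitter data:

```python
import numpy as np
from statsmodels.tsa.arima.model import ARIMA  # assumes statsmodels is installed

rng = np.random.default_rng(5)
scores = 4.0 + np.cumsum(rng.normal(scale=0.01, size=365))  # stand-in for daily mean scores

train, test = scores[:300], scores[300:]
fit = ARIMA(train, order=(0, 1, 1)).fit()       # the ARIMA(0,1,1) specification reported above
pred = fit.forecast(steps=len(test))

err = test - pred
mse = np.mean(err ** 2)
rmse = np.sqrt(mse)
mae = np.mean(np.abs(err))
r2 = 1.0 - np.sum(err ** 2) / np.sum((test - test.mean()) ** 2)
print(f"MSE={mse:.3f} RMSE={rmse:.3f} R2={r2:.3f} MAE={mae:.3f}")
```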
This paper proposes a new approach, which we refer to as "segregated prediction", to predict climate time series that are nonstationary. This approach is based on the empirical mode decomposition method (EMD), which can decompose a time signal into a finite and usually small number of basic oscillatory components. To test the capabilities of this approach, prediction experiments are carried out for several climate time series. The experimental results show that this approach can decompose the nonstationarity of the climate time series and segregate nonlinear interactions between the different mode components, thereby improving the prediction accuracy of the original climate time series.
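A hedged sketch of the segregated-prediction idea, decompose with EMD, forecast each mode separately, then recombine, is given below; it assumes the PyEMD package is available, and the simple least-squares AR forecaster stands in for whatever per-mode predictor is actually used:

```python
import numpy as np
from PyEMD import EMD   # assumes the PyEMD (EMD-signal) package is installed

def ar_forecast(x, order=5, steps=1):
    """Fit a simple AR(order) model by least squares and forecast `steps` ahead."""
    X = np.array([x[i:i + order] for i in range(len(x) - order)])
    coeffs, *_ = np.linalg.lstsq(X, x[order:], rcond=None)
    history = list(x[-order:])
    out = []
    for _ in range(steps):
        nxt = np.dot(coeffs, history[-order:])
        out.append(nxt)
        history.append(nxt)
    return np.array(out)

t = np.arange(600)
signal = np.sin(2 * np.pi * t / 50) + 0.3 * np.sin(2 * np.pi * t / 11) + 0.002 * t

imfs = EMD().emd(signal)                       # oscillatory modes plus residual trend
forecast = sum(ar_forecast(imf, steps=10) for imf in imfs)  # predict each mode, then recombine
print(forecast.shape)                          # (10,)
```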
To improve the prediction accuracy of chaotic time series, a new method based on local polynomial prediction is proposed. The multivariate phase space reconstruction theory is first utilized to reconstruct the phase space; on this basis, a polynomial function is applied to construct the prediction model, the parameters of the model are then estimated from the data matrix built with the embedding dimensions, and a one-step prediction value is calculated. Finally, root-mean-square statistics are used to evaluate the prediction effect. The simulation results obtained with the Lorenz system and the prediction results for the Shanghai Composite Index show that the local polynomial prediction errors of multivariate chaotic time series are small and the prediction accuracy is much higher than that of univariate chaotic time series.
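A minimal sketch of one-step local polynomial prediction is shown below for the univariate case: the phase space is reconstructed by delay embedding, a quadratic polynomial is fitted to the nearest neighbours of the current state, and the fit is evaluated at that state. Embedding parameters and the neighbour count are illustrative assumptions:

```python
import numpy as np

def delay_embed(x, dim, tau):
    n = len(x) - (dim - 1) * tau
    return np.array([x[i:i + (dim - 1) * tau + 1:tau] for i in range(n)])

def poly_features(states):
    """Quadratic polynomial features of each state vector: 1, s_i, s_i*s_j."""
    quad = np.einsum('ni,nj->nij', states, states).reshape(len(states), -1)
    return np.hstack([np.ones((len(states), 1)), states, quad])

def local_poly_predict(x, dim=3, tau=2, k=20):
    states = delay_embed(x, dim, tau)
    query, history = states[-1], states[:-1]
    futures = x[np.arange(len(history)) + (dim - 1) * tau + 1]   # one-step targets
    idx = np.argsort(np.linalg.norm(history - query, axis=1))[:k]
    coeffs, *_ = np.linalg.lstsq(poly_features(history[idx]), futures[idx], rcond=None)
    return float((poly_features(query[None, :]) @ coeffs)[0])

rng = np.random.default_rng(6)
x = np.sin(0.15 * np.arange(800)) + 0.01 * rng.normal(size=800)
print(local_poly_predict(x))
```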
A new method of predicting chaotic time series is presented based on a local Lyapunov exponent, which quantitatively measures the exponential rate of separation or attraction of two infinitely close trajectories in state space. After reconstructing the state space from a one-dimensional chaotic time series, neighboring multiple-state vectors of the predicting point are selected to deduce the prediction formula using the definition of the local Lyapunov exponent. Numerical simulations are carried out to test its effectiveness and verify its higher precision over two older methods. The effects of the number of referential state vectors and of added noise on the forecasting accuracy are also studied numerically.
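The paper's specific prediction formula is not reproduced here; the quantity it builds on, a one-step local Lyapunov exponent, can be estimated from the divergence of a state and its nearest neighbour as sketched below (embedding parameters and the logistic-map test signal are assumptions):

```python
import numpy as np

def delay_embed(x, dim, tau):
    n = len(x) - (dim - 1) * tau
    return np.array([x[i:i + (dim - 1) * tau + 1:tau] for i in range(n)])

def local_lyapunov(x, dim=3, tau=1, point=-2):
    """One-step local Lyapunov exponent at `point`: log ratio of the separation
    between a state and its nearest neighbour before and after one time step."""
    states = delay_embed(x, dim, tau)
    i = point % (len(states) - 1)                 # ensure a successor state exists
    dists = np.linalg.norm(states - states[i], axis=1)
    dists[i] = np.inf                             # exclude the point itself
    j = int(np.argmin(dists[:-1]))                # nearest neighbour that has a successor
    d0 = np.linalg.norm(states[j] - states[i])
    d1 = np.linalg.norm(states[j + 1] - states[i + 1])
    return float(np.log(d1 / d0))

# Logistic map in the chaotic regime as a test signal
x = np.empty(2000); x[0] = 0.3
for t in range(1999):
    x[t + 1] = 3.9 * x[t] * (1 - x[t])
print(local_lyapunov(x))   # positive values indicate local divergence of trajectories
```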
基金supported by the National Natural Science Foundation of China(Grant No.52308340)the Innovative Projects of Universities in Guangdong(Grant No.2022KTSCX208)Sichuan Transportation Science and Technology Project(Grant No.2018-ZL-01).
文摘Historically,landslides have been the primary type of geological disaster worldwide.Generally,the stability of reservoir banks is primarily affected by rainfall and reservoir water level fluctuations.Moreover,the stability of reservoir banks changes with the long-term dynamics of external disastercausing factors.Thus,assessing the time-varying reliability of reservoir landslides remains a challenge.In this paper,a machine learning(ML)based approach is proposed to analyze the long-term reliability of reservoir bank landslides in spatially variable soils through time series prediction.This study systematically investigated the prediction performances of three ML algorithms,i.e.multilayer perceptron(MLP),convolutional neural network(CNN),and long short-term memory(LSTM).Additionally,the effects of the data quantity and data ratio on the predictive power of deep learning models are considered.The results show that all three ML models can accurately depict the changes in the time-varying failure probability of reservoir landslides.The CNN model outperforms both the MLP and LSTM models in predicting the failure probability.Furthermore,selecting the right data ratio can improve the prediction accuracy of the failure probability obtained by ML models.
基金funded by the Fujian Province Science and Technology Plan,China(Grant Number 2019H0017).
文摘Accurate forecasting of time series is crucial across various domains.Many prediction tasks rely on effectively segmenting,matching,and time series data alignment.For instance,regardless of time series with the same granularity,segmenting them into different granularity events can effectively mitigate the impact of varying time scales on prediction accuracy.However,these events of varying granularity frequently intersect with each other,which may possess unequal durations.Even minor differences can result in significant errors when matching time series with future trends.Besides,directly using matched events but unaligned events as state vectors in machine learning-based prediction models can lead to insufficient prediction accuracy.Therefore,this paper proposes a short-term forecasting method for time series based on a multi-granularity event,MGE-SP(multi-granularity event-based short-termprediction).First,amethodological framework for MGE-SP established guides the implementation steps.The framework consists of three key steps,including multi-granularity event matching based on the LTF(latest time first)strategy,multi-granularity event alignment using a piecewise aggregate approximation based on the compression ratio,and a short-term prediction model based on XGBoost.The data from a nationwide online car-hailing service in China ensures the method’s reliability.The average RMSE(root mean square error)and MAE(mean absolute error)of the proposed method are 3.204 and 2.360,lower than the respective values of 4.056 and 3.101 obtained using theARIMA(autoregressive integratedmoving average)method,as well as the values of 4.278 and 2.994 obtained using k-means-SVR(support vector regression)method.The other experiment is conducted on stock data froma public data set.The proposed method achieved an average RMSE and MAE of 0.836 and 0.696,lower than the respective values of 1.019 and 0.844 obtained using the ARIMA method,as well as the values of 1.350 and 1.172 obtained using the k-means-SVR method.
基金supported by the China Scholarship Council and the CERNET Innovation Project under grant No.20170111.
文摘The prediction for Multivariate Time Series(MTS)explores the interrelationships among variables at historical moments,extracts their relevant characteristics,and is widely used in finance,weather,complex industries and other fields.Furthermore,it is important to construct a digital twin system.However,existing methods do not take full advantage of the potential properties of variables,which results in poor predicted accuracy.In this paper,we propose the Adaptive Fused Spatial-Temporal Graph Convolutional Network(AFSTGCN).First,to address the problem of the unknown spatial-temporal structure,we construct the Adaptive Fused Spatial-Temporal Graph(AFSTG)layer.Specifically,we fuse the spatial-temporal graph based on the interrelationship of spatial graphs.Simultaneously,we construct the adaptive adjacency matrix of the spatial-temporal graph using node embedding methods.Subsequently,to overcome the insufficient extraction of disordered correlation features,we construct the Adaptive Fused Spatial-Temporal Graph Convolutional(AFSTGC)module.The module forces the reordering of disordered temporal,spatial and spatial-temporal dependencies into rule-like data.AFSTGCN dynamically and synchronously acquires potential temporal,spatial and spatial-temporal correlations,thereby fully extracting rich hierarchical feature information to enhance the predicted accuracy.Experiments on different types of MTS datasets demonstrate that the model achieves state-of-the-art single-step and multi-step performance compared with eight other deep learning models.
基金funded by the Natural Science Foundation of Fujian Province,China (Grant No.2022J05291)Xiamen Scientific Research Funding for Overseas Chinese Scholars.
文摘Financial time series prediction,whether for classification or regression,has been a heated research topic over the last decade.While traditional machine learning algorithms have experienced mediocre results,deep learning has largely contributed to the elevation of the prediction performance.Currently,the most up-to-date review of advanced machine learning techniques for financial time series prediction is still lacking,making it challenging for finance domain experts and relevant practitioners to determine which model potentially performs better,what techniques and components are involved,and how themodel can be designed and implemented.This review article provides an overview of techniques,components and frameworks for financial time series prediction,with an emphasis on state-of-the-art deep learning models in the literature from2015 to 2023,including standalonemodels like convolutional neural networks(CNN)that are capable of extracting spatial dependencies within data,and long short-term memory(LSTM)that is designed for handling temporal dependencies;and hybrid models integrating CNN,LSTM,attention mechanism(AM)and other techniques.For illustration and comparison purposes,models proposed in recent studies are mapped to relevant elements of a generalized framework comprised of input,output,feature extraction,prediction,and related processes.Among the state-of-the-artmodels,hybrid models like CNNLSTMand CNN-LSTM-AM in general have been reported superior in performance to stand-alone models like the CNN-only model.Some remaining challenges have been discussed,including non-friendliness for finance domain experts,delayed prediction,domain knowledge negligence,lack of standards,and inability of real-time and highfrequency predictions.The principal contributions of this paper are to provide a one-stop guide for both academia and industry to review,compare and summarize technologies and recent advances in this area,to facilitate smooth and informed implementation,and to highlight future research directions.
基金The National Key R&D Program of China under contract No.2021YFC3101603.
文摘Ocean temperature is an important physical variable in marine ecosystems,and ocean temperature prediction is an important research objective in ocean-related fields.Currently,one of the commonly used methods for ocean temperature prediction is based on data-driven,but research on this method is mostly limited to the sea surface,with few studies on the prediction of internal ocean temperature.Existing graph neural network-based methods usually use predefined graphs or learned static graphs,which cannot capture the dynamic associations among data.In this study,we propose a novel dynamic spatiotemporal graph neural network(DSTGN)to predict threedimensional ocean temperature(3D-OT),which combines static graph learning and dynamic graph learning to automatically mine two unknown dependencies between sequences based on the original 3D-OT data without prior knowledge.Temporal and spatial dependencies in the time series were then captured using temporal and graph convolutions.We also integrated dynamic graph learning,static graph learning,graph convolution,and temporal convolution into an end-to-end framework for 3D-OT prediction using time-series grid data.In this study,we conducted prediction experiments using high-resolution 3D-OT from the Copernicus global ocean physical reanalysis,with data covering the vertical variation of temperature from the sea surface to 1000 m below the sea surface.We compared five mainstream models that are commonly used for ocean temperature prediction,and the results showed that the method achieved the best prediction results at all prediction scales.
文摘Due to the increasingly severe challenges brought by various epidemic diseases,people urgently need intelligent outbreak trend prediction.Predicting disease onset is very important to assist decision-making.Most of the exist-ing work fails to make full use of the temporal and spatial characteristics of epidemics,and also relies on multi-variate data for prediction.In this paper,we propose a Multi-Scale Location Attention Graph Neural Networks(MSLAGNN)based on a large number of Centers for Disease Control and Prevention(CDC)patient electronic medical records research sequence source data sets.In order to understand the geography and timeliness of infec-tious diseases,specific neural networks are used to extract the geography and timeliness of infectious diseases.In the model framework,the features of different periods are extracted by a multi-scale convolution module.At the same time,the propagation effects between regions are simulated by graph convolution and attention mechan-isms.We compare the proposed method with the most advanced statistical methods and deep learning models.Meanwhile,we conduct comparative experiments on data sets with different time lengths to observe the predic-tion performance of the model in the face of different degrees of data collection.We conduct extensive experi-ments on real-world epidemic-related data sets.The method has strong prediction performance and can be readily used for epidemic prediction.
基金the support of the Fundamental Research Funds for the Air Force Engineering University under Grant No.XZJK2019040。
文摘Target maneuver trajectory prediction is an important prerequisite for air combat situation awareness and maneuver decision-making.However,how to use a large amount of trajectory data generated by air combat confrontation training to achieve real-time and accurate prediction of target maneuver trajectory is an urgent problem to be solved.To solve this problem,in this paper,a hybrid algorithm based on transfer learning,online learning,ensemble learning,regularization technology,target maneuvering segmentation point recognition algorithm,and Volterra series,abbreviated as AERTrOS-Volterra is proposed.Firstly,the model makes full use of a large number of trajectory sample data generated by air combat confrontation training,and constructs a Tr-Volterra algorithm framework suitable for air combat target maneuver trajectory prediction,which realizes the extraction of effective information from the historical trajectory data.Secondly,in order to improve the real-time online prediction accuracy and robustness of the prediction model in complex electromagnetic environments,on the basis of the TrVolterra algorithm framework,a robust regularized online Sequential Volterra prediction model is proposed by integrating online learning method,regularization technology and inverse weighting calculation method based on the priori error.Finally,inspired by the preferable performance of models ensemble,ensemble learning scheme is also incorporated into our proposed algorithm,which adaptively updates the ensemble prediction model according to the performance of the model on real-time samples and the recognition results of target maneuvering segmentation points,including the adaptation of model weights;adaptation of parameters;and dynamic inclusion and removal of models.Compared with many existing time series prediction methods,the newly proposed target maneuver trajectory prediction algorithm can fully mine the prior knowledge contained in the historical data to assist the current prediction.The rationality and effectiveness of the proposed algorithm are verified by simulation on three sets of chaotic time series data sets and a set of real target maneuver trajectory data sets.
文摘The price prediction task is a well-studied problem due to its impact on the business domain.There are several research studies that have been conducted to predict the future price of items by capturing the patterns of price change,but there is very limited work to study the price prediction of seasonal goods(e.g.,Christmas gifts).Seasonal items’prices have different patterns than normal items;this can be linked to the offers and discounted prices of seasonal items.This lack of research studies motivates the current work to investigate the problem of seasonal items’prices as a time series task.We proposed utilizing two different approaches to address this problem,namely,1)machine learning(ML)-based models and 2)deep learning(DL)-based models.Thus,this research tuned a set of well-known predictive models on a real-life dataset.Those models are ensemble learning-based models,random forest,Ridge,Lasso,and Linear regression.Moreover,two new DL architectures based on gated recurrent unit(GRU)and long short-term memory(LSTM)models are proposed.Then,the performance of the utilized ensemble learning and classic ML models are compared against the proposed two DL architectures on different accuracy metrics,where the evaluation includes both numerical and visual comparisons of the examined models.The obtained results show that the ensemble learning models outperformed the classic machine learning-based models(e.g.,linear regression and random forest)and the DL-based models.
文摘The methods to determine time delays and embedding dimensions in the phase space delay reconstruction of multivariate chaotic time series are proposed. Three nonlinear prediction methods of multivariate chaotic time series including local mean prediction, local linear prediction and BP neural networks prediction are considered. The simulation results obtained by the Lorenz system show that no matter what nonlinear prediction method is used, the prediction error of multivariate chaotic time series is much smaller than the prediction error of univariate time series, even if half of the data of univariate time series are used in multivariate time series. The results also verify that methods to determine the time delays and the embedding dimensions are correct from the view of minimizing the prediction error.
基金supported by the Natural Science Foundation of Jiangsu Province(Grant No.BK20220421)the State Key Program of the National Natural Science Foundation of China(Grant No.42230702)the National Natural Science Foundation of China(Grant No.82302352).
文摘Landslides are destructive natural disasters that cause catastrophic damage and loss of life worldwide.Accurately predicting landslide displacement enables effective early warning and risk management.However,the limited availability of on-site measurement data has been a substantial obstacle in developing data-driven models,such as state-of-the-art machine learning(ML)models.To address these challenges,this study proposes a data augmentation framework that uses generative adversarial networks(GANs),a recent advance in generative artificial intelligence(AI),to improve the accuracy of landslide displacement prediction.The framework provides effective data augmentation to enhance limited datasets.A recurrent GAN model,RGAN-LS,is proposed,specifically designed to generate realistic synthetic multivariate time series that mimics the characteristics of real landslide on-site measurement data.A customized moment-matching loss is incorporated in addition to the adversarial loss in GAN during the training of RGAN-LS to capture the temporal dynamics and correlations in real time series data.Then,the synthetic data generated by RGAN-LS is used to enhance the training of long short-term memory(LSTM)networks and particle swarm optimization-support vector machine(PSO-SVM)models for landslide displacement prediction tasks.Results on two landslides in the Three Gorges Reservoir(TGR)region show a significant improvement in LSTM model prediction performance when trained on augmented data.For instance,in the case of the Baishuihe landslide,the average root mean square error(RMSE)increases by 16.11%,and the mean absolute error(MAE)by 17.59%.More importantly,the model’s responsiveness during mutational stages is enhanced for early warning purposes.However,the results have shown that the static PSO-SVM model only sees marginal gains compared to recurrent models such as LSTM.Further analysis indicates that an optimal synthetic-to-real data ratio(50%on the illustration cases)maximizes the improvements.This also demonstrates the robustness and effectiveness of supplementing training data for dynamic models to obtain better results.By using the powerful generative AI approach,RGAN-LS can generate high-fidelity synthetic landslide data.This is critical for improving the performance of advanced ML models in predicting landslide displacement,particularly when there are limited training data.Additionally,this approach has the potential to expand the use of generative AI in geohazard risk management and other research areas.
基金supported by the National Key Research and Development Program of China(No.2018YFB2101300)the National Natural Science Foundation of China(Grant No.61871186)the Dean’s Fund of Engineering Research Center of Software/Hardware Co-Design Technology and Application,Ministry of Education(East China Normal University).
文摘Time series forecasting plays an important role in various fields, such as energy, finance, transport, and weather. Temporal convolutional networks (TCNs) based on dilated causal convolution have been widely used in time series forecasting. However, two problems weaken the performance of TCNs. One is that in dilated casual convolution, causal convolution leads to the receptive fields of outputs being concentrated in the earlier part of the input sequence, whereas the recent input information will be severely lost. The other is that the distribution shift problem in time series has not been adequately solved. To address the first problem, we propose a subsequence-based dilated convolution method (SDC). By using multiple convolutional filters to convolve elements of neighboring subsequences, the method extracts temporal features from a growing receptive field via a growing subsequence rather than a single element. Ultimately, the receptive field of each output element can cover the whole input sequence. To address the second problem, we propose a difference and compensation method (DCM). The method reduces the discrepancies between and within the input sequences by difference operations and then compensates the outputs for the information lost due to difference operations. Based on SDC and DCM, we further construct a temporal subsequence-based convolutional network with difference (TSCND) for time series forecasting. The experimental results show that TSCND can reduce prediction mean squared error by 7.3% and save runtime, compared with state-of-the-art models and vanilla TCN.
基金funded by Shaanxi Province Key Industrial Chain Project(2023-ZDLGY-24)Industrialization Project of Shaanxi Provincial Education Department(21JC018)+1 种基金Shaanxi Province Key Research and Development Program(2021ZDLGY13-02)the Open Foundation of State Key Laboratory for Advanced Metals and Materials(2022-Z01).
文摘In order to improve the performance degradation prediction accuracy of proton exchange membrane fuel cell(PEMFC),a fusion prediction method(CKDG)based on adaptive noise complete ensemble empirical mode decomposition(CEEMDAN),kernel principal component analysis(KPCA)and dual attention mechanism gated recurrent unit neural network(DA-GRU)was proposed.CEEMDAN and KPCA were used to extract the input feature data sequence,reduce the influence of random factors,and capture essential feature components to reduce the model complexity.The DA-GRU network helps to learn the feature mapping relationship of data in long time series and predict the changing trend of performance degradation data more accurately.The actual aging experimental data verify the performance of the CKDG method.The results show that under the steady-state condition of 20%training data prediction,the CKDA method can reduce the root mean square error(RMSE)by 52.7%and 34.6%,respectively,compared with the traditional LSTM and GRU neural networks.Compared with the simple DA-GRU network,RMSE is reduced by 15%,and the degree of over-fitting is reduced,which has higher accuracy.It also shows excellent prediction performance under the dynamic condition data set and has good universality.
基金the National Natural Science Foundation of China under Grant No.62272087Science and Technology Planning Project of Sichuan Province under Grant No.2023YFG0161.
文摘Long-term urban traffic flow prediction is an important task in the field of intelligent transportation,as it can help optimize traffic management and improve travel efficiency.To improve prediction accuracy,a crucial issue is how to model spatiotemporal dependency in urban traffic data.In recent years,many studies have adopted spatiotemporal neural networks to extract key information from traffic data.However,most models ignore the semantic spatial similarity between long-distance areas when mining spatial dependency.They also ignore the impact of predicted time steps on the next unpredicted time step for making long-term predictions.Moreover,these models lack a comprehensive data embedding process to represent complex spatiotemporal dependency.This paper proposes a multi-scale persistent spatiotemporal transformer(MSPSTT)model to perform accurate long-term traffic flow prediction in cities.MSPSTT adopts an encoder-decoder structure and incorporates temporal,periodic,and spatial features to fully embed urban traffic data to address these issues.The model consists of a spatiotemporal encoder and a spatiotemporal decoder,which rely on temporal,geospatial,and semantic space multi-head attention modules to dynamically extract temporal,geospatial,and semantic characteristics.The spatiotemporal decoder combines the context information provided by the encoder,integrates the predicted time step information,and is iteratively updated to learn the correlation between different time steps in the broader time range to improve the model’s accuracy for long-term prediction.Experiments on four public transportation datasets demonstrate that MSPSTT outperforms the existing models by up to 9.5%on three common metrics.
Abstract: The growing global demand for food and the need for sustainable farming in an era of changing climate and scarce resources have inspired substantial crop yield prediction research. Deep learning (DL) and machine learning (ML) models deal effectively with such challenges. This paper comprehensively analyses recent advancements in crop yield prediction from January 2016 to March 2024 and assesses the effectiveness of the various input parameters considered in crop yield prediction models. We conducted an in-depth search and gathered studies that employed crop modeling and AI-based methods to predict crop yield. In total, 125 articles on crop yield prediction using ML, meta-modeling (crop models coupled with ML/DL), and DL-based prediction models, as well as input parameter selection, were reviewed. The review is organized around five research objectives, which are discussed after analyzing the selected papers. Each study is assessed based on the crop type, the input parameters employed for prediction, the modeling techniques adopted, and the evaluation metrics used for estimating model performance. We also discuss the ethical and social impacts of AI on agriculture. Although the approaches presented in the scientific literature have delivered impressive predictions, they remain complicated owing to the intricate, multifactorial influences on crop growth and the need for accurate data-driven models. Thorough research is therefore required to address the challenges of predicting agricultural output.
Funding: Project (2023JH26-10100002) supported by the Liaoning Science and Technology Major Project, China; Projects (U21A20117, 52074085) supported by the National Natural Science Foundation of China; Project (2022JH2/101300008) supported by the Liaoning Applied Basic Research Program, China; Project (22567612H) supported by the Hebei Provincial Key Laboratory Performance Subsidy Project, China.
Abstract: Mill vibration is a common problem in rolling production, which directly affects the thickness accuracy of the strip and may even lead to strip fracture accidents in serious cases. Existing vibration prediction models do not consider the features contained in the data, which limits improvements in model accuracy. To address these challenges, this paper proposes a multi-dimensional multi-modal cold rolling vibration time series prediction model (MDMMVPM) based on the deep fusion of multi-level networks. The model considers the long-term and short-term modal features of multi-dimensional data, and appropriate prediction algorithms are selected for different data features. Based on the established prediction model, the effects of tension and rolling force on mill vibration are analyzed. Taking the 5th stand of a cold mill in a steel mill as the research object, the model is applied to predict mill vibration for the first time. The experimental results show that the correlation coefficient (R²) of the proposed model is 92.5% and the root-mean-square error (RMSE) is 0.0011, which significantly improves the modeling accuracy compared with existing models. The proposed model is also suitable for the hot rolling process, providing a new method for the prediction of strip rolling vibration.
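For reference, the two metrics reported above can be computed as follows; this is a generic sketch with made-up vibration amplitudes, not the paper's data:

```python
import numpy as np

def r2_and_rmse(y_true, y_pred):
    """Coefficient of determination R^2 and root-mean-square error."""
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    ss_res = np.sum((y_true - y_pred) ** 2)
    ss_tot = np.sum((y_true - y_true.mean()) ** 2)
    return 1.0 - ss_res / ss_tot, np.sqrt(np.mean((y_true - y_pred) ** 2))

r2, rmse = r2_and_rmse([0.010, 0.012, 0.011, 0.013], [0.0105, 0.0118, 0.0112, 0.0127])
print(f"R2={r2:.3f}, RMSE={rmse:.5f}")
```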
Funding: the National Key R&D Program of China (No. 2022YFC2904103), the Key Program of the National Natural Science Foundation of China (No. 52034001), the 111 Project (No. B20041), and the China National Postdoctoral Program for Innovative Talents (No. BX20230041).
Abstract: Traditional research holds that the filling body can effectively control stress concentration, but it ignores the unknown stability and the complex, changeable stress distribution of the filling body–surrounding rock combination under high-stress conditions. Current monitoring data processing methods cannot fully account for the complexity of the monitored objects, the diversity of monitoring methods, and the dynamics of the monitoring data. To solve this problem, this paper proposes a phase space reconstruction and stability prediction method for processing heterogeneous information from backfill–surrounding rock combinations. A three-dimensional monitoring system for a large-area filling body–surrounding rock combination in Longshou Mine was constructed using drilling stress sensors, multipoint displacement meters, and inclinometers. Diverse information, such as the stress and displacement of the filling body–surrounding rock combination, was obtained continuously. Combined with the average mutual information method and the false nearest neighbor method, the phase space of the heterogeneous information of the filling body–surrounding rock combination was then constructed. The distance between a phase point and its nearest point is used as the evaluation index to assess the stability of the filling body–surrounding rock combination. The evaluated distances (ED) revealed a high sensitivity to the stability of the filling body–surrounding rock combination. The new method was then applied to calculate the historical ED time series for 12 measuring points at Longshou Mine. The moments of abrupt change in these time series preceded the roadway return dates by at least 3 months. In the ED prediction experiments, the autoregressive integrated moving average model showed higher prediction accuracy than the deep learning models (long short-term memory and Transformer). Furthermore, the root-mean-square-error distribution of the prediction results peaked at 0.26, outperforming the no-prediction approach in 70% of the cases.
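The core ingredients of the approach, delay-coordinate phase space reconstruction and a nearest-neighbour distance index, can be sketched as follows. The embedding dimension and delay are fixed by hand here, whereas the paper selects them with the average mutual information and false nearest neighbor methods, and `evaluated_distance` is only a stand-in for the ED index, not the paper's exact formulation:

```python
import numpy as np

def delay_embed(x, dim, tau):
    """Time-delay (phase space) reconstruction of a scalar monitoring series:
    each row is one phase point [x_t, x_{t+tau}, ..., x_{t+(dim-1)*tau}]."""
    n = len(x) - (dim - 1) * tau
    return np.column_stack([x[i * tau : i * tau + n] for i in range(dim)])

def evaluated_distance(points):
    """Distance from the latest phase point to its nearest earlier neighbour."""
    last, earlier = points[-1], points[:-1]
    return np.min(np.linalg.norm(earlier - last, axis=1))

# synthetic stand-in for a drilling-stress channel
stress = np.sin(np.linspace(0, 20, 300)) + 0.05 * np.random.default_rng(1).normal(size=300)
phase_points = delay_embed(stress, dim=3, tau=5)
print(evaluated_distance(phase_points))
```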
Abstract: This paper examines the effectiveness of the autoregressive integrated moving average (ARIMA) model in comparison with the long short-term memory (LSTM) neural network model for predicting Wordle user-reported scores. The ARIMA and LSTM models were trained on Wordle data collected from Twitter between 7 January 2022 and 31 December 2022. Predictions of user-reported scores were evaluated using metrics such as MSE, RMSE, R², and MAE. Various regression models, including XGBoost and Random Forest, were used for comparison experiments. The MSE, RMSE, R², and MAE values for the ARIMA(0,1,1) and LSTM models are 0.000, 0.010, 0.998, and 0.006, and 0.000, 0.024, 0.987, and 0.013, respectively. The results indicate that the ARIMA model is more suitable than the LSTM model for predicting Wordle user scores.
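An ARIMA(0,1,1) fit of the kind compared here can be reproduced with statsmodels; the synthetic series below merely stands in for the Twitter-derived Wordle data:

```python
import numpy as np
from statsmodels.tsa.arima.model import ARIMA

# synthetic stand-in for daily average Wordle scores (not the paper's data)
rng = np.random.default_rng(0)
scores = 4.0 + np.cumsum(rng.normal(scale=0.02, size=200))

model = ARIMA(scores[:180], order=(0, 1, 1)).fit()   # the (0,1,1) specification from the abstract
forecast = model.forecast(steps=20)                  # out-of-sample forecast of the last 20 days
rmse = np.sqrt(np.mean((forecast - scores[180:]) ** 2))
print(f"20-step RMSE: {rmse:.3f}")
```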
Funding: supported by the National Natural Science Foundation of China under Grant Nos. 40890052, 40035010, 40505018, and 40940023.
Abstract: This paper proposes a new approach, which we refer to as "segregated prediction", for predicting nonstationary climate time series. The approach is based on the empirical mode decomposition (EMD) method, which can decompose a time signal into a finite and usually small number of basic oscillatory components. To test the capabilities of this approach, prediction experiments are carried out for several climate time series. The experimental results show that this approach can decompose the nonstationarity of the climate time series and segregate the nonlinear interactions between the different mode components, thereby improving the prediction accuracy of the original climate time series.
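A minimal sketch of the segregated-prediction idea is given below, assuming the PyEMD package provides the EMD implementation and using a simple linear extrapolation as a placeholder per-component model (not the paper's predictors):

```python
import numpy as np
from PyEMD import EMD   # assumed available (the PyEMD / EMD-signal package)

rng = np.random.default_rng(0)
t = np.linspace(0, 10, 500)
# synthetic nonstationary "climate" series: two oscillations plus a trend and noise
climate = np.sin(2 * np.pi * t) + 0.3 * np.sin(9 * np.pi * t) + 0.05 * t + 0.1 * rng.normal(size=500)

imfs = EMD().emd(climate)   # decompose into oscillatory components plus a residual trend

def linear_extrapolate(component, window=10):
    """One-step linear extrapolation: a placeholder for a per-component predictor."""
    idx = np.arange(window)
    slope, intercept = np.polyfit(idx, component[-window:], 1)
    return slope * window + intercept

# forecast each component separately, then recombine ("segregated prediction")
forecast = sum(linear_extrapolate(imf) for imf in imfs)
print(f"segregated one-step forecast: {forecast:.3f}")
```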
Abstract: To improve the prediction accuracy of chaotic time series, a new method based on local polynomial prediction is proposed. Multivariate phase space reconstruction theory is first used to reconstruct the phase space; on this basis, a polynomial function is applied to build the prediction model, the model parameters are estimated from the data matrix constructed with the embedding dimensions, and a one-step prediction value is calculated. Finally, root-mean-square statistics are used to assess the prediction performance. Simulation results on the Lorenz system and prediction results for the Shanghai composite index show that the local polynomial prediction errors of the multivariate chaotic time series are small and its prediction accuracy is much higher than that of the univariate chaotic time series.
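A simplified, univariate, degree-1 version of local polynomial prediction might look like the sketch below; the paper's method is multivariate and may use a higher-order polynomial, and dim, tau, and k here are illustrative choices:

```python
import numpy as np

def local_linear_predict(x, dim=3, tau=1, k=10):
    """One-step local (degree-1) polynomial prediction of a scalar series:
    embed the series, find the k nearest neighbours of the current phase
    point, fit a linear map from neighbours to their next values, apply it."""
    n = len(x) - (dim - 1) * tau
    pts = np.column_stack([x[i * tau : i * tau + n] for i in range(dim)])
    current, history = pts[-1], pts[:-1]
    targets = x[(dim - 1) * tau + 1 :]               # value following each historical phase point
    dists = np.linalg.norm(history - current, axis=1)
    idx = np.argsort(dists)[:k]                      # k nearest neighbours of the current state
    A = np.column_stack([history[idx], np.ones(k)])  # linear model with intercept
    coef, *_ = np.linalg.lstsq(A, targets[idx], rcond=None)
    return np.append(current, 1.0) @ coef

series = np.sin(np.linspace(0, 30, 600)) + 0.01 * np.random.default_rng(2).normal(size=600)
print(local_linear_predict(series))
```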
Funding: Project supported by the National Natural Science Foundation of China (Grant No. 61201452).
Abstract: A new method of predicting chaotic time series is presented based on the local Lyapunov exponent, which quantitatively measures the exponential rate of separation or attraction of two infinitely close trajectories in state space. After reconstructing the state space from a one-dimensional chaotic time series, the neighboring multiple-state vectors of the predicting point are selected to deduce the prediction formula using the definition of the local Lyapunov exponent. Numerical simulations are carried out to test its effectiveness and verify its higher precision compared with two older methods. The effects of the number of referential state vectors and of added noise on forecasting accuracy are also studied numerically.
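A toy one-step predictor in the spirit of local-Lyapunov-exponent methods is sketched below; it is not the paper's formula. The separation between the current phase point and a neighbouring trajectory is assumed to grow by exp(lam) over one step, and the unknown next value is solved from that distance constraint; the Theiler window w, the embedding dimension, and the root-selection rule are all illustrative assumptions:

```python
import numpy as np

def embed(x, dim, tau):
    """Delay-coordinate reconstruction of the state space."""
    n = len(x) - (dim - 1) * tau
    return np.column_stack([x[i * tau : i * tau + n] for i in range(dim)])

def lle_one_step_predict(x, dim=3, w=10):
    pts = embed(x, dim, 1)
    cur, prev = pts[-1], pts[-2]
    candidates = pts[: len(pts) - 2 - w]                     # neighbours with a known successor
    m = int(np.argmin(np.linalg.norm(candidates - prev, axis=1)))
    lam = np.log((np.linalg.norm(cur - pts[m + 1]) + 1e-12)
                 / (np.linalg.norm(prev - pts[m]) + 1e-12))  # crude local one-step divergence rate
    j = int(np.argmin(np.linalg.norm(candidates - cur, axis=1)))
    target = np.linalg.norm(cur - pts[j]) * np.exp(lam)      # expected separation after one step
    known_gap = np.sum((cur[1:] - pts[j + 1][:-1]) ** 2)     # the next point shares cur[1:] as coordinates
    tail = np.sqrt(max(target**2 - known_gap, 0.0))
    roots = pts[j + 1][-1] + np.array([tail, -tail])
    return roots[np.argmin(np.abs(roots - x[-1]))]           # keep the root nearer the last observation

series = np.sin(np.linspace(0, 40, 800)) + 0.01 * np.random.default_rng(3).normal(size=800)
print(lle_one_step_predict(series))
```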