To tackle the problem of inaccurate short-term bus load prediction,especially during holidays,a Transformer-based scheme with tailored architectural enhancements is proposed.First,the input data are clustered to reduc...To tackle the problem of inaccurate short-term bus load prediction,especially during holidays,a Transformer-based scheme with tailored architectural enhancements is proposed.First,the input data are clustered to reduce complexity and capture inherent characteristics more effectively.Gated residual connections are then employed to selectively propagate salient features across layers,while an attention mechanism focuses on identifying prominent patterns in multivariate time-series data.Ultimately,a pre-trained structure is incorporated to reduce computational complexity.Experimental results based on extensive data show that the proposed scheme achieves improved prediction accuracy over comparative algorithms by at least 32.00%consistently across all buses evaluated,and the fitting effect of holiday load curves is outstanding.Meanwhile,the pre-trained structure drastically reduces the training time of the proposed algorithm by more than 65.75%.The proposed scheme can efficiently predict bus load results while enhancing robustness for holiday predictions,making it better adapted to real-world prediction scenarios.展开更多
Short-term(up to 30 days)predictions of Earth Rotation Parameters(ERPs)such as Polar Motion(PM:PMX and PMY)play an essential role in real-time applications related to high-precision reference frame conversion.Currentl...Short-term(up to 30 days)predictions of Earth Rotation Parameters(ERPs)such as Polar Motion(PM:PMX and PMY)play an essential role in real-time applications related to high-precision reference frame conversion.Currently,least squares(LS)+auto-regressive(AR)hybrid method is one of the main techniques of PM prediction.Besides,the weighted LS+AR hybrid method performs well for PM short-term prediction.However,the corresponding covariance information of LS fitting residuals deserves further exploration in the AR model.In this study,we have derived a modified stochastic model for the LS+AR hybrid method,namely the weighted LS+weighted AR hybrid method.By using the PM data products of IERS EOP 14 C04,the numerical results indicate that for PM short-term forecasting,the proposed weighted LS+weighted AR hybrid method shows an advantage over both the LS+AR hybrid method and the weighted LS+AR hybrid method.Compared to the mean absolute errors(MAEs)of PMX/PMY sho rt-term prediction of the LS+AR hybrid method and the weighted LS+AR hybrid method,the weighted LS+weighted AR hybrid method shows average improvements of 6.61%/12.08%and 0.24%/11.65%,respectively.Besides,for the slopes of the linear regression lines fitted to the errors of each method,the growth of the prediction error of the proposed method is slower than that of the other two methods.展开更多
Accurate forecasting of time series is crucial across various domains.Many prediction tasks rely on effectively segmenting,matching,and time series data alignment.For instance,regardless of time series with the same g...Accurate forecasting of time series is crucial across various domains.Many prediction tasks rely on effectively segmenting,matching,and time series data alignment.For instance,regardless of time series with the same granularity,segmenting them into different granularity events can effectively mitigate the impact of varying time scales on prediction accuracy.However,these events of varying granularity frequently intersect with each other,which may possess unequal durations.Even minor differences can result in significant errors when matching time series with future trends.Besides,directly using matched events but unaligned events as state vectors in machine learning-based prediction models can lead to insufficient prediction accuracy.Therefore,this paper proposes a short-term forecasting method for time series based on a multi-granularity event,MGE-SP(multi-granularity event-based short-termprediction).First,amethodological framework for MGE-SP established guides the implementation steps.The framework consists of three key steps,including multi-granularity event matching based on the LTF(latest time first)strategy,multi-granularity event alignment using a piecewise aggregate approximation based on the compression ratio,and a short-term prediction model based on XGBoost.The data from a nationwide online car-hailing service in China ensures the method’s reliability.The average RMSE(root mean square error)and MAE(mean absolute error)of the proposed method are 3.204 and 2.360,lower than the respective values of 4.056 and 3.101 obtained using theARIMA(autoregressive integratedmoving average)method,as well as the values of 4.278 and 2.994 obtained using k-means-SVR(support vector regression)method.The other experiment is conducted on stock data froma public data set.The proposed method achieved an average RMSE and MAE of 0.836 and 0.696,lower than the respective values of 1.019 and 0.844 obtained using the ARIMA method,as well as the values of 1.350 and 1.172 obtained using the k-means-SVR method.展开更多
Accurate origin–destination(OD)demand prediction is crucial for the efficient operation and management of urban rail transit(URT)systems,particularly during a pandemic.However,this task faces several limitations,incl...Accurate origin–destination(OD)demand prediction is crucial for the efficient operation and management of urban rail transit(URT)systems,particularly during a pandemic.However,this task faces several limitations,including real-time availability,sparsity,and high-dimensionality issues,and the impact of the pandemic.Consequently,this study proposes a unified framework called the physics-guided adaptive graph spatial–temporal attention network(PAG-STAN)for metro OD demand prediction under pandemic conditions.Specifically,PAG-STAN introduces a real-time OD estimation module to estimate real-time complete OD demand matrices.Subsequently,a novel dynamic OD demand matrix compression module is proposed to generate dense real-time OD demand matrices.Thereafter,PAG-STAN leverages various heterogeneous data to learn the evolutionary trend of future OD ridership during the pandemic.Finally,a masked physics-guided loss function(MPG-loss function)incorporates the physical quantity information between the OD demand and inbound flow into the loss function to enhance model interpretability.PAG-STAN demonstrated favorable performance on two real-world metro OD demand datasets under the pandemic and conventional scenarios,highlighting its robustness and sensitivity for metro OD demand prediction.A series of ablation studies were conducted to verify the indispensability of each module in PAG-STAN.展开更多
With the advancement of artificial intelligence,traffic forecasting is gaining more and more interest in optimizing route planning and enhancing service quality.Traffic volume is an influential parameter for planning ...With the advancement of artificial intelligence,traffic forecasting is gaining more and more interest in optimizing route planning and enhancing service quality.Traffic volume is an influential parameter for planning and operating traffic structures.This study proposed an improved ensemble-based deep learning method to solve traffic volume prediction problems.A set of optimal hyperparameters is also applied for the suggested approach to improve the performance of the learning process.The fusion of these methodologies aims to harness ensemble empirical mode decomposition’s capacity to discern complex traffic patterns and long short-term memory’s proficiency in learning temporal relationships.Firstly,a dataset for automatic vehicle identification is obtained and utilized in the preprocessing stage of the ensemble empirical mode decomposition model.The second aspect involves predicting traffic volume using the long short-term memory algorithm.Next,the study employs a trial-and-error approach to select a set of optimal hyperparameters,including the lookback window,the number of neurons in the hidden layers,and the gradient descent optimization.Finally,the fusion of the obtained results leads to a final traffic volume prediction.The experimental results show that the proposed method outperforms other benchmarks regarding various evaluation measures,including mean absolute error,root mean squared error,mean absolute percentage error,and R-squared.The achieved R-squared value reaches an impressive 98%,while the other evaluation indices surpass the competing.These findings highlight the accuracy of traffic pattern prediction.Consequently,this offers promising prospects for enhancing transportation management systems and urban infrastructure planning.展开更多
BACKGROUND Endometrial cancer(EC)is a common gynecological malignancy that typically requires prompt surgical intervention;however,the advantage of surgical management is limited by the high postoperative recurrence r...BACKGROUND Endometrial cancer(EC)is a common gynecological malignancy that typically requires prompt surgical intervention;however,the advantage of surgical management is limited by the high postoperative recurrence rates and adverse outcomes.Previous studies have highlighted the prognostic potential of circulating tumor DNA(ctDNA)monitoring for minimal residual disease in patients with EC.AIM To develop and validate an optimized ctDNA-based model for predicting shortterm postoperative EC recurrence.METHODS We retrospectively analyzed 294 EC patients treated surgically from 2015-2019 to devise a short-term recurrence prediction model,which was validated on 143 EC patients operated between 2020 and 2021.Prognostic factors were identified using univariate Cox,Lasso,and multivariate Cox regressions.A nomogram was created to predict the 1,1.5,and 2-year recurrence-free survival(RFS).Model performance was assessed via receiver operating characteristic(ROC),calibration,and decision curve analyses(DCA),leading to a recurrence risk stratification system.RESULTS Based on the regression analysis and the nomogram created,patients with postoperative ctDNA-negativity,postoperative carcinoembryonic antigen 125(CA125)levels of<19 U/mL,and grade G1 tumors had improved RFS after surgery.The nomogram’s efficacy for recurrence prediction was confirmed through ROC analysis,calibration curves,and DCA methods,highlighting its high accuracy and clinical utility.Furthermore,using the nomogram,the patients were successfully classified into three risk subgroups.CONCLUSION The nomogram accurately predicted RFS after EC surgery at 1,1.5,and 2 years.This model will help clinicians personalize treatments,stratify risks,and enhance clinical outcomes for patients with EC.展开更多
With the continuous advancement of China’s“peak carbon dioxide emissions and Carbon Neutrality”process,the proportion of wind power is increasing.In the current research,aiming at the problem that the forecasting m...With the continuous advancement of China’s“peak carbon dioxide emissions and Carbon Neutrality”process,the proportion of wind power is increasing.In the current research,aiming at the problem that the forecasting model is outdated due to the continuous updating of wind power data,a short-term wind power forecasting algorithm based on Incremental Learning-Bagging Deep Hybrid Kernel Extreme Learning Machine(IL-Bagging-DHKELM)error affinity propagation cluster analysis is proposed.The algorithm effectively combines deep hybrid kernel extreme learning machine(DHKELM)with incremental learning(IL).Firstly,an initial wind power prediction model is trained using the Bagging-DHKELM model.Secondly,Euclidean morphological distance affinity propagation AP clustering algorithm is used to cluster and analyze the prediction error of wind power obtained from the initial training model.Finally,the correlation between wind power prediction errors and Numerical Weather Prediction(NWP)data is introduced as incremental updates to the initial wind power prediction model.During the incremental learning process,multiple error performance indicators are used to measure the overall model performance,thereby enabling incremental updates of wind power models.Practical examples show the method proposed in this article reduces the root mean square error of the initial model by 1.9 percentage points,indicating that this method can be better adapted to the current scenario of the continuous increase in wind power penetration rate.The accuracy and precision of wind power generation prediction are effectively improved through the method.展开更多
Purpose-To optimize train operations,dispatchers currently rely on experience for quick adjustments when delays occur.However,delay predictions often involve imprecise shifts based on known delay times.Real-time and a...Purpose-To optimize train operations,dispatchers currently rely on experience for quick adjustments when delays occur.However,delay predictions often involve imprecise shifts based on known delay times.Real-time and accurate train delay predictions,facilitated by data-driven neural network models,can significantly reduce dispatcher stress and improve adjustment plans.Leveraging current train operation data,these models enable swift and precise predictions,addressing challenges posed by train delays in high-speed rail networks during unforeseen events.Design/methodology/approach-This paper proposes CBLA-net,a neural network architecture for predicting late arrival times.It combines CNN,Bi-LSTM,and attention mechanisms to extract features,handle time series data,and enhance information utilization.Trained on operational data from the Beijing-Tianjin line,it predicts the late arrival time of a target train at the next station using multidimensional input data from the target and preceding trains.Findings-This study evaluates our model’s predictive performance using two data approaches:one considering full data and another focusing only on late arrivals.Results show precise and rapid predictions.Training with full data achieves aMAEof approximately 0.54 minutes and a RMSEof 0.65 minutes,surpassing the model trained solely on delay data(MAE:is about 1.02 min,RMSE:is about 1.52 min).Despite superior overall performance with full data,the model excels at predicting delays exceeding 15 minutes when trained exclusively on late arrivals.For enhanced adaptability to real-world train operations,training with full data is recommended.Originality/value-This paper introduces a novel neural network model,CBLA-net,for predicting train delay times.It innovatively compares and analyzes the model’s performance using both full data and delay data formats.Additionally,the evaluation of the network’s predictive capabilities considers different scenarios,providing a comprehensive demonstration of the model’s predictive performance.展开更多
Since the existing prediction methods have encountered difficulties in processing themultiple influencing factors in short-term power load forecasting,we propose a bidirectional long short-term memory(BiLSTM)neural ne...Since the existing prediction methods have encountered difficulties in processing themultiple influencing factors in short-term power load forecasting,we propose a bidirectional long short-term memory(BiLSTM)neural network model based on the temporal pattern attention(TPA)mechanism.Firstly,based on the grey relational analysis,datasets similar to forecast day are obtained.Secondly,thebidirectional LSTM layermodels the data of thehistorical load,temperature,humidity,and date-type and extracts complex relationships between data from the hidden row vectors obtained by the BiLSTM network,so that the influencing factors(with different characteristics)can select relevant information from different time steps to reduce the prediction error of the model.Simultaneously,the complex and nonlinear dependencies between time steps and sequences are extracted by the TPA mechanism,so the attention weight vector is constructed for the hidden layer output of BiLSTM and the relevant variables at different time steps are weighted to influence the input.Finally,the chaotic sparrow search algorithm(CSSA)is used to optimize the hyperparameter selection of the model.The short-term power load forecasting on different data sets shows that the average absolute errors of short-termpower load forecasting based on our method are 0.876 and 4.238,respectively,which is lower than other forecastingmethods,demonstrating the accuracy and stability of our model.展开更多
Accurately predicting fluid forces acting on the sur-face of a structure is crucial in engineering design.However,this task becomes particularly challenging in turbulent flow,due to the complex and irregular changes i...Accurately predicting fluid forces acting on the sur-face of a structure is crucial in engineering design.However,this task becomes particularly challenging in turbulent flow,due to the complex and irregular changes in the flow field.In this study,we propose a novel deep learning method,named mapping net-work-coordinated stacked gated recurrent units(MSU),for pre-dicting pressure on a circular cylinder from velocity data.Specifi-cally,our coordinated learning strategy is designed to extract the most critical velocity point for prediction,a process that has not been explored before.In our experiments,MSU extracts one point from a velocity field containing 121 points and utilizes this point to accurately predict 100 pressure points on the cylinder.This method significantly reduces the workload of data measure-ment in practical engineering applications.Our experimental results demonstrate that MSU predictions are highly similar to the real turbulent data in both spatio-temporal and individual aspects.Furthermore,the comparison results show that MSU predicts more precise results,even outperforming models that use all velocity field points.Compared with state-of-the-art methods,MSU has an average improvement of more than 45%in various indicators such as root mean square error(RMSE).Through comprehensive and authoritative physical verification,we estab-lished that MSU’s prediction results closely align with pressure field data obtained in real turbulence fields.This confirmation underscores the considerable potential of MSU for practical applications in real engineering scenarios.The code is available at https://github.com/zhangzm0128/MSU.展开更多
The numerical simulation and slope stability prediction are the focus of slope disaster research.Recently,machine learning models are commonly used in the slope stability prediction.However,these machine learning mode...The numerical simulation and slope stability prediction are the focus of slope disaster research.Recently,machine learning models are commonly used in the slope stability prediction.However,these machine learning models have some problems,such as poor nonlinear performance,local optimum and incomplete factors feature extraction.These issues can affect the accuracy of slope stability prediction.Therefore,a deep learning algorithm called Long short-term memory(LSTM)has been innovatively proposed to predict slope stability.Taking the Ganzhou City in China as the study area,the landslide inventory and their characteristics of geotechnical parameters,slope height and slope angle are analyzed.Based on these characteristics,typical soil slopes are constructed using the Geo-Studio software.Five control factors affecting slope stability,including slope height,slope angle,internal friction angle,cohesion and volumetric weight,are selected to form different slope and construct model input variables.Then,the limit equilibrium method is used to calculate the stability coefficients of these typical soil slopes under different control factors.Each slope stability coefficient and its corresponding control factors is a slope sample.As a result,a total of 2160 training samples and 450 testing samples are constructed.These sample sets are imported into LSTM for modelling and compared with the support vector machine(SVM),random forest(RF)and convo-lutional neural network(CNN).The results show that the LSTM overcomes the problem that the commonly used machine learning models have difficulty extracting global features.Furthermore,LSTM has a better prediction performance for slope stability compared to SVM,RF and CNN models.展开更多
The growing global requirement for food and the need for sustainable farming in an era of a changing climate and scarce resources have inspired substantial crop yield prediction research.Deep learning(DL)and machine l...The growing global requirement for food and the need for sustainable farming in an era of a changing climate and scarce resources have inspired substantial crop yield prediction research.Deep learning(DL)and machine learning(ML)models effectively deal with such challenges.This research paper comprehensively analyses recent advancements in crop yield prediction from January 2016 to March 2024.In addition,it analyses the effectiveness of various input parameters considered in crop yield prediction models.We conducted an in-depth search and gathered studies that employed crop modeling and AI-based methods to predict crop yield.The total number of articles reviewed for crop yield prediction using ML,meta-modeling(Crop models coupled with ML/DL),and DL-based prediction models and input parameter selection is 125.We conduct the research by setting up five objectives for this research and discussing them after analyzing the selected research papers.Each study is assessed based on the crop type,input parameters employed for prediction,the modeling techniques adopted,and the evaluation metrics used for estimatingmodel performance.We also discuss the ethical and social impacts of AI on agriculture.However,various approaches presented in the scientific literature have delivered impressive predictions,they are complicateddue to intricate,multifactorial influences oncropgrowthand theneed for accuratedata-driven models.Therefore,thorough research is required to deal with challenges in predicting agricultural output.展开更多
Financial time series prediction,whether for classification or regression,has been a heated research topic over the last decade.While traditional machine learning algorithms have experienced mediocre results,deep lear...Financial time series prediction,whether for classification or regression,has been a heated research topic over the last decade.While traditional machine learning algorithms have experienced mediocre results,deep learning has largely contributed to the elevation of the prediction performance.Currently,the most up-to-date review of advanced machine learning techniques for financial time series prediction is still lacking,making it challenging for finance domain experts and relevant practitioners to determine which model potentially performs better,what techniques and components are involved,and how themodel can be designed and implemented.This review article provides an overview of techniques,components and frameworks for financial time series prediction,with an emphasis on state-of-the-art deep learning models in the literature from2015 to 2023,including standalonemodels like convolutional neural networks(CNN)that are capable of extracting spatial dependencies within data,and long short-term memory(LSTM)that is designed for handling temporal dependencies;and hybrid models integrating CNN,LSTM,attention mechanism(AM)and other techniques.For illustration and comparison purposes,models proposed in recent studies are mapped to relevant elements of a generalized framework comprised of input,output,feature extraction,prediction,and related processes.Among the state-of-the-artmodels,hybrid models like CNNLSTMand CNN-LSTM-AM in general have been reported superior in performance to stand-alone models like the CNN-only model.Some remaining challenges have been discussed,including non-friendliness for finance domain experts,delayed prediction,domain knowledge negligence,lack of standards,and inability of real-time and highfrequency predictions.The principal contributions of this paper are to provide a one-stop guide for both academia and industry to review,compare and summarize technologies and recent advances in this area,to facilitate smooth and informed implementation,and to highlight future research directions.展开更多
Acute complication prediction model is of great importance for the overall reduction of premature death in chronic diseases.The CLSTM-BPR proposed in this paper aims to improve the accuracy,interpretability,and genera...Acute complication prediction model is of great importance for the overall reduction of premature death in chronic diseases.The CLSTM-BPR proposed in this paper aims to improve the accuracy,interpretability,and generalizability of the existing disease prediction models.Firstly,through its complex neural network structure,CLSTM-BPR considers both disease commonality and patient characteristics in the prediction process.Secondly,by splicing the time series prediction algorithm and classifier,the judgment basis is given along with the prediction results.Finally,this model introduces the pairwise algorithm Bayesian Personalized Ranking(BPR)into the medical field for the first time,and achieves a good result in the diagnosis of six acute complications.Experiments on the Medical Information Mart for Intensive Care IV(MIMIC-IV)dataset show that the average Mean Absolute Error(MAE)of biomarker value prediction of the CLSTM-BPR model is 0.26,and the average accuracy(ACC)of the CLSTM-BPR model for acute complication diagnosis is 92.5%.Comparison experiments and ablation experiments further demonstrate the reliability of CLSTM-BPR in the prediction of acute complication,which is an advancement of current disease prediction tools.展开更多
Accurate load forecasting forms a crucial foundation for implementing household demand response plans andoptimizing load scheduling. When dealing with short-term load data characterized by substantial fluctuations,a s...Accurate load forecasting forms a crucial foundation for implementing household demand response plans andoptimizing load scheduling. When dealing with short-term load data characterized by substantial fluctuations,a single prediction model is hard to capture temporal features effectively, resulting in diminished predictionaccuracy. In this study, a hybrid deep learning framework that integrates attention mechanism, convolution neuralnetwork (CNN), improved chaotic particle swarm optimization (ICPSO), and long short-term memory (LSTM), isproposed for short-term household load forecasting. Firstly, the CNN model is employed to extract features fromthe original data, enhancing the quality of data features. Subsequently, the moving average method is used for datapreprocessing, followed by the application of the LSTM network to predict the processed data. Moreover, the ICPSOalgorithm is introduced to optimize the parameters of LSTM, aimed at boosting the model’s running speed andaccuracy. Finally, the attention mechanism is employed to optimize the output value of LSTM, effectively addressinginformation loss in LSTM induced by lengthy sequences and further elevating prediction accuracy. According tothe numerical analysis, the accuracy and effectiveness of the proposed hybrid model have been verified. It canexplore data features adeptly, achieving superior prediction accuracy compared to other forecasting methods forthe household load exhibiting significant fluctuations across different seasons.展开更多
The complexity of river-tide interaction poses a significant challenge in predicting discharge in tidal rivers.Long short-term memory(LSTM)networks excel in processing and predicting crucial events with extended inter...The complexity of river-tide interaction poses a significant challenge in predicting discharge in tidal rivers.Long short-term memory(LSTM)networks excel in processing and predicting crucial events with extended intervals and time delays in time series data.Additionally,the sequence-to-sequence(Seq2Seq)model,known for handling temporal relationships,adapting to variable-length sequences,effectively capturing historical information,and accommodating various influencing factors,emerges as a robust and flexible tool in discharge forecasting.In this study,we introduce the application of LSTM-based Seq2Seq models for the first time in forecasting the discharge of a tidal reach of the Changjiang River(Yangtze River)Estuary.This study focuses on discharge forecasting using three key input characteristics:flow velocity,water level,and discharge,which means the structure of multiple input and single output is adopted.The experiment used the discharge data of the whole year of 2020,of which the first 80%is used as the training set,and the last 20%is used as the test set.This means that the data covers different tidal cycles,which helps to test the forecasting effect of different models in different tidal cycles and different runoff.The experimental results indicate that the proposed models demonstrate advantages in long-term,mid-term,and short-term discharge forecasting.The Seq2Seq models improved by 6%-60%and 5%-20%of the relative standard deviation compared to the harmonic analysis models and improved back propagation neural network models in discharge prediction,respectively.In addition,the relative accuracy of the Seq2Seq model is 1%to 3%higher than that of the LSTM model.Analytical assessment of the prediction errors shows that the Seq2Seq models are insensitive to the forecast lead time and they can capture characteristic values such as maximum flood tide flow and maximum ebb tide flow in the tidal cycle well.This indicates the significance of the Seq2Seq models.展开更多
Traditional traffic management techniques appear to be incompetent in complex data center networks, so proposes a load balancing strategy based on Long Short-Term Memory (LSTM) and quantum annealing by Software Define...Traditional traffic management techniques appear to be incompetent in complex data center networks, so proposes a load balancing strategy based on Long Short-Term Memory (LSTM) and quantum annealing by Software Defined Network (SDN) to dynamically predict the traffic and comprehensively consider the current and predicted load of the network in order to select the optimal forwarding path and balance the network load. Experiments have demonstrated that the algorithm achieves significant improvement in both system throughput and average packet loss rate for the purpose of improving network quality of service.展开更多
To improve energy efficiency and protect the environment,the integrated energy system(IES)becomes a significant direction of energy structure adjustment.This paper innovatively proposes a wavelet neural network(WNN)mo...To improve energy efficiency and protect the environment,the integrated energy system(IES)becomes a significant direction of energy structure adjustment.This paper innovatively proposes a wavelet neural network(WNN)model optimized by the improved particle swarm optimization(IPSO)and chaos optimization algorithm(COA)for short-term load prediction of IES.The proposed model overcomes the disadvantages of the slow convergence and the tendency to fall into the local optimum in traditional WNN models.First,the Pearson correlation coefficient is employed to select the key influencing factors of load prediction.Then,the traditional particle swarm optimization(PSO)is improved by the dynamic particle inertia weight.To jump out of the local optimum,the COA is employed to search for individual optimal particles in IPSO.In the iteration,the parameters of WNN are continually optimized by IPSO-COA.Meanwhile,the feedback link is added to the proposed model,where the output error is adopted to modify the prediction results.Finally,the proposed model is employed for load prediction.The experimental simulation verifies that the proposed model significantly improves the prediction accuracy and operation efficiency compared with the artificial neural network(ANN),WNN,and PSO-WNN.展开更多
Based on the monitoring and discovery service 4 (MDS4) model, a monitoring model for a data grid which supports reliable storage and intrusion tolerance is designed. The load characteristics and indicators of comput...Based on the monitoring and discovery service 4 (MDS4) model, a monitoring model for a data grid which supports reliable storage and intrusion tolerance is designed. The load characteristics and indicators of computing resources in the monitoring model are analyzed. Then, a time-series autoregressive prediction model is devised. And an autoregressive support vector regression( ARSVR) monitoring method is put forward to predict the node load of the data grid. Finally, a model for historical observations sequences is set up using the autoregressive (AR) model and the model order is determined. The support vector regression(SVR) model is trained using historical data and the regression function is obtained. Simulation results show that the ARSVR method can effectively predict the node load.展开更多
Landslide deformation is affected by its geological conditions and many environmental factors.So it has the characteristics of dynamic,nonlinear and unstable,which makes the prediction of landslide displacement diffic...Landslide deformation is affected by its geological conditions and many environmental factors.So it has the characteristics of dynamic,nonlinear and unstable,which makes the prediction of landslide displacement difficult.In view of the above problems,this paper proposes a dynamic prediction model of landslide displacement based on the improvement of complete ensemble empirical mode decomposition with adaptive noise(ICEEMDAN),approximate entropy(ApEn)and convolution long short-term memory(CNN-LSTM)neural network.Firstly,ICEEMDAN and Ap En are used to decompose the cumulative displacements into trend,periodic and random displacements.Then,the least square quintic polynomial function is used to fit the displacement of trend term,and the CNN-LSTM is used to predict the displacement of periodic term and random term.Finally,the displacement prediction results of trend term,periodic term and random term are superimposed to obtain the cumulative displacement prediction value.The proposed model has been verified in Bazimen landslide in the Three Gorges Reservoir area of China.The experimental results show that the model proposed in this paper can more effectively predict the displacement changes of landslides.As compared with long short-term memory(LSTM)neural network,gated recurrent unit(GRU)network model and back propagation(BP)neural network,CNN-LSTM neural network had higher prediction accuracy in predicting the periodic displacement,with the mean absolute percentage error(MAPE)reduced by 3.621%,6.893% and 15.886% respectively,and the root mean square error(RMSE)reduced by 3.834 mm,3.945 mm and 7.422mm respectively.Conclusively,this model not only has high prediction accuracy but also is more stable,which can provide a new insight for practical landslide prevention and control engineering.展开更多
文摘To tackle the problem of inaccurate short-term bus load prediction,especially during holidays,a Transformer-based scheme with tailored architectural enhancements is proposed.First,the input data are clustered to reduce complexity and capture inherent characteristics more effectively.Gated residual connections are then employed to selectively propagate salient features across layers,while an attention mechanism focuses on identifying prominent patterns in multivariate time-series data.Ultimately,a pre-trained structure is incorporated to reduce computational complexity.Experimental results based on extensive data show that the proposed scheme achieves improved prediction accuracy over comparative algorithms by at least 32.00%consistently across all buses evaluated,and the fitting effect of holiday load curves is outstanding.Meanwhile,the pre-trained structure drastically reduces the training time of the proposed algorithm by more than 65.75%.The proposed scheme can efficiently predict bus load results while enhancing robustness for holiday predictions,making it better adapted to real-world prediction scenarios.
基金supported by National Natural Science Foundation of China,China(No.42004016)HuBei Natural Science Fund,China(No.2020CFB329)+1 种基金HuNan Natural Science Fund,China(No.2023JJ60559,2023JJ60560)the State Key Laboratory of Geodesy and Earth’s Dynamics self-deployment project,China(No.S21L6101)。
文摘Short-term(up to 30 days)predictions of Earth Rotation Parameters(ERPs)such as Polar Motion(PM:PMX and PMY)play an essential role in real-time applications related to high-precision reference frame conversion.Currently,least squares(LS)+auto-regressive(AR)hybrid method is one of the main techniques of PM prediction.Besides,the weighted LS+AR hybrid method performs well for PM short-term prediction.However,the corresponding covariance information of LS fitting residuals deserves further exploration in the AR model.In this study,we have derived a modified stochastic model for the LS+AR hybrid method,namely the weighted LS+weighted AR hybrid method.By using the PM data products of IERS EOP 14 C04,the numerical results indicate that for PM short-term forecasting,the proposed weighted LS+weighted AR hybrid method shows an advantage over both the LS+AR hybrid method and the weighted LS+AR hybrid method.Compared to the mean absolute errors(MAEs)of PMX/PMY sho rt-term prediction of the LS+AR hybrid method and the weighted LS+AR hybrid method,the weighted LS+weighted AR hybrid method shows average improvements of 6.61%/12.08%and 0.24%/11.65%,respectively.Besides,for the slopes of the linear regression lines fitted to the errors of each method,the growth of the prediction error of the proposed method is slower than that of the other two methods.
基金funded by the Fujian Province Science and Technology Plan,China(Grant Number 2019H0017).
文摘Accurate forecasting of time series is crucial across various domains.Many prediction tasks rely on effectively segmenting,matching,and time series data alignment.For instance,regardless of time series with the same granularity,segmenting them into different granularity events can effectively mitigate the impact of varying time scales on prediction accuracy.However,these events of varying granularity frequently intersect with each other,which may possess unequal durations.Even minor differences can result in significant errors when matching time series with future trends.Besides,directly using matched events but unaligned events as state vectors in machine learning-based prediction models can lead to insufficient prediction accuracy.Therefore,this paper proposes a short-term forecasting method for time series based on a multi-granularity event,MGE-SP(multi-granularity event-based short-termprediction).First,amethodological framework for MGE-SP established guides the implementation steps.The framework consists of three key steps,including multi-granularity event matching based on the LTF(latest time first)strategy,multi-granularity event alignment using a piecewise aggregate approximation based on the compression ratio,and a short-term prediction model based on XGBoost.The data from a nationwide online car-hailing service in China ensures the method’s reliability.The average RMSE(root mean square error)and MAE(mean absolute error)of the proposed method are 3.204 and 2.360,lower than the respective values of 4.056 and 3.101 obtained using theARIMA(autoregressive integratedmoving average)method,as well as the values of 4.278 and 2.994 obtained using k-means-SVR(support vector regression)method.The other experiment is conducted on stock data froma public data set.The proposed method achieved an average RMSE and MAE of 0.836 and 0.696,lower than the respective values of 1.019 and 0.844 obtained using the ARIMA method,as well as the values of 1.350 and 1.172 obtained using the k-means-SVR method.
基金supported by the National Natural Science Foundation of China(72288101,72201029,and 72322022).
文摘Accurate origin–destination(OD)demand prediction is crucial for the efficient operation and management of urban rail transit(URT)systems,particularly during a pandemic.However,this task faces several limitations,including real-time availability,sparsity,and high-dimensionality issues,and the impact of the pandemic.Consequently,this study proposes a unified framework called the physics-guided adaptive graph spatial–temporal attention network(PAG-STAN)for metro OD demand prediction under pandemic conditions.Specifically,PAG-STAN introduces a real-time OD estimation module to estimate real-time complete OD demand matrices.Subsequently,a novel dynamic OD demand matrix compression module is proposed to generate dense real-time OD demand matrices.Thereafter,PAG-STAN leverages various heterogeneous data to learn the evolutionary trend of future OD ridership during the pandemic.Finally,a masked physics-guided loss function(MPG-loss function)incorporates the physical quantity information between the OD demand and inbound flow into the loss function to enhance model interpretability.PAG-STAN demonstrated favorable performance on two real-world metro OD demand datasets under the pandemic and conventional scenarios,highlighting its robustness and sensitivity for metro OD demand prediction.A series of ablation studies were conducted to verify the indispensability of each module in PAG-STAN.
文摘With the advancement of artificial intelligence,traffic forecasting is gaining more and more interest in optimizing route planning and enhancing service quality.Traffic volume is an influential parameter for planning and operating traffic structures.This study proposed an improved ensemble-based deep learning method to solve traffic volume prediction problems.A set of optimal hyperparameters is also applied for the suggested approach to improve the performance of the learning process.The fusion of these methodologies aims to harness ensemble empirical mode decomposition’s capacity to discern complex traffic patterns and long short-term memory’s proficiency in learning temporal relationships.Firstly,a dataset for automatic vehicle identification is obtained and utilized in the preprocessing stage of the ensemble empirical mode decomposition model.The second aspect involves predicting traffic volume using the long short-term memory algorithm.Next,the study employs a trial-and-error approach to select a set of optimal hyperparameters,including the lookback window,the number of neurons in the hidden layers,and the gradient descent optimization.Finally,the fusion of the obtained results leads to a final traffic volume prediction.The experimental results show that the proposed method outperforms other benchmarks regarding various evaluation measures,including mean absolute error,root mean squared error,mean absolute percentage error,and R-squared.The achieved R-squared value reaches an impressive 98%,while the other evaluation indices surpass the competing.These findings highlight the accuracy of traffic pattern prediction.Consequently,this offers promising prospects for enhancing transportation management systems and urban infrastructure planning.
文摘BACKGROUND Endometrial cancer(EC)is a common gynecological malignancy that typically requires prompt surgical intervention;however,the advantage of surgical management is limited by the high postoperative recurrence rates and adverse outcomes.Previous studies have highlighted the prognostic potential of circulating tumor DNA(ctDNA)monitoring for minimal residual disease in patients with EC.AIM To develop and validate an optimized ctDNA-based model for predicting shortterm postoperative EC recurrence.METHODS We retrospectively analyzed 294 EC patients treated surgically from 2015-2019 to devise a short-term recurrence prediction model,which was validated on 143 EC patients operated between 2020 and 2021.Prognostic factors were identified using univariate Cox,Lasso,and multivariate Cox regressions.A nomogram was created to predict the 1,1.5,and 2-year recurrence-free survival(RFS).Model performance was assessed via receiver operating characteristic(ROC),calibration,and decision curve analyses(DCA),leading to a recurrence risk stratification system.RESULTS Based on the regression analysis and the nomogram created,patients with postoperative ctDNA-negativity,postoperative carcinoembryonic antigen 125(CA125)levels of<19 U/mL,and grade G1 tumors had improved RFS after surgery.The nomogram’s efficacy for recurrence prediction was confirmed through ROC analysis,calibration curves,and DCA methods,highlighting its high accuracy and clinical utility.Furthermore,using the nomogram,the patients were successfully classified into three risk subgroups.CONCLUSION The nomogram accurately predicted RFS after EC surgery at 1,1.5,and 2 years.This model will help clinicians personalize treatments,stratify risks,and enhance clinical outcomes for patients with EC.
基金funded by Liaoning Provincial Department of Science and Technology(2023JH2/101600058)。
文摘With the continuous advancement of China’s“peak carbon dioxide emissions and Carbon Neutrality”process,the proportion of wind power is increasing.In the current research,aiming at the problem that the forecasting model is outdated due to the continuous updating of wind power data,a short-term wind power forecasting algorithm based on Incremental Learning-Bagging Deep Hybrid Kernel Extreme Learning Machine(IL-Bagging-DHKELM)error affinity propagation cluster analysis is proposed.The algorithm effectively combines deep hybrid kernel extreme learning machine(DHKELM)with incremental learning(IL).Firstly,an initial wind power prediction model is trained using the Bagging-DHKELM model.Secondly,Euclidean morphological distance affinity propagation AP clustering algorithm is used to cluster and analyze the prediction error of wind power obtained from the initial training model.Finally,the correlation between wind power prediction errors and Numerical Weather Prediction(NWP)data is introduced as incremental updates to the initial wind power prediction model.During the incremental learning process,multiple error performance indicators are used to measure the overall model performance,thereby enabling incremental updates of wind power models.Practical examples show the method proposed in this article reduces the root mean square error of the initial model by 1.9 percentage points,indicating that this method can be better adapted to the current scenario of the continuous increase in wind power penetration rate.The accuracy and precision of wind power generation prediction are effectively improved through the method.
基金supported in part by the National Natural Science Foundation of China under Grant 62203468in part by the Technological Research and Development Program of China State Railway Group Co.,Ltd.under Grant Q2023X011+1 种基金in part by the Young Elite Scientist Sponsorship Program by China Association for Science and Technology(CAST)under Grant 2022QNRC001in part by the Youth Talent Program Supported by China Railway Society,and in part by the Research Program of China Academy of Railway Sciences Corporation Limited under Grant 2023YJ112.
文摘Purpose-To optimize train operations,dispatchers currently rely on experience for quick adjustments when delays occur.However,delay predictions often involve imprecise shifts based on known delay times.Real-time and accurate train delay predictions,facilitated by data-driven neural network models,can significantly reduce dispatcher stress and improve adjustment plans.Leveraging current train operation data,these models enable swift and precise predictions,addressing challenges posed by train delays in high-speed rail networks during unforeseen events.Design/methodology/approach-This paper proposes CBLA-net,a neural network architecture for predicting late arrival times.It combines CNN,Bi-LSTM,and attention mechanisms to extract features,handle time series data,and enhance information utilization.Trained on operational data from the Beijing-Tianjin line,it predicts the late arrival time of a target train at the next station using multidimensional input data from the target and preceding trains.Findings-This study evaluates our model’s predictive performance using two data approaches:one considering full data and another focusing only on late arrivals.Results show precise and rapid predictions.Training with full data achieves aMAEof approximately 0.54 minutes and a RMSEof 0.65 minutes,surpassing the model trained solely on delay data(MAE:is about 1.02 min,RMSE:is about 1.52 min).Despite superior overall performance with full data,the model excels at predicting delays exceeding 15 minutes when trained exclusively on late arrivals.For enhanced adaptability to real-world train operations,training with full data is recommended.Originality/value-This paper introduces a novel neural network model,CBLA-net,for predicting train delay times.It innovatively compares and analyzes the model’s performance using both full data and delay data formats.Additionally,the evaluation of the network’s predictive capabilities considers different scenarios,providing a comprehensive demonstration of the model’s predictive performance.
基金supported by the Major Project of Basic and Applied Research in Guangdong Universities (2017WZDXM012)。
文摘Since the existing prediction methods have encountered difficulties in processing themultiple influencing factors in short-term power load forecasting,we propose a bidirectional long short-term memory(BiLSTM)neural network model based on the temporal pattern attention(TPA)mechanism.Firstly,based on the grey relational analysis,datasets similar to forecast day are obtained.Secondly,thebidirectional LSTM layermodels the data of thehistorical load,temperature,humidity,and date-type and extracts complex relationships between data from the hidden row vectors obtained by the BiLSTM network,so that the influencing factors(with different characteristics)can select relevant information from different time steps to reduce the prediction error of the model.Simultaneously,the complex and nonlinear dependencies between time steps and sequences are extracted by the TPA mechanism,so the attention weight vector is constructed for the hidden layer output of BiLSTM and the relevant variables at different time steps are weighted to influence the input.Finally,the chaotic sparrow search algorithm(CSSA)is used to optimize the hyperparameter selection of the model.The short-term power load forecasting on different data sets shows that the average absolute errors of short-termpower load forecasting based on our method are 0.876 and 4.238,respectively,which is lower than other forecastingmethods,demonstrating the accuracy and stability of our model.
基金supported by the Japan Society for the Promotion of Science(JSPS)KAKENHI(JP22H03643)Japan Science and Technology Agency(JST)Support for Pioneering Research Initiated by the Next Generation(SPRING)(JPMJSP2145)+2 种基金JST Through the Establishment of University Fellowships Towards the Creation of Science Technology Innovation(JPMJFS2115)the National Natural Science Foundation of China(52078382)the State Key Laboratory of Disaster Reduction in Civil Engineering(CE19-A-01)。
文摘Accurately predicting fluid forces acting on the sur-face of a structure is crucial in engineering design.However,this task becomes particularly challenging in turbulent flow,due to the complex and irregular changes in the flow field.In this study,we propose a novel deep learning method,named mapping net-work-coordinated stacked gated recurrent units(MSU),for pre-dicting pressure on a circular cylinder from velocity data.Specifi-cally,our coordinated learning strategy is designed to extract the most critical velocity point for prediction,a process that has not been explored before.In our experiments,MSU extracts one point from a velocity field containing 121 points and utilizes this point to accurately predict 100 pressure points on the cylinder.This method significantly reduces the workload of data measure-ment in practical engineering applications.Our experimental results demonstrate that MSU predictions are highly similar to the real turbulent data in both spatio-temporal and individual aspects.Furthermore,the comparison results show that MSU predicts more precise results,even outperforming models that use all velocity field points.Compared with state-of-the-art methods,MSU has an average improvement of more than 45%in various indicators such as root mean square error(RMSE).Through comprehensive and authoritative physical verification,we estab-lished that MSU’s prediction results closely align with pressure field data obtained in real turbulence fields.This confirmation underscores the considerable potential of MSU for practical applications in real engineering scenarios.The code is available at https://github.com/zhangzm0128/MSU.
基金funded by the National Natural Science Foundation of China (41807285)。
文摘The numerical simulation and slope stability prediction are the focus of slope disaster research.Recently,machine learning models are commonly used in the slope stability prediction.However,these machine learning models have some problems,such as poor nonlinear performance,local optimum and incomplete factors feature extraction.These issues can affect the accuracy of slope stability prediction.Therefore,a deep learning algorithm called Long short-term memory(LSTM)has been innovatively proposed to predict slope stability.Taking the Ganzhou City in China as the study area,the landslide inventory and their characteristics of geotechnical parameters,slope height and slope angle are analyzed.Based on these characteristics,typical soil slopes are constructed using the Geo-Studio software.Five control factors affecting slope stability,including slope height,slope angle,internal friction angle,cohesion and volumetric weight,are selected to form different slope and construct model input variables.Then,the limit equilibrium method is used to calculate the stability coefficients of these typical soil slopes under different control factors.Each slope stability coefficient and its corresponding control factors is a slope sample.As a result,a total of 2160 training samples and 450 testing samples are constructed.These sample sets are imported into LSTM for modelling and compared with the support vector machine(SVM),random forest(RF)and convo-lutional neural network(CNN).The results show that the LSTM overcomes the problem that the commonly used machine learning models have difficulty extracting global features.Furthermore,LSTM has a better prediction performance for slope stability compared to SVM,RF and CNN models.
文摘The growing global requirement for food and the need for sustainable farming in an era of a changing climate and scarce resources have inspired substantial crop yield prediction research.Deep learning(DL)and machine learning(ML)models effectively deal with such challenges.This research paper comprehensively analyses recent advancements in crop yield prediction from January 2016 to March 2024.In addition,it analyses the effectiveness of various input parameters considered in crop yield prediction models.We conducted an in-depth search and gathered studies that employed crop modeling and AI-based methods to predict crop yield.The total number of articles reviewed for crop yield prediction using ML,meta-modeling(Crop models coupled with ML/DL),and DL-based prediction models and input parameter selection is 125.We conduct the research by setting up five objectives for this research and discussing them after analyzing the selected research papers.Each study is assessed based on the crop type,input parameters employed for prediction,the modeling techniques adopted,and the evaluation metrics used for estimatingmodel performance.We also discuss the ethical and social impacts of AI on agriculture.However,various approaches presented in the scientific literature have delivered impressive predictions,they are complicateddue to intricate,multifactorial influences oncropgrowthand theneed for accuratedata-driven models.Therefore,thorough research is required to deal with challenges in predicting agricultural output.
基金funded by the Natural Science Foundation of Fujian Province,China (Grant No.2022J05291)Xiamen Scientific Research Funding for Overseas Chinese Scholars.
文摘Financial time series prediction,whether for classification or regression,has been a heated research topic over the last decade.While traditional machine learning algorithms have experienced mediocre results,deep learning has largely contributed to the elevation of the prediction performance.Currently,the most up-to-date review of advanced machine learning techniques for financial time series prediction is still lacking,making it challenging for finance domain experts and relevant practitioners to determine which model potentially performs better,what techniques and components are involved,and how themodel can be designed and implemented.This review article provides an overview of techniques,components and frameworks for financial time series prediction,with an emphasis on state-of-the-art deep learning models in the literature from2015 to 2023,including standalonemodels like convolutional neural networks(CNN)that are capable of extracting spatial dependencies within data,and long short-term memory(LSTM)that is designed for handling temporal dependencies;and hybrid models integrating CNN,LSTM,attention mechanism(AM)and other techniques.For illustration and comparison purposes,models proposed in recent studies are mapped to relevant elements of a generalized framework comprised of input,output,feature extraction,prediction,and related processes.Among the state-of-the-artmodels,hybrid models like CNNLSTMand CNN-LSTM-AM in general have been reported superior in performance to stand-alone models like the CNN-only model.Some remaining challenges have been discussed,including non-friendliness for finance domain experts,delayed prediction,domain knowledge negligence,lack of standards,and inability of real-time and highfrequency predictions.The principal contributions of this paper are to provide a one-stop guide for both academia and industry to review,compare and summarize technologies and recent advances in this area,to facilitate smooth and informed implementation,and to highlight future research directions.
基金supported by the Social Science Fund of China(No.19BTQ072).
文摘Acute complication prediction model is of great importance for the overall reduction of premature death in chronic diseases.The CLSTM-BPR proposed in this paper aims to improve the accuracy,interpretability,and generalizability of the existing disease prediction models.Firstly,through its complex neural network structure,CLSTM-BPR considers both disease commonality and patient characteristics in the prediction process.Secondly,by splicing the time series prediction algorithm and classifier,the judgment basis is given along with the prediction results.Finally,this model introduces the pairwise algorithm Bayesian Personalized Ranking(BPR)into the medical field for the first time,and achieves a good result in the diagnosis of six acute complications.Experiments on the Medical Information Mart for Intensive Care IV(MIMIC-IV)dataset show that the average Mean Absolute Error(MAE)of biomarker value prediction of the CLSTM-BPR model is 0.26,and the average accuracy(ACC)of the CLSTM-BPR model for acute complication diagnosis is 92.5%.Comparison experiments and ablation experiments further demonstrate the reliability of CLSTM-BPR in the prediction of acute complication,which is an advancement of current disease prediction tools.
基金the Shanghai Rising-Star Program(No.22QA1403900)the National Natural Science Foundation of China(No.71804106)the Noncarbon Energy Conversion and Utilization Institute under the Shanghai Class IV Peak Disciplinary Development Program.
文摘Accurate load forecasting forms a crucial foundation for implementing household demand response plans andoptimizing load scheduling. When dealing with short-term load data characterized by substantial fluctuations,a single prediction model is hard to capture temporal features effectively, resulting in diminished predictionaccuracy. In this study, a hybrid deep learning framework that integrates attention mechanism, convolution neuralnetwork (CNN), improved chaotic particle swarm optimization (ICPSO), and long short-term memory (LSTM), isproposed for short-term household load forecasting. Firstly, the CNN model is employed to extract features fromthe original data, enhancing the quality of data features. Subsequently, the moving average method is used for datapreprocessing, followed by the application of the LSTM network to predict the processed data. Moreover, the ICPSOalgorithm is introduced to optimize the parameters of LSTM, aimed at boosting the model’s running speed andaccuracy. Finally, the attention mechanism is employed to optimize the output value of LSTM, effectively addressinginformation loss in LSTM induced by lengthy sequences and further elevating prediction accuracy. According tothe numerical analysis, the accuracy and effectiveness of the proposed hybrid model have been verified. It canexplore data features adeptly, achieving superior prediction accuracy compared to other forecasting methods forthe household load exhibiting significant fluctuations across different seasons.
基金The National Natural Science Foundation of China under contract Nos 42266006 and 41806114the Jiangxi Provincial Natural Science Foundation under contract Nos 20232BAB204089 and 20202ACBL214019.
文摘The complexity of river-tide interaction poses a significant challenge in predicting discharge in tidal rivers.Long short-term memory(LSTM)networks excel in processing and predicting crucial events with extended intervals and time delays in time series data.Additionally,the sequence-to-sequence(Seq2Seq)model,known for handling temporal relationships,adapting to variable-length sequences,effectively capturing historical information,and accommodating various influencing factors,emerges as a robust and flexible tool in discharge forecasting.In this study,we introduce the application of LSTM-based Seq2Seq models for the first time in forecasting the discharge of a tidal reach of the Changjiang River(Yangtze River)Estuary.This study focuses on discharge forecasting using three key input characteristics:flow velocity,water level,and discharge,which means the structure of multiple input and single output is adopted.The experiment used the discharge data of the whole year of 2020,of which the first 80%is used as the training set,and the last 20%is used as the test set.This means that the data covers different tidal cycles,which helps to test the forecasting effect of different models in different tidal cycles and different runoff.The experimental results indicate that the proposed models demonstrate advantages in long-term,mid-term,and short-term discharge forecasting.The Seq2Seq models improved by 6%-60%and 5%-20%of the relative standard deviation compared to the harmonic analysis models and improved back propagation neural network models in discharge prediction,respectively.In addition,the relative accuracy of the Seq2Seq model is 1%to 3%higher than that of the LSTM model.Analytical assessment of the prediction errors shows that the Seq2Seq models are insensitive to the forecast lead time and they can capture characteristic values such as maximum flood tide flow and maximum ebb tide flow in the tidal cycle well.This indicates the significance of the Seq2Seq models.
文摘Traditional traffic management techniques appear to be incompetent in complex data center networks, so proposes a load balancing strategy based on Long Short-Term Memory (LSTM) and quantum annealing by Software Defined Network (SDN) to dynamically predict the traffic and comprehensively consider the current and predicted load of the network in order to select the optimal forwarding path and balance the network load. Experiments have demonstrated that the algorithm achieves significant improvement in both system throughput and average packet loss rate for the purpose of improving network quality of service.
基金supported in part by the National Key Research and Development Program of China(No.2018YFB1500800)the National Natural Science Foundation of China(No.51807134)the State Key Laboratory of Reliability and Intelligence of Electrical Equipment,Hebei University of Technology(No.EERI_KF20200014)。
文摘To improve energy efficiency and protect the environment,the integrated energy system(IES)becomes a significant direction of energy structure adjustment.This paper innovatively proposes a wavelet neural network(WNN)model optimized by the improved particle swarm optimization(IPSO)and chaos optimization algorithm(COA)for short-term load prediction of IES.The proposed model overcomes the disadvantages of the slow convergence and the tendency to fall into the local optimum in traditional WNN models.First,the Pearson correlation coefficient is employed to select the key influencing factors of load prediction.Then,the traditional particle swarm optimization(PSO)is improved by the dynamic particle inertia weight.To jump out of the local optimum,the COA is employed to search for individual optimal particles in IPSO.In the iteration,the parameters of WNN are continually optimized by IPSO-COA.Meanwhile,the feedback link is added to the proposed model,where the output error is adopted to modify the prediction results.Finally,the proposed model is employed for load prediction.The experimental simulation verifies that the proposed model significantly improves the prediction accuracy and operation efficiency compared with the artificial neural network(ANN),WNN,and PSO-WNN.
基金The National High Technology Research and Development Program of China (863 Program) (No2007AA01Z404)
文摘Based on the monitoring and discovery service 4 (MDS4) model, a monitoring model for a data grid which supports reliable storage and intrusion tolerance is designed. The load characteristics and indicators of computing resources in the monitoring model are analyzed. Then, a time-series autoregressive prediction model is devised. And an autoregressive support vector regression( ARSVR) monitoring method is put forward to predict the node load of the data grid. Finally, a model for historical observations sequences is set up using the autoregressive (AR) model and the model order is determined. The support vector regression(SVR) model is trained using historical data and the regression function is obtained. Simulation results show that the ARSVR method can effectively predict the node load.
基金funded by the technology innovation guidance special project of Shaanxi Province(Grant No.2020CGXNX009)the supported by the National Natural Science Foundation of China(Grant No.62203344)+1 种基金the Shaanxi Provincial Department of Education serves local special projects(Grant No.22JC036)the Natural Science Basic Research Plan of Shaanxi Province(Grant No.2022JM-322)。
文摘Landslide deformation is affected by its geological conditions and many environmental factors.So it has the characteristics of dynamic,nonlinear and unstable,which makes the prediction of landslide displacement difficult.In view of the above problems,this paper proposes a dynamic prediction model of landslide displacement based on the improvement of complete ensemble empirical mode decomposition with adaptive noise(ICEEMDAN),approximate entropy(ApEn)and convolution long short-term memory(CNN-LSTM)neural network.Firstly,ICEEMDAN and Ap En are used to decompose the cumulative displacements into trend,periodic and random displacements.Then,the least square quintic polynomial function is used to fit the displacement of trend term,and the CNN-LSTM is used to predict the displacement of periodic term and random term.Finally,the displacement prediction results of trend term,periodic term and random term are superimposed to obtain the cumulative displacement prediction value.The proposed model has been verified in Bazimen landslide in the Three Gorges Reservoir area of China.The experimental results show that the model proposed in this paper can more effectively predict the displacement changes of landslides.As compared with long short-term memory(LSTM)neural network,gated recurrent unit(GRU)network model and back propagation(BP)neural network,CNN-LSTM neural network had higher prediction accuracy in predicting the periodic displacement,with the mean absolute percentage error(MAPE)reduced by 3.621%,6.893% and 15.886% respectively,and the root mean square error(RMSE)reduced by 3.834 mm,3.945 mm and 7.422mm respectively.Conclusively,this model not only has high prediction accuracy but also is more stable,which can provide a new insight for practical landslide prevention and control engineering.