Short-term(up to 30 days)predictions of Earth Rotation Parameters(ERPs)such as Polar Motion(PM:PMX and PMY)play an essential role in real-time applications related to high-precision reference frame conversion.Currentl...Short-term(up to 30 days)predictions of Earth Rotation Parameters(ERPs)such as Polar Motion(PM:PMX and PMY)play an essential role in real-time applications related to high-precision reference frame conversion.Currently,least squares(LS)+auto-regressive(AR)hybrid method is one of the main techniques of PM prediction.Besides,the weighted LS+AR hybrid method performs well for PM short-term prediction.However,the corresponding covariance information of LS fitting residuals deserves further exploration in the AR model.In this study,we have derived a modified stochastic model for the LS+AR hybrid method,namely the weighted LS+weighted AR hybrid method.By using the PM data products of IERS EOP 14 C04,the numerical results indicate that for PM short-term forecasting,the proposed weighted LS+weighted AR hybrid method shows an advantage over both the LS+AR hybrid method and the weighted LS+AR hybrid method.Compared to the mean absolute errors(MAEs)of PMX/PMY sho rt-term prediction of the LS+AR hybrid method and the weighted LS+AR hybrid method,the weighted LS+weighted AR hybrid method shows average improvements of 6.61%/12.08%and 0.24%/11.65%,respectively.Besides,for the slopes of the linear regression lines fitted to the errors of each method,the growth of the prediction error of the proposed method is slower than that of the other two methods.展开更多
Accurate forecasting of time series is crucial across various domains.Many prediction tasks rely on effectively segmenting,matching,and time series data alignment.For instance,regardless of time series with the same g...Accurate forecasting of time series is crucial across various domains.Many prediction tasks rely on effectively segmenting,matching,and time series data alignment.For instance,regardless of time series with the same granularity,segmenting them into different granularity events can effectively mitigate the impact of varying time scales on prediction accuracy.However,these events of varying granularity frequently intersect with each other,which may possess unequal durations.Even minor differences can result in significant errors when matching time series with future trends.Besides,directly using matched events but unaligned events as state vectors in machine learning-based prediction models can lead to insufficient prediction accuracy.Therefore,this paper proposes a short-term forecasting method for time series based on a multi-granularity event,MGE-SP(multi-granularity event-based short-termprediction).First,amethodological framework for MGE-SP established guides the implementation steps.The framework consists of three key steps,including multi-granularity event matching based on the LTF(latest time first)strategy,multi-granularity event alignment using a piecewise aggregate approximation based on the compression ratio,and a short-term prediction model based on XGBoost.The data from a nationwide online car-hailing service in China ensures the method’s reliability.The average RMSE(root mean square error)and MAE(mean absolute error)of the proposed method are 3.204 and 2.360,lower than the respective values of 4.056 and 3.101 obtained using theARIMA(autoregressive integratedmoving average)method,as well as the values of 4.278 and 2.994 obtained using k-means-SVR(support vector regression)method.The other experiment is conducted on stock data froma public data set.The proposed method achieved an average RMSE and MAE of 0.836 and 0.696,lower than the respective values of 1.019 and 0.844 obtained using the ARIMA method,as well as the values of 1.350 and 1.172 obtained using the k-means-SVR method.展开更多
Accurate origin–destination(OD)demand prediction is crucial for the efficient operation and management of urban rail transit(URT)systems,particularly during a pandemic.However,this task faces several limitations,incl...Accurate origin–destination(OD)demand prediction is crucial for the efficient operation and management of urban rail transit(URT)systems,particularly during a pandemic.However,this task faces several limitations,including real-time availability,sparsity,and high-dimensionality issues,and the impact of the pandemic.Consequently,this study proposes a unified framework called the physics-guided adaptive graph spatial–temporal attention network(PAG-STAN)for metro OD demand prediction under pandemic conditions.Specifically,PAG-STAN introduces a real-time OD estimation module to estimate real-time complete OD demand matrices.Subsequently,a novel dynamic OD demand matrix compression module is proposed to generate dense real-time OD demand matrices.Thereafter,PAG-STAN leverages various heterogeneous data to learn the evolutionary trend of future OD ridership during the pandemic.Finally,a masked physics-guided loss function(MPG-loss function)incorporates the physical quantity information between the OD demand and inbound flow into the loss function to enhance model interpretability.PAG-STAN demonstrated favorable performance on two real-world metro OD demand datasets under the pandemic and conventional scenarios,highlighting its robustness and sensitivity for metro OD demand prediction.A series of ablation studies were conducted to verify the indispensability of each module in PAG-STAN.展开更多
With the advancement of artificial intelligence,traffic forecasting is gaining more and more interest in optimizing route planning and enhancing service quality.Traffic volume is an influential parameter for planning ...With the advancement of artificial intelligence,traffic forecasting is gaining more and more interest in optimizing route planning and enhancing service quality.Traffic volume is an influential parameter for planning and operating traffic structures.This study proposed an improved ensemble-based deep learning method to solve traffic volume prediction problems.A set of optimal hyperparameters is also applied for the suggested approach to improve the performance of the learning process.The fusion of these methodologies aims to harness ensemble empirical mode decomposition’s capacity to discern complex traffic patterns and long short-term memory’s proficiency in learning temporal relationships.Firstly,a dataset for automatic vehicle identification is obtained and utilized in the preprocessing stage of the ensemble empirical mode decomposition model.The second aspect involves predicting traffic volume using the long short-term memory algorithm.Next,the study employs a trial-and-error approach to select a set of optimal hyperparameters,including the lookback window,the number of neurons in the hidden layers,and the gradient descent optimization.Finally,the fusion of the obtained results leads to a final traffic volume prediction.The experimental results show that the proposed method outperforms other benchmarks regarding various evaluation measures,including mean absolute error,root mean squared error,mean absolute percentage error,and R-squared.The achieved R-squared value reaches an impressive 98%,while the other evaluation indices surpass the competing.These findings highlight the accuracy of traffic pattern prediction.Consequently,this offers promising prospects for enhancing transportation management systems and urban infrastructure planning.展开更多
To tackle the problem of inaccurate short-term bus load prediction,especially during holidays,a Transformer-based scheme with tailored architectural enhancements is proposed.First,the input data are clustered to reduc...To tackle the problem of inaccurate short-term bus load prediction,especially during holidays,a Transformer-based scheme with tailored architectural enhancements is proposed.First,the input data are clustered to reduce complexity and capture inherent characteristics more effectively.Gated residual connections are then employed to selectively propagate salient features across layers,while an attention mechanism focuses on identifying prominent patterns in multivariate time-series data.Ultimately,a pre-trained structure is incorporated to reduce computational complexity.Experimental results based on extensive data show that the proposed scheme achieves improved prediction accuracy over comparative algorithms by at least 32.00%consistently across all buses evaluated,and the fitting effect of holiday load curves is outstanding.Meanwhile,the pre-trained structure drastically reduces the training time of the proposed algorithm by more than 65.75%.The proposed scheme can efficiently predict bus load results while enhancing robustness for holiday predictions,making it better adapted to real-world prediction scenarios.展开更多
With the continuous advancement of China’s“peak carbon dioxide emissions and Carbon Neutrality”process,the proportion of wind power is increasing.In the current research,aiming at the problem that the forecasting m...With the continuous advancement of China’s“peak carbon dioxide emissions and Carbon Neutrality”process,the proportion of wind power is increasing.In the current research,aiming at the problem that the forecasting model is outdated due to the continuous updating of wind power data,a short-term wind power forecasting algorithm based on Incremental Learning-Bagging Deep Hybrid Kernel Extreme Learning Machine(IL-Bagging-DHKELM)error affinity propagation cluster analysis is proposed.The algorithm effectively combines deep hybrid kernel extreme learning machine(DHKELM)with incremental learning(IL).Firstly,an initial wind power prediction model is trained using the Bagging-DHKELM model.Secondly,Euclidean morphological distance affinity propagation AP clustering algorithm is used to cluster and analyze the prediction error of wind power obtained from the initial training model.Finally,the correlation between wind power prediction errors and Numerical Weather Prediction(NWP)data is introduced as incremental updates to the initial wind power prediction model.During the incremental learning process,multiple error performance indicators are used to measure the overall model performance,thereby enabling incremental updates of wind power models.Practical examples show the method proposed in this article reduces the root mean square error of the initial model by 1.9 percentage points,indicating that this method can be better adapted to the current scenario of the continuous increase in wind power penetration rate.The accuracy and precision of wind power generation prediction are effectively improved through the method.展开更多
BACKGROUND Endometrial cancer(EC)is a common gynecological malignancy that typically requires prompt surgical intervention;however,the advantage of surgical management is limited by the high postoperative recurrence r...BACKGROUND Endometrial cancer(EC)is a common gynecological malignancy that typically requires prompt surgical intervention;however,the advantage of surgical management is limited by the high postoperative recurrence rates and adverse outcomes.Previous studies have highlighted the prognostic potential of circulating tumor DNA(ctDNA)monitoring for minimal residual disease in patients with EC.AIM To develop and validate an optimized ctDNA-based model for predicting shortterm postoperative EC recurrence.METHODS We retrospectively analyzed 294 EC patients treated surgically from 2015-2019 to devise a short-term recurrence prediction model,which was validated on 143 EC patients operated between 2020 and 2021.Prognostic factors were identified using univariate Cox,Lasso,and multivariate Cox regressions.A nomogram was created to predict the 1,1.5,and 2-year recurrence-free survival(RFS).Model performance was assessed via receiver operating characteristic(ROC),calibration,and decision curve analyses(DCA),leading to a recurrence risk stratification system.RESULTS Based on the regression analysis and the nomogram created,patients with postoperative ctDNA-negativity,postoperative carcinoembryonic antigen 125(CA125)levels of<19 U/mL,and grade G1 tumors had improved RFS after surgery.The nomogram’s efficacy for recurrence prediction was confirmed through ROC analysis,calibration curves,and DCA methods,highlighting its high accuracy and clinical utility.Furthermore,using the nomogram,the patients were successfully classified into three risk subgroups.CONCLUSION The nomogram accurately predicted RFS after EC surgery at 1,1.5,and 2 years.This model will help clinicians personalize treatments,stratify risks,and enhance clinical outcomes for patients with EC.展开更多
Purpose-To optimize train operations,dispatchers currently rely on experience for quick adjustments when delays occur.However,delay predictions often involve imprecise shifts based on known delay times.Real-time and a...Purpose-To optimize train operations,dispatchers currently rely on experience for quick adjustments when delays occur.However,delay predictions often involve imprecise shifts based on known delay times.Real-time and accurate train delay predictions,facilitated by data-driven neural network models,can significantly reduce dispatcher stress and improve adjustment plans.Leveraging current train operation data,these models enable swift and precise predictions,addressing challenges posed by train delays in high-speed rail networks during unforeseen events.Design/methodology/approach-This paper proposes CBLA-net,a neural network architecture for predicting late arrival times.It combines CNN,Bi-LSTM,and attention mechanisms to extract features,handle time series data,and enhance information utilization.Trained on operational data from the Beijing-Tianjin line,it predicts the late arrival time of a target train at the next station using multidimensional input data from the target and preceding trains.Findings-This study evaluates our model’s predictive performance using two data approaches:one considering full data and another focusing only on late arrivals.Results show precise and rapid predictions.Training with full data achieves aMAEof approximately 0.54 minutes and a RMSEof 0.65 minutes,surpassing the model trained solely on delay data(MAE:is about 1.02 min,RMSE:is about 1.52 min).Despite superior overall performance with full data,the model excels at predicting delays exceeding 15 minutes when trained exclusively on late arrivals.For enhanced adaptability to real-world train operations,training with full data is recommended.Originality/value-This paper introduces a novel neural network model,CBLA-net,for predicting train delay times.It innovatively compares and analyzes the model’s performance using both full data and delay data formats.Additionally,the evaluation of the network’s predictive capabilities considers different scenarios,providing a comprehensive demonstration of the model’s predictive performance.展开更多
The numerical simulation and slope stability prediction are the focus of slope disaster research.Recently,machine learning models are commonly used in the slope stability prediction.However,these machine learning mode...The numerical simulation and slope stability prediction are the focus of slope disaster research.Recently,machine learning models are commonly used in the slope stability prediction.However,these machine learning models have some problems,such as poor nonlinear performance,local optimum and incomplete factors feature extraction.These issues can affect the accuracy of slope stability prediction.Therefore,a deep learning algorithm called Long short-term memory(LSTM)has been innovatively proposed to predict slope stability.Taking the Ganzhou City in China as the study area,the landslide inventory and their characteristics of geotechnical parameters,slope height and slope angle are analyzed.Based on these characteristics,typical soil slopes are constructed using the Geo-Studio software.Five control factors affecting slope stability,including slope height,slope angle,internal friction angle,cohesion and volumetric weight,are selected to form different slope and construct model input variables.Then,the limit equilibrium method is used to calculate the stability coefficients of these typical soil slopes under different control factors.Each slope stability coefficient and its corresponding control factors is a slope sample.As a result,a total of 2160 training samples and 450 testing samples are constructed.These sample sets are imported into LSTM for modelling and compared with the support vector machine(SVM),random forest(RF)and convo-lutional neural network(CNN).The results show that the LSTM overcomes the problem that the commonly used machine learning models have difficulty extracting global features.Furthermore,LSTM has a better prediction performance for slope stability compared to SVM,RF and CNN models.展开更多
The growing global requirement for food and the need for sustainable farming in an era of a changing climate and scarce resources have inspired substantial crop yield prediction research.Deep learning(DL)and machine l...The growing global requirement for food and the need for sustainable farming in an era of a changing climate and scarce resources have inspired substantial crop yield prediction research.Deep learning(DL)and machine learning(ML)models effectively deal with such challenges.This research paper comprehensively analyses recent advancements in crop yield prediction from January 2016 to March 2024.In addition,it analyses the effectiveness of various input parameters considered in crop yield prediction models.We conducted an in-depth search and gathered studies that employed crop modeling and AI-based methods to predict crop yield.The total number of articles reviewed for crop yield prediction using ML,meta-modeling(Crop models coupled with ML/DL),and DL-based prediction models and input parameter selection is 125.We conduct the research by setting up five objectives for this research and discussing them after analyzing the selected research papers.Each study is assessed based on the crop type,input parameters employed for prediction,the modeling techniques adopted,and the evaluation metrics used for estimatingmodel performance.We also discuss the ethical and social impacts of AI on agriculture.However,various approaches presented in the scientific literature have delivered impressive predictions,they are complicateddue to intricate,multifactorial influences oncropgrowthand theneed for accuratedata-driven models.Therefore,thorough research is required to deal with challenges in predicting agricultural output.展开更多
Financial time series prediction,whether for classification or regression,has been a heated research topic over the last decade.While traditional machine learning algorithms have experienced mediocre results,deep lear...Financial time series prediction,whether for classification or regression,has been a heated research topic over the last decade.While traditional machine learning algorithms have experienced mediocre results,deep learning has largely contributed to the elevation of the prediction performance.Currently,the most up-to-date review of advanced machine learning techniques for financial time series prediction is still lacking,making it challenging for finance domain experts and relevant practitioners to determine which model potentially performs better,what techniques and components are involved,and how themodel can be designed and implemented.This review article provides an overview of techniques,components and frameworks for financial time series prediction,with an emphasis on state-of-the-art deep learning models in the literature from2015 to 2023,including standalonemodels like convolutional neural networks(CNN)that are capable of extracting spatial dependencies within data,and long short-term memory(LSTM)that is designed for handling temporal dependencies;and hybrid models integrating CNN,LSTM,attention mechanism(AM)and other techniques.For illustration and comparison purposes,models proposed in recent studies are mapped to relevant elements of a generalized framework comprised of input,output,feature extraction,prediction,and related processes.Among the state-of-the-artmodels,hybrid models like CNNLSTMand CNN-LSTM-AM in general have been reported superior in performance to stand-alone models like the CNN-only model.Some remaining challenges have been discussed,including non-friendliness for finance domain experts,delayed prediction,domain knowledge negligence,lack of standards,and inability of real-time and highfrequency predictions.The principal contributions of this paper are to provide a one-stop guide for both academia and industry to review,compare and summarize technologies and recent advances in this area,to facilitate smooth and informed implementation,and to highlight future research directions.展开更多
The complexity of river-tide interaction poses a significant challenge in predicting discharge in tidal rivers.Long short-term memory(LSTM)networks excel in processing and predicting crucial events with extended inter...The complexity of river-tide interaction poses a significant challenge in predicting discharge in tidal rivers.Long short-term memory(LSTM)networks excel in processing and predicting crucial events with extended intervals and time delays in time series data.Additionally,the sequence-to-sequence(Seq2Seq)model,known for handling temporal relationships,adapting to variable-length sequences,effectively capturing historical information,and accommodating various influencing factors,emerges as a robust and flexible tool in discharge forecasting.In this study,we introduce the application of LSTM-based Seq2Seq models for the first time in forecasting the discharge of a tidal reach of the Changjiang River(Yangtze River)Estuary.This study focuses on discharge forecasting using three key input characteristics:flow velocity,water level,and discharge,which means the structure of multiple input and single output is adopted.The experiment used the discharge data of the whole year of 2020,of which the first 80%is used as the training set,and the last 20%is used as the test set.This means that the data covers different tidal cycles,which helps to test the forecasting effect of different models in different tidal cycles and different runoff.The experimental results indicate that the proposed models demonstrate advantages in long-term,mid-term,and short-term discharge forecasting.The Seq2Seq models improved by 6%-60%and 5%-20%of the relative standard deviation compared to the harmonic analysis models and improved back propagation neural network models in discharge prediction,respectively.In addition,the relative accuracy of the Seq2Seq model is 1%to 3%higher than that of the LSTM model.Analytical assessment of the prediction errors shows that the Seq2Seq models are insensitive to the forecast lead time and they can capture characteristic values such as maximum flood tide flow and maximum ebb tide flow in the tidal cycle well.This indicates the significance of the Seq2Seq models.展开更多
Landslide deformation is affected by its geological conditions and many environmental factors.So it has the characteristics of dynamic,nonlinear and unstable,which makes the prediction of landslide displacement diffic...Landslide deformation is affected by its geological conditions and many environmental factors.So it has the characteristics of dynamic,nonlinear and unstable,which makes the prediction of landslide displacement difficult.In view of the above problems,this paper proposes a dynamic prediction model of landslide displacement based on the improvement of complete ensemble empirical mode decomposition with adaptive noise(ICEEMDAN),approximate entropy(ApEn)and convolution long short-term memory(CNN-LSTM)neural network.Firstly,ICEEMDAN and Ap En are used to decompose the cumulative displacements into trend,periodic and random displacements.Then,the least square quintic polynomial function is used to fit the displacement of trend term,and the CNN-LSTM is used to predict the displacement of periodic term and random term.Finally,the displacement prediction results of trend term,periodic term and random term are superimposed to obtain the cumulative displacement prediction value.The proposed model has been verified in Bazimen landslide in the Three Gorges Reservoir area of China.The experimental results show that the model proposed in this paper can more effectively predict the displacement changes of landslides.As compared with long short-term memory(LSTM)neural network,gated recurrent unit(GRU)network model and back propagation(BP)neural network,CNN-LSTM neural network had higher prediction accuracy in predicting the periodic displacement,with the mean absolute percentage error(MAPE)reduced by 3.621%,6.893% and 15.886% respectively,and the root mean square error(RMSE)reduced by 3.834 mm,3.945 mm and 7.422mm respectively.Conclusively,this model not only has high prediction accuracy but also is more stable,which can provide a new insight for practical landslide prevention and control engineering.展开更多
To meet the ever-increasing traffic demand and enhance the coverage of cellular networks,network densification is one of the crucial paradigms of 5G and beyond mobile networks,which can improve system capacity by depl...To meet the ever-increasing traffic demand and enhance the coverage of cellular networks,network densification is one of the crucial paradigms of 5G and beyond mobile networks,which can improve system capacity by deploying a large number of Access Points(APs)in the service area.However,since the energy consumption of APs generally accounts for a substantial part of the communication system,how to deal with the consequent energy issue is a challenging task for a mobile network with densely deployed APs.In this paper,we propose an intelligent AP switching on/off scheme to reduce the system energy consumption with the prerequisite of guaranteeing the quality of service,where the signaling overhead is also taken into consideration to ensure the stability of the network.First,based on historical traffic data,a long short-term memory method is introduced to predict the future traffic distribution,by which we can roughly determine when the AP switching operation should be triggered;second,we present an efficient three-step AP selection strategy to determine which of the APs would be switched on or off;third,an AP switching scheme with a threshold is proposed to adjust the switching frequency so as to improve the stability of the system.Experiment results indicate that our proposed traffic forecasting method performs well in practical scenarios,where the normalized root mean square error is within 10%.Furthermore,the achieved energy-saving is more than 28% on average with a reasonable outage probability and switching frequency for an area served by 40 APs in a commercial mobile network.展开更多
In terms of the modular fuzzy neural network (MFNN) combining fuzzy c-mean (FCM) cluster and single-layer neural network, a short-term climate prediction model is developed. It is found from modeling results that the ...In terms of the modular fuzzy neural network (MFNN) combining fuzzy c-mean (FCM) cluster and single-layer neural network, a short-term climate prediction model is developed. It is found from modeling results that the MFNN model for short-term climate prediction has advantages of simple structure, no hidden layer and stable network parameters because of the assembling of sound functions of the self-adaptive learning, association and fuzzy information processing of fuzzy mathematics and neural network methods. The case computational results of Guangxi flood season (JJA) rainfall show that the mean absolute error (MAE) and mean relative error (MRE) of the prediction during 1998-2002 are 68.8 mm and 9.78%, and in comparison with the regression method, under the conditions of the same predictors and period they are 97.8 mm and 12.28% respectively. Furthermore, it is also found from the stability analysis of the modular model that the change of the prediction results of independent samples with training times in the stably convergent interval of the model is less than 1.3 mm. The obvious oscillation phenomenon of prediction results with training times, such as in the common back-propagation neural network (BPNN) model, does not occur, indicating a better practical application potential of the MFNN model.展开更多
A statistical downscaling approach was developed to improve seasonal-to-interannual prediction of summer rainfall over North China by considering the effect of decadal variability based on observational datasets and d...A statistical downscaling approach was developed to improve seasonal-to-interannual prediction of summer rainfall over North China by considering the effect of decadal variability based on observational datasets and dynamical model outputs.Both predictands and predictors were first decomposed into interannual and decadal components.Two predictive equations were then built separately for the two distinct timescales by using multivariate linear regressions based on independent sample validation.For the interannual timescale,850-hPa meridional wind and 500-hPa geopotential heights from multiple dynamical models' hindcasts and SSTs from observational datasets were used to construct predictors.For the decadal timescale,two well-known basin-scale SST decadal oscillation (the Atlantic Multidecadal Oscillation and the Pacific Decadal Oscillation) indices were used as predictors.Then,the downscaled predictands were combined to represent the predicted/hindcasted total rainfall.The prediction was compared with the models' raw hindcasts and those from a similar approach but without timescale decomposition.In comparison to hindcasts from individual models or their multi-model ensemble mean,the skill of the present scheme was found to be significantly higher,with anomaly correlation coefficients increasing from nearly neutral to over 0.4 and with RMSE decreasing by up to 0.6 mm d-1.The improvements were also seen in the station-based temporal correlation of the predictions with observed rainfall,with the coefficients ranging from-0.1 to 0.87,obviously higher than the models' raw hindcasted rainfall results.Thus,the present approach exhibits a great advantage and may be appropriate for use in operational predictions.展开更多
Based on data from the Jilin Water Diversion Tunnels from the Songhua River(China),an improved and real-time prediction method optimized by multi-algorithm for tunnel boring machine(TBM)cutter-head torque is presented...Based on data from the Jilin Water Diversion Tunnels from the Songhua River(China),an improved and real-time prediction method optimized by multi-algorithm for tunnel boring machine(TBM)cutter-head torque is presented.Firstly,a function excluding invalid and abnormal data is established to distinguish TBM operating state,and a feature selection method based on the SelectKBest algorithm is proposed.Accordingly,ten features that are most closely related to the cutter-head torque are selected as input variables,which,in descending order of influence,include the sum of motor torque,cutter-head power,sum of motor power,sum of motor current,advance rate,cutter-head pressure,total thrust force,penetration rate,cutter-head rotational velocity,and field penetration index.Secondly,a real-time cutterhead torque prediction model’s structure is developed,based on the bidirectional long short-term memory(BLSTM)network integrating the dropout algorithm to prevent overfitting.Then,an algorithm to optimize hyperparameters of model based on Bayesian and cross-validation is proposed.Early stopping and checkpoint algorithms are integrated to optimize the training process.Finally,a BLSTMbased real-time cutter-head torque prediction model is developed,which fully utilizes the previous time-series tunneling information.The mean absolute percentage error(MAPE)of the model in the verification section is 7.3%,implying that the presented model is suitable for real-time cutter-head torque prediction.Furthermore,an incremental learning method based on the above base model is introduced to improve the adaptability of the model during the TBM tunneling.Comparison of the prediction performance between the base and incremental learning models in the same tunneling section shows that:(1)the MAPE of the predicted results of the BLSTM-based real-time cutter-head torque prediction model remains below 10%,and both the coefficient of determination(R^(2))and correlation coefficient(r)between measured and predicted values exceed 0.95;and(2)the incremental learning method is suitable for realtime cutter-head torque prediction and can effectively improve the prediction accuracy and generalization capacity of the model during the excavation process.展开更多
An accurate landslide displacement prediction is an important part of landslide warning system. Aiming at the dynamic characteristics of landslide evolution and the shortcomings of traditional static prediction models...An accurate landslide displacement prediction is an important part of landslide warning system. Aiming at the dynamic characteristics of landslide evolution and the shortcomings of traditional static prediction models, this paper proposes a dynamic prediction model of landslide displacement based on singular spectrum analysis(SSA) and stack long short-term memory(SLSTM) network. The SSA is used to decompose the landslide accumulated displacement time series data into trend term and periodic term displacement subsequences. A cubic polynomial function is used to predict the trend term displacement subsequence, and the SLSTM neural network is used to predict the periodic term displacement subsequence. At the same time, the Bayesian optimization algorithm is used to determine that the SLSTM network input sequence length is 12 and the number of hidden layer nodes is 18. The SLSTM network is updated by adding predicted values to the training set to achieve dynamic displacement prediction. Finally, the accumulated landslide displacement is obtained by superimposing the predicted value of each displacement subsequence. The proposed model was verified on the Xintan landslide in Hubei Province, China. The results show that when predicting the displacement of the periodic term, the SLSTM network has higher prediction accuracy than the support vector machine(SVM) and auto regressive integrated moving average(ARIMA). The mean relative error(MRE) is reduced by 4.099% and 3.548% respectively, while the root mean square error(RMSE) is reduced by 5.830 mm and 3.854 mm respectively. It is concluded that the SLSTM network model can better simulate the dynamic characteristics of landslides.展开更多
We present a verification of the short-term predictions of solar X-ray bursts for the maximum phase (2000–2001) of Solar Cycle 23, issued by two prediction centers. The results are that the rate of correct prediction...We present a verification of the short-term predictions of solar X-ray bursts for the maximum phase (2000–2001) of Solar Cycle 23, issued by two prediction centers. The results are that the rate of correct predictions is about equal for RWC-China and WWA; the rate of too high predictions is greater for RWC-China than for WWA, while the rate of too low predictions is smaller for RWC-China than for WWA.展开更多
Predicting the usage of container cloud resources has always been an important and challenging problem in improving the performance of cloud resource clusters.We proposed an integrated prediction method of stacking co...Predicting the usage of container cloud resources has always been an important and challenging problem in improving the performance of cloud resource clusters.We proposed an integrated prediction method of stacking container cloud resources based on variational modal decomposition(VMD)-Permutation entropy(PE)and long short-term memory(LSTM)neural network to solve the prediction difficulties caused by the non-stationarity and volatility of resource data.The variational modal decomposition algorithm decomposes the time series data of cloud resources to obtain intrinsic mode function and residual components,which solves the signal decomposition algorithm’s end-effect and modal confusion problems.The permutation entropy is used to evaluate the complexity of the intrinsic mode function,and the reconstruction based on similar entropy and low complexity is used to reduce the difficulty of modeling.Finally,we use the LSTM and stacking fusion models to predict and superimpose;the stacking integration model integrates Gradient boosting regression(GBR),Kernel ridge regression(KRR),and Elastic net regression(ENet)as primary learners,and the secondary learner adopts the kernel ridge regression method with solid generalization ability.The Amazon public data set experiment shows that compared with Holt-winters,LSTM,and Neuralprophet models,we can see that the optimization range of multiple evaluation indicators is 0.338∼1.913,0.057∼0.940,0.000∼0.017 and 1.038∼8.481 in root means square error(RMSE),mean absolute error(MAE),mean absolute percentage error(MAPE)and variance(VAR),showing its stability and better prediction accuracy.展开更多
基金supported by National Natural Science Foundation of China,China(No.42004016)HuBei Natural Science Fund,China(No.2020CFB329)+1 种基金HuNan Natural Science Fund,China(No.2023JJ60559,2023JJ60560)the State Key Laboratory of Geodesy and Earth’s Dynamics self-deployment project,China(No.S21L6101)。
文摘Short-term(up to 30 days)predictions of Earth Rotation Parameters(ERPs)such as Polar Motion(PM:PMX and PMY)play an essential role in real-time applications related to high-precision reference frame conversion.Currently,least squares(LS)+auto-regressive(AR)hybrid method is one of the main techniques of PM prediction.Besides,the weighted LS+AR hybrid method performs well for PM short-term prediction.However,the corresponding covariance information of LS fitting residuals deserves further exploration in the AR model.In this study,we have derived a modified stochastic model for the LS+AR hybrid method,namely the weighted LS+weighted AR hybrid method.By using the PM data products of IERS EOP 14 C04,the numerical results indicate that for PM short-term forecasting,the proposed weighted LS+weighted AR hybrid method shows an advantage over both the LS+AR hybrid method and the weighted LS+AR hybrid method.Compared to the mean absolute errors(MAEs)of PMX/PMY sho rt-term prediction of the LS+AR hybrid method and the weighted LS+AR hybrid method,the weighted LS+weighted AR hybrid method shows average improvements of 6.61%/12.08%and 0.24%/11.65%,respectively.Besides,for the slopes of the linear regression lines fitted to the errors of each method,the growth of the prediction error of the proposed method is slower than that of the other two methods.
基金funded by the Fujian Province Science and Technology Plan,China(Grant Number 2019H0017).
文摘Accurate forecasting of time series is crucial across various domains.Many prediction tasks rely on effectively segmenting,matching,and time series data alignment.For instance,regardless of time series with the same granularity,segmenting them into different granularity events can effectively mitigate the impact of varying time scales on prediction accuracy.However,these events of varying granularity frequently intersect with each other,which may possess unequal durations.Even minor differences can result in significant errors when matching time series with future trends.Besides,directly using matched events but unaligned events as state vectors in machine learning-based prediction models can lead to insufficient prediction accuracy.Therefore,this paper proposes a short-term forecasting method for time series based on a multi-granularity event,MGE-SP(multi-granularity event-based short-termprediction).First,amethodological framework for MGE-SP established guides the implementation steps.The framework consists of three key steps,including multi-granularity event matching based on the LTF(latest time first)strategy,multi-granularity event alignment using a piecewise aggregate approximation based on the compression ratio,and a short-term prediction model based on XGBoost.The data from a nationwide online car-hailing service in China ensures the method’s reliability.The average RMSE(root mean square error)and MAE(mean absolute error)of the proposed method are 3.204 and 2.360,lower than the respective values of 4.056 and 3.101 obtained using theARIMA(autoregressive integratedmoving average)method,as well as the values of 4.278 and 2.994 obtained using k-means-SVR(support vector regression)method.The other experiment is conducted on stock data froma public data set.The proposed method achieved an average RMSE and MAE of 0.836 and 0.696,lower than the respective values of 1.019 and 0.844 obtained using the ARIMA method,as well as the values of 1.350 and 1.172 obtained using the k-means-SVR method.
基金supported by the National Natural Science Foundation of China(72288101,72201029,and 72322022).
文摘Accurate origin–destination(OD)demand prediction is crucial for the efficient operation and management of urban rail transit(URT)systems,particularly during a pandemic.However,this task faces several limitations,including real-time availability,sparsity,and high-dimensionality issues,and the impact of the pandemic.Consequently,this study proposes a unified framework called the physics-guided adaptive graph spatial–temporal attention network(PAG-STAN)for metro OD demand prediction under pandemic conditions.Specifically,PAG-STAN introduces a real-time OD estimation module to estimate real-time complete OD demand matrices.Subsequently,a novel dynamic OD demand matrix compression module is proposed to generate dense real-time OD demand matrices.Thereafter,PAG-STAN leverages various heterogeneous data to learn the evolutionary trend of future OD ridership during the pandemic.Finally,a masked physics-guided loss function(MPG-loss function)incorporates the physical quantity information between the OD demand and inbound flow into the loss function to enhance model interpretability.PAG-STAN demonstrated favorable performance on two real-world metro OD demand datasets under the pandemic and conventional scenarios,highlighting its robustness and sensitivity for metro OD demand prediction.A series of ablation studies were conducted to verify the indispensability of each module in PAG-STAN.
文摘With the advancement of artificial intelligence,traffic forecasting is gaining more and more interest in optimizing route planning and enhancing service quality.Traffic volume is an influential parameter for planning and operating traffic structures.This study proposed an improved ensemble-based deep learning method to solve traffic volume prediction problems.A set of optimal hyperparameters is also applied for the suggested approach to improve the performance of the learning process.The fusion of these methodologies aims to harness ensemble empirical mode decomposition’s capacity to discern complex traffic patterns and long short-term memory’s proficiency in learning temporal relationships.Firstly,a dataset for automatic vehicle identification is obtained and utilized in the preprocessing stage of the ensemble empirical mode decomposition model.The second aspect involves predicting traffic volume using the long short-term memory algorithm.Next,the study employs a trial-and-error approach to select a set of optimal hyperparameters,including the lookback window,the number of neurons in the hidden layers,and the gradient descent optimization.Finally,the fusion of the obtained results leads to a final traffic volume prediction.The experimental results show that the proposed method outperforms other benchmarks regarding various evaluation measures,including mean absolute error,root mean squared error,mean absolute percentage error,and R-squared.The achieved R-squared value reaches an impressive 98%,while the other evaluation indices surpass the competing.These findings highlight the accuracy of traffic pattern prediction.Consequently,this offers promising prospects for enhancing transportation management systems and urban infrastructure planning.
文摘To tackle the problem of inaccurate short-term bus load prediction,especially during holidays,a Transformer-based scheme with tailored architectural enhancements is proposed.First,the input data are clustered to reduce complexity and capture inherent characteristics more effectively.Gated residual connections are then employed to selectively propagate salient features across layers,while an attention mechanism focuses on identifying prominent patterns in multivariate time-series data.Ultimately,a pre-trained structure is incorporated to reduce computational complexity.Experimental results based on extensive data show that the proposed scheme achieves improved prediction accuracy over comparative algorithms by at least 32.00%consistently across all buses evaluated,and the fitting effect of holiday load curves is outstanding.Meanwhile,the pre-trained structure drastically reduces the training time of the proposed algorithm by more than 65.75%.The proposed scheme can efficiently predict bus load results while enhancing robustness for holiday predictions,making it better adapted to real-world prediction scenarios.
基金funded by Liaoning Provincial Department of Science and Technology(2023JH2/101600058)。
文摘With the continuous advancement of China’s“peak carbon dioxide emissions and Carbon Neutrality”process,the proportion of wind power is increasing.In the current research,aiming at the problem that the forecasting model is outdated due to the continuous updating of wind power data,a short-term wind power forecasting algorithm based on Incremental Learning-Bagging Deep Hybrid Kernel Extreme Learning Machine(IL-Bagging-DHKELM)error affinity propagation cluster analysis is proposed.The algorithm effectively combines deep hybrid kernel extreme learning machine(DHKELM)with incremental learning(IL).Firstly,an initial wind power prediction model is trained using the Bagging-DHKELM model.Secondly,Euclidean morphological distance affinity propagation AP clustering algorithm is used to cluster and analyze the prediction error of wind power obtained from the initial training model.Finally,the correlation between wind power prediction errors and Numerical Weather Prediction(NWP)data is introduced as incremental updates to the initial wind power prediction model.During the incremental learning process,multiple error performance indicators are used to measure the overall model performance,thereby enabling incremental updates of wind power models.Practical examples show the method proposed in this article reduces the root mean square error of the initial model by 1.9 percentage points,indicating that this method can be better adapted to the current scenario of the continuous increase in wind power penetration rate.The accuracy and precision of wind power generation prediction are effectively improved through the method.
文摘BACKGROUND Endometrial cancer(EC)is a common gynecological malignancy that typically requires prompt surgical intervention;however,the advantage of surgical management is limited by the high postoperative recurrence rates and adverse outcomes.Previous studies have highlighted the prognostic potential of circulating tumor DNA(ctDNA)monitoring for minimal residual disease in patients with EC.AIM To develop and validate an optimized ctDNA-based model for predicting shortterm postoperative EC recurrence.METHODS We retrospectively analyzed 294 EC patients treated surgically from 2015-2019 to devise a short-term recurrence prediction model,which was validated on 143 EC patients operated between 2020 and 2021.Prognostic factors were identified using univariate Cox,Lasso,and multivariate Cox regressions.A nomogram was created to predict the 1,1.5,and 2-year recurrence-free survival(RFS).Model performance was assessed via receiver operating characteristic(ROC),calibration,and decision curve analyses(DCA),leading to a recurrence risk stratification system.RESULTS Based on the regression analysis and the nomogram created,patients with postoperative ctDNA-negativity,postoperative carcinoembryonic antigen 125(CA125)levels of<19 U/mL,and grade G1 tumors had improved RFS after surgery.The nomogram’s efficacy for recurrence prediction was confirmed through ROC analysis,calibration curves,and DCA methods,highlighting its high accuracy and clinical utility.Furthermore,using the nomogram,the patients were successfully classified into three risk subgroups.CONCLUSION The nomogram accurately predicted RFS after EC surgery at 1,1.5,and 2 years.This model will help clinicians personalize treatments,stratify risks,and enhance clinical outcomes for patients with EC.
基金supported in part by the National Natural Science Foundation of China under Grant 62203468in part by the Technological Research and Development Program of China State Railway Group Co.,Ltd.under Grant Q2023X011+1 种基金in part by the Young Elite Scientist Sponsorship Program by China Association for Science and Technology(CAST)under Grant 2022QNRC001in part by the Youth Talent Program Supported by China Railway Society,and in part by the Research Program of China Academy of Railway Sciences Corporation Limited under Grant 2023YJ112.
文摘Purpose-To optimize train operations,dispatchers currently rely on experience for quick adjustments when delays occur.However,delay predictions often involve imprecise shifts based on known delay times.Real-time and accurate train delay predictions,facilitated by data-driven neural network models,can significantly reduce dispatcher stress and improve adjustment plans.Leveraging current train operation data,these models enable swift and precise predictions,addressing challenges posed by train delays in high-speed rail networks during unforeseen events.Design/methodology/approach-This paper proposes CBLA-net,a neural network architecture for predicting late arrival times.It combines CNN,Bi-LSTM,and attention mechanisms to extract features,handle time series data,and enhance information utilization.Trained on operational data from the Beijing-Tianjin line,it predicts the late arrival time of a target train at the next station using multidimensional input data from the target and preceding trains.Findings-This study evaluates our model’s predictive performance using two data approaches:one considering full data and another focusing only on late arrivals.Results show precise and rapid predictions.Training with full data achieves aMAEof approximately 0.54 minutes and a RMSEof 0.65 minutes,surpassing the model trained solely on delay data(MAE:is about 1.02 min,RMSE:is about 1.52 min).Despite superior overall performance with full data,the model excels at predicting delays exceeding 15 minutes when trained exclusively on late arrivals.For enhanced adaptability to real-world train operations,training with full data is recommended.Originality/value-This paper introduces a novel neural network model,CBLA-net,for predicting train delay times.It innovatively compares and analyzes the model’s performance using both full data and delay data formats.Additionally,the evaluation of the network’s predictive capabilities considers different scenarios,providing a comprehensive demonstration of the model’s predictive performance.
基金funded by the National Natural Science Foundation of China (41807285)。
文摘The numerical simulation and slope stability prediction are the focus of slope disaster research.Recently,machine learning models are commonly used in the slope stability prediction.However,these machine learning models have some problems,such as poor nonlinear performance,local optimum and incomplete factors feature extraction.These issues can affect the accuracy of slope stability prediction.Therefore,a deep learning algorithm called Long short-term memory(LSTM)has been innovatively proposed to predict slope stability.Taking the Ganzhou City in China as the study area,the landslide inventory and their characteristics of geotechnical parameters,slope height and slope angle are analyzed.Based on these characteristics,typical soil slopes are constructed using the Geo-Studio software.Five control factors affecting slope stability,including slope height,slope angle,internal friction angle,cohesion and volumetric weight,are selected to form different slope and construct model input variables.Then,the limit equilibrium method is used to calculate the stability coefficients of these typical soil slopes under different control factors.Each slope stability coefficient and its corresponding control factors is a slope sample.As a result,a total of 2160 training samples and 450 testing samples are constructed.These sample sets are imported into LSTM for modelling and compared with the support vector machine(SVM),random forest(RF)and convo-lutional neural network(CNN).The results show that the LSTM overcomes the problem that the commonly used machine learning models have difficulty extracting global features.Furthermore,LSTM has a better prediction performance for slope stability compared to SVM,RF and CNN models.
文摘The growing global requirement for food and the need for sustainable farming in an era of a changing climate and scarce resources have inspired substantial crop yield prediction research.Deep learning(DL)and machine learning(ML)models effectively deal with such challenges.This research paper comprehensively analyses recent advancements in crop yield prediction from January 2016 to March 2024.In addition,it analyses the effectiveness of various input parameters considered in crop yield prediction models.We conducted an in-depth search and gathered studies that employed crop modeling and AI-based methods to predict crop yield.The total number of articles reviewed for crop yield prediction using ML,meta-modeling(Crop models coupled with ML/DL),and DL-based prediction models and input parameter selection is 125.We conduct the research by setting up five objectives for this research and discussing them after analyzing the selected research papers.Each study is assessed based on the crop type,input parameters employed for prediction,the modeling techniques adopted,and the evaluation metrics used for estimatingmodel performance.We also discuss the ethical and social impacts of AI on agriculture.However,various approaches presented in the scientific literature have delivered impressive predictions,they are complicateddue to intricate,multifactorial influences oncropgrowthand theneed for accuratedata-driven models.Therefore,thorough research is required to deal with challenges in predicting agricultural output.
基金funded by the Natural Science Foundation of Fujian Province,China (Grant No.2022J05291)Xiamen Scientific Research Funding for Overseas Chinese Scholars.
文摘Financial time series prediction,whether for classification or regression,has been a heated research topic over the last decade.While traditional machine learning algorithms have experienced mediocre results,deep learning has largely contributed to the elevation of the prediction performance.Currently,the most up-to-date review of advanced machine learning techniques for financial time series prediction is still lacking,making it challenging for finance domain experts and relevant practitioners to determine which model potentially performs better,what techniques and components are involved,and how themodel can be designed and implemented.This review article provides an overview of techniques,components and frameworks for financial time series prediction,with an emphasis on state-of-the-art deep learning models in the literature from2015 to 2023,including standalonemodels like convolutional neural networks(CNN)that are capable of extracting spatial dependencies within data,and long short-term memory(LSTM)that is designed for handling temporal dependencies;and hybrid models integrating CNN,LSTM,attention mechanism(AM)and other techniques.For illustration and comparison purposes,models proposed in recent studies are mapped to relevant elements of a generalized framework comprised of input,output,feature extraction,prediction,and related processes.Among the state-of-the-artmodels,hybrid models like CNNLSTMand CNN-LSTM-AM in general have been reported superior in performance to stand-alone models like the CNN-only model.Some remaining challenges have been discussed,including non-friendliness for finance domain experts,delayed prediction,domain knowledge negligence,lack of standards,and inability of real-time and highfrequency predictions.The principal contributions of this paper are to provide a one-stop guide for both academia and industry to review,compare and summarize technologies and recent advances in this area,to facilitate smooth and informed implementation,and to highlight future research directions.
基金The National Natural Science Foundation of China under contract Nos 42266006 and 41806114the Jiangxi Provincial Natural Science Foundation under contract Nos 20232BAB204089 and 20202ACBL214019.
文摘The complexity of river-tide interaction poses a significant challenge in predicting discharge in tidal rivers.Long short-term memory(LSTM)networks excel in processing and predicting crucial events with extended intervals and time delays in time series data.Additionally,the sequence-to-sequence(Seq2Seq)model,known for handling temporal relationships,adapting to variable-length sequences,effectively capturing historical information,and accommodating various influencing factors,emerges as a robust and flexible tool in discharge forecasting.In this study,we introduce the application of LSTM-based Seq2Seq models for the first time in forecasting the discharge of a tidal reach of the Changjiang River(Yangtze River)Estuary.This study focuses on discharge forecasting using three key input characteristics:flow velocity,water level,and discharge,which means the structure of multiple input and single output is adopted.The experiment used the discharge data of the whole year of 2020,of which the first 80%is used as the training set,and the last 20%is used as the test set.This means that the data covers different tidal cycles,which helps to test the forecasting effect of different models in different tidal cycles and different runoff.The experimental results indicate that the proposed models demonstrate advantages in long-term,mid-term,and short-term discharge forecasting.The Seq2Seq models improved by 6%-60%and 5%-20%of the relative standard deviation compared to the harmonic analysis models and improved back propagation neural network models in discharge prediction,respectively.In addition,the relative accuracy of the Seq2Seq model is 1%to 3%higher than that of the LSTM model.Analytical assessment of the prediction errors shows that the Seq2Seq models are insensitive to the forecast lead time and they can capture characteristic values such as maximum flood tide flow and maximum ebb tide flow in the tidal cycle well.This indicates the significance of the Seq2Seq models.
基金funded by the technology innovation guidance special project of Shaanxi Province(Grant No.2020CGXNX009)the supported by the National Natural Science Foundation of China(Grant No.62203344)+1 种基金the Shaanxi Provincial Department of Education serves local special projects(Grant No.22JC036)the Natural Science Basic Research Plan of Shaanxi Province(Grant No.2022JM-322)。
文摘Landslide deformation is affected by its geological conditions and many environmental factors.So it has the characteristics of dynamic,nonlinear and unstable,which makes the prediction of landslide displacement difficult.In view of the above problems,this paper proposes a dynamic prediction model of landslide displacement based on the improvement of complete ensemble empirical mode decomposition with adaptive noise(ICEEMDAN),approximate entropy(ApEn)and convolution long short-term memory(CNN-LSTM)neural network.Firstly,ICEEMDAN and Ap En are used to decompose the cumulative displacements into trend,periodic and random displacements.Then,the least square quintic polynomial function is used to fit the displacement of trend term,and the CNN-LSTM is used to predict the displacement of periodic term and random term.Finally,the displacement prediction results of trend term,periodic term and random term are superimposed to obtain the cumulative displacement prediction value.The proposed model has been verified in Bazimen landslide in the Three Gorges Reservoir area of China.The experimental results show that the model proposed in this paper can more effectively predict the displacement changes of landslides.As compared with long short-term memory(LSTM)neural network,gated recurrent unit(GRU)network model and back propagation(BP)neural network,CNN-LSTM neural network had higher prediction accuracy in predicting the periodic displacement,with the mean absolute percentage error(MAPE)reduced by 3.621%,6.893% and 15.886% respectively,and the root mean square error(RMSE)reduced by 3.834 mm,3.945 mm and 7.422mm respectively.Conclusively,this model not only has high prediction accuracy but also is more stable,which can provide a new insight for practical landslide prevention and control engineering.
基金partially supported by the National Natural Science Foundation of China under Grants 61801208,61931023,and U1936202.
文摘To meet the ever-increasing traffic demand and enhance the coverage of cellular networks,network densification is one of the crucial paradigms of 5G and beyond mobile networks,which can improve system capacity by deploying a large number of Access Points(APs)in the service area.However,since the energy consumption of APs generally accounts for a substantial part of the communication system,how to deal with the consequent energy issue is a challenging task for a mobile network with densely deployed APs.In this paper,we propose an intelligent AP switching on/off scheme to reduce the system energy consumption with the prerequisite of guaranteeing the quality of service,where the signaling overhead is also taken into consideration to ensure the stability of the network.First,based on historical traffic data,a long short-term memory method is introduced to predict the future traffic distribution,by which we can roughly determine when the AP switching operation should be triggered;second,we present an efficient three-step AP selection strategy to determine which of the APs would be switched on or off;third,an AP switching scheme with a threshold is proposed to adjust the switching frequency so as to improve the stability of the system.Experiment results indicate that our proposed traffic forecasting method performs well in practical scenarios,where the normalized root mean square error is within 10%.Furthermore,the achieved energy-saving is more than 28% on average with a reasonable outage probability and switching frequency for an area served by 40 APs in a commercial mobile network.
基金This reasearch was supported by the Science Foundation of Guangxi under grant No.0339025the Natural Sciences Foundation of China under grant No.40075021.
文摘In terms of the modular fuzzy neural network (MFNN) combining fuzzy c-mean (FCM) cluster and single-layer neural network, a short-term climate prediction model is developed. It is found from modeling results that the MFNN model for short-term climate prediction has advantages of simple structure, no hidden layer and stable network parameters because of the assembling of sound functions of the self-adaptive learning, association and fuzzy information processing of fuzzy mathematics and neural network methods. The case computational results of Guangxi flood season (JJA) rainfall show that the mean absolute error (MAE) and mean relative error (MRE) of the prediction during 1998-2002 are 68.8 mm and 9.78%, and in comparison with the regression method, under the conditions of the same predictors and period they are 97.8 mm and 12.28% respectively. Furthermore, it is also found from the stability analysis of the modular model that the change of the prediction results of independent samples with training times in the stably convergent interval of the model is less than 1.3 mm. The obvious oscillation phenomenon of prediction results with training times, such as in the common back-propagation neural network (BPNN) model, does not occur, indicating a better practical application potential of the MFNN model.
基金supported by the Special Program in the Public Interest of the China Meteorological Administration (Grant No. GYHY201006022)the Strategic Special Projects of the Chinese Academy of Sciences (Grant No. XDA05090000)
文摘A statistical downscaling approach was developed to improve seasonal-to-interannual prediction of summer rainfall over North China by considering the effect of decadal variability based on observational datasets and dynamical model outputs.Both predictands and predictors were first decomposed into interannual and decadal components.Two predictive equations were then built separately for the two distinct timescales by using multivariate linear regressions based on independent sample validation.For the interannual timescale,850-hPa meridional wind and 500-hPa geopotential heights from multiple dynamical models' hindcasts and SSTs from observational datasets were used to construct predictors.For the decadal timescale,two well-known basin-scale SST decadal oscillation (the Atlantic Multidecadal Oscillation and the Pacific Decadal Oscillation) indices were used as predictors.Then,the downscaled predictands were combined to represent the predicted/hindcasted total rainfall.The prediction was compared with the models' raw hindcasts and those from a similar approach but without timescale decomposition.In comparison to hindcasts from individual models or their multi-model ensemble mean,the skill of the present scheme was found to be significantly higher,with anomaly correlation coefficients increasing from nearly neutral to over 0.4 and with RMSE decreasing by up to 0.6 mm d-1.The improvements were also seen in the station-based temporal correlation of the predictions with observed rainfall,with the coefficients ranging from-0.1 to 0.87,obviously higher than the models' raw hindcasted rainfall results.Thus,the present approach exhibits a great advantage and may be appropriate for use in operational predictions.
基金financially supported by the National Natural Science Foundation of China (Grant Nos. 52074258, 41941018, and U21A20153)
文摘Based on data from the Jilin Water Diversion Tunnels from the Songhua River(China),an improved and real-time prediction method optimized by multi-algorithm for tunnel boring machine(TBM)cutter-head torque is presented.Firstly,a function excluding invalid and abnormal data is established to distinguish TBM operating state,and a feature selection method based on the SelectKBest algorithm is proposed.Accordingly,ten features that are most closely related to the cutter-head torque are selected as input variables,which,in descending order of influence,include the sum of motor torque,cutter-head power,sum of motor power,sum of motor current,advance rate,cutter-head pressure,total thrust force,penetration rate,cutter-head rotational velocity,and field penetration index.Secondly,a real-time cutterhead torque prediction model’s structure is developed,based on the bidirectional long short-term memory(BLSTM)network integrating the dropout algorithm to prevent overfitting.Then,an algorithm to optimize hyperparameters of model based on Bayesian and cross-validation is proposed.Early stopping and checkpoint algorithms are integrated to optimize the training process.Finally,a BLSTMbased real-time cutter-head torque prediction model is developed,which fully utilizes the previous time-series tunneling information.The mean absolute percentage error(MAPE)of the model in the verification section is 7.3%,implying that the presented model is suitable for real-time cutter-head torque prediction.Furthermore,an incremental learning method based on the above base model is introduced to improve the adaptability of the model during the TBM tunneling.Comparison of the prediction performance between the base and incremental learning models in the same tunneling section shows that:(1)the MAPE of the predicted results of the BLSTM-based real-time cutter-head torque prediction model remains below 10%,and both the coefficient of determination(R^(2))and correlation coefficient(r)between measured and predicted values exceed 0.95;and(2)the incremental learning method is suitable for realtime cutter-head torque prediction and can effectively improve the prediction accuracy and generalization capacity of the model during the excavation process.
基金supported by the Natural Science Foundation of Shaanxi Province under Grant 2019JQ206in part by the Science and Technology Department of Shaanxi Province under Grant 2020CGXNG-009in part by the Education Department of Shaanxi Province under Grant 17JK0346。
文摘An accurate landslide displacement prediction is an important part of landslide warning system. Aiming at the dynamic characteristics of landslide evolution and the shortcomings of traditional static prediction models, this paper proposes a dynamic prediction model of landslide displacement based on singular spectrum analysis(SSA) and stack long short-term memory(SLSTM) network. The SSA is used to decompose the landslide accumulated displacement time series data into trend term and periodic term displacement subsequences. A cubic polynomial function is used to predict the trend term displacement subsequence, and the SLSTM neural network is used to predict the periodic term displacement subsequence. At the same time, the Bayesian optimization algorithm is used to determine that the SLSTM network input sequence length is 12 and the number of hidden layer nodes is 18. The SLSTM network is updated by adding predicted values to the training set to achieve dynamic displacement prediction. Finally, the accumulated landslide displacement is obtained by superimposing the predicted value of each displacement subsequence. The proposed model was verified on the Xintan landslide in Hubei Province, China. The results show that when predicting the displacement of the periodic term, the SLSTM network has higher prediction accuracy than the support vector machine(SVM) and auto regressive integrated moving average(ARIMA). The mean relative error(MRE) is reduced by 4.099% and 3.548% respectively, while the root mean square error(RMSE) is reduced by 5.830 mm and 3.854 mm respectively. It is concluded that the SLSTM network model can better simulate the dynamic characteristics of landslides.
基金Supported by the National Natural Science Foundation of China
文摘We present a verification of the short-term predictions of solar X-ray bursts for the maximum phase (2000–2001) of Solar Cycle 23, issued by two prediction centers. The results are that the rate of correct predictions is about equal for RWC-China and WWA; the rate of too high predictions is greater for RWC-China than for WWA, while the rate of too low predictions is smaller for RWC-China than for WWA.
基金The National Natural Science Foundation of China (No.62262011)The Natural Science Foundation of Guangxi (No.2021JJA170130).
文摘Predicting the usage of container cloud resources has always been an important and challenging problem in improving the performance of cloud resource clusters.We proposed an integrated prediction method of stacking container cloud resources based on variational modal decomposition(VMD)-Permutation entropy(PE)and long short-term memory(LSTM)neural network to solve the prediction difficulties caused by the non-stationarity and volatility of resource data.The variational modal decomposition algorithm decomposes the time series data of cloud resources to obtain intrinsic mode function and residual components,which solves the signal decomposition algorithm’s end-effect and modal confusion problems.The permutation entropy is used to evaluate the complexity of the intrinsic mode function,and the reconstruction based on similar entropy and low complexity is used to reduce the difficulty of modeling.Finally,we use the LSTM and stacking fusion models to predict and superimpose;the stacking integration model integrates Gradient boosting regression(GBR),Kernel ridge regression(KRR),and Elastic net regression(ENet)as primary learners,and the secondary learner adopts the kernel ridge regression method with solid generalization ability.The Amazon public data set experiment shows that compared with Holt-winters,LSTM,and Neuralprophet models,we can see that the optimization range of multiple evaluation indicators is 0.338∼1.913,0.057∼0.940,0.000∼0.017 and 1.038∼8.481 in root means square error(RMSE),mean absolute error(MAE),mean absolute percentage error(MAPE)and variance(VAR),showing its stability and better prediction accuracy.