There are two technical challenges in predicting slope deformation. The first is the random displacement, which cannot be decomposed and predicted by numerically resolving the observed accumulated displacement and time series of a landslide. The second is the dynamic evolution of a landslide, which cannot be feasibly simulated by traditional prediction models. In this paper, a dynamic displacement prediction model is introduced for composite landslides based on a combination of empirical mode decomposition with soft screening stop criteria (SSSC-EMD) and a deep bidirectional long short-term memory (DBi-LSTM) neural network. In the proposed model, time series analysis and SSSC-EMD are used to decompose the observed accumulated displacements of a slope into three components, viz. trend displacement, periodic displacement, and random displacement. Then, by analyzing the evolution pattern of a landslide and the key factors that trigger landslides, appropriate influencing factors are selected for each displacement component, and a DBi-LSTM neural network is used to carry out multi-data-driven dynamic prediction for each component. The accumulated displacement prediction is obtained by summing the components. To verify the accuracy and engineering practicability of the model, field observations from two known landslides in China, the Xintan landslide and the Bazimen landslide, were collected for comparison and evaluation. The case study verified that the proposed model can better characterize the "stepwise" deformation characteristics of a slope. Compared with the long short-term memory (LSTM) neural network, support vector machine (SVM), and autoregressive integrated moving average (ARIMA) model, the DBi-LSTM neural network has higher accuracy in predicting the periodic displacement of slope deformation, with the mean absolute percentage error reduced by 3.063%, 14.913%, and 13.960%, respectively, and the root mean square error reduced by 1.951 mm, 8.954 mm, and 7.790 mm, respectively. In conclusion, this model not only has high prediction accuracy but is also more stable, and it can provide new insight for practical landslide prevention and control engineering.
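For readers who want a concrete starting point, the snippet below is a minimal sketch of the per-component prediction step only: one deep bidirectional LSTM regressor per displacement component, with the three component predictions summed into an accumulated displacement. The layer sizes, window length, influencing-factor dimension, and synthetic training data are illustrative assumptions, and the SSSC-EMD decomposition itself is not shown.

```python
import numpy as np
import tensorflow as tf

def build_dbilstm(n_steps, n_features):
    """A deep (two-layer) bidirectional LSTM regressor, one per displacement component.
    Layer widths are illustrative assumptions, not the values used in the paper."""
    inp = tf.keras.Input(shape=(n_steps, n_features))
    x = tf.keras.layers.Bidirectional(tf.keras.layers.LSTM(64, return_sequences=True))(inp)
    x = tf.keras.layers.Bidirectional(tf.keras.layers.LSTM(32))(x)
    out = tf.keras.layers.Dense(1)(x)
    return tf.keras.Model(inp, out)

n_steps, n_features = 12, 4   # e.g. 12 past months of assumed influencing factors
components = ["trend", "periodic", "random"]
models = {c: build_dbilstm(n_steps, n_features) for c in components}
for m in models.values():
    m.compile(optimizer="adam", loss="mse")

# Synthetic stand-in for the decomposed monitoring data (SSSC-EMD step not shown).
X = {c: np.random.rand(100, n_steps, n_features).astype("float32") for c in components}
y = {c: np.random.rand(100, 1).astype("float32") for c in components}
for c in components:
    models[c].fit(X[c], y[c], epochs=2, verbose=0)

# Accumulated displacement = sum of the three component predictions.
pred_total = sum(models[c].predict(X[c], verbose=0) for c in components)
```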
Speaker separation in complex acoustic environments is one of the challenging tasks in speech separation. In practice, speakers very often remain still or move slowly during normal communication. In this case, the spatial features among consecutive speech frames become highly correlated, which helps speaker separation by providing additional spatial information. To fully exploit this information, we design a separation system on a Recurrent Neural Network (RNN) with long short-term memory (LSTM), which effectively learns the temporal dynamics of spatial features. In detail, an LSTM-based speaker separation algorithm is proposed to extract the spatial features in each time-frequency (TF) unit and form the corresponding feature vector. Then, we treat speaker separation as a supervised learning problem, where a modified ideal ratio mask (IRM) is defined as the training target during LSTM learning. Simulations show that the proposed system achieves attractive separation performance in noisy and reverberant environments. Specifically, in untrained acoustic tests with limited priors, e.g., unmatched signal-to-noise ratio (SNR) and reverberation, the proposed LSTM-based algorithm still outperforms the existing DNN-based method in terms of PESQ and STOI, indicating that our method is more robust in untrained conditions.
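As an illustration of this supervised mask-estimation setup, the sketch below pairs a conventional ideal ratio mask with a two-layer LSTM that maps spatial feature sequences to per-frame masks. The mask formula is the standard IRM rather than the paper's modified variant, and the feature dimensions and training data are placeholders.

```python
import numpy as np
import tensorflow as tf

def ideal_ratio_mask(target_mag, interference_mag, beta=0.5):
    """Conventional IRM in each T-F unit; the paper uses a modified variant,
    so this standard form is only a stand-in."""
    return (target_mag**2 / (target_mag**2 + interference_mag**2 + 1e-8)) ** beta

frames, freq_bins, feat_dim = 100, 257, 6   # feat_dim: spatial features per T-F unit (assumed)

# LSTM mask estimator: spatial-feature sequence in, per-frame ratio mask out.
inp = tf.keras.Input(shape=(frames, freq_bins * feat_dim))
x = tf.keras.layers.LSTM(512, return_sequences=True)(inp)
x = tf.keras.layers.LSTM(512, return_sequences=True)(x)
mask = tf.keras.layers.Dense(freq_bins, activation="sigmoid")(x)  # bounded like a ratio mask
model = tf.keras.Model(inp, mask)
model.compile(optimizer="adam", loss="mse")

# Training target: IRM computed from synthetic magnitude spectra (placeholder data).
s = np.abs(np.random.randn(8, frames, freq_bins)).astype("float32")
n = np.abs(np.random.randn(8, frames, freq_bins)).astype("float32")
y = ideal_ratio_mask(s, n)
X = np.random.rand(8, frames, freq_bins * feat_dim).astype("float32")
model.fit(X, y, epochs=1, verbose=0)
```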
A correct and timely fault diagnosis is important for improving the safety and reliability of chemical processes. With the advancement of big data technology, data-driven fault diagnosis methods are being extensively used and still have considerable potential. In recent years, methods based on deep neural networks have made significant breakthroughs, and fault diagnosis methods for industrial processes based on deep learning have attracted considerable research attention. Therefore, we propose a fusion deep-learning algorithm based on a fully convolutional neural network (FCN) to extract features and build models that correctly diagnose all types of faults. We use long short-term memory (LSTM) units to extend the proposed FCN so that the deep learning model can better extract the time-domain features of chemical process data. We also introduce an attention mechanism into the model, aimed at highlighting the importance of features, which is significant for the fault diagnosis of chemical processes with many features. When applied to the benchmark Tennessee Eastman process, the proposed model exhibits impressive performance, demonstrating the effectiveness of the attention-based LSTM-FCN in chemical process fault diagnosis.
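A hedged sketch of an attention-based LSTM-FCN classifier is shown below: a fully convolutional branch and an LSTM branch with a simple additive attention over time steps are concatenated before the softmax layer. The window length, filter sizes, class count, and attention form are assumptions, not the exact configuration used in the paper.

```python
import tensorflow as tf

timesteps, n_vars, n_classes = 50, 52, 21   # Tennessee-Eastman-like dimensions (assumed window length)

inp = tf.keras.Input(shape=(timesteps, n_vars))

# FCN branch: stacked 1-D convolutions with global average pooling.
c = inp
for filters, width in [(128, 8), (256, 5), (128, 3)]:
    c = tf.keras.layers.Conv1D(filters, width, padding="same")(c)
    c = tf.keras.layers.BatchNormalization()(c)
    c = tf.keras.layers.Activation("relu")(c)
fcn_feat = tf.keras.layers.GlobalAveragePooling1D()(c)

# LSTM branch with a simple additive attention over time steps.
h = tf.keras.layers.LSTM(128, return_sequences=True)(inp)        # (batch, T, 128)
scores = tf.keras.layers.Dense(1, activation="tanh")(h)          # (batch, T, 1)
weights = tf.keras.layers.Softmax(axis=1)(scores)                 # attention weights over time
weighted = tf.keras.layers.Multiply()([h, weights])               # broadcast over feature axis
# Weighted mean over time; the constant 1/T factor is absorbed by the final Dense layer.
lstm_feat = tf.keras.layers.GlobalAveragePooling1D()(weighted)

out = tf.keras.layers.Dense(n_classes, activation="softmax")(
    tf.keras.layers.Concatenate()([fcn_feat, lstm_feat]))
model = tf.keras.Model(inp, out)
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy", metrics=["accuracy"])
```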
Wind power volatility not only limits large-scale grid connection but also poses many challenges to safe grid operation. Accurate wind power prediction can mitigate the adverse effects of wind power volatility on wind power grid connections. Given that the characteristics of the antecedent and subsequent wind power data jointly determine the prediction accuracy of a model, a short-term wind power prediction method based on a combined neural network is proposed. First, a Bi-directional Long Short-Term Memory (BiLSTM) network prediction model is constructed, and the bi-directional nature of the BiLSTM network is used to deeply mine the wind power data and find the correlations within the data. Second, to avoid the limitations of a single prediction model when the wind power changes abruptly, a Wavelet Transform-Improved Adaptive Genetic Algorithm-Back Propagation (WT-IAGA-BP) neural network is constructed and combined with the BiLSTM network for the short-term prediction of wind power. Finally, comparison with the LSTM, BiLSTM, WT-LSTM, WT-BiLSTM, WT-IAGA-BP, and WT-IAGA-BP&LSTM prediction models verifies that the short-term wind power prediction model combining the WT-IAGA-BP neural network and the BiLSTM network has higher prediction accuracy.
Hand gestures are a natural way for human-robot interaction. Vision-based dynamic hand gesture recognition has become a hot research topic due to its various applications. This paper presents a novel deep learning network for hand gesture recognition. The network integrates several well-proved modules to learn both short-term and long-term features from video inputs while avoiding intensive computation. To learn short-term features, each video input is segmented into a fixed number of frame groups. A frame is randomly selected from each group and represented as an RGB image as well as an optical flow snapshot. These two entities are fused and fed into a convolutional neural network (ConvNet) for feature extraction, and the ConvNets for all groups share parameters. To learn long-term features, the outputs from all ConvNets are fed into a long short-term memory (LSTM) network, which predicts the final classification result. The new model has been tested on two popular hand gesture datasets, namely the Jester dataset and the Nvidia dataset. Compared with other models, our model produced very competitive results. The robustness of the new model has also been demonstrated on an augmented dataset with enhanced diversity of hand gestures.
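The sketch below illustrates the parameter-sharing idea in Keras: a small ConvNet wrapped in TimeDistributed processes every frame group with the same weights, and an LSTM aggregates the per-group features into one classification. The toy ConvNet, input resolution, and the channel-stacking fusion of RGB and optical flow are assumptions standing in for the well-proved modules the paper integrates.

```python
import tensorflow as tf

n_groups, h, w = 8, 112, 112
channels = 3 + 2          # RGB frame stacked with a 2-channel optical-flow snapshot (assumed fusion)
n_classes = 27            # Jester-style gesture vocabulary

# Per-group ConvNet; wrapping it in TimeDistributed shares its parameters across all groups.
cnn_in = tf.keras.Input(shape=(h, w, channels))
x = tf.keras.layers.Conv2D(32, 3, activation="relu", padding="same")(cnn_in)
x = tf.keras.layers.MaxPooling2D()(x)
x = tf.keras.layers.Conv2D(64, 3, activation="relu", padding="same")(x)
x = tf.keras.layers.MaxPooling2D()(x)
x = tf.keras.layers.GlobalAveragePooling2D()(x)
cnn = tf.keras.Model(cnn_in, x)

video_in = tf.keras.Input(shape=(n_groups, h, w, channels))
feats = tf.keras.layers.TimeDistributed(cnn)(video_in)        # short-term features per group
hidden = tf.keras.layers.LSTM(256)(feats)                     # long-term temporal aggregation
out = tf.keras.layers.Dense(n_classes, activation="softmax")(hidden)
model = tf.keras.Model(video_in, out)
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy", metrics=["accuracy"])
```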
To explore new operational wave forecasting methods, a forecasting model for wave heights at three stations in the Bohai Sea has been developed. This model is based on a long short-term memory (LSTM) neural network with sea surface wind and wave heights as training samples. The prediction performance of the model is evaluated, and the error analysis shows that, when using the same set of numerically predicted sea surface wind as input, the prediction error produced by the proposed LSTM model at Sta. N01 is 20%, 18%, and 23% lower than that of conventional numerical wave models in terms of the total root mean square error (RMSE), scatter index (SI), and mean absolute error (MAE), respectively. In particular, for significant wave heights in the range of 3–5 m, the prediction accuracy of the LSTM model improves the most remarkably, with RMSE, SI, and MAE all decreasing by 24%. It is also evident that the number of hidden neurons, the number of buoys used, and the time length of the training samples all affect the prediction accuracy. However, the prediction does not necessarily improve with an increase in the number of hidden neurons or the number of buoys used. The experiment trained on the data with the longest time length performs the best overall compared with the experiments trained on shorter records. Overall, the long short-term memory neural network proved to be a very promising method for future development and applications in wave forecasting.
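For reference, the error measures quoted above can be computed as sketched below. Note that the scatter index is taken here in its common form (RMSE normalized by the mean observation), which may differ in detail from the paper's definition, and the wave heights are toy values.

```python
import numpy as np

def rmse(obs, pred):
    return float(np.sqrt(np.mean((pred - obs) ** 2)))

def scatter_index(obs, pred):
    # SI is commonly defined as RMSE normalized by the mean observed value.
    return rmse(obs, pred) / float(np.mean(obs))

def mae(obs, pred):
    return float(np.mean(np.abs(pred - obs)))

obs = np.array([1.2, 2.5, 3.8, 4.1])    # observed significant wave heights (m), toy values
pred = np.array([1.0, 2.7, 3.5, 4.4])   # model-predicted heights, toy values
print(rmse(obs, pred), scatter_index(obs, pred), mae(obs, pred))
```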
A Long Short-Term Memory (LSTM) Recurrent Neural Network (RNN) has driven tremendous improvements over an acoustic model based on the Gaussian Mixture Model (GMM). However, models based on this hybrid method require a force-aligned Hidden Markov Model (HMM) state sequence obtained from the GMM-based acoustic model, and therefore a long computation time is needed to train both the GMM-based acoustic model and a deep learning-based acoustic model. To solve this problem, an acoustic model using the CTC algorithm is proposed. The CTC algorithm does not require the GMM-based acoustic model because it does not use the force-aligned HMM state sequence. However, previous works on LSTM RNN-based acoustic models using CTC used a small-scale training corpus. In this paper, the LSTM RNN-based acoustic model using CTC is trained on a large-scale training corpus and its performance is evaluated. The implemented acoustic model achieves a Word Error Rate (WER) of 6.18% for clean speech and 15.01% for noisy speech, which is similar to the performance of the acoustic model based on the hybrid method.
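A minimal sketch of a CTC-trained recurrent acoustic model is given below: a bidirectional LSTM emits per-frame label logits and tf.nn.ctc_loss scores them against unaligned label sequences, so no forced HMM alignment is required. The label inventory, layer sizes, feature dimensions, and dummy batch are assumptions.

```python
import numpy as np
import tensorflow as tf

num_classes = 40              # hypothetical label inventory, last index reserved for blank
frames, feat_dim = 200, 40    # assumed utterance length and acoustic feature dimension

# BiLSTM acoustic model emitting per-frame label logits (CTC applies its own softmax).
inputs = tf.keras.Input(shape=(frames, feat_dim))
x = tf.keras.layers.Bidirectional(tf.keras.layers.LSTM(256, return_sequences=True))(inputs)
x = tf.keras.layers.Bidirectional(tf.keras.layers.LSTM(256, return_sequences=True))(x)
logits = tf.keras.layers.Dense(num_classes)(x)
model = tf.keras.Model(inputs, logits)

# CTC loss on a dummy batch: labels are dense, unaligned sequences.
batch, label_len = 4, 30
features = np.random.randn(batch, frames, feat_dim).astype("float32")
labels = np.random.randint(1, num_classes - 1, size=(batch, label_len)).astype("int32")
loss = tf.nn.ctc_loss(
    labels=labels,
    logits=model(features),
    label_length=np.full(batch, label_len, dtype="int32"),
    logit_length=np.full(batch, frames, dtype="int32"),
    logits_time_major=False,
    blank_index=num_classes - 1,
)
print(loss.shape)  # one loss value per utterance
```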
In dense pedestrian tracking, frequent object occlusions and close distances between objects make it difficult to accurately estimate object trajectories. In this study, a conditional random field tracking model is established by using a visual long short-term memory network in three-dimensional (3D) space, with motion estimations jointly performed on object trajectory segments. Object visual field information is added to the long short-term memory network to improve the accuracy of motion-related object pair selection and motion estimation. To address the uncertainty in the length and interval of trajectory segments, a multimode long short-term memory network is proposed for object motion estimation. The tracking performance is evaluated on the PETS2009 dataset. The experimental results show that the proposed method achieves better performance than tracking methods based on independent motion estimation.
The fraction defective of semi-finished products is predicted to optimize the process of relay production lines, by which production quality and productivity are increased and costs are decreased. The process parameters of relay production lines are studied based on a long short-term memory network. Then, the Keras deep learning framework is utilized to build a short-term relay quality prediction algorithm for the semi-finished product. A simulation model is used to study the prediction algorithm. The simulation results show that the average absolute prediction error of the fraction defective is less than 5%. This work displays great application potential in relay production lines.
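Since the paper builds its predictor in Keras, a minimal stand-in is sketched below: an LSTM over a short window of process parameters regresses the fraction defective, trained with MAE to match the reported error criterion. The window length, parameter count, layer width, and synthetic data are assumptions.

```python
import numpy as np
import tensorflow as tf

window, n_params = 10, 6   # 10 recent production cycles, 6 process parameters (assumed dimensions)

inp = tf.keras.Input(shape=(window, n_params))
x = tf.keras.layers.LSTM(32)(inp)
out = tf.keras.layers.Dense(1, activation="sigmoid")(x)   # fraction defective lies in [0, 1]
model = tf.keras.Model(inp, out)
model.compile(optimizer="adam", loss="mae")                # MAE mirrors the reported error criterion

# Placeholder process-parameter windows and defect fractions.
X = np.random.rand(200, window, n_params).astype("float32")
y = (np.random.rand(200, 1) * 0.1).astype("float32")
model.fit(X, y, epochs=2, batch_size=16, verbose=0)
print(model.predict(X[:3], verbose=0))
```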
In this paper, the recurrent neural network structure of a bidirectional long short-term memory network (Bi-LSTM) with special memory cells that store information is used to characterize the deep features of the variation pattern between logging and seismic data. A mapping relationship model between high-frequency logging data and low-frequency seismic data is established via nonlinear mapping. The seismic waveform is progressively approximated by the logging curve in the low-frequency band to obtain a nonlinear mapping model at this scale, which then stepwise approaches the logging curve in the high-frequency band. Finally, a seismic inversion method of nonlinear-mapping, multilevel well-seismic matching based on the Bi-LSTM network is developed. The characteristic of this method is that, by applying the multilevel well-seismic matching process, the seismic data are stepwise matched to the scale range that is consistent with the logging curve. Further, the matching operator at each level can be obtained stably, effectively overcoming problems that occur in the well-seismic matching process, such as the inconsistency in the scale of the two types of data, the accuracy of extracting the seismic wavelet of the well-side seismic traces, and the multiplicity of solutions. A model test and practical application demonstrate that this method improves the vertical resolution of the inversion results while the boundary and lateral characteristics of the sand body are well maintained, which improves the accuracy of thin-layer sand body prediction and achieves an improved practical application effect.
Lithium-ion batteries are the most widely accepted type of battery in the electric vehicle industry because of some of their positive inherent characteristics. However, the safety problems associated with inaccurate estimation and prediction of the state of health of these batteries have attracted wide attention due to their adverse effect on vehicle safety. In this paper, both machine learning and deep learning models were used to estimate the state of health of lithium-ion batteries. The paper introduces the definition of battery health status and its importance in the electric vehicle industry. Based on data preprocessing and visualization analysis, three features related to actual battery capacity degradation are extracted from the data. Two learning models, SVR and LSTM, were employed for state of health estimation, and their respective results are compared in this paper. The mean square error and the coefficient of determination were the two metrics used for performance evaluation of the models. The experimental results indicate that both models achieve accurate estimations; however, the metrics indicated that the SVR was the overall best model.
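To illustrate the evaluation protocol, the sketch below fits an SVR on three capacity-related features and scores it with MSE and the coefficient of determination. The features, hyperparameters, and data are synthetic placeholders, and the LSTM counterpart is omitted here.

```python
import numpy as np
from sklearn.svm import SVR
from sklearn.metrics import mean_squared_error, r2_score

# Toy stand-in: three capacity-degradation features per cycle and a SOH label.
rng = np.random.default_rng(0)
X = rng.random((150, 3))
y = 1.0 - 0.3 * X.mean(axis=1) + 0.01 * rng.standard_normal(150)

split = 120                                        # simple chronological train/test split
svr = SVR(kernel="rbf", C=10.0, epsilon=0.01)      # hyperparameters are illustrative
svr.fit(X[:split], y[:split])
pred = svr.predict(X[split:])
print("MSE:", mean_squared_error(y[split:], pred))
print("R2 :", r2_score(y[split:], pred))
```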
The classification of infrasound events is of considerable importance for improving the capability to identify types of natural disasters. Traditional infrasound classification mainly relies on machine learning algorithms after artificial feature extraction; however, guaranteeing the effectiveness of the extracted features is difficult. The current trend focuses on using a convolutional neural network to automatically extract features for classification. This method can automatically extract the spatial features of a signal through convolution kernels; however, infrasound signals contain not only spatial information but also temporal information when used as a time series, and these temporal features are also crucial. If only a convolutional neural network is used, the time dependence of the infrasound sequence will be missed. Using long short-term memory networks can compensate for the missing time-series features but induces a loss of spatial feature information of the infrasound signal. A multiscale squeeze-excitation convolutional neural network and bidirectional long short-term memory network fusion model for infrasound event classification is proposed in this study to address these problems. The model automatically extracts temporal and spatial features, adaptively selects features, and realizes the fusion of the two types of features. Experimental results showed that the classification accuracy of the model exceeded 98%, verifying the effectiveness and superiority of the proposed model.
Financial time series prediction, whether for classification or regression, has been a heated research topic over the last decade. While traditional machine learning algorithms have experienced mediocre results, deep learning has largely contributed to the elevation of prediction performance. Currently, an up-to-date review of advanced machine learning techniques for financial time series prediction is still lacking, making it challenging for finance domain experts and relevant practitioners to determine which model potentially performs better, what techniques and components are involved, and how the model can be designed and implemented. This review article provides an overview of techniques, components, and frameworks for financial time series prediction, with an emphasis on state-of-the-art deep learning models in the literature from 2015 to 2023, including standalone models like convolutional neural networks (CNN), which are capable of extracting spatial dependencies within data, and long short-term memory (LSTM), which is designed for handling temporal dependencies, as well as hybrid models integrating CNN, LSTM, the attention mechanism (AM), and other techniques. For illustration and comparison purposes, models proposed in recent studies are mapped to relevant elements of a generalized framework comprising input, output, feature extraction, prediction, and related processes. Among the state-of-the-art models, hybrid models like CNN-LSTM and CNN-LSTM-AM have in general been reported to be superior in performance to standalone models like the CNN-only model. Some remaining challenges are discussed, including non-friendliness for finance domain experts, delayed prediction, domain knowledge negligence, lack of standards, and the inability to make real-time and high-frequency predictions. The principal contributions of this paper are to provide a one-stop guide for both academia and industry to review, compare, and summarize technologies and recent advances in this area, to facilitate smooth and informed implementation, and to highlight future research directions.
Purpose – To optimize train operations, dispatchers currently rely on experience for quick adjustments when delays occur. However, delay predictions often involve imprecise shifts based on known delay times. Real-time and accurate train delay predictions, facilitated by data-driven neural network models, can significantly reduce dispatcher stress and improve adjustment plans. Leveraging current train operation data, these models enable swift and precise predictions, addressing challenges posed by train delays in high-speed rail networks during unforeseen events. Design/methodology/approach – This paper proposes CBLA-net, a neural network architecture for predicting late arrival times. It combines CNN, Bi-LSTM, and attention mechanisms to extract features, handle time series data, and enhance information utilization. Trained on operational data from the Beijing-Tianjin line, it predicts the late arrival time of a target train at the next station using multidimensional input data from the target and preceding trains. Findings – This study evaluates the model's predictive performance using two data approaches: one considering full data and another focusing only on late arrivals. Results show precise and rapid predictions. Training with full data achieves a MAE of approximately 0.54 minutes and a RMSE of 0.65 minutes, surpassing the model trained solely on delay data (MAE: about 1.02 min, RMSE: about 1.52 min). Despite superior overall performance with full data, the model excels at predicting delays exceeding 15 minutes when trained exclusively on late arrivals. For enhanced adaptability to real-world train operations, training with full data is recommended. Originality/value – This paper introduces a novel neural network model, CBLA-net, for predicting train delay times. It innovatively compares and analyzes the model's performance using both the full-data and delay-only data formats. Additionally, the evaluation of the network's predictive capabilities considers different scenarios, providing a comprehensive demonstration of the model's predictive performance.
Currently, Bitcoin is the world's most popular cryptocurrency. The price of Bitcoin is extremely volatile, which can be described as high-benefit and high-risk. To minimize the risk involved, a means of more accurately predicting the Bitcoin price is required. Most of the existing studies of Bitcoin prediction are based on historical (i.e., benchmark) data, without considering real-time (i.e., live) data. To mitigate the issue of price volatility and achieve more precise outcomes, this study suggests using historical and real-time data to predict the Bitcoin candlestick (open, high, low, and close, or OHLC) prices. Seeking a better prediction model, the present study proposes time series-based deep learning models. In particular, two deep learning algorithms were applied, namely long short-term memory (LSTM) and gated recurrent unit (GRU). Using real-time data, the Bitcoin candlesticks were predicted for three intervals: the next 4 h, the next 12 h, and the next 24 h. The results showed that the best-performing model was the LSTM-based model with the 4-h interval. In particular, this model achieved a stellar performance with a mean absolute percentage error (MAPE) of 0.63, a root mean square error (RMSE) of 0.0009, a mean square error (MSE) of 9e-07, a mean absolute error (MAE) of 0.0005, and an R-squared coefficient (R2) of 0.994. With these results, the proposed prediction model has demonstrated its efficiency over the models proposed in previous studies. The findings of this study have considerable implications for the business field, as the proposed model can assist investors and traders in precisely identifying Bitcoin selling and buying opportunities.
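A hedged sketch of a recurrent OHLC predictor is given below, using a GRU (the lighter of the two architectures compared) that maps a window of past candles to the next candle's four prices. The lookback length, layer sizes, and placeholder data are assumptions, and the real-time data pipeline is not shown.

```python
import numpy as np
import tensorflow as tf

lookback, n_features = 24, 4          # 24 past OHLC candles per sample (assumed window)

inp = tf.keras.Input(shape=(lookback, n_features))
x = tf.keras.layers.GRU(64, return_sequences=True)(inp)
x = tf.keras.layers.GRU(32)(x)
out = tf.keras.layers.Dense(4)(x)     # next candle: open, high, low, close
model = tf.keras.Model(inp, out)
model.compile(optimizer="adam", loss="mse", metrics=["mae"])

# Placeholder candles; in practice these would be scaled OHLC series from live and historical feeds.
X = np.random.rand(500, lookback, n_features).astype("float32")
y = np.random.rand(500, 4).astype("float32")
model.fit(X, y, epochs=2, batch_size=32, verbose=0)
```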
As a common and high-risk type of disease, heart disease seriously threatens people's health. At the same time, in the era of the Internet of Things (IoT), smart medical devices have strong practical significance for medical workers and patients because of their ability to assist in the diagnosis of diseases. Therefore, research on real-time diagnosis and classification algorithms for arrhythmia can help to improve diagnostic efficiency. In this paper, we design an automatic arrhythmia classification model based on a Convolutional Neural Network (CNN) and an Encoder-Decoder model. The model uses Long Short-Term Memory (LSTM) to consider the influence of time series features on the classification results, and it is trained and tested on the MIT-BIH arrhythmia database. In addition, Generative Adversarial Networks (GAN) are adopted as a data-equalization method to solve the data imbalance problem. The simulation results show that, for inter-patient arrhythmia classification, the hybrid model combining CNN and the Encoder-Decoder model has the best classification accuracy, reaching 94.05%. In particular, it has a clear advantage in classifying supraventricular ectopic beats (class S) and fusion beats (class F).
Geochemical survey data analysis is recognized as an implemented and feasible way to perform lithological mapping to assist mineral exploration. Among the available approaches, recent methodological advances have focused on deep learning algorithms, which can learn and extract information directly from geochemical survey data through multi-level networks and output end-to-end classifications. Accordingly, this study developed a lithological mapping framework with the joint application of a convolutional neural network (CNN) and a long short-term memory (LSTM) network. The CNN-LSTM model is dominant in correlation extraction from the CNN layers and coupling-interaction learning from the LSTM layers. This hybrid approach was demonstrated by mapping leucogranites in the Himalayan orogen based on stream sediment geochemical survey data, where the targeted leucogranites are expected to be potential resources of rare metals such as Li, Be, and W mineralization. Three comparative case studies were carried out from both visual and quantitative perspectives to illustrate the superiority of the proposed model. A guided spatial distribution map of leucogranites in the Himalayan orogen, divided into high-, moderate-, and low-potential areas, was delineated by the success rate curve, which further improves the efficiency of identifying unmapped leucogranites through geological mapping. In light of these results, this study provides an alternative solution for lithological mapping using geochemical survey data at a regional scale and reduces the risk in decision making associated with mineral exploration.
Predicting wind speed accurately is essential to ensure the stability of the wind power system and improve the utilization rate of wind energy. However, owing to the stochastic and intermittent nature of wind speed, predicting it accurately is difficult. A new hybrid deep learning model based on empirical wavelet transform, recurrent neural networks, and error correction for short-term wind speed prediction is proposed in this paper. The empirical wavelet transform is applied to decompose the original wind speed series. A long short-term memory network and an Elman neural network are adopted to predict the low-frequency and high-frequency wind speed sub-layers, respectively, to balance calculation efficiency and prediction accuracy. An error correction strategy based on a deep long short-term memory network is developed to modify the prediction errors. Four actual wind speed series are utilized to verify the effectiveness of the proposed model. The empirical results indicate that the method proposed in this paper has satisfactory performance in wind speed prediction.
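The error-correction idea can be sketched independently of the wavelet decomposition: a base LSTM predicts the series, a second LSTM is trained on the residuals of the first, and the two outputs are added. The sketch below uses a toy series and stands in for the EWT sub-layer predictors and the Elman branch, which are not shown.

```python
import numpy as np
import tensorflow as tf

def make_lstm(n_steps):
    """A small single-layer LSTM regressor; the width is an illustrative assumption."""
    inp = tf.keras.Input(shape=(n_steps, 1))
    h = tf.keras.layers.LSTM(32)(inp)
    out = tf.keras.layers.Dense(1)(h)
    m = tf.keras.Model(inp, out)
    m.compile(optimizer="adam", loss="mse")
    return m

n_steps = 12
series = np.sin(np.linspace(0, 60, 600)) + 0.1 * np.random.randn(600)   # toy wind-speed-like series
X = np.stack([series[i:i + n_steps] for i in range(len(series) - n_steps)])[..., None]
y = series[n_steps:]

base = make_lstm(n_steps)                               # stands in for the sub-layer predictors
base.fit(X, y, epochs=3, verbose=0)
residual = y - base.predict(X, verbose=0).ravel()       # prediction errors of the base model

# Error-correction model learns the residual series from the same inputs.
corrector = make_lstm(n_steps)
corrector.fit(X, residual, epochs=3, verbose=0)
final = base.predict(X, verbose=0).ravel() + corrector.predict(X, verbose=0).ravel()
```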
Knowledge of pore-water pressure (PWP) variation is fundamental for slope stability. A precise prediction of PWP is difficult due to complex physical mechanisms and in situ natural variability. To explore the applicability and advantages of recurrent neural networks (RNNs) for PWP prediction, three variants of RNNs, i.e., the standard RNN, long short-term memory (LSTM), and gated recurrent unit (GRU), are adopted and compared with a traditional static artificial neural network (ANN), i.e., the multi-layer perceptron (MLP). Measurements of rainfall and PWP from representative piezometers of a fully instrumented natural slope in Hong Kong are used to establish the prediction models. The coefficient of determination (R^2) and root mean square error (RMSE) are used for model evaluation. The influence of the input time series length on model performance is investigated. The results reveal that the MLP can provide acceptable performance but is not robust; the uncertainty bounds of the RMSE of the MLP model range from 0.24 kPa to 1.12 kPa for the two selected piezometers. The standard RNN can perform better, but its robustness is slightly affected when there are significant time lags between PWP changes and rainfall. The GRU and LSTM models can provide more precise and robust predictions than the standard RNN. The effects of the hidden layer structure and the dropout technique are investigated. A single-layer GRU is accurate enough for PWP prediction, whereas a double-layer GRU brings extra time cost with little accuracy improvement. The dropout technique is essential for preventing overfitting and improving accuracy.
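A minimal sketch of the GRU variant is given below: rainfall and past PWP are packed into sliding windows of a chosen input length, fed to a single-layer GRU with dropout, and regressed onto the next PWP value. The window length, layer width, dropout rate, and synthetic records are assumptions.

```python
import numpy as np
import tensorflow as tf

def make_windows(rainfall, pwp, length):
    """Build (rainfall, past-PWP) windows of a given input time-series length."""
    X, y = [], []
    for i in range(len(pwp) - length):
        X.append(np.column_stack([rainfall[i:i + length], pwp[i:i + length]]))
        y.append(pwp[i + length])
    return np.array(X, dtype="float32"), np.array(y, dtype="float32")

rain = np.random.rand(1000)                         # toy rainfall record
pwp = np.cumsum(np.random.randn(1000)) * 0.01       # toy pore-water pressure record
X, y = make_windows(rain, pwp, length=24)           # the input length is a tunable choice

inp = tf.keras.Input(shape=(24, 2))
h = tf.keras.layers.GRU(32)(inp)                    # single-layer GRU, found sufficient in the paper
h = tf.keras.layers.Dropout(0.2)(h)                 # dropout guards against overfitting
out = tf.keras.layers.Dense(1)(h)
model = tf.keras.Model(inp, out)
model.compile(optimizer="adam", loss="mse")
model.fit(X, y, epochs=2, verbose=0)
```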
The remaining useful life (RUL) of a system is generally predicted by utilizing data collected from sensors that continuously monitor different indicators. Recently, different deep learning (DL) techniques have been used for RUL prediction and have achieved great success. Because the data are often time-sequential, the recurrent neural network (RNN) has attracted significant interest due to its efficiency in dealing with such data. This paper systematically reviews RNN and its variants for RUL prediction, with a specific focus on understanding how different components (e.g., types of optimizers and activation functions) or parameters (e.g., sequence length, neuron quantities) affect their performance. A case study using the well-studied NASA C-MAPSS dataset is then presented to quantitatively evaluate the influence of various state-of-the-art RNN structures on RUL prediction performance. The results suggest that the variant methods usually perform better than the original RNN, and among them, the bidirectional long short-term memory network generally has the best performance in terms of stability, precision, and accuracy. Certain model structures may fail to produce a valid RUL prediction due to the gradient vanishing or gradient exploding problem if the parameters are not chosen appropriately. It is concluded that parameter tuning is a crucial step in achieving optimal prediction performance.