A fast-charging policy is widely employed to alleviate the inconvenience caused by the extended charging time of electric vehicles. However, fast charging exacerbates battery degradation and shortens battery lifespan. In addition, there is still a lack of health estimations tailored to fast-charging batteries; most existing methods are applicable only at lower charging rates. This paper proposes a novel method for estimating the health of lithium-ion batteries, tailored to multi-stage constant current-constant voltage fast-charging policies. Initially, short charging segments are extracted by monitoring current switches, followed by deriving voltage sequences using interpolation techniques. Subsequently, a graph generation layer transforms each voltage sequence into graphical data. Furthermore, the integration of a graph convolution network with a long short-term memory network enables the extraction of information related to inter-node message transmission, capturing the key local and temporal features of the battery degradation process. Finally, the method is validated using aging data from 185 cells and 81 distinct fast-charging policies. A 4-minute charging segment achieves a balance between high accuracy in estimating the battery state of health and a low data requirement, with a mean absolute error of 0.34% and a root mean square error of 0.66%.
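As an illustration of the pipeline described above, the following is a minimal Keras sketch of a graph-convolution layer feeding an LSTM for SOH regression. The chain adjacency over interpolated voltage points, the layer sizes, and all names are illustrative assumptions, not the authors' implementation.

```python
import numpy as np
import tensorflow as tf
from tensorflow.keras import layers, Model

N_NODES, N_FEAT = 32, 1   # voltage points per charging segment (illustrative)

class GraphConv(layers.Layer):
    """One GCN layer: H' = ReLU(A_hat @ H @ W), with a fixed normalized adjacency."""
    def __init__(self, units, a_hat):
        super().__init__()
        self.units = units
        self.a_hat = tf.constant(a_hat, dtype=tf.float32)
    def build(self, input_shape):
        self.w = self.add_weight(shape=(input_shape[-1], self.units),
                                 initializer="glorot_uniform")
    def call(self, x):                                       # x: (batch, nodes, feat)
        h = tf.einsum("ij,bjf->bif", self.a_hat, x)          # inter-node message passing
        return tf.nn.relu(tf.einsum("bif,fu->biu", h, self.w))

# Hypothetical chain graph: each voltage point linked to its neighbors.
adj = np.eye(N_NODES)
for i in range(N_NODES - 1):
    adj[i, i + 1] = adj[i + 1, i] = 1.0
deg = adj.sum(axis=1)
a_hat = adj / np.sqrt(np.outer(deg, deg))                    # symmetric normalization

inp = layers.Input(shape=(N_NODES, N_FEAT))                  # interpolated voltage sequence
x = GraphConv(16, a_hat)(inp)                                # local (graph) features
x = layers.LSTM(32)(x)                                       # temporal features over nodes
out = layers.Dense(1, activation="sigmoid")(x)               # SOH in [0, 1]
model = Model(inp, out)
model.compile(optimizer="adam", loss="mae")
```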
A correct and timely fault diagnosis is important for improving the safety and reliability of chemical processes. With the advancement of big data technology, data-driven fault diagnosis methods are being used extensively and still have considerable potential. In recent years, methods based on deep neural networks have made significant breakthroughs, and fault diagnosis methods for industrial processes based on deep learning have attracted considerable research attention. We therefore propose a fusion deep-learning algorithm based on a fully convolutional neural network (FCN) to extract features and build models that correctly diagnose all types of faults. We use long short-term memory (LSTM) units to extend the FCN so that the model can better extract the time-domain features of chemical process data. We also introduce an attention mechanism to highlight the importance of individual features, which is significant for the fault diagnosis of chemical processes with many features. When applied to the benchmark Tennessee Eastman process, the proposed model exhibits impressive performance, demonstrating the effectiveness of the attention-based LSTM-FCN in chemical process fault diagnosis.
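A minimal Keras sketch of an attention-augmented LSTM-FCN, following the widely used LSTM-FCN layout (a parallel Conv1D branch and an LSTM branch); the learned attention over the LSTM states, the window length, and the layer sizes are illustrative assumptions rather than the paper's exact configuration.

```python
import tensorflow as tf
from tensorflow.keras import layers, Model

T, F, N_FAULTS = 100, 52, 21   # window length, sensors, fault classes (illustrative)

inp = layers.Input(shape=(T, F))

# FCN branch: stacked Conv1D blocks ending in global average pooling.
c = inp
for filters, width in [(128, 8), (256, 5), (128, 3)]:
    c = layers.Conv1D(filters, width, padding="same")(c)
    c = layers.BatchNormalization()(c)
    c = layers.Activation("relu")(c)
c = layers.GlobalAveragePooling1D()(c)

# LSTM branch with a simple learned attention over the hidden-state sequence.
h = layers.LSTM(64, return_sequences=True)(inp)              # (batch, T, 64)
scores = layers.Dense(1)(h)                                  # (batch, T, 1)
weights = layers.Softmax(axis=1)(scores)                     # attention over time
context = layers.Lambda(
    lambda t: tf.reduce_sum(t[0] * t[1], axis=1))([weights, h])

out = layers.Dense(N_FAULTS, activation="softmax")(
    layers.concatenate([c, context]))
model = Model(inp, out)
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
```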
There are two technical challenges in predicting slope deformation. The first is the random displacement, which cannot be decomposed and predicted by numerically resolving the observed accumulated displacement and time series of a landslide. The second is the dynamic evolution of a landslide, which cannot feasibly be simulated by traditional prediction models. In this paper, a dynamic model of displacement prediction is introduced for composite landslides based on a combination of empirical mode decomposition with soft screening stop criteria (SSSC-EMD) and a deep bidirectional long short-term memory (DBi-LSTM) neural network. In the proposed model, time series analysis and SSSC-EMD are used to decompose the observed accumulated displacements of a slope into three components, viz. trend displacement, periodic displacement, and random displacement. Then, by analyzing the evolution pattern of a landslide and the key factors triggering landslides, appropriate influencing factors are selected for each displacement component, and a DBi-LSTM neural network carries out multi-data-driven dynamic prediction for each component. The accumulated displacement prediction is obtained by summing the components. To verify the accuracy and engineering practicability of the model, field observations from two known landslides in China, the Xintan landslide and the Bazimen landslide, were collected for comparison and evaluation. The case study verified that the proposed model can better characterize the "stepwise" deformation characteristics of a slope. Compared with the long short-term memory (LSTM) neural network, support vector machine (SVM), and autoregressive integrated moving average (ARIMA) model, the DBi-LSTM neural network has higher accuracy in predicting the periodic displacement of slope deformation, with the mean absolute percentage error reduced by 3.063%, 14.913%, and 13.960%, respectively, and the root mean square error reduced by 1.951 mm, 8.954 mm, and 7.790 mm, respectively. Conclusively, the model not only has high prediction accuracy but is also more stable, which can provide new insight for practical landslide prevention and control engineering.
A Long Short-Term Memory (LSTM) Recurrent Neural Network (RNN) has driven tremendous improvements over acoustic models based on the Gaussian Mixture Model (GMM). However, models based on this hybrid method require a force-aligned Hidden Markov Model (HMM) state sequence obtained from a GMM-based acoustic model, and therefore a long computation time to train both the GMM-based and the deep learning-based acoustic models. To solve this problem, an acoustic model using the connectionist temporal classification (CTC) algorithm is proposed. The CTC algorithm does not require a GMM-based acoustic model because it does not use a force-aligned HMM state sequence. However, previous work on LSTM RNN-based acoustic models using CTC relied on small-scale training corpora. In this paper, an LSTM RNN-based acoustic model using CTC is trained on a large-scale training corpus and its performance is evaluated. The implemented acoustic model achieves a Word Error Rate (WER) of 6.18% on clean speech and 15.01% on noisy speech, similar to the performance of the acoustic model based on the hybrid method.
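The CTC wiring itself is standard and can be sketched with tf.keras (versions where backend.ctc_batch_cost is available); the BLSTM depth, feature dimensions, and symbol inventory below are placeholders, not the paper's setup.

```python
import tensorflow as tf
from tensorflow.keras import layers, Model, backend as K

T, N_MELS, N_LABELS = 200, 40, 46   # frames, filterbank dims, symbols (placeholders)

feats = layers.Input(shape=(T, N_MELS), name="features")
x = layers.Bidirectional(layers.LSTM(256, return_sequences=True))(feats)
x = layers.Bidirectional(layers.LSTM(256, return_sequences=True))(x)
y_pred = layers.Dense(N_LABELS + 1, activation="softmax")(x)   # +1 for the CTC blank

# CTC needs the label sequence and both sequence lengths as extra inputs;
# no forced HMM state alignment is required.
labels = layers.Input(shape=(None,), dtype="int32", name="labels")
feat_len = layers.Input(shape=(1,), dtype="int32", name="feat_len")
label_len = layers.Input(shape=(1,), dtype="int32", name="label_len")
ctc = layers.Lambda(lambda a: K.ctc_batch_cost(a[0], a[1], a[2], a[3]),
                    name="ctc")([labels, y_pred, feat_len, label_len])

train_model = Model([feats, labels, feat_len, label_len], ctc)
# The model's output already IS the loss; the compiled loss passes it through,
# so the y_true supplied to fit() can be a dummy array of zeros.
train_model.compile(optimizer="adam", loss=lambda y_true, y_out: y_out)
```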
To explore new operational wave forecasting methods, a forecasting model for wave heights at three stations in the Bohai Sea has been developed. The model is based on a long short-term memory (LSTM) neural network with sea surface wind and wave heights as training samples. The prediction performance of the model is evaluated, and the error analysis shows that, when using the same set of numerically predicted sea surface wind as input, the prediction error produced by the proposed LSTM model at Sta. N01 is 20%, 18%, and 23% lower than that of conventional numerical wave models in terms of the total root mean square error (RMSE), scatter index (SI), and mean absolute error (MAE), respectively. In particular, for significant wave heights in the range of 3–5 m, the prediction accuracy of the LSTM model improves most remarkably, with RMSE, SI, and MAE all decreasing by 24%. It is also evident that the number of hidden neurons, the number of buoys used, and the time length of the training samples all affect the prediction accuracy. However, the prediction does not necessarily improve as the number of hidden neurons or the number of buoys increases. The experiment trained on the data with the longest time length was found to perform best overall compared with experiments using shorter training periods. Overall, the long short-term memory neural network proved to be a very promising method for future development and application in wave forecasting.
An accurate landslide displacement prediction is an important part of a landslide warning system. Aiming at the dynamic characteristics of landslide evolution and the shortcomings of traditional static prediction models, this paper proposes a dynamic prediction model of landslide displacement based on singular spectrum analysis (SSA) and a stacked long short-term memory (SLSTM) network. SSA is used to decompose the accumulated landslide displacement time series into trend-term and periodic-term displacement subsequences. A cubic polynomial function is used to predict the trend-term subsequence, and the SLSTM neural network is used to predict the periodic-term subsequence. Meanwhile, a Bayesian optimization algorithm is used to determine that the SLSTM input sequence length is 12 and the number of hidden-layer nodes is 18. The SLSTM network is updated by adding predicted values to the training set to achieve dynamic displacement prediction. Finally, the accumulated landslide displacement is obtained by superimposing the predicted values of the displacement subsequences. The proposed model was verified on the Xintan landslide in Hubei Province, China. The results show that, when predicting the periodic-term displacement, the SLSTM network has higher prediction accuracy than the support vector machine (SVM) and autoregressive integrated moving average (ARIMA) models: the mean relative error (MRE) is reduced by 4.099% and 3.548%, respectively, and the root mean square error (RMSE) by 5.830 mm and 3.854 mm, respectively. It is concluded that the SLSTM network model can better simulate the dynamic characteristics of landslides.
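A compact sketch of the per-component pipeline, assuming the SSA decomposition into trend and periodic series is already done: a cubic polynomial extrapolates the trend, and a stacked LSTM with the reported input length (12) and hidden size (18) predicts the periodic term, with each prediction appended to the training set for dynamic updating. Training settings are illustrative.

```python
import numpy as np
from tensorflow.keras import layers, Sequential

WINDOW, HIDDEN = 12, 18   # input sequence length and hidden nodes from the paper

def fit_trend(trend, horizon):
    """Cubic polynomial extrapolation of the trend-term subsequence."""
    t = np.arange(len(trend))
    coeffs = np.polyfit(t, trend, deg=3)
    return np.polyval(coeffs, np.arange(len(trend), len(trend) + horizon))

def make_windows(series):
    x = np.array([series[i:i + WINDOW] for i in range(len(series) - WINDOW)])
    return x[..., None], series[WINDOW:]

def predict_periodic(periodic, horizon):
    """Stacked LSTM, dynamically updated by appending each new prediction."""
    model = Sequential([
        layers.Input(shape=(WINDOW, 1)),
        layers.LSTM(HIDDEN, return_sequences=True),
        layers.LSTM(HIDDEN),
        layers.Dense(1),
    ])
    model.compile(optimizer="adam", loss="mse")
    history, preds = list(periodic), []
    for _ in range(horizon):
        x, y = make_windows(np.array(history))
        model.fit(x, y, epochs=20, verbose=0)        # refit on the updated set
        nxt = model.predict(np.array(history[-WINDOW:])[None, :, None],
                            verbose=0)[0, 0]
        preds.append(float(nxt))
        history.append(float(nxt))                   # dynamic update of the training set
    return np.array(preds)

# accumulated = fit_trend(trend, 6) + predict_periodic(periodic, 6)
```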
A Holter device usually monitors electrocardiogram (ECG) signals for more than 24 hours to capture short-lived cardiac abnormalities. In view of the large amount of Holter data, of which the normal part accounts for the majority, it is reasonable to design an algorithm that automatically eliminates as many normal data segments as possible without missing any abnormal ones, and then passes the remaining segments to doctors or computer programs for further diagnosis. In this paper, we propose a preliminary abnormal-segment screening method for Holter data. Based on long short-term memory (LSTM) networks, a prediction model is established and trained on the normal data of a monitored subject. Then, using kernel density estimation, we learn the distribution of the prediction errors that result from applying the trained LSTM model to regular data. On this basis, preliminary abnormal ECG segment screening is carried out without R-wave detection. Experiments on the MIT-BIH arrhythmia database show that, under the condition that no abnormal point is missed, 53.89% of normal segments can be effectively eliminated. This greatly reduces the workload of subsequent processing.
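The screening idea can be sketched as follows: an LSTM one-step predictor is trained on normal data, a kernel density estimate is fitted to its prediction errors, and a segment is kept for further diagnosis if any of its errors falls in a low-density region. The window length, model size, and density cutoff are assumptions for illustration.

```python
import numpy as np
from scipy.stats import gaussian_kde
from tensorflow.keras import layers, Sequential

WINDOW = 64   # samples used to predict the next sample (illustrative)

def build_predictor():
    return Sequential([
        layers.Input(shape=(WINDOW, 1)),
        layers.LSTM(32),
        layers.Dense(1),
    ])

def prediction_errors(model, signal):
    x = np.array([signal[i:i + WINDOW] for i in range(len(signal) - WINDOW)])
    y = signal[WINDOW:]
    return y - model.predict(x[..., None], verbose=0).ravel()

# Assumes the model was first trained on the subject's normal data:
# model = build_predictor(); model.compile("adam", "mse"); model.fit(...)
def fit_error_density(model, normal_signal, quantile=0.001):
    """Learn the distribution law of prediction errors on regular data."""
    errs = prediction_errors(model, normal_signal)
    kde = gaussian_kde(errs)
    cutoff = np.quantile(kde(errs), quantile)   # densities below this are "rare"
    return kde, cutoff

def is_abnormal(model, kde, cutoff, segment):
    """Flag a segment if any prediction error lands in a low-density region."""
    errs = prediction_errors(model, segment)
    return bool(np.any(kde(errs) < cutoff))
```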
In this paper, the recurrent neural network structure of a bidirectional long short-term memory network (Bi-LSTM), with special memory cells that store information, is used to characterize the deep features of the variation pattern between logging and seismic data. A mapping relationship model between high-frequency logging data and low-frequency seismic data is established via nonlinear mapping. The seismic waveform is approximated using the logging curve in the low-frequency band to obtain a nonlinear mapping model at this scale, which then stepwise approaches the logging curve in the high-frequency band. Finally, a seismic-inversion method of nonlinear-mapping, multilevel well-seismic matching based on the Bi-LSTM network is developed. The characteristic of this method is that, by applying the multilevel well-seismic matching process, the seismic data are stepwise matched to a scale range consistent with the logging curve. Further, the matching operator at each level can be obtained stably, effectively overcoming problems that occur in the well-seismic matching process, such as the inconsistent scales of the two types of data, the accuracy of extracting the seismic wavelet from well-side seismic traces, and the multiplicity of solutions. Model tests and practical applications demonstrate that this method improves the vertical resolution of inversion results while maintaining the boundary and lateral characteristics of the sand body, thereby improving the accuracy of thin-layer sand body prediction and achieving an improved practical application effect.
The fraction defective of semi-finished products is predicted to optimize the process of relay production lines, by which production quality and productivity are increased and costs are decreased. The process parameters of relay production lines are studied based on a long short-term memory network. The Keras deep learning framework is then utilized to build a short-term quality prediction algorithm for the semi-finished relays. A simulation model is used to study the prediction algorithm. The simulation results show that the average absolute prediction error of the fraction defective is less than 5%. This work displays great application potential for relay production lines.
To address the shortcomings of single-step decision making in existing deep reinforcement learning-based real-time path planning for unmanned aerial vehicles (UAVs), a real-time UAV path planning algorithm based on a long short-term memory network (RPP-LSTM) is proposed, which combines the memory characteristics of recurrent neural networks (RNNs) with a deep reinforcement learning algorithm. LSTM networks serve as the Q-value networks for the deep Q network (DQN) algorithm, which gives the decisions of the Q-value network a degree of memory. Thanks to the LSTM network, the Q-value network can use previous environmental and action information, effectively avoiding the problem of single-step decisions that consider only the current environment. In addition, the algorithm proposes a hierarchical reward and punishment function for the specific problem of UAV real-time path planning, so that the UAV performs path planning more reasonably. Simulation verification shows that, compared with the traditional feed-forward neural network (FNN)-based UAV autonomous path planning algorithm, the proposed RPP-LSTM can adapt to more complex environments and has significantly improved robustness and accuracy in real-time UAV path planning.
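A minimal sketch of the two ingredients named above: an LSTM Q-value network over a short observation history with epsilon-greedy action selection, and a layered reward function in the spirit of a hierarchical reward and punishment scheme. Dimensions and reward weights are invented for illustration.

```python
import numpy as np
from tensorflow.keras import layers, Sequential

HIST, STATE_DIM, N_ACTIONS = 8, 10, 9   # history length, state size, discrete headings

def build_q_net():
    """Q-network with an LSTM over the last HIST observations (the 'memory')."""
    return Sequential([
        layers.Input(shape=(HIST, STATE_DIM)),
        layers.LSTM(64),                # memory over past states and actions
        layers.Dense(64, activation="relu"),
        layers.Dense(N_ACTIONS),        # Q(history, a) for each action
    ])

def select_action(q_net, state_history, epsilon):
    """Epsilon-greedy over Q-values computed from the observation history."""
    if np.random.rand() < epsilon:
        return np.random.randint(N_ACTIONS)
    q = q_net.predict(state_history[None, ...], verbose=0)[0]
    return int(np.argmax(q))

def reward(reached_goal, collided, dist_gain):
    """Hierarchical reward in the spirit of the paper (weights are invented)."""
    if collided:
        return -10.0                    # hard penalty layer
    if reached_goal:
        return +10.0                    # terminal bonus layer
    return 0.1 * dist_gain              # shaping layer: progress toward the goal
```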
Accurate load forecasting forms a crucial foundation for implementing household demand response plans and optimizing load scheduling. When dealing with short-term load data characterized by substantial fluctuations, a single prediction model can hardly capture temporal features effectively, resulting in diminished prediction accuracy. In this study, a hybrid deep learning framework that integrates an attention mechanism, a convolutional neural network (CNN), improved chaotic particle swarm optimization (ICPSO), and long short-term memory (LSTM) is proposed for short-term household load forecasting. Firstly, the CNN model is employed to extract features from the original data, enhancing the quality of the data features. Subsequently, the moving average method is used for data preprocessing, followed by application of the LSTM network to predict the processed data. Moreover, the ICPSO algorithm is introduced to optimize the parameters of the LSTM, boosting the model's running speed and accuracy. Finally, the attention mechanism is employed to optimize the output of the LSTM, effectively addressing the information loss induced in the LSTM by lengthy sequences and further elevating prediction accuracy. The numerical analysis verifies the accuracy and effectiveness of the proposed hybrid model: it explores data features adeptly and achieves superior prediction accuracy compared with other forecasting methods for household loads exhibiting significant fluctuations across different seasons.
Purpose - To optimize train operations, dispatchers currently rely on experience for quick adjustments when delays occur. However, delay predictions often involve imprecise shifts based on known delay times. Real-time and accurate train delay predictions, facilitated by data-driven neural network models, can significantly reduce dispatcher stress and improve adjustment plans. Leveraging current train operation data, these models enable swift and precise predictions, addressing challenges posed by train delays in high-speed rail networks during unforeseen events. Design/methodology/approach - This paper proposes CBLA-net, a neural network architecture for predicting late arrival times. It combines CNN, Bi-LSTM, and attention mechanisms to extract features, handle time series data, and enhance information utilization. Trained on operational data from the Beijing-Tianjin line, it predicts the late arrival time of a target train at the next station using multidimensional input data from the target and preceding trains. Findings - This study evaluates the model's predictive performance using two data approaches: one considering full data and another focusing only on late arrivals. Results show precise and rapid predictions. Training with full data achieves a MAE of approximately 0.54 min and an RMSE of 0.65 min, surpassing the model trained solely on delay data (MAE about 1.02 min, RMSE about 1.52 min). Despite the superior overall performance with full data, the model excels at predicting delays exceeding 15 minutes when trained exclusively on late arrivals. For enhanced adaptability to real-world train operations, training with full data is recommended. Originality/value - This paper introduces a novel neural network model, CBLA-net, for predicting train delay times. It innovatively compares and analyzes the model's performance using both full-data and delay-data formats. Additionally, the evaluation of the network's predictive capabilities considers different scenarios, providing a comprehensive demonstration of the model's predictive performance.
Surface EMG contains a wealth of physiological information reflecting the intention of human movement. Gesture recognition from surface EMG has attracted wide attention in the fields of human-computer interaction and rehabilitation. At present, most studies on gesture recognition based on surface EMG signals rely on discretely separated gestures, ignoring continuous natural motion. A gesture recognition method for surface EMG based on an improved long short-term memory network is proposed. The sEMG sensors are rationally arranged according to physiological structure and muscle function. In this paper, finger curvature is used to describe the gesture state, and the gesture at every moment can be represented by the set of finger curvatures, thereby realizing continuous gesture recognition. Finally, the proposed gesture recognition model is tested on Ninapro (a large gesture recognition database). The results show that the proposed method can effectively improve the representation mining ability for surface EMG signals, and provide a reference for deep learning modeling of human gesture recognition.
Lithium-ion batteries are the most widely accepted type of battery in the electric vehicle industry because of some of their positive inherent characteristics. However, the safety problems associated with inaccurate estimation and prediction of the state of health of these batteries have attracted wide attention due to their adverse effect on vehicle safety. In this paper, both machine and deep learning models were used to estimate the state of health of lithium-ion batteries. The paper introduces the definition of battery health status and its importance in the electric vehicle industry. Based on data preprocessing and visualization analysis, three features related to actual battery capacity degradation are extracted from the data. Two learning models, SVR and LSTM, were employed for the state of health estimation and their respective results are compared in this paper. The mean square error and coefficient of determination were the two metrics for the performance evaluation of the models. The experimental results indicate that both models achieve high estimation accuracy; however, the metrics indicate that the SVR was the overall best model.
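The SVR side of the comparison, together with the two reported metrics, can be sketched with scikit-learn; the features and data below are stand-ins for the three extracted health features, and the LSTM counterpart would be scored with the same two metrics.

```python
import numpy as np
from sklearn.svm import SVR
from sklearn.metrics import mean_squared_error, r2_score

# X: (n_cycles, 3) health features extracted per cycle; y: measured SOH.
rng = np.random.default_rng(0)                  # stand-in data for the sketch
X = rng.normal(size=(200, 3))
y = 1.0 - 0.001 * np.arange(200) + 0.01 * X[:, 0]

split = 150                                     # chronological train/test split
svr = SVR(kernel="rbf", C=10.0, epsilon=0.001).fit(X[:split], y[:split])
pred = svr.predict(X[split:])

print("MSE:", mean_squared_error(y[split:], pred))   # first reported metric
print("R^2:", r2_score(y[split:], pred))             # second reported metric
```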
Stress changes due to changes in fluid pressure and temperature in a faulted formation may lead to the opening or shearing of the fault. This can result from subsurface (geo)engineering activities such as fluid injection and the geologic disposal of nuclear waste. Such activities are expected to increase in the future, making it necessary to assess their short- and long-term safety. Here, a new machine learning (ML) approach to model pore pressure and fault displacements in response to high-pressure fluid injection cycles is developed. The focus is on fault behavior near the injection borehole. To capture the temporal dependencies in the data, long short-term memory (LSTM) networks are utilized. To prevent error accumulation within the forecast window, four critical measures for training a robust LSTM model for predicting fault response are highlighted: (i) setting an appropriate value of the LSTM lag, (ii) calibrating the LSTM cell dimension, (iii) reducing the learning rate during weight optimization, and (iv) not adopting an independent injection cycle as a validation set. Several numerical experiments were conducted, which demonstrated that the ML model can capture the peaks in pressure and associated fault displacement that accompany an increase in fluid injection. The model also captured the decay in pressure and displacement during the injection shut-in period. Further, the ability of the ML model to highlight key changes in fault hydromechanical activation processes was investigated, showing that ML can be used to monitor the risk of fault activation and leakage during high-pressure fluid injections.
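The four training measures translate directly into code. A Keras sketch with placeholder dimensions: (i) and (ii) are the lag and cell-size constants, (iii) is a ReduceLROnPlateau callback, and (iv) is a chronological tail split rather than a held-out injection cycle.

```python
import numpy as np
from tensorflow.keras import layers, Sequential, callbacks

LAG = 30          # (i) LSTM lag: past steps feeding each prediction
CELL_DIM = 64     # (ii) LSTM cell dimension, calibrated by validation error

def make_supervised(series, lag=LAG):
    """Lagged windows (T-lag, lag, F) -> next-step targets (T-lag, F)."""
    x = np.array([series[i:i + lag] for i in range(len(series) - lag)])
    return x, series[lag:]

def train(series):
    """series: (T, F) array, e.g. injection rate, pressure, fault displacement."""
    x, y = make_supervised(series)
    model = Sequential([
        layers.Input(shape=(LAG, series.shape[1])),
        layers.LSTM(CELL_DIM),
        layers.Dense(series.shape[1]),
    ])
    model.compile(optimizer="adam", loss="mse")
    # (iii) reduce the learning rate when the validation loss plateaus
    lr_cb = callbacks.ReduceLROnPlateau(monitor="val_loss", factor=0.5,
                                        patience=5, min_lr=1e-5)
    # (iv) validate on the chronological tail of the same data,
    # not on an independent held-out injection cycle
    model.fit(x, y, validation_split=0.1, shuffle=False,
              epochs=100, callbacks=[lr_cb], verbose=0)
    return model
```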
Joint operation optimization for electric vehicles (EVs) and on-site or adjacent photovoltaic generation (PVG) is pivotal to maintaining the security and economics of power system operation. Conventional offline optimization algorithms lack real-time applicability due to the uncertainties involved in the charging service of an EV charging station (EVCS). Firstly, an optimization model for a real-time EV charging strategy is proposed to address these challenges, accounting for the environmental uncertainties of an EVCS, encompassing EV arrivals, charging demands, PVG outputs, and the electricity price. Then, a scenario-based two-stage optimization approach is formulated, in which scenarios of the underlying uncertain environmental factors are generated by a Bayesian long short-term memory (B-LSTM) network. Finally, numerical results substantiate the efficacy of the proposed optimization approach and demonstrate superior profitability compared with prevalent approaches.
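One common way to realize a Bayesian-style LSTM scenario generator is Monte Carlo dropout, sketched below; whether the paper uses this exact construction is an assumption, and the feature layout is illustrative.

```python
import numpy as np
from tensorflow.keras import layers, Model

LAG, N_FEAT, HORIZON = 24, 4, 24   # e.g. hourly PVG, price, arrival features

inp = layers.Input(shape=(LAG, N_FEAT))
x = layers.LSTM(64)(inp)
x = layers.Dropout(0.2)(x)                     # kept active at inference time
out = layers.Dense(HORIZON)(x)                 # next-day profile of one factor
model = Model(inp, out)
model.compile(optimizer="adam", loss="mse")

def sample_scenarios(model, history, n_scenarios=100):
    """Draw scenario samples by running the network with dropout enabled,
    giving a spread of plausible futures for the two-stage optimization."""
    batch = np.repeat(history[None, ...], n_scenarios, axis=0)
    return model(batch, training=True).numpy()  # (n_scenarios, HORIZON)
```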
Hand gestures are a natural way for human-robot interaction. Vision-based dynamic hand gesture recognition has become a hot research topic due to its various applications. This paper presents a novel deep learning network for hand gesture recognition. The network integrates several well-proven modules to learn both short-term and long-term features from video inputs while avoiding intensive computation. To learn short-term features, each video input is segmented into a fixed number of frame groups. A frame is randomly selected from each group and represented as an RGB image as well as an optical flow snapshot. These two entities are fused and fed into a convolutional neural network (ConvNet) for feature extraction, with the ConvNets for all groups sharing parameters. To learn long-term features, the outputs from all ConvNets are fed into a long short-term memory (LSTM) network, which predicts the final classification result. The new model has been tested on two popular hand gesture datasets, the Jester dataset and the Nvidia dataset. Compared with other models, our model produced very competitive results. The robustness of the new model has also been proved on an augmented dataset with enhanced diversity of hand gestures.
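A minimal sketch of the group-sampling architecture: one fused RGB-plus-optical-flow frame per group passes through a shared ConvNet (via TimeDistributed), and the per-group features feed an LSTM classifier. Channel concatenation as the fusion step and all sizes are assumptions.

```python
from tensorflow.keras import layers, Model, Sequential

GROUPS, H, W = 8, 112, 112   # frame groups per video; input resolution
N_CLASSES = 27               # e.g. Jester gesture classes

# Shared per-frame ConvNet; RGB and optical flow fused as 6 input channels
# (channel concatenation is an assumption about the fusion step).
frame_net = Sequential([
    layers.Input(shape=(H, W, 6)),
    layers.Conv2D(32, 3, strides=2, activation="relu"),
    layers.Conv2D(64, 3, strides=2, activation="relu"),
    layers.GlobalAveragePooling2D(),
])

video = layers.Input(shape=(GROUPS, H, W, 6))      # one fused frame per group
feats = layers.TimeDistributed(frame_net)(video)   # shared weights across groups
x = layers.LSTM(128)(feats)                        # long-term temporal features
out = layers.Dense(N_CLASSES, activation="softmax")(x)
model = Model(video, out)
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")
```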
Continuous sign language recognition (CSLR) is challenging due to the complexity of video backgrounds, hand gesture variability, and temporal modeling difficulties. This work proposes a CSLR method based on a spatial-temporal graph attention network that focuses on the essential features of video series. The method considers the local details of sign language movements by taking joint and bone information as inputs and constructing a spatial-temporal graph to reflect inter-frame relevance and the physical connections between nodes. A graph-based multi-head attention mechanism is utilized with adjacency-matrix calculation for better local-feature exploration, and short-term motion correlation modeling is completed via a temporal convolutional network. A BLSTM is adopted to learn the long-term dependence, and connectionist temporal classification aligns the word-level sequences. The proposed method achieves competitive results in terms of the word error rate (1.59%) on the Chinese Sign Language dataset and the mean Jaccard Index (65.78%) on the ChaLearn LAP Continuous Gesture Dataset.
To meet ever-increasing traffic demand and enhance the coverage of cellular networks, network densification is one of the crucial paradigms of 5G and beyond mobile networks; it can improve system capacity by deploying a large number of Access Points (APs) in the service area. However, since the energy consumption of APs generally accounts for a substantial part of the communication system, how to deal with the consequent energy issue is a challenging task for a mobile network with densely deployed APs. In this paper, we propose an intelligent AP switching on/off scheme to reduce system energy consumption with the prerequisite of guaranteeing the quality of service, where the signaling overhead is also taken into consideration to ensure the stability of the network. First, based on historical traffic data, a long short-term memory method is introduced to predict the future traffic distribution, by which we can roughly determine when the AP switching operation should be triggered. Second, we present an efficient three-step AP selection strategy to determine which APs should be switched on or off. Third, an AP switching scheme with a threshold is proposed to adjust the switching frequency and thereby improve the stability of the system. Experimental results indicate that the proposed traffic forecasting method performs well in practical scenarios, with a normalized root mean square error within 10%. Furthermore, the achieved energy saving is more than 28% on average, with a reasonable outage probability and switching frequency, for an area served by 40 APs in a commercial mobile network.
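The forecast-triggered, threshold-limited switching decision can be sketched as below; the capacity model, margin, and threshold are invented placeholders, and the paper's three-step selection of which specific APs to toggle is omitted.

```python
import numpy as np

def aps_needed(pred_traffic, per_ap_capacity, margin=0.15):
    """Minimum number of active APs for the forecast traffic plus a QoS margin
    (the linear capacity model is an illustrative assumption)."""
    return int(np.ceil(pred_traffic * (1 + margin) / per_ap_capacity))

def plan_switching(pred_traffic, per_ap_capacity, n_active, threshold=2):
    """Trigger switching only when the required change exceeds a threshold,
    which caps the switching frequency and the associated signaling overhead.
    Returns the signed number of APs to switch on (+) or off (-)."""
    delta = aps_needed(pred_traffic, per_ap_capacity) - n_active
    return delta if abs(delta) >= threshold else 0
```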
Time series forecasting and analysis are widely used in many fields and application scenarios. Historical time series data reflect change patterns and trends, which can, to a certain extent, inform applications and decisions in each scenario. In this paper, we select the time series prediction problem in the atmospheric environment scenario for applied research. In terms of data support, we obtained data on nearly 3500 vehicles in several Chinese cities from the Runwoda Research Institute, focusing on the major pollutant emission data of non-road mobile machinery and high-emission vehicles in Beijing and in Bozhou, Anhui Province, to build the dataset and conduct time series prediction experiments. This paper proposes a P-gLSTNet model and uses the autoregressive integrated moving average (ARIMA) model, long short-term memory (LSTM), and Prophet to predict and compare the emissions in a future period. The experiments are validated on four public datasets and one self-collected dataset, with the mean absolute error (MAE), root mean square error (RMSE), and mean absolute percentage error (MAPE) selected as the evaluation metrics. The experimental results show that the proposed P-gLSTNet fusion model yields smaller prediction errors, outperforms the backbone methods, and is more suitable for the prediction of time series data in this scenario.
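The three evaluation metrics shared by all compared models are straightforward to compute; a small helper, assuming nonzero targets for MAPE:

```python
import numpy as np

def forecast_metrics(y_true, y_pred):
    """MAE, RMSE, and MAPE as used to compare the candidate models."""
    y_true, y_pred = np.asarray(y_true, float), np.asarray(y_pred, float)
    err = y_pred - y_true
    mae = np.mean(np.abs(err))
    rmse = np.sqrt(np.mean(err ** 2))
    mape = np.mean(np.abs(err / y_true)) * 100.0   # assumes no zero targets
    return {"MAE": mae, "RMSE": rmse, "MAPE": mape}
```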