To make full use of information from different representation subspaces, a multi-head attention-based long short-term memory (LSTM) model is proposed in this study for speech emotion recognition (SER). The proposed model uses frame-level features and takes the temporal information of emotional speech as the input of the LSTM layer. A multi-head time-dimension attention (MHTA) layer was employed to linearly project the output of the LSTM layer into different subspaces and obtain reduced-dimension context vectors. To provide relatively important information from other dimensions, the output of MHTA, the output of feature-dimension attention, and the last time-step output of the LSTM were combined to form multiple context vectors as the input of the fully connected layer. To improve the performance of the multiple vectors, feature-dimension attention was applied to the full time-step output of the first LSTM layer. The proposed model was evaluated on the eNTERFACE and GEMEP corpora. The results indicate that the proposed model outperforms LSTM by 14.6% and 10.5% on eNTERFACE and GEMEP, respectively, proving the effectiveness of the proposed model in SER tasks.
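As a rough, hedged sketch of the kind of architecture described above (not the authors' implementation), the following PyTorch snippet applies multi-head attention over the time dimension of an LSTM and concatenates the pooled attention context with the last time-step output before a fully connected classifier; the feature dimension, hidden size, number of heads, and six emotion classes are illustrative assumptions.

```python
import torch
import torch.nn as nn

class MultiHeadAttentionLSTM(nn.Module):
    """Sketch of an LSTM followed by multi-head attention over the time dimension."""
    def __init__(self, n_features=40, hidden=128, n_heads=4, n_classes=6):
        super().__init__()
        self.lstm = nn.LSTM(n_features, hidden, num_layers=2, batch_first=True)
        # Multi-head attention projects the LSTM outputs into several subspaces.
        self.mhta = nn.MultiheadAttention(hidden, n_heads, batch_first=True)
        self.fc = nn.Linear(hidden * 2, n_classes)

    def forward(self, x):                      # x: (batch, time, n_features)
        out, _ = self.lstm(x)                  # out: (batch, time, hidden)
        ctx, _ = self.mhta(out, out, out)      # self-attention over time steps
        ctx = ctx.mean(dim=1)                  # pooled attention context vector
        last = out[:, -1, :]                   # last time-step output of the LSTM
        return self.fc(torch.cat([ctx, last], dim=-1))

frames = torch.randn(8, 300, 40)               # 8 utterances, 300 frames, 40-dim features
logits = MultiHeadAttentionLSTM()(frames)      # (8, 6) emotion scores
```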
Short-term (up to 30 days) predictions of Earth Rotation Parameters (ERPs) such as polar motion (PM: PMX and PMY) play an essential role in real-time applications related to high-precision reference frame conversion. Currently, the least squares (LS) + auto-regressive (AR) hybrid method is one of the main techniques for PM prediction, and the weighted LS + AR hybrid method performs well for short-term PM prediction. However, the covariance information of the LS fitting residuals deserves further exploration in the AR model. In this study, we derive a modified stochastic model for the LS + AR hybrid method, namely the weighted LS + weighted AR hybrid method. Using the PM data products of IERS EOP 14 C04, the numerical results indicate that for short-term PM forecasting, the proposed weighted LS + weighted AR hybrid method shows an advantage over both the LS + AR hybrid method and the weighted LS + AR hybrid method. Compared with the mean absolute errors (MAEs) of the PMX/PMY short-term predictions of the LS + AR and weighted LS + AR hybrid methods, the weighted LS + weighted AR hybrid method achieves average improvements of 6.61%/12.08% and 0.24%/11.65%, respectively. Moreover, judging by the slopes of the linear regression lines fitted to the errors of each method, the prediction error of the proposed method grows more slowly than that of the other two methods.
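For orientation, a minimal numpy sketch of the generic LS + AR workflow that the abstract builds on is shown below: a trend-plus-harmonic model is fitted by least squares and an AR(p) model is fitted to the residuals. The annual and Chandler periods, the AR order, and the equal weighting are illustrative assumptions; this is not the proposed weighted LS + weighted AR stochastic model.

```python
import numpy as np

def ls_ar_forecast(pm, horizon=30, p=20):
    """Toy LS + AR forecast: harmonic LS fit plus an AR(p) model on the residuals."""
    t = np.arange(len(pm), dtype=float)
    periods = [365.24, 433.0]                        # annual and Chandler terms (assumed)
    cols = [np.ones_like(t), t]
    for P in periods:
        cols += [np.sin(2 * np.pi * t / P), np.cos(2 * np.pi * t / P)]
    A = np.column_stack(cols)
    coef, *_ = np.linalg.lstsq(A, pm, rcond=None)    # least squares fit of the deterministic part
    resid = pm - A @ coef

    # Fit AR(p) to the residuals by ordinary least squares on lagged values.
    X = np.column_stack([resid[p - k - 1:len(resid) - k - 1] for k in range(p)])
    ar, *_ = np.linalg.lstsq(X, resid[p:], rcond=None)

    # Extrapolate the deterministic part and recursively predict the residuals.
    t_new = np.arange(len(pm), len(pm) + horizon, dtype=float)
    cols_new = [np.ones_like(t_new), t_new]
    for P in periods:
        cols_new += [np.sin(2 * np.pi * t_new / P), np.cos(2 * np.pi * t_new / P)]
    det = np.column_stack(cols_new) @ coef
    hist = list(resid[-p:])
    preds = []
    for _ in range(horizon):
        nxt = float(np.dot(ar, hist[::-1][:p]))      # most recent residual first
        preds.append(nxt)
        hist.append(nxt)
    return det + np.array(preds)

pm_series = np.cumsum(np.random.randn(2000)) * 0.001  # stand-in for a daily PMX series
print(ls_ar_forecast(pm_series)[:5])
```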
Numerical simulation and slope stability prediction are the focus of slope disaster research. Recently, machine learning models have commonly been used for slope stability prediction. However, these models have problems such as poor nonlinear performance, convergence to local optima, and incomplete extraction of factor features, which can affect the accuracy of slope stability prediction. Therefore, a deep learning algorithm, long short-term memory (LSTM), is proposed here to predict slope stability. Taking Ganzhou City in China as the study area, the landslide inventory and the characteristics of its geotechnical parameters, slope height, and slope angle are analyzed. Based on these characteristics, typical soil slopes are constructed using the Geo-Studio software. Five control factors affecting slope stability, including slope height, slope angle, internal friction angle, cohesion, and volumetric weight, are selected to form different slopes and construct the model input variables. Then, the limit equilibrium method is used to calculate the stability coefficients of these typical soil slopes under different control factors. Each slope stability coefficient and its corresponding control factors constitute one slope sample. As a result, a total of 2160 training samples and 450 testing samples are constructed. These sample sets are imported into the LSTM for modelling and compared with the support vector machine (SVM), random forest (RF), and convolutional neural network (CNN). The results show that the LSTM overcomes the difficulty that commonly used machine learning models have in extracting global features. Furthermore, the LSTM has better prediction performance for slope stability than the SVM, RF, and CNN models.
With the advancement of artificial intelligence, traffic forecasting is attracting growing interest for optimizing route planning and enhancing service quality. Traffic volume is an influential parameter for planning and operating traffic structures. This study proposes an improved ensemble-based deep learning method to solve traffic volume prediction problems. A set of optimal hyperparameters is also applied to the suggested approach to improve the performance of the learning process. The fusion of these methodologies aims to harness ensemble empirical mode decomposition's capacity to discern complex traffic patterns and long short-term memory's proficiency in learning temporal relationships. First, a dataset for automatic vehicle identification is obtained and utilized in the preprocessing stage of the ensemble empirical mode decomposition model. The second stage involves predicting traffic volume using the long short-term memory algorithm. Next, the study employs a trial-and-error approach to select a set of optimal hyperparameters, including the lookback window, the number of neurons in the hidden layers, and the gradient descent optimizer. Finally, the fusion of the obtained results leads to a final traffic volume prediction. The experimental results show that the proposed method outperforms other benchmarks on various evaluation measures, including mean absolute error, root mean squared error, mean absolute percentage error, and R-squared. The achieved R-squared value reaches an impressive 98%, while the other evaluation indices surpass those of the competing methods. These findings highlight the accuracy of traffic pattern prediction and offer promising prospects for enhancing transportation management systems and urban infrastructure planning.
Breast cancer is a significant threat not only to women but to the entire global population. With recent advancements in digital pathology, hematoxylin and eosin images provide enhanced clarity in examining the microscopic features of breast tissues based on their staining properties. Early cancer detection speeds up the therapeutic process, thereby increasing survival rates. The analysis made by medical professionals, especially pathologists, is time-consuming and challenging, so there is a need for automated breast cancer detection systems. Emerging artificial intelligence platforms, especially deep learning models, play an important role in image diagnosis and prediction. Initially, the histopathology biopsy images are taken from standard data sources. The gathered images are then given as input to a Multi-Scale Dilated Vision Transformer, where the essential features are acquired. Subsequently, the features are passed to a Bidirectional Long Short-Term Memory (Bi-LSTM) network for classifying the breast cancer disorder. The efficacy of the model is evaluated using divergent metrics. When compared with other methods, the proposed work offers impressive detection results.
BACKGROUND: Endometrial cancer (EC) is a common gynecological malignancy that typically requires prompt surgical intervention; however, the advantage of surgical management is limited by high postoperative recurrence rates and adverse outcomes. Previous studies have highlighted the prognostic potential of circulating tumor DNA (ctDNA) monitoring for minimal residual disease in patients with EC. AIM: To develop and validate an optimized ctDNA-based model for predicting short-term postoperative EC recurrence. METHODS: We retrospectively analyzed 294 EC patients treated surgically from 2015 to 2019 to devise a short-term recurrence prediction model, which was validated on 143 EC patients operated on between 2020 and 2021. Prognostic factors were identified using univariate Cox, Lasso, and multivariate Cox regressions. A nomogram was created to predict 1-, 1.5-, and 2-year recurrence-free survival (RFS). Model performance was assessed via receiver operating characteristic (ROC), calibration, and decision curve analyses (DCA), leading to a recurrence risk stratification system. RESULTS: Based on the regression analysis and the nomogram created, patients with postoperative ctDNA negativity, postoperative cancer antigen 125 (CA125) levels of <19 U/mL, and grade G1 tumors had improved RFS after surgery. The nomogram's efficacy for recurrence prediction was confirmed through ROC analysis, calibration curves, and DCA, highlighting its high accuracy and clinical utility. Furthermore, using the nomogram, the patients were successfully classified into three risk subgroups. CONCLUSION: The nomogram accurately predicted RFS after EC surgery at 1, 1.5, and 2 years. This model will help clinicians personalize treatments, stratify risks, and enhance clinical outcomes for patients with EC.
A correct and timely fault diagnosis is important for improving the safety and reliability of chemical processes. With the advancement of big data technology, data-driven fault diagnosis methods are being used extensively and still have considerable potential. In recent years, methods based on deep neural networks have made significant breakthroughs, and fault diagnosis methods for industrial processes based on deep learning have attracted considerable research attention. Therefore, we propose a fused deep-learning algorithm based on a fully convolutional neural network (FCN) to extract features and build models that correctly diagnose all types of faults. We use long short-term memory (LSTM) units to extend the proposed FCN so that the deep learning model can better extract the time-domain features of chemical process data. We also introduce an attention mechanism into the model to highlight the importance of individual features, which is significant for the fault diagnosis of chemical processes with many features. When applied to the benchmark Tennessee Eastman process, the proposed model exhibits impressive performance, demonstrating the effectiveness of the attention-based LSTM FCN in chemical process fault diagnosis.
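As a loose illustration (not the paper's code) of an attention-based LSTM-FCN, the sketch below runs a 1-D convolutional branch and an attention-weighted LSTM branch in parallel and concatenates them for classification; the 52 process variables and 21 classes are assumptions matching common Tennessee Eastman setups.

```python
import torch
import torch.nn as nn

class AttentionLSTMFCN(nn.Module):
    """Sketch of an LSTM-FCN: a convolutional branch and an attention-weighted
    LSTM branch whose outputs are concatenated for fault classification."""
    def __init__(self, n_vars=52, n_classes=21, hidden=64):
        super().__init__()
        self.lstm = nn.LSTM(n_vars, hidden, batch_first=True)
        self.attn = nn.Linear(hidden, 1)               # scores each time step
        self.fcn = nn.Sequential(                      # convolutions over time
            nn.Conv1d(n_vars, 128, 8, padding="same"), nn.BatchNorm1d(128), nn.ReLU(),
            nn.Conv1d(128, 128, 5, padding="same"), nn.BatchNorm1d(128), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),
        )
        self.out = nn.Linear(hidden + 128, n_classes)

    def forward(self, x):                              # x: (batch, time, n_vars)
        h, _ = self.lstm(x)                            # (batch, time, hidden)
        w = torch.softmax(self.attn(h), dim=1)         # attention weights over time
        lstm_feat = (w * h).sum(dim=1)                 # weighted context vector
        conv_feat = self.fcn(x.transpose(1, 2)).squeeze(-1)
        return self.out(torch.cat([lstm_feat, conv_feat], dim=-1))

x = torch.randn(4, 100, 52)                            # 4 windows of 100 samples, 52 variables
print(AttentionLSTMFCN()(x).shape)                     # torch.Size([4, 21])
```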
There are two technical challenges in predicting slope deformation. The first is the random displacement, which cannot be decomposed and predicted by numerically resolving the observed accumulated displacement and time series of a landslide. The second is the dynamic evolution of a landslide, which cannot feasibly be simulated by traditional prediction models. In this paper, a dynamic displacement prediction model is introduced for composite landslides based on a combination of empirical mode decomposition with soft screening stop criteria (SSSC-EMD) and a deep bidirectional long short-term memory (DBi-LSTM) neural network. In the proposed model, time series analysis and SSSC-EMD are used to decompose the observed accumulated displacements of a slope into three components: trend displacement, periodic displacement, and random displacement. Then, by analyzing the evolution pattern of a landslide and the key factors triggering landslides, appropriate influencing factors are selected for each displacement component, and a DBi-LSTM neural network carries out multi-data-driven dynamic prediction for each component. The accumulated displacement prediction is obtained by summing the components. To verify the accuracy and engineering practicability of the model, field observations from two known landslides in China, the Xintan landslide and the Bazimen landslide, were collected for comparison and evaluation. The case study verified that the proposed model can better characterize the "stepwise" deformation characteristics of a slope. Compared with the long short-term memory (LSTM) neural network, support vector machine (SVM), and autoregressive integrated moving average (ARIMA) models, the DBi-LSTM neural network has higher accuracy in predicting the periodic displacement of slope deformation, with the mean absolute percentage error reduced by 3.063%, 14.913%, and 13.960%, respectively, and the root mean square error reduced by 1.951 mm, 8.954 mm, and 7.790 mm, respectively. In conclusion, the model not only has high prediction accuracy but is also more stable, which can provide new insight for practical landslide prevention and control engineering.
Wind power prediction is very important for the economic dispatch of power systems containing wind power. In this work, a novel short-term wind power prediction method based on improved complete ensemble empirical mode decomposition with adaptive noise (ICEEMDAN) and a long short-term memory (LSTM) neural network is proposed and studied. First, the original data are preprocessed, including removing outliers and filling in gaps. Then, the random forest algorithm is used to rank the importance of each meteorological factor and determine the input climate characteristics of the forecast model. In addition, this study conducts a seasonal classification of the annual data, and ICEEMDAN is adopted to divide the original wind power sequence into numerous modal components according to the different seasons. On this basis, sample entropy is used to calculate the complexity of each component and reconstruct them into trend, oscillation, and random components. These three components are then input into the LSTM neural network separately, and the overall power prediction is obtained by combining the predicted values of the three components. The simulation shows that ICEEMDAN-SE-LSTM achieves prediction accuracy 1.57% to 9.46% higher than other traditional models, which indicates the reliability and effectiveness of the proposed method for power prediction.
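Sample entropy, used above to group the ICEEMDAN modes by complexity, can be computed with a generic textbook implementation such as the sketch below; the embedding dimension m = 2 and tolerance r = 0.2·std are common defaults, not values taken from the paper.

```python
import numpy as np

def sample_entropy(x, m=2, r=None):
    """Sample entropy of a 1-D series: -ln(A/B), where B counts template matches of
    length m and A counts matches of length m+1 (self-matches excluded)."""
    x = np.asarray(x, dtype=float)
    if r is None:
        r = 0.2 * np.std(x)                      # common default tolerance

    def count_matches(dim):
        templates = np.array([x[i:i + dim] for i in range(len(x) - dim + 1)])
        # Chebyshev distance between every pair of templates.
        dist = np.max(np.abs(templates[:, None, :] - templates[None, :, :]), axis=-1)
        n = len(templates)
        return (np.sum(dist < r) - n) / 2        # exclude self-matches, count each pair once

    B = count_matches(m)
    A = count_matches(m + 1)
    return -np.log(A / B) if A > 0 and B > 0 else np.inf

rng = np.random.default_rng(0)
smooth = np.sin(np.linspace(0, 20, 500))         # low-complexity (trend-like) component
noisy = rng.standard_normal(500)                 # high-complexity (random) component
print(sample_entropy(smooth), sample_entropy(noisy))
```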
A Long Short-Term Memory (LSTM) Recurrent Neural Network (RNN) has driven tremendous improvements over acoustic models based on the Gaussian Mixture Model (GMM). However, models based on this hybrid method require a forced-aligned Hidden Markov Model (HMM) state sequence obtained from the GMM-based acoustic model, and therefore a long computation time to train both the GMM-based acoustic model and the deep learning-based acoustic model. To solve this problem, an acoustic model using the CTC algorithm is proposed. The CTC algorithm does not require the GMM-based acoustic model because it does not use a forced-aligned HMM state sequence. However, previous work on LSTM RNN-based acoustic models using CTC used small-scale training corpora. In this paper, an LSTM RNN-based acoustic model using CTC is trained on a large-scale training corpus and its performance is evaluated. The implemented acoustic model achieves a Word Error Rate (WER) of 6.18% for clean speech and 15.01% for noisy speech, which is similar to the performance of the acoustic model based on the hybrid method.
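For reference, CTC training needs only per-frame label posteriors and the unaligned label sequences; a minimal PyTorch sketch using nn.CTCLoss with a bidirectional LSTM is given below, where all sizes and the blank index are illustrative assumptions rather than the paper's configuration.

```python
import torch
import torch.nn as nn

# Bidirectional LSTM acoustic model emitting per-frame label posteriors (blank = index 0).
n_feats, n_labels, hidden = 40, 50, 256
lstm = nn.LSTM(n_feats, hidden, num_layers=3, bidirectional=True)
proj = nn.Linear(2 * hidden, n_labels)
ctc = nn.CTCLoss(blank=0, zero_infinity=True)

feats = torch.randn(200, 8, n_feats)               # (time, batch, features)
out, _ = lstm(feats)
log_probs = proj(out).log_softmax(dim=-1)          # (time, batch, labels)

targets = torch.randint(1, n_labels, (8, 30))      # label sequences (no blanks)
input_lengths = torch.full((8,), 200, dtype=torch.long)
target_lengths = torch.full((8,), 30, dtype=torch.long)

# CTC marginalizes over all alignments, so no forced HMM state alignment is needed.
loss = ctc(log_probs, targets, input_lengths, target_lengths)
loss.backward()
```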
To explore new operational wave forecasting methods, a forecasting model for wave heights at three stations in the Bohai Sea has been developed. This model is based on a long short-term memory (LSTM) neural network with sea surface wind and wave heights as training samples. The prediction performance of the model is evaluated, and the error analysis shows that, when using the same set of numerically predicted sea surface wind as input, the prediction error produced by the proposed LSTM model at Sta. N01 is 20%, 18%, and 23% lower than that of conventional numerical wave models in terms of the total root mean square error (RMSE), scatter index (SI), and mean absolute error (MAE), respectively. In particular, for significant wave heights in the range of 3–5 m, the prediction accuracy of the LSTM model improves most remarkably, with RMSE, SI, and MAE all decreasing by 24%. It is also evident that the number of hidden neurons, the number of buoys used, and the time length of the training samples all affect the prediction accuracy. However, the prediction does not necessarily improve with an increase in the number of hidden neurons or the number of buoys used. The experiment trained on the data with the longest time length performs the best overall compared with the experiments trained on shorter records. Overall, the long short-term memory neural network proved to be a very promising method for future development and application in wave forecasting.
As a large amount of data is increasingly generated from edge devices, such as smart homes, mobile phones, and wearable devices, it becomes crucial for many applications to deploy machine learning models across edge devices. The execution speed of the deployed model is a key element in ensuring service quality. Considering a highly heterogeneous edge deployment scenario, deep learning compiling is a novel approach that aims to solve this problem: it defines models using certain DSLs and generates efficient code implementations on different hardware devices. However, two aspects have not yet been thoroughly investigated. The first is the optimization of memory-intensive operations, and the second is the heterogeneity of the deployment targets. To that end, in this work we propose a system solution that optimizes memory-intensive operations, optimizes the subgraph distribution, and enables the compiling and deployment of DNN models on multiple targets. The evaluation results show the performance of our proposed system.
The problems in equipment fault detection include data dimension explosion, computational complexity, and low detection accuracy. To solve these problems, a device anomaly detection algorithm based on an enhanced long short-term memory (LSTM) network is proposed. The algorithm first reduces the dimensionality of the device sensor data by principal component analysis (PCA), extracting the strongly correlated variables among the multidimensional sensor data with the lowest possible information loss, and then uses an enhanced stacked LSTM to predict the extracted temporal data, thus improving the accuracy of anomaly detection. To improve the efficiency of the anomaly detection, a genetic algorithm (GA) is used to adjust the magnitude of the enhancements made to the LSTM model. Validation on actual data from pumps shows that the algorithm significantly improves the recall rate and the detection speed of device anomaly detection, with a recall rate of 97.07%, indicating that the algorithm is effective and efficient for device anomaly detection in a real production environment.
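A rough sketch of the PCA-then-stacked-LSTM prediction stage is given below; thresholding the prediction residual to flag anomalies is a common convention assumed here (the abstract does not specify the decision rule), and the GA tuning step is omitted.

```python
import numpy as np
import torch
import torch.nn as nn
from sklearn.decomposition import PCA

# Reduce multidimensional sensor data to a few strongly correlated components.
sensors = np.random.randn(5000, 32).astype(np.float32)     # 5000 samples, 32 sensor channels
pca = PCA(n_components=5)
z = pca.fit_transform(sensors)                              # (5000, 5)

# Stacked LSTM that predicts the next PCA vector from a window of past vectors.
class StackedLSTM(nn.Module):
    def __init__(self, dim=5, hidden=64, layers=2):
        super().__init__()
        self.lstm = nn.LSTM(dim, hidden, num_layers=layers, batch_first=True)
        self.head = nn.Linear(hidden, dim)

    def forward(self, x):                                   # x: (batch, window, dim)
        out, _ = self.lstm(x)
        return self.head(out[:, -1, :])                     # next-step prediction

window = 50
X = torch.tensor(np.stack([z[i:i + window] for i in range(len(z) - window)]),
                 dtype=torch.float32)
y = torch.tensor(z[window:], dtype=torch.float32)
model = StackedLSTM()
pred = model(X[:256])                                       # untrained forward pass for illustration

# Flag anomalies where the prediction error exceeds a threshold (assumed convention).
err = (pred - y[:256]).abs().sum(dim=1)
anomalies = err > err.mean() + 3 * err.std()
```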
A precise and timely forecast of short-term rail transit passenger flow provides data support for traffic management and operation, assisting rail operators in efficiently allocating resources and relieving pressure on passenger safety and operation in time. First, the passenger flow sequences in this study are decomposed using variational mode decomposition (VMD) for noise reduction. Objective environment features are then added to the characteristic factors that affect passenger flow, and the target station is mined concurrently as an additional spatial feature using the KNN algorithm. By setting up BP, CNN, and LSTM reference experiments, it is shown that the hybrid VMD-CLSTM model has higher prediction accuracy. The second-order prediction effects of all models are superior to their first-order effects, showing that the residual network can significantly raise model prediction accuracy. These results also confirm the efficacy of the supplementary objective environment features.
In this paper, we propose a long short-term memory (LSTM) deep learning model for the smoothed monthly sunspot number (SSN), aiming to address the problem that the predictions of existing sunspot prediction methods are not uniform and show large deviations. Our method optimizes the number of hidden nodes and the batch size of the LSTM network to 19 and 20, respectively. The best length of the time series and the value of the timesteps were then determined for network training, and one-step and multi-step predictions for Cycle 22 to Cycle 24 were made using the well-established network. The results show that the maximum root-mean-square error (RMSE) of the one-step prediction model was 6.12 and the minimum was only 2.45. The maximum amplitude prediction error of the multi-step prediction was 17.2% and the minimum was only 3.0%. Finally, the peak of the next solar cycle (Cycle 25) was predicted to occur around 2023, with a peak amplitude of about 114.3. The accuracy of this prediction method is better than that of other commonly used methods, and the method has high applicability.
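A generic sketch of the sliding-window setup behind one-step and recursive multi-step LSTM prediction of a monthly series follows; the 36-month window, the synthetic stand-in series, and the recursive multi-step strategy are assumptions for illustration, although the 19 hidden nodes match the abstract.

```python
import numpy as np
import torch
import torch.nn as nn

def make_windows(series, window):
    """Turn a 1-D series into (samples, window, 1) inputs and next-value targets."""
    X = np.stack([series[i:i + window] for i in range(len(series) - window)])
    y = series[window:]
    return (torch.tensor(X, dtype=torch.float32).unsqueeze(-1),
            torch.tensor(y, dtype=torch.float32))

class SSNLSTM(nn.Module):
    def __init__(self, hidden=19):                    # 19 hidden nodes, as in the abstract
        super().__init__()
        self.lstm = nn.LSTM(1, hidden, batch_first=True)
        self.fc = nn.Linear(hidden, 1)

    def forward(self, x):
        out, _ = self.lstm(x)
        return self.fc(out[:, -1, :]).squeeze(-1)

ssn = np.sin(np.linspace(0, 30, 1200)) * 80 + 80      # stand-in for smoothed monthly SSN
X, y = make_windows(ssn, window=36)
model = SSNLSTM()

# One-step prediction feeds the true history; multi-step feeds predictions back in.
one_step = model(X[:1])
history = X[0, :, 0].tolist()
for _ in range(12):                                   # recursive 12-month forecast
    inp = torch.tensor(history[-36:], dtype=torch.float32).view(1, 36, 1)
    history.append(model(inp).item())
```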
We study the short-term memory capacity of ancient readers of the original New Testament written in Greek, and of its translations into Latin and modern languages. To model it, we consider the number of words between any two contiguous interpunctions, I_p, because this parameter can model how the human mind memorizes "chunks" of information. Since I_p can be calculated for any alphabetical text, we can perform experiments with ancient readers (otherwise impossible) by studying the literary works they used to read. The "experiments" compare the I_p of texts in one language/translation to those in another language/translation by measuring the minimum average probability of finding joint readers (those who can read both texts because of similar short-term memory capacity) and by defining an "overlap index". We also define the population of universal readers, people who can read any New Testament text in any language. Future work is vast, with many research tracks, because alphabetical literatures are very large and allow many experiments, such as comparing authors, translations, or even texts written by artificial intelligence tools.
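As a small, hedged illustration, the word interval I_p can be computed for any text by counting words between consecutive punctuation marks; the punctuation set used below is an assumption, not the paper's definition of interpunctions.

```python
import re

def word_intervals(text, marks=".,;:?!"):
    """Number of words between consecutive punctuation marks (the I_p parameter)."""
    # Split on any of the punctuation marks, then count words in each chunk.
    chunks = re.split("[" + re.escape(marks) + "]", text)
    return [len(chunk.split()) for chunk in chunks if chunk.split()]

sample = ("In the beginning was the Word, and the Word was with God, "
          "and the Word was God.")
ip = word_intervals(sample)
print(ip, sum(ip) / len(ip))       # per-chunk word counts and their mean
```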
Recent advancements in natural language processing have given rise to numerous pre-trained language models for question-answering systems. However, with the constant evolution of algorithms, data, and computing power, the increasing size and complexity of these models have led to higher training costs and reduced efficiency. This study aims to minimize the inference time of such models while maintaining computational performance. It proposes a novel distillation model for PAL-BERT (DPAL-BERT); specifically, it employs knowledge distillation, using the PAL-BERT model as the teacher to train two student models: DPAL-BERT-Bi and DPAL-BERT-C. This research enhances the dataset through techniques such as masking, replacement, and n-gram sampling to optimize knowledge transfer. The experimental results show that the distilled models greatly outperform models trained from scratch. In addition, although the distilled models exhibit a slight decrease in performance compared with PAL-BERT, they reduce inference time to just 0.25% of the original. This demonstrates the effectiveness of the proposed approach in balancing model performance and efficiency.
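For context, the standard knowledge-distillation objective blends a temperature-scaled soft-target loss against the teacher's outputs with the usual hard-label cross-entropy. The PyTorch sketch below uses placeholder linear classifiers rather than the PAL-BERT models, and the temperature and weighting are illustrative assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels, T=2.0, alpha=0.5):
    """Blend soft-target KL loss (teacher guidance) with hard-label cross-entropy."""
    soft = F.kl_div(F.log_softmax(student_logits / T, dim=-1),
                    F.softmax(teacher_logits / T, dim=-1),
                    reduction="batchmean") * (T * T)
    hard = F.cross_entropy(student_logits, labels)
    return alpha * soft + (1 - alpha) * hard

# Placeholder teacher and student classifiers standing in for PAL-BERT and its student.
teacher = nn.Linear(768, 10)
student = nn.Linear(768, 10)
x = torch.randn(16, 768)                       # pooled sentence representations (assumed)
labels = torch.randint(0, 10, (16,))

with torch.no_grad():
    t_logits = teacher(x)                      # teacher is frozen during distillation
loss = distillation_loss(student(x), t_logits, labels)
loss.backward()
```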
In energy-dispersive X-ray fluorescence spectroscopy, the estimation of the pulse amplitude determines the accuracy of the spectrum measurement. The error introduced when estimating the amplitude of pulses distorted by the measurement system leads to false peaks in the measured spectrum. To eliminate these false peaks and achieve an accurate estimation of the distorted pulse amplitude, a composite neural network model is proposed that embeds long short-term memory (LSTM) into the UNet structure. The UNet network realizes the fusion of pulse sequence features, and the LSTM model realizes pulse amplitude estimation. The model is trained using simulated pulse datasets with different amplitudes and distortion times. For pulse height estimation, the average relative error of the trained model on the test set was approximately 0.64%, which is 27.37% lower than that of the traditional trapezoidal shaping algorithm. Offline processing of a standard iron source further validated the pulse height estimation performance of the UNet-LSTM model. After estimating the amplitude of the distorted pulses with the model, the false peak area was reduced by approximately 91% over the full spectrum and was corrected into the characteristic peak region of interest (ROI); the corrected peak area accounted for approximately 1.32% of the characteristic peak ROI area. The results indicate that the model can accurately estimate the height of distorted pulses and has a substantial corrective effect on false peaks.
The complexity of river-tide interaction poses a significant challenge in predicting discharge in tidal rivers. Long short-term memory (LSTM) networks excel at processing and predicting crucial events with extended intervals and time delays in time series data. Additionally, the sequence-to-sequence (Seq2Seq) model, known for handling temporal relationships, adapting to variable-length sequences, effectively capturing historical information, and accommodating various influencing factors, emerges as a robust and flexible tool for discharge forecasting. In this study, we introduce, for the first time, the application of LSTM-based Seq2Seq models to forecasting the discharge of a tidal reach of the Changjiang River (Yangtze River) Estuary. The study focuses on discharge forecasting using three key input characteristics: flow velocity, water level, and discharge, i.e., a multiple-input, single-output structure is adopted. The experiments used discharge data for the whole of 2020, of which the first 80% served as the training set and the last 20% as the test set. The data therefore cover different tidal cycles, which helps to test the forecasting performance of the different models across tidal cycles and runoff conditions. The experimental results indicate that the proposed models demonstrate advantages in long-term, mid-term, and short-term discharge forecasting. The Seq2Seq models improved the relative standard deviation of discharge prediction by 6%-60% and 5%-20% compared with harmonic analysis models and improved back-propagation neural network models, respectively. In addition, the relative accuracy of the Seq2Seq model is 1% to 3% higher than that of the LSTM model. Analysis of the prediction errors shows that the Seq2Seq models are insensitive to the forecast lead time and can capture characteristic values such as the maximum flood-tide flow and the maximum ebb-tide flow in the tidal cycle well. This indicates the significance of the Seq2Seq models.
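A minimal sketch of an LSTM-based Seq2Seq forecaster with multiple inputs (velocity, water level, discharge) and a single output (future discharge) is shown below; the hidden size, forecast horizon, and recursive decoding scheme are illustrative assumptions rather than the authors' configuration.

```python
import torch
import torch.nn as nn

class Seq2SeqForecaster(nn.Module):
    """Sketch of an LSTM encoder-decoder: multiple inputs in, a single output out."""
    def __init__(self, n_inputs=3, hidden=64, horizon=24):
        super().__init__()
        self.horizon = horizon
        self.encoder = nn.LSTM(n_inputs, hidden, batch_first=True)
        self.decoder = nn.LSTM(1, hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)

    def forward(self, history):                       # history: (batch, time, n_inputs)
        _, state = self.encoder(history)              # encode past velocity/level/discharge
        step = history[:, -1:, -1:]                   # seed decoder with last observed discharge
        outputs = []
        for _ in range(self.horizon):                 # recursive decoding, one step at a time
            out, state = self.decoder(step, state)
            step = self.head(out)                     # predicted discharge for this step
            outputs.append(step)
        return torch.cat(outputs, dim=1).squeeze(-1)  # (batch, horizon)

obs = torch.randn(4, 96, 3)                           # 4 sequences, 96 past steps, 3 variables
print(Seq2SeqForecaster()(obs).shape)                 # torch.Size([4, 24])
```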
The classification of infrasound events is of considerable importance in improving the capability to identify the types of natural disasters. Traditional infrasound classification mainly relies on machine learning algorithms applied after manual feature extraction, but it is difficult to guarantee the effectiveness of the extracted features. The current trend is to use a convolutional neural network to extract features automatically for classification. This method can extract the spatial features of a signal automatically through convolution kernels; however, infrasound signals, as time series, contain not only spatial information but also temporal information, and these temporal features are also crucial. If only a convolutional neural network is used, the time dependence of the infrasound sequence is missed. Using long short-term memory networks can compensate for the missing time-series features but induces a loss of spatial feature information in the infrasound signal. A multiscale squeeze excitation–convolutional neural network–bidirectional long short-term memory infrasound event classification fusion model is proposed in this study to address these problems. The model automatically extracts temporal and spatial features, adaptively selects features, and realizes the fusion of the two types of features. Experimental results show that the classification accuracy of the model is more than 98%, verifying the effectiveness and superiority of the proposed model.
Funding (multi-head attention LSTM for speech emotion recognition): the National Natural Science Foundation of China (No. 61571106, 61633013, 61673108, 81871444).
Funding (polar motion short-term prediction): supported by the National Natural Science Foundation of China (No. 42004016), the Hubei Natural Science Fund (No. 2020CFB329), the Hunan Natural Science Fund (No. 2023JJ60559, 2023JJ60560), and the State Key Laboratory of Geodesy and Earth's Dynamics self-deployment project (No. S21L6101).
Funding (slope stability prediction with LSTM): funded by the National Natural Science Foundation of China (No. 41807285).
Funding (breast cancer detection from histopathology images): the Deanship of Research and Graduate Studies at King Khalid University, Small Group Research Project, Grant Number RGP1/261/45.
Funding (short-term wind power prediction): supported by the Science and Technology Project of State Grid Shandong Electric Power Company (52062622000R, Research on Aggregation and Regulation Technology of Regional Integrated Energy System).
Funding (LSTM RNN acoustic model with CTC): supported by the Ministry of Trade, Industry & Energy (MOTIE, Korea) under the Industrial Technology Innovation Program (No. 10063424, "Development of distant speech recognition and multi-task dialog processing technologies for in-door conversational robots").
Funding (wave height forecasting in the Bohai Sea): the National Key R&D Program of China under contract No. 2016YFC1402103.
Funding (deep learning compiling for edge deployment): supported by the National Natural Science Foundation of China (U21A20519).
Funding (device anomaly detection with enhanced LSTM): the National Key R&D Program of China (No. 2020YFB1707700).
Funding (short-term rail transit passenger flow forecasting): the Major Projects of the National Social Science Fund of China (21&ZD127).
Funding (sunspot number prediction with LSTM): the National Natural Science Foundation of China (Grant No. U1531128).
Funding (DPAL-BERT distillation): supported by the Sichuan Science and Technology Program (2023YFSY0026, 2023YFH0004).
Funding (UNet-LSTM pulse amplitude estimation): supported by the Open Project of the Guangxi Key Laboratory of Nuclear Physics and Nuclear Technology (No. NLK2022-05), the Central Government Guidance Funds for Local Scientific and Technological Development, China (No. Guike ZY22096024), the Sichuan Natural Science Youth Fund Project (No. 2023NSFSC1366), Key R&D Projects of the Sichuan Provincial Department of Science and Technology (No. 2023YFG0287), the Open Research Fund of the National Engineering Research Center for Agro-Ecological Big Data Analysis & Application, Anhui University (No. AE202209), the National Natural Science Youth Foundation of China (No. 12305214), the Vanadium and Titanium Resource Comprehensive Utilization Key Laboratory of Sichuan Province (No. 2023FTSZ03), and the Key Laboratory of Interior Layout Optimization and Security, Institutions of Higher Education of Sichuan Province (No. 2023SNKJ-01).
Funding (Seq2Seq discharge forecasting): the National Natural Science Foundation of China under contract Nos 42266006 and 41806114, and the Jiangxi Provincial Natural Science Foundation under contract Nos 20232BAB204089 and 20202ACBL214019.