Funding: National Key Research and Development Program of China (Grant No. 2022YFE0102700); National Natural Science Foundation of China (Grant No. 52102420); research project "Safe Da Batt" (03EMF0409A) funded by the German Federal Ministry of Digital and Transport (BMDV); China Postdoctoral Science Foundation (Grant No. 2023T160085); Sichuan Science and Technology Program (Grant No. 2024NSFSC0938).
Abstract: A fast-charging policy is widely employed to alleviate the inconvenience caused by the extended charging time of electric vehicles. However, fast charging exacerbates battery degradation and shortens battery lifespan. In addition, there is still a lack of tailored health estimations for fast-charging batteries; most existing methods are applicable only at lower charging rates. This paper proposes a novel method for estimating the health of lithium-ion batteries, tailored for multi-stage constant current-constant voltage fast-charging policies. Initially, short charging segments are extracted by monitoring current switches, followed by deriving voltage sequences using interpolation techniques. Subsequently, a graph generation layer is used to transform the voltage sequence into graphical data. Furthermore, the integration of a graph convolution network with a long short-term memory network enables the extraction of information related to inter-node message transmission, capturing the key local and temporal features during the battery degradation process. Finally, the method is validated using aging data from 185 cells and 81 distinct fast-charging policies. The 4-minute charging duration achieves a balance between high accuracy in estimating battery state of health and low data requirements, with mean absolute errors and root mean square errors of 0.34% and 0.66%, respectively.
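The abstract does not specify how the voltage sequence is turned into a graph, so the chain-graph adjacency, layer sizes, and regression head in the sketch below are assumptions; it only illustrates the general graph-convolution-plus-LSTM structure described above, not the authors' implementation.

```python
# Minimal sketch (not the authors' model): interpolated voltage points from a short
# charging segment become graph nodes; a simple graph convolution mixes neighbouring
# nodes, and an LSTM reads the node features in time order to regress state of health.
import torch
import torch.nn as nn

def chain_adjacency(n: int) -> torch.Tensor:
    """Chain graph (each voltage point linked to its neighbours), self-loops,
    symmetric normalization D^-1/2 (A + I) D^-1/2. An assumed graph construction."""
    a = torch.eye(n)
    idx = torch.arange(n - 1)
    a[idx, idx + 1] = 1.0
    a[idx + 1, idx] = 1.0
    d_inv_sqrt = torch.diag(a.sum(dim=1).pow(-0.5))
    return d_inv_sqrt @ a @ d_inv_sqrt

class GraphConv(nn.Module):
    def __init__(self, in_dim, out_dim):
        super().__init__()
        self.lin = nn.Linear(in_dim, out_dim)

    def forward(self, x, a_hat):                 # x: (batch, nodes, in_dim)
        return torch.relu(self.lin(a_hat @ x))

class GCNLSTMSoH(nn.Module):
    def __init__(self, nodes=30, hidden=32):
        super().__init__()
        self.register_buffer("a_hat", chain_adjacency(nodes))
        self.gc = GraphConv(1, hidden)
        self.lstm = nn.LSTM(hidden, hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)

    def forward(self, v_seq):                    # v_seq: (batch, nodes) interpolated voltages
        x = self.gc(v_seq.unsqueeze(-1), self.a_hat)   # local (inter-node) features
        _, (h, _) = self.lstm(x)                       # temporal features
        return self.head(h[-1]).squeeze(-1)            # predicted state of health

model = GCNLSTMSoH()
soh = model(torch.rand(8, 30))                   # 8 segments, 30 voltage points each
```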
Abstract: With the advancement of artificial intelligence, traffic forecasting is gaining more and more interest for optimizing route planning and enhancing service quality. Traffic volume is an influential parameter for planning and operating traffic structures. This study proposes an improved ensemble-based deep learning method to solve traffic volume prediction problems. A set of optimal hyperparameters is also applied in the suggested approach to improve the performance of the learning process. The fusion of these methodologies aims to harness ensemble empirical mode decomposition's capacity to discern complex traffic patterns and long short-term memory's proficiency in learning temporal relationships. First, a dataset for automatic vehicle identification is obtained and utilized in the preprocessing stage of the ensemble empirical mode decomposition model. The second aspect involves predicting traffic volume using the long short-term memory algorithm. Next, the study employs a trial-and-error approach to select a set of optimal hyperparameters, including the lookback window, the number of neurons in the hidden layers, and the gradient descent optimizer. Finally, the fusion of the obtained results leads to a final traffic volume prediction. The experimental results show that the proposed method outperforms other benchmarks across various evaluation measures, including mean absolute error, root mean squared error, mean absolute percentage error, and R-squared. The achieved R-squared value reaches an impressive 98%, while the other evaluation indices surpass those of the competing methods. These findings highlight the accuracy of traffic pattern prediction and, consequently, offer promising prospects for enhancing transportation management systems and urban infrastructure planning.
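As a rough, hedged illustration of the decompose-then-forecast idea in the abstract above: the volume series is split into intrinsic mode functions, a small LSTM is fitted to each component over sliding windows, and the component forecasts are summed. The PyEMD package, the lookback of 12, and the network sizes are assumptions, not details taken from the paper.

```python
# Sketch only: EEMD + LSTM traffic-volume forecasting.
import numpy as np
import torch
import torch.nn as nn
from PyEMD import EEMD   # assumed decomposition backend (pip install EMD-signal)

def windows(series, lookback=12):
    x = np.stack([series[i:i + lookback] for i in range(len(series) - lookback)])
    y = series[lookback:]
    return (torch.tensor(x, dtype=torch.float32).unsqueeze(-1),
            torch.tensor(y, dtype=torch.float32))

class SmallLSTM(nn.Module):
    def __init__(self, hidden=16):
        super().__init__()
        self.lstm = nn.LSTM(1, hidden, batch_first=True)
        self.out = nn.Linear(hidden, 1)

    def forward(self, x):
        _, (h, _) = self.lstm(x)
        return self.out(h[-1]).squeeze(-1)

def eemd_lstm_forecast(volume, lookback=12, epochs=50):
    imfs = EEMD().eemd(volume)                 # (n_imfs, len(volume))
    next_step = 0.0
    for imf in imfs:                           # one small forecaster per component
        x, y = windows(imf, lookback)
        model = SmallLSTM()
        opt = torch.optim.Adam(model.parameters(), lr=1e-2)
        for _ in range(epochs):
            opt.zero_grad()
            nn.functional.mse_loss(model(x), y).backward()
            opt.step()
        last = torch.tensor(imf[-lookback:], dtype=torch.float32).view(1, lookback, 1)
        next_step += model(last).item()        # sum of component forecasts
    return next_step

volume = np.sin(np.linspace(0, 20, 300)) * 50 + 100 + np.random.randn(300) * 3
print(eemd_lstm_forecast(volume))              # placeholder data, illustrative output
```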
Funding: Deanship of Research and Graduate Studies at King Khalid University, Small Group Research Project under Grant Number RGP1/261/45.
Abstract: Breast cancer is a significant threat to the global population, affecting not only women but the population at large. With recent advancements in digital pathology, eosin and hematoxylin images provide enhanced clarity for examining the microscopic features of breast tissues based on their staining properties. Early cancer detection accelerates the therapeutic process, thereby increasing survival rates. The analysis made by medical professionals, especially pathologists, is time-consuming and challenging, so there is a need for automated breast cancer detection systems. Emerging artificial intelligence platforms, especially deep learning models, play an important role in image diagnosis and prediction. Initially, the histopathology biopsy images are taken from standard data sources. The gathered images are then given as input to a Multi-Scale Dilated Vision Transformer, where the essential features are acquired. Subsequently, the features are passed to a Bidirectional Long Short-Term Memory (Bi-LSTM) network for classifying the breast cancer disorder. The efficacy of the model is evaluated using diverse metrics. When compared with other methods, the proposed work offers impressive detection results.
Funding: Funded by the National Natural Science Foundation of China (41807285).
Abstract: Numerical simulation and slope stability prediction are the focus of slope disaster research. Recently, machine learning models have commonly been used for slope stability prediction. However, these models have some problems, such as poor nonlinear performance, convergence to local optima, and incomplete extraction of the controlling factors' features, which can affect the accuracy of slope stability prediction. Therefore, a deep learning algorithm, long short-term memory (LSTM), is proposed to predict slope stability. Taking Ganzhou City in China as the study area, the landslide inventory and the characteristics of its geotechnical parameters, slope height, and slope angle are analyzed. Based on these characteristics, typical soil slopes are constructed using the Geo-Studio software. Five control factors affecting slope stability, including slope height, slope angle, internal friction angle, cohesion, and volumetric weight, are selected to form different slopes and construct the model input variables. Then, the limit equilibrium method is used to calculate the stability coefficients of these typical soil slopes under different control factors. Each slope stability coefficient and its corresponding control factors constitute a slope sample. As a result, a total of 2160 training samples and 450 testing samples are constructed. These sample sets are imported into the LSTM for modelling and compared with the support vector machine (SVM), random forest (RF), and convolutional neural network (CNN). The results show that the LSTM overcomes the difficulty that commonly used machine learning models have in extracting global features. Furthermore, the LSTM has better prediction performance for slope stability than the SVM, RF, and CNN models.
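A minimal sketch of the supervised setup described above, assuming each sample is presented to the LSTM as a length-one sequence of the five control factors and the target is the limit-equilibrium stability coefficient; the hidden size and training settings are illustrative assumptions rather than the paper's configuration.

```python
# Sketch: regress the stability coefficient from five control factors with an LSTM.
import torch
import torch.nn as nn

class SlopeLSTM(nn.Module):
    def __init__(self, n_factors=5, hidden=32):
        super().__init__()
        self.lstm = nn.LSTM(n_factors, hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)

    def forward(self, x):                        # x: (batch, 1, 5)
        _, (h, _) = self.lstm(x)
        return self.head(h[-1]).squeeze(-1)      # predicted stability coefficient

# 2160 training samples as in the abstract:
# [slope height, slope angle, internal friction angle, cohesion, volumetric weight]
x_train = torch.rand(2160, 1, 5)                 # placeholder factor values
y_train = torch.rand(2160)                       # placeholder limit-equilibrium coefficients

model = SlopeLSTM()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
for _ in range(100):
    opt.zero_grad()
    nn.functional.mse_loss(model(x_train), y_train).backward()
    opt.step()
```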
Abstract: A correct and timely fault diagnosis is important for improving the safety and reliability of chemical processes. With the advancement of big data technology, data-driven fault diagnosis methods are being extensively used and still have considerable potential. In recent years, methods based on deep neural networks have made significant breakthroughs, and fault diagnosis methods for industrial processes based on deep learning have attracted considerable research attention. Therefore, we propose a fusion deep-learning algorithm based on a fully convolutional neural network (FCN) to extract features and build models that correctly diagnose all types of faults. We use long short-term memory (LSTM) units to extend the proposed FCN so that the deep learning model can better extract the time-domain features of chemical process data. We also introduce an attention mechanism into the model, aimed at highlighting the importance of features, which is significant for the fault diagnosis of chemical processes with many features. When applied to the benchmark Tennessee Eastman process, the proposed model exhibits impressive performance, demonstrating the effectiveness of the attention-based LSTM-FCN in chemical process fault diagnosis.
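One common way to combine the two components, as in the well-known LSTM-FCN family of models, is to run a 1-D convolutional branch and an LSTM branch in parallel and concatenate them before the classifier. The sketch below follows that pattern with an assumed simple additive attention over time steps and assumed filter counts; it is an illustration of the structure, not a reproduction of the authors' model.

```python
# Sketch: attention LSTM-FCN classifier for multivariate process data
# (e.g., Tennessee Eastman: 52 variables, ~20 fault classes plus normal operation).
import torch
import torch.nn as nn

class AttentionLSTMFCN(nn.Module):
    def __init__(self, n_vars=52, n_classes=21, hidden=64):
        super().__init__()
        self.fcn = nn.Sequential(                          # convolutional branch
            nn.Conv1d(n_vars, 128, 8, padding="same"), nn.BatchNorm1d(128), nn.ReLU(),
            nn.Conv1d(128, 256, 5, padding="same"), nn.BatchNorm1d(256), nn.ReLU(),
            nn.Conv1d(256, 128, 3, padding="same"), nn.BatchNorm1d(128), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),
        )
        self.lstm = nn.LSTM(n_vars, hidden, batch_first=True)   # temporal branch
        self.attn = nn.Linear(hidden, 1)                         # scores each time step
        self.head = nn.Linear(128 + hidden, n_classes)

    def forward(self, x):                      # x: (batch, time, variables)
        conv = self.fcn(x.transpose(1, 2)).squeeze(-1)           # (batch, 128)
        seq, _ = self.lstm(x)                                    # (batch, time, hidden)
        w = torch.softmax(self.attn(seq), dim=1)                 # attention weights
        ctx = (w * seq).sum(dim=1)                               # weighted temporal summary
        return self.head(torch.cat([conv, ctx], dim=1))          # fault-class logits

logits = AttentionLSTMFCN()(torch.rand(4, 100, 52))              # 4 windows of 100 samples
```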
Abstract: There are two technical challenges in predicting slope deformation. The first is the random displacement, which cannot be decomposed and predicted by numerically resolving the observed accumulated displacement and time series of a landslide. The second is the dynamic evolution of a landslide, which cannot be feasibly simulated by traditional prediction models. In this paper, a dynamic displacement prediction model is introduced for composite landslides based on a combination of empirical mode decomposition with soft screening stop criteria (SSSC-EMD) and a deep bidirectional long short-term memory (DBi-LSTM) neural network. In the proposed model, time series analysis and SSSC-EMD are used to decompose the observed accumulated displacements of a slope into three components, viz. trend displacement, periodic displacement, and random displacement. Then, by analyzing the evolution pattern of a landslide and the key factors triggering landslides, appropriate influencing factors are selected for each displacement component, and the DBi-LSTM neural network carries out multi-data-driven dynamic prediction for each displacement component. The accumulated displacement prediction is obtained by summing the components. To verify the accuracy and engineering practicability of the model, field observations from two known landslides in China, the Xintan landslide and the Bazimen landslide, were collected for comparison and evaluation. The case study verified that the proposed model can better characterize the "stepwise" deformation characteristics of a slope. Compared with the long short-term memory (LSTM) neural network, support vector machine (SVM), and autoregressive integrated moving average (ARIMA) model, the DBi-LSTM neural network has higher accuracy in predicting the periodic displacement of slope deformation, with the mean absolute percentage error reduced by 3.063%, 14.913%, and 13.960%, respectively, and the root mean square error reduced by 1.951 mm, 8.954 mm, and 7.790 mm, respectively. Conclusively, this model not only has high prediction accuracy but is also more stable, which can provide new insight for practical landslide prevention and control engineering.
Funding: National Key R&D Program of China (No. 2020YFB1707700).
Abstract: The problems in equipment fault detection include data dimension explosion, computational complexity, low detection accuracy, etc. To solve these problems, a device anomaly detection algorithm based on enhanced long short-term memory (LSTM) is proposed. The algorithm first reduces the dimensionality of the device sensor data by principal component analysis (PCA), extracting the strongly correlated variable data among the multidimensional sensor data with the lowest possible information loss, and then uses the enhanced stacked LSTM to predict the extracted temporal data, thus improving the accuracy of anomaly detection. To improve the efficiency of the anomaly detection, a genetic algorithm (GA) is used to adjust the magnitude of the enhancements made by the LSTM model. Validation on actual data from pumps shows that the algorithm significantly improves the recall rate and detection speed of device anomaly detection, with a recall rate of 97.07%, which indicates that the algorithm is effective and efficient for device anomaly detection in actual production environments.
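A compact, hedged sketch of the PCA-then-stacked-LSTM idea: PCA reduces the multivariate sensor stream, a two-layer LSTM predicts the next reduced vector, and large prediction residuals are flagged as anomalies. The number of components, window length, and percentile threshold are assumptions, and the GA tuning step from the abstract is omitted.

```python
# Sketch: PCA dimensionality reduction + stacked LSTM one-step prediction;
# windows whose prediction error exceeds a percentile threshold are flagged.
import numpy as np
import torch
import torch.nn as nn
from sklearn.decomposition import PCA

def make_windows(z, lookback=20):
    x = np.stack([z[i:i + lookback] for i in range(len(z) - lookback)])
    y = z[lookback:]
    return torch.tensor(x, dtype=torch.float32), torch.tensor(y, dtype=torch.float32)

class StackedLSTM(nn.Module):
    def __init__(self, dim, hidden=32):
        super().__init__()
        self.lstm = nn.LSTM(dim, hidden, num_layers=2, batch_first=True)  # "stacked"
        self.out = nn.Linear(hidden, dim)

    def forward(self, x):
        _, (h, _) = self.lstm(x)
        return self.out(h[-1])

sensors = np.random.randn(2000, 24)                   # placeholder pump sensor data
z = PCA(n_components=4).fit_transform(sensors)        # keep strongly correlated structure
x, y = make_windows(z)

model = StackedLSTM(dim=4)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
for _ in range(30):
    opt.zero_grad()
    nn.functional.mse_loss(model(x), y).backward()
    opt.step()

err = ((model(x) - y) ** 2).sum(dim=1).detach().numpy()
anomalies = np.where(err > np.percentile(err, 99))[0]  # indices of flagged windows
```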
Abstract: We study the short-term memory capacity of ancient readers of the original New Testament written in Greek and of its translations into Latin and modern languages. To model it, we consider the number of words between any two contiguous interpunctions, I_p, because this parameter can model how the human mind memorizes "chunks" of information. Since I_p can be calculated for any alphabetical text, we can perform experiments, otherwise impossible, with ancient readers by studying the literary works they used to read. The "experiments" compare the I_p of texts of one language/translation to those of another language/translation by measuring the minimum average probability of finding joint readers (those who can read both texts because of similar short-term memory capacity) and by defining an "overlap index". We also define the population of universal readers, people who can read any New Testament text in any language. Future work is vast, with many research tracks, because alphabetical literatures are very large and allow many experiments, such as comparing authors, translations, or even texts written by artificial intelligence tools.
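The key quantity here, I_p (the number of words between two contiguous interpunctions), is straightforward to compute for any alphabetical text. The sketch below shows one plausible way to do it; which punctuation marks count as interpunctions is an assumption of this illustration, not a rule stated in the abstract.

```python
# Sketch: word counts between consecutive punctuation marks, and their mean (I_p).
import re

INTERPUNCTIONS = ".,;:?!"   # assumed set of interpunction marks

def word_intervals(text: str):
    """Number of words between consecutive punctuation marks."""
    chunks = re.split(r"[" + re.escape(INTERPUNCTIONS) + r"]", text)
    return [len(c.split()) for c in chunks if c.strip()]

def mean_ip(text: str) -> float:
    counts = word_intervals(text)
    return sum(counts) / len(counts) if counts else 0.0

sample = ("In the beginning was the Word, and the Word was with God, "
          "and the Word was God.")
print(word_intervals(sample), mean_ip(sample))   # e.g. [6, 6, 5] and about 5.67
```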
Funding: The Project of the Central Government Guiding Local Science and Technology under Grant Number ZYYD2022B18; the Institutional Ethics Committee of Affiliated Cancer Hospital of Xinjiang Medical University (No. K-2019001).
Abstract: Background: This study was designed to investigate the feasibility of using tumor-infiltrating immune cells with different phenotypic characteristics to predict short-term clinical responses in patients with locally advanced cervical cancer (LACC). Methods: Thirty-four patients who received concurrent chemoradiotherapy and twenty-one patients who underwent radiotherapy alone were enrolled in this study. We retrospectively analyzed the T cell markers (i.e., CD3, CD4, CD8), memory markers (i.e., CD45, CCR7), and differentiation markers (i.e., CD27) in the peripheral blood and tumor tissues of patients with LACC before treatment based on flow cytometry. We also analyzed the relationship of T cell subsets between peripheral blood and tumor tissues, and their correlation with complete response or partial response. Results: The percentage of central memory CD8^(+) TCM (CD8^(+)CD45RA^(−)CD27^(+)CCR7^(+)) cells in LACC patients was significantly lower than that of the control group. The percentage of CD8^(+) TN in the peripheral blood of LACC patients was significantly higher than that in tumor tissues, while CD8^(+) TEM in the peripheral blood was significantly lower than that in tumor tissues. The percentage of CD8^(+) TN and CD8^(+) TCM in human papillomavirus (HPV)-positive samples was significantly higher than that in HPV-negative samples. Similarly, the percentage of CD8^(+) TCM in tumor tissues was significantly higher in cancer tissue samples with lymph node involvement than in those without. Conclusion: A higher proportion of CD4^(+) TCM and a lower proportion of CD8^(+) TN in the tumor microenvironment of LACC may contribute to therapy response prediction.
Abstract: Mental workload plays a vital role in cognitive impairment. The impairment refers to a person's difficulty in remembering, receiving new information, learning new things, concentrating, or making decisions that seriously affect everyday life. In this paper, a simultaneous capacity (SIMKAP) experiment-based EEG workload analysis is presented using 45 subjects for multitasking mental workload estimation, with subject-wise attention loss calculation as well as short-term memory loss measurement. Using an open-access preprocessed EEG dataset, the discrete wavelet transform (DWT) was utilized for feature extraction, and the minimum redundancy maximum relevance (MRMR) technique was used to select the most relevant features. The wavelet decomposition technique was also used to decompose the EEG signals into five sub-bands. Fourteen statistical features were calculated from each sub-band signal to form a 5 × 14 feature window. The Neural Network (Narrow) classification algorithm was used to classify the dataset into low and high workload conditions, and a comparison was made with several other machine learning models. The results show a classifier accuracy of 86.7%, precision of 84.4%, F1 score of 86.33%, and recall of 88.37%, which surpasses the state-of-the-art methodologies in the literature. This prediction is expected to greatly facilitate improved assessment of memory and attention loss impairments.
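The feature-extraction step described above (wavelet decomposition into five sub-bands, then statistics per sub-band) can be sketched with PyWavelets. The mother wavelet, decomposition level, and the particular statistics below are assumptions, and only seven of the paper's fourteen features are shown.

```python
# Sketch: 4-level DWT of an EEG channel gives 5 sub-band coefficient sets;
# simple per-sub-band statistics build the feature vector.
import numpy as np
import pywt

def subband_features(eeg_channel, wavelet="db4", level=4):
    coeffs = pywt.wavedec(eeg_channel, wavelet, level=level)   # [cA4, cD4, cD3, cD2, cD1]
    feats = []
    for c in coeffs:                                           # 5 sub-bands
        feats.extend([
            np.mean(c), np.std(c), np.var(c),
            np.min(c), np.max(c),
            np.mean(np.abs(c)),                                # mean absolute value
            np.sum(c ** 2),                                    # sub-band energy
        ])
    return np.array(feats)                                     # here 5 x 7 = 35 values

eeg = np.random.randn(1280)            # placeholder: 10 s of one channel at 128 Hz
print(subband_features(eeg).shape)     # (35,)
```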
Funding: Supported by grants from the Ministerio de Economia y Competitividad (BFU2013-43458-R) and the Junta de Andalucia (P12-CTS-1694 and Proyexcel-00422) to ZUK.
Abstract: Memory deficit, which is often associated with aging and many psychiatric, neurological, and neurodegenerative diseases, has been a challenging issue for treatment. Up till now, all potential drug candidates have failed to produce satisfactory effects. Therefore, in the search for a solution, we found that a treatment with the gene corresponding to the RGS14414 protein in visual area V2, a brain area connected with brain circuits of the ventral stream and the medial temporal lobe, which is crucial for object recognition memory (ORM), can induce enhancement of ORM. In this study, we demonstrated that the same treatment with RGS14414 in visual area V2, which is relatively unaffected in neurodegenerative diseases such as Alzheimer's disease, produced long-lasting enhancement of ORM in young animals and prevented ORM deficits in rodent models of aging and Alzheimer's disease. Furthermore, we found that the prevention of memory deficits was mediated through the upregulation of neuronal arborization and spine density, as well as an increase in brain-derived neurotrophic factor (BDNF). A knockdown of the BDNF gene in RGS14414-treated aging rats and Alzheimer's disease model mice caused a complete loss of the upregulation of neuronal structural plasticity and of the prevention of ORM deficits. These findings suggest that BDNF-mediated neuronal structural plasticity in area V2 is crucial to the prevention of memory deficits in RGS14414-treated rodent models of aging and Alzheimer's disease. Therefore, our findings of RGS14414 gene-mediated activation of neuronal circuits in visual area V2 have therapeutic relevance for the treatment of memory deficits.
Funding: Supported by the Ministry of Trade, Industry & Energy (MOTIE, Korea) under the Industrial Technology Innovation Program (No. 10063424, 'Development of distant speech recognition and multi-task dialog processing technologies for in-door conversational robots').
Abstract: A Long Short-Term Memory (LSTM) Recurrent Neural Network (RNN) has driven tremendous improvements over acoustic models based on the Gaussian Mixture Model (GMM). However, models based on this hybrid method require a forced-aligned Hidden Markov Model (HMM) state sequence obtained from the GMM-based acoustic model, and therefore a long computation time for training both the GMM-based acoustic model and the deep learning-based acoustic model. To solve this problem, an acoustic model using the connectionist temporal classification (CTC) algorithm is proposed. The CTC algorithm does not require the GMM-based acoustic model because it does not use the forced-aligned HMM state sequence. However, previous work on LSTM RNN-based acoustic models using CTC used small-scale training corpora. In this paper, the LSTM RNN-based acoustic model using CTC is trained on a large-scale training corpus and its performance is evaluated. The implemented acoustic model achieves Word Error Rates (WER) of 6.18% and 15.01% for clean speech and noisy speech, respectively, which is similar to the performance of the acoustic model based on the hybrid method.
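The training objective that removes the need for forced HMM alignments is the CTC loss. The PyTorch sketch below shows an LSTM emitting per-frame label posteriors trained with nn.CTCLoss; the feature dimension, label inventory, and sequence lengths are placeholder assumptions, not the paper's large-scale setup.

```python
# Sketch: LSTM acoustic model trained with CTC, so no forced HMM state alignment is needed.
import torch
import torch.nn as nn

n_feats, n_labels = 40, 30            # 40-dim features, 29 output units + blank (assumed)

class CTCAcousticModel(nn.Module):
    def __init__(self, hidden=256):
        super().__init__()
        self.lstm = nn.LSTM(n_feats, hidden, num_layers=3, batch_first=True)
        self.proj = nn.Linear(hidden, n_labels)

    def forward(self, x):                              # x: (batch, frames, feats)
        seq, _ = self.lstm(x)
        return self.proj(seq).log_softmax(dim=-1)      # per-frame log posteriors

model = CTCAcousticModel()
ctc = nn.CTCLoss(blank=0)

feats = torch.rand(4, 200, n_feats)                    # 4 utterances, 200 frames each
targets = torch.randint(1, n_labels, (4, 25))          # label indices (blank=0 excluded)
in_lens = torch.full((4,), 200, dtype=torch.long)
tgt_lens = torch.full((4,), 25, dtype=torch.long)

log_probs = model(feats).transpose(0, 1)               # CTCLoss expects (frames, batch, labels)
loss = ctc(log_probs, targets, in_lens, tgt_lens)
loss.backward()
```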
Funding: The National Key R&D Program of China under contract No. 2016YFC1402103.
Abstract: To explore new operational wave forecasting methods, a forecasting model for wave heights at three stations in the Bohai Sea has been developed. The model is based on a long short-term memory (LSTM) neural network with sea surface wind and wave heights as training samples. The prediction performance of the model is evaluated, and the error analysis shows that, when using the same set of numerically predicted sea surface wind as input, the prediction error produced by the proposed LSTM model at Sta. N01 is 20%, 18%, and 23% lower than that of the conventional numerical wave models in terms of the total root mean square error (RMSE), scatter index (SI), and mean absolute error (MAE), respectively. In particular, for significant wave heights in the range of 3–5 m, the prediction accuracy of the LSTM model improves the most remarkably, with RMSE, SI, and MAE all decreasing by 24%. It is also evident that the number of hidden neurons, the number of buoys used, and the time length of the training samples all affect the prediction accuracy. However, the prediction does not necessarily improve with an increase in the number of hidden neurons or the number of buoys used. The experiment trained on data with the longest time length is found to perform best overall compared with experiments trained on shorter time lengths. Overall, the long short-term memory neural network proved to be a very promising method for future development and application in wave forecasting.
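The three error measures used to compare the LSTM with the numerical wave models are standard and easy to reproduce; a small helper is sketched below, where the scatter index is taken as the RMSE normalised by the mean observation. That is the usual definition, but it is an assumption as far as this abstract is concerned.

```python
# Sketch: RMSE, scatter index (SI), and MAE for wave-height forecasts.
import numpy as np

def wave_errors(obs, pred):
    obs, pred = np.asarray(obs, float), np.asarray(pred, float)
    rmse = np.sqrt(np.mean((pred - obs) ** 2))
    mae = np.mean(np.abs(pred - obs))
    si = rmse / np.mean(obs)                  # assumed SI definition
    return {"RMSE": rmse, "SI": si, "MAE": mae}

obs = np.array([3.2, 3.8, 4.1, 4.6, 3.9])     # observed significant wave heights (m)
pred = np.array([3.0, 3.9, 4.4, 4.4, 3.7])    # forecast values (placeholders)
print(wave_errors(obs, pred))
```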
Funding: Financially supported by the National Natural Science Foundation of China (Grant Nos. 52074258, 41941018, and U21A20153).
Abstract: Based on data from the Jilin Water Diversion Tunnels from the Songhua River (China), an improved real-time prediction method, optimized by multiple algorithms, for tunnel boring machine (TBM) cutter-head torque is presented. Firstly, a function excluding invalid and abnormal data is established to distinguish the TBM operating state, and a feature selection method based on the SelectKBest algorithm is proposed. Accordingly, the ten features most closely related to the cutter-head torque are selected as input variables, which, in descending order of influence, are the sum of motor torque, cutter-head power, sum of motor power, sum of motor current, advance rate, cutter-head pressure, total thrust force, penetration rate, cutter-head rotational velocity, and field penetration index. Secondly, the structure of a real-time cutter-head torque prediction model is developed, based on a bidirectional long short-term memory (BLSTM) network integrating the dropout algorithm to prevent overfitting. Then, an algorithm to optimize the model hyperparameters based on Bayesian optimization and cross-validation is proposed, and early stopping and checkpoint algorithms are integrated to optimize the training process. Finally, a BLSTM-based real-time cutter-head torque prediction model is developed, which fully utilizes the previous time-series tunneling information. The mean absolute percentage error (MAPE) of the model in the verification section is 7.3%, implying that the presented model is suitable for real-time cutter-head torque prediction. Furthermore, an incremental learning method based on the above base model is introduced to improve the adaptability of the model during TBM tunneling. Comparison of the prediction performance between the base and incremental learning models in the same tunneling section shows that: (1) the MAPE of the predicted results of the BLSTM-based real-time cutter-head torque prediction model remains below 10%, and both the coefficient of determination (R^(2)) and correlation coefficient (r) between measured and predicted values exceed 0.95; and (2) the incremental learning method is suitable for real-time cutter-head torque prediction and can effectively improve the prediction accuracy and generalization capacity of the model during the excavation process.
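Two steps of this pipeline are easy to illustrate in isolation: SelectKBest-style ranking of tunnelling features against cutter-head torque, and a dropout-regularised bidirectional LSTM regressor over the selected features. The scoring function, window shape, and layer sizes below are assumptions, and the Bayesian tuning, early stopping, checkpointing, and incremental-learning stages are omitted.

```python
# Sketch: (1) rank candidate TBM features against cutter-head torque with SelectKBest;
# (2) a dropout-regularised bidirectional LSTM regressor over the selected features.
import numpy as np
import torch
import torch.nn as nn
from sklearn.feature_selection import SelectKBest, f_regression

X = np.random.rand(5000, 20)                    # 20 candidate operating features (placeholder)
y = np.random.rand(5000)                        # measured cutter-head torque (placeholder)
selector = SelectKBest(score_func=f_regression, k=10).fit(X, y)
X10 = selector.transform(X)                     # ten most torque-related features

class BLSTMTorque(nn.Module):
    def __init__(self, n_feats=10, hidden=64, p_drop=0.2):
        super().__init__()
        self.lstm = nn.LSTM(n_feats, hidden, batch_first=True,
                            bidirectional=True, num_layers=2, dropout=p_drop)
        self.head = nn.Linear(2 * hidden, 1)

    def forward(self, x):                       # x: (batch, time steps, 10)
        seq, _ = self.lstm(x)
        return self.head(seq[:, -1]).squeeze(-1)   # torque at the latest step

windows = torch.tensor(X10[:4990].reshape(499, 10, 10), dtype=torch.float32)
torque = BLSTMTorque()(windows)                 # (499,) predicted torque values
```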
Funding: Supported by the Science and Technology Project of State Grid Corporation (Research and Application of Intelligent Energy Meter Quality Analysis and Evaluation Technology Based on Full Chain Data).
Abstract: With the application of artificial intelligence technology in the power industry, the knowledge graph is expected to play a key role in power grid dispatch processes, intelligent maintenance, and customer service response provision. Knowledge graphs are usually constructed based on entity recognition; specifically, based on the mining of entity attributes and relationships, domain knowledge graphs can be constructed through knowledge fusion. In this work, the entities and characteristics of power entity recognition are analyzed, the mechanism of entity recognition is clarified, and entity recognition techniques are analyzed in the context of the power domain. Power entity recognition based on the conditional random fields (CRF) and bidirectional long short-term memory (BLSTM) models is investigated, and the two methods are comparatively analyzed. The results indicate that the CRF model, with an accuracy of 83%, can better identify the power entities than the BLSTM. The CRF approach can thus be applied to entity extraction for knowledge graph construction in the power field.
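For the sequence-labelling comparison described above, the sketch below shows only the BLSTM side as a token-level tagger over BIO labels; the embedding size, tag set, and toy vocabulary are assumptions, and the CRF baseline from the abstract is not reproduced here.

```python
# Sketch: a BiLSTM tagger assigning BIO entity tags to tokens of power-grid text.
import torch
import torch.nn as nn

vocab_size, n_tags = 5000, 5        # tags such as O, B-EQUIP, I-EQUIP, B-ORG, I-ORG (assumed)

class BiLSTMTagger(nn.Module):
    def __init__(self, emb=64, hidden=64):
        super().__init__()
        self.emb = nn.Embedding(vocab_size, emb)
        self.lstm = nn.LSTM(emb, hidden, batch_first=True, bidirectional=True)
        self.tag = nn.Linear(2 * hidden, n_tags)

    def forward(self, token_ids):               # (batch, sentence length)
        seq, _ = self.lstm(self.emb(token_ids))
        return self.tag(seq)                    # (batch, length, n_tags) tag scores

sentences = torch.randint(0, vocab_size, (8, 30))     # 8 tokenised sentences (placeholder ids)
tag_scores = BiLSTMTagger()(sentences)
loss = nn.functional.cross_entropy(
    tag_scores.reshape(-1, n_tags),                   # token-level classification loss
    torch.randint(0, n_tags, (8 * 30,)))              # placeholder gold tags
```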
Funding: Supported by the National Natural Science Foundation of China (62003354).
Abstract: This paper introduces a time-frequency analyzed long short-term memory (TF-LSTM) neural network method for jamming signal recognition at the Global Navigation Satellite System (GNSS) receiver. The method introduces the long short-term memory (LSTM) neural network into the recognition algorithm and combines it with time-frequency (TF) analysis for signal preprocessing. Five kinds of navigation jamming signals, including white Gaussian noise (WGN), pulse jamming, sweep jamming, audio jamming, and spread spectrum jamming, are used as input for training and recognition. Since the signal parameters and quantity are unknown in the actual scenario, this work builds a data set containing jamming of multiple kinds and parameters to train the TF-LSTM. The performance of this method is evaluated by simulations and experiments. The method has higher recognition accuracy and better robustness than existing methods such as the LSTM and the convolutional neural network (CNN).
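The preprocessing-plus-classification idea can be sketched as: compute a time-frequency representation of the received samples with a short-time Fourier transform, then feed the spectrogram frames to an LSTM classifier over the five jamming classes. The STFT parameters and network sizes are assumptions, and the paper's exact TF analysis is not specified here.

```python
# Sketch: time-frequency (STFT) preprocessing followed by an LSTM classifier for
# five jamming types (WGN, pulse, sweep, audio, spread spectrum).
import numpy as np
import torch
import torch.nn as nn
from scipy.signal import stft

def tf_image(samples, fs=10e6, nperseg=256):
    _, _, Zxx = stft(samples, fs=fs, nperseg=nperseg)
    return np.log1p(np.abs(Zxx)).T            # (time frames, frequency bins)

class TFLSTMClassifier(nn.Module):
    def __init__(self, n_freq=129, hidden=64, n_classes=5):
        super().__init__()
        self.lstm = nn.LSTM(n_freq, hidden, batch_first=True)
        self.head = nn.Linear(hidden, n_classes)

    def forward(self, x):                      # x: (batch, frames, freq bins)
        _, (h, _) = self.lstm(x)
        return self.head(h[-1])                # jamming-class logits

samples = np.random.randn(16384)              # placeholder received samples (real part only)
img = torch.tensor(tf_image(samples), dtype=torch.float32).unsqueeze(0)
logits = TFLSTMClassifier()(img)
```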
Funding: Supported by the Natural Science Foundation of Shaanxi Province under Grant 2019JQ206, in part by the Science and Technology Department of Shaanxi Province under Grant 2020CGXNG-009, and in part by the Education Department of Shaanxi Province under Grant 17JK0346.
Abstract: Accurate landslide displacement prediction is an important part of a landslide warning system. Addressing the dynamic characteristics of landslide evolution and the shortcomings of traditional static prediction models, this paper proposes a dynamic prediction model of landslide displacement based on singular spectrum analysis (SSA) and a stacked long short-term memory (SLSTM) network. The SSA is used to decompose the landslide accumulated displacement time series data into trend-term and periodic-term displacement subsequences. A cubic polynomial function is used to predict the trend-term displacement subsequence, and the SLSTM neural network is used to predict the periodic-term displacement subsequence. At the same time, the Bayesian optimization algorithm is used to determine that the SLSTM network input sequence length is 12 and the number of hidden layer nodes is 18. The SLSTM network is updated by adding predicted values to the training set to achieve dynamic displacement prediction. Finally, the accumulated landslide displacement is obtained by superimposing the predicted value of each displacement subsequence. The proposed model was verified on the Xintan landslide in Hubei Province, China. The results show that, when predicting the periodic-term displacement, the SLSTM network has higher prediction accuracy than the support vector machine (SVM) and autoregressive integrated moving average (ARIMA) models: the mean relative error (MRE) is reduced by 4.099% and 3.548%, respectively, while the root mean square error (RMSE) is reduced by 5.830 mm and 3.854 mm, respectively. It is concluded that the SLSTM network model can better simulate the dynamic characteristics of landslides.
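The decomposition step, singular spectrum analysis, can be written out from first principles: embed the displacement series into a trajectory matrix, take its SVD, and reconstruct grouped components by diagonal averaging, with the leading component read as the trend term and the remainder as the periodic term. The window length and that grouping rule are assumptions of this sketch, and the SLSTM forecasting stage is not reproduced.

```python
# Sketch: basic singular spectrum analysis (SSA) of an accumulated-displacement series.
import numpy as np

def diagonal_average(elem):
    """Hankelise a rank-1 elementary matrix back to a 1-D series."""
    L, K = elem.shape
    idx = np.add.outer(np.arange(L), np.arange(K))   # entry (i, j) maps to series index i + j
    rec = np.zeros(L + K - 1)
    np.add.at(rec, idx, elem)
    return rec / np.bincount(idx.ravel(), minlength=L + K - 1)

def ssa_components(series, window=30, n_components=4):
    k = len(series) - window + 1
    traj = np.column_stack([series[i:i + window] for i in range(k)])   # trajectory matrix
    u, s, vt = np.linalg.svd(traj, full_matrices=False)
    return np.array([diagonal_average(s[i] * np.outer(u[:, i], vt[i]))
                     for i in range(n_components)])

disp = np.cumsum(np.random.rand(300)) + 5 * np.sin(np.linspace(0, 12 * np.pi, 300))
comps = ssa_components(disp)          # placeholder accumulated-displacement series
trend = comps[0]                      # leading component read as the trend displacement
periodic = comps[1:].sum(axis=0)      # remaining components read as the periodic displacement
```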
Funding: Supported by the PetroChina Major Scientific and Technological Project (ZD2019-183-006), the Fundamental Scientific Research Fund of Central Universities (20CX05017A), and the China National Science and Technology Major Project (2016ZX05021-001).
Abstract: Azimuth gamma logging while drilling (LWD) is one of the important technologies of geosteering, but the information that can be transmitted in real time is limited and its interpretation is difficult. This study proposes a method of applying artificial intelligence to LWD data interpretation to enhance the accuracy and efficiency of real-time data processing. By examining the formation response characteristics of the azimuth gamma ray (GR) curve, the preliminary formation change position is detected based on the wavelet transform modulus maxima (WTMM) method, then a dynamic threshold is determined, and a set of contour points describing the formation boundary is obtained. A classification recognition model based on long short-term memory (LSTM) is designed to judge whether the stratum information described by the contour point set is true or false, enhancing the accuracy of formation identification. Finally, the relative dip angle is calculated by the nonlinear least-squares method. Interpretation of azimuth gamma data and application to real-time data processing while drilling show that the proposed method can effectively and accurately determine formation changes, improve the accuracy of formation dip interpretation, and meet the needs of real-time LWD geosteering.
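One hedged way to reproduce the boundary-detection step is a continuous wavelet transform of the azimuthal gamma curve followed by picking local maxima of the transform modulus above a threshold. The wavelet, scale, and threshold rule below are assumptions, and the LSTM stage that screens false boundaries is not shown.

```python
# Sketch: wavelet-transform-modulus-maxima style detection of candidate formation
# boundaries on an azimuthal gamma-ray (GR) curve.
import numpy as np
import pywt

def candidate_boundaries(gr, scale=8, threshold_ratio=0.4):
    coef, _ = pywt.cwt(gr, scales=[scale], wavelet="gaus1")    # 1st derivative of Gaussian
    m = np.abs(coef[0])                                        # transform modulus
    thr = threshold_ratio * m.max()                            # assumed dynamic-threshold rule
    peaks = [i for i in range(1, len(m) - 1)
             if m[i] > thr and m[i] >= m[i - 1] and m[i] >= m[i + 1]]
    return peaks                                               # candidate boundary indices

depth = np.arange(0, 100, 0.1)
gr = 60 + 40 * (depth > 50) + np.random.randn(len(depth)) * 3  # synthetic GR step at 50 m
print(candidate_boundaries(gr))                                # indices clustered near the step
```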
Abstract: Statistics of languages are usually calculated by counting characters, words, sentences, and word rankings. Some of these random variables are also the main "ingredients" of classical readability formulae. Revisiting the readability formula of Italian, known as GULPEASE, shows that of the two terms that determine the readability index G, namely the semantic index GC (proportional to the number of characters per word) and the syntactic index GF (proportional to the reciprocal of the number of words per sentence), GF is dominant because GC is, in practice, constant for any author throughout seven centuries of Italian literature. Each author can modulate the length of sentences more freely than the length of words, and in different ways from author to author. For any author, any couple of text variables can be modelled by a linear relationship y = mx, but with a different slope m from author to author, except for the relationship between characters and words, which is unique for all. The most important relationship found in the paper is that between the short-term memory capacity, described by Miller's "7 ± 2 law" (i.e., the number of "chunks" that an average person can hold in short-term memory ranges from 5 to 9), and the word interval, a new random variable defined as the average number of words between two successive punctuation marks. The word interval can be converted into a time interval through the average reading speed. The word interval spreads over the same range as Miller's law, and the time interval is spread over the same range as short-term memory response times. The connection between the word interval (and time interval) and short-term memory appears, at least empirically, justified and natural, though it remains to be further investigated. Technical and scientific writings (papers, essays, etc.) demand more of their readers because words are on average longer, the readability index G is lower, and word and time intervals are longer. Future work on ancient languages, such as the classical Greek and Latin literatures (or the literatures of modern languages), could give us insight into the short-term memory required of their well-educated ancient readers.
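For reference, the GULPEASE index discussed above is commonly given as G = 89 + (300·S − 10·L)/W, with L the number of letters, W the number of words, and S the number of sentences, so the two terms correspond to the semantic (characters-per-word) and syntactic (words-per-sentence) contributions. The small sketch below computes G together with the word interval; the tokenisation rules are assumptions of this illustration, not the formula's original conventions.

```python
# Sketch: GULPEASE readability index and the word interval for an Italian text.
import re

def gulpease(text: str) -> float:
    letters = len(re.findall(r"[A-Za-zÀ-ÿ]", text))
    words = len(re.findall(r"[A-Za-zÀ-ÿ]+", text))
    sentences = max(1, len(re.findall(r"[.!?]", text)))
    return 89 + (300 * sentences - 10 * letters) / words

def word_interval(text: str) -> float:
    """Average number of words between two successive punctuation marks."""
    chunks = [c for c in re.split(r"[.,;:!?]", text) if c.strip()]
    return sum(len(c.split()) for c in chunks) / len(chunks)

sample = ("Nel mezzo del cammin di nostra vita, mi ritrovai per una selva oscura, "
          "che la diritta via era smarrita.")
print(round(gulpease(sample), 1), round(word_interval(sample), 2))
```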