A fast-charging policy is widely employed to alleviate the inconvenience caused by the extended charging time of electric vehicles. However, fast charging exacerbates battery degradation and shortens battery lifespan. In addition, there is still a lack of tailored health estimations for fast-charging batteries; most existing methods are applicable only at lower charging rates. This paper proposes a novel method for estimating the health of lithium-ion batteries, tailored for multi-stage constant current-constant voltage fast-charging policies. Initially, short charging segments are extracted by monitoring current switches, followed by deriving voltage sequences using interpolation techniques. Subsequently, a graph generation layer is used to transform each voltage sequence into graphical data. Furthermore, the integration of a graph convolution network with a long short-term memory network enables the extraction of information related to inter-node message transmission, capturing the key local and temporal features of the battery degradation process. Finally, the method is validated using aging data from 185 cells and 81 distinct fast-charging policies. A 4-minute charging segment achieves a balance between high accuracy in estimating battery state of health and low data requirements, with mean absolute errors and root mean square errors of 0.34% and 0.66%, respectively.
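To make the graph-based pipeline above concrete, the following is a minimal sketch, not the authors' implementation: it assumes each extracted charging segment is interpolated to a fixed number of voltage nodes, connects consecutive nodes in a simple chain graph, applies one normalized graph-convolution step, and feeds the pooled per-cycle embeddings to an LSTM that regresses state of health. All layer sizes and the chain adjacency are illustrative assumptions.

```python
import torch
import torch.nn as nn

class GraphConv(nn.Module):
    """One graph-convolution step: H' = ReLU(A_hat @ H @ W)."""
    def __init__(self, in_dim, out_dim):
        super().__init__()
        self.lin = nn.Linear(in_dim, out_dim)

    def forward(self, x, a_hat):               # x: (batch, nodes, in_dim)
        return torch.relu(a_hat @ self.lin(x))

class GCNLSTMSoH(nn.Module):
    """Hypothetical GCN + LSTM regressor for battery state of health."""
    def __init__(self, n_nodes=30, hidden=64):
        super().__init__()
        self.gcn = GraphConv(1, hidden)
        self.lstm = nn.LSTM(hidden, hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)
        # Chain graph over interpolated voltage points, with self-loops,
        # symmetrically normalized: A_hat = D^-1/2 (A + I) D^-1/2.
        a = torch.eye(n_nodes)
        idx = torch.arange(n_nodes - 1)
        a[idx, idx + 1] = 1.0
        a[idx + 1, idx] = 1.0
        d_inv_sqrt = a.sum(1).rsqrt()
        self.register_buffer("a_hat", d_inv_sqrt[:, None] * a * d_inv_sqrt[None, :])

    def forward(self, segments):                # segments: (batch, n_cycles, n_nodes)
        b, t, n = segments.shape
        x = segments.reshape(b * t, n, 1)
        h = self.gcn(x, self.a_hat)             # (b*t, nodes, hidden)
        h = h.mean(dim=1).reshape(b, t, -1)     # pool nodes -> per-cycle embedding
        out, _ = self.lstm(h)                   # temporal modelling across cycles
        return self.head(out[:, -1])            # SOH estimate from the latest cycle

model = GCNLSTMSoH()
soh = model(torch.rand(4, 10, 30))              # 4 cells, 10 cycles, 30 voltage nodes
```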
Audiovisual speech recognition is an emerging research topic. Lipreading is the recognition of what someone is saying using visual information, primarily lip movements. In this study, we created a custom dataset for Indian English linguistics and categorized it into three main categories: (1) audio recognition, (2) visual feature extraction, and (3) combined audio and visual recognition. Audio features were extracted using the mel-frequency cepstral coefficient, and classification was performed using a one-dimensional convolutional neural network. Visual features were extracted using Dlib, and visual speech was classified using a long short-term memory (LSTM) recurrent neural network. Finally, integration was performed using a deep convolutional network. The audio speech of Indian English was successfully recognized with accuracies of 93.67% and 91.53%, respectively, using testing data after 200 epochs. The training accuracy for visual speech recognition using the Indian English dataset was 77.48% and the test accuracy was 76.19% after 60 epochs. After integration, the accuracies of audiovisual speech recognition using the Indian English dataset for training and testing were 94.67% and 91.75%, respectively.
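As a rough illustration of the audio branch, the sketch below pairs MFCC extraction (via librosa) with a small one-dimensional CNN classifier; the sampling rate, number of MFCCs, layer sizes, and class count are assumptions, not values from the study.

```python
import librosa
import torch
import torch.nn as nn

def mfcc_features(wav_path, sr=16000, n_mfcc=13):
    """Load an audio file and return an (n_mfcc, frames) MFCC matrix."""
    y, sr = librosa.load(wav_path, sr=sr)
    return librosa.feature.mfcc(y=y, sr=sr, n_mfcc=n_mfcc)

class Audio1DCNN(nn.Module):
    """Hypothetical 1D CNN word classifier over MFCC frames."""
    def __init__(self, n_mfcc=13, n_classes=10):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv1d(n_mfcc, 32, kernel_size=5, padding=2), nn.ReLU(),
            nn.MaxPool1d(2),
            nn.Conv1d(32, 64, kernel_size=5, padding=2), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),            # collapse the time axis
        )
        self.fc = nn.Linear(64, n_classes)

    def forward(self, x):                       # x: (batch, n_mfcc, frames)
        return self.fc(self.net(x).squeeze(-1))

# Example with random features standing in for real utterances.
feats = torch.rand(8, 13, 100)                  # 8 clips, 13 MFCCs, 100 frames
logits = Audio1DCNN()(feats)                    # (8, n_classes)
```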
A tremendous number of vendor invoices is generated in the corporate sector. To automate the manual data entry in payable documents, highly accurate Optical Character Recognition (OCR) is required. This paper proposes an end-to-end OCR system that performs both localization and recognition and serves as a single unit to automate the processing of payable documents such as cheques and cash disbursements. For text localization, the maximally stable extremal region is used, which extracts a word or digit chunk from an invoice. This chunk is later passed to the deep learning model, which performs text recognition. The deep learning model utilizes both convolutional neural networks and long short-term memory (LSTM). The convolution layers are used for extracting features, which are fed to the LSTM. The model integrates feature extraction, sequence modeling, and transcription into a unified network. It handles sequences of unconstrained length, independent of character segmentation or horizontal scale normalization. Furthermore, it applies to both lexicon-free and lexicon-based text recognition, and it produces a comparatively small model that can be implemented in practical applications. The overall superior performance in the experimental evaluation demonstrates the usefulness of the proposed model. The model is thus generic and can be used for other similar recognition scenarios.
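The recognition stage described above follows the familiar CRNN pattern: convolutional feature extraction, column-wise sequence modeling with an LSTM, and CTC-style transcription that avoids character segmentation. The sketch below is a hedged approximation under an assumed image size and alphabet; localization of the word or digit chunks (e.g., via OpenCV's MSER) is omitted.

```python
import torch
import torch.nn as nn

class CRNN(nn.Module):
    """Hypothetical convolution + LSTM recognizer for word/digit chunks,
    trained with CTC so no character segmentation is required."""
    def __init__(self, n_classes, img_h=32):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(1, 64, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(64, 128, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        feat_h = img_h // 4                      # height after two poolings
        self.lstm = nn.LSTM(128 * feat_h, 256, bidirectional=True, batch_first=True)
        self.fc = nn.Linear(512, n_classes + 1)  # +1 for the CTC blank label

    def forward(self, x):                        # x: (batch, 1, img_h, width)
        f = self.conv(x)                         # (batch, 128, img_h/4, width/4)
        b, c, h, w = f.shape
        seq = f.permute(0, 3, 1, 2).reshape(b, w, c * h)   # one step per column
        out, _ = self.lstm(seq)
        return self.fc(out)                      # (batch, steps, n_classes + 1)

model = CRNN(n_classes=36)                       # e.g. digits plus letters
imgs = torch.rand(4, 1, 32, 128)                 # cropped chunks from localization
log_probs = model(imgs).log_softmax(-1).permute(1, 0, 2)   # (T, batch, classes)
targets = torch.randint(1, 37, (4, 5))           # dummy 5-character labels
loss = nn.CTCLoss(blank=0)(log_probs, targets,
                           input_lengths=torch.full((4,), 32, dtype=torch.long),
                           target_lengths=torch.full((4,), 5, dtype=torch.long))
```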
As a common and high-risk type of disease, heart disease seriously threatens people's health. At the same time, in the era of the Internet of Things (IoT), smart medical devices have strong practical significance for medical workers and patients because of their ability to assist in the diagnosis of diseases. Therefore, research on real-time diagnosis and classification algorithms for arrhythmia can help improve diagnostic efficiency. In this paper, we design an automatic arrhythmia classification model based on a Convolutional Neural Network (CNN) and an Encoder-Decoder model. The model uses Long Short-Term Memory (LSTM) to consider the influence of time-series features on classification results. It is trained and tested on the MIT-BIH arrhythmia database. In addition, a Generative Adversarial Network (GAN) is adopted as a method of data equalization to address the data imbalance problem. The simulation results show that, for inter-patient arrhythmia classification, the hybrid model combining the CNN and the Encoder-Decoder model has the best classification accuracy, reaching 94.05%. In particular, it shows a clear advantage in classifying supraventricular ectopic beats (class S) and fusion beats (class F).
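As a hedged illustration of the GAN-based data equalization step, the sketch below trains a small generator/discriminator pair on fixed-length minority-class beats so that synthetic beats can be added to the training set; the beat length, latent size, and network widths are assumptions rather than the paper's settings.

```python
import torch
import torch.nn as nn

BEAT_LEN, LATENT = 180, 32                       # assumed fixed-length ECG beats

G = nn.Sequential(                               # generator: noise -> synthetic beat
    nn.Linear(LATENT, 128), nn.ReLU(),
    nn.Linear(128, BEAT_LEN), nn.Tanh())
D = nn.Sequential(                               # discriminator: beat -> real/fake logit
    nn.Linear(BEAT_LEN, 128), nn.LeakyReLU(0.2),
    nn.Linear(128, 1))

opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
bce = nn.BCEWithLogitsLoss()

def train_step(real_beats):                      # real_beats: (batch, BEAT_LEN) in [-1, 1]
    b = real_beats.size(0)
    fake = G(torch.randn(b, LATENT))
    # Discriminator: push real beats toward 1 and generated beats toward 0.
    loss_d = bce(D(real_beats), torch.ones(b, 1)) + bce(D(fake.detach()), torch.zeros(b, 1))
    opt_d.zero_grad(); loss_d.backward(); opt_d.step()
    # Generator: try to fool the discriminator.
    loss_g = bce(D(fake), torch.ones(b, 1))
    opt_g.zero_grad(); loss_g.backward(); opt_g.step()
    return loss_d.item(), loss_g.item()

# After training on minority classes (e.g. class S or F beats),
# G(torch.randn(n, LATENT)) yields synthetic beats to rebalance the training set.
minority = torch.rand(64, BEAT_LEN) * 2 - 1      # stand-in for real MIT-BIH beats
train_step(minority)
```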
Low dynamic range (LDR) images captured by consumer cameras have a limited luminance range. As the conventional method for generating high dynamic range (HDR) images involves merging multiple-exposure LDR images of the same scene (assuming a stationary scene), we introduce a learning-based model for single-image HDR reconstruction. An input LDR image is sequentially segmented into local region maps based on the cumulative histogram of the input brightness distribution. Using the local region maps, SParam-Net estimates the parameters of an inverse tone mapping function to generate a pseudo-HDR image. The segmented region maps are processed as input sequences by a long short-term memory network. Finally, a fast super-resolution convolutional neural network is used for HDR image reconstruction. The proposed method was trained and tested on datasets including HDR-Real, LDR-HDR-pair, and HDR-Eye. The experimental results revealed that HDR images can be generated more reliably than with contemporary end-to-end approaches.
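The cumulative-histogram segmentation step can be sketched as follows; the equal-mass split into four brightness bands is an assumption for illustration, and the resulting region maps are what would be fed, in sequence, to the LSTM stage.

```python
import numpy as np

def region_maps(ldr, n_regions=4):
    """Split an LDR image into local region maps using the cumulative
    histogram of its brightness, so each region holds roughly equal pixel mass.
    `ldr` is an (H, W, 3) float array in [0, 1]; the region count and the
    equal-mass thresholds are illustrative assumptions, not the paper's values."""
    brightness = ldr.mean(axis=2)
    # Equal-mass thresholds from the cumulative distribution of brightness.
    thresholds = np.quantile(brightness, np.linspace(0, 1, n_regions + 1))
    maps = []
    for lo, hi in zip(thresholds[:-1], thresholds[1:]):
        mask = (brightness >= lo) & (brightness <= hi)
        maps.append(ldr * mask[..., None])       # keep only this brightness band
    return maps                                   # a sequence for the LSTM stage

maps = region_maps(np.random.rand(64, 64, 3))
print([m.shape for m in maps])
```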
Speech enhancement is the task of taking a noisy speech input and producing an enhanced speech output. In recent years, the need for speech enhancement has increased due to challenges that occur in various applications such as hearing aids, Automatic Speech Recognition (ASR), and mobile speech communication systems. Most speech enhancement research has been carried out for English, Chinese, and European languages; only a few works involve speech enhancement for Indian regional languages. In this paper, we propose a two-stage architecture, based on a convolutional recurrent network (CRN), to perform speech enhancement for Tamil speech signals and address real-time enhancement of a single channel or track of sound created by the speaker. In the first stage, a mask-based long short-term memory (LSTM) network is used for noise suppression together with its loss function, and in the second stage a Convolutional Encoder-Decoder (CED) is used for speech restoration. The proposed model is evaluated on various speakers and noisy environments, including babble noise, car noise, and white Gaussian noise. The proposed CRN model improves speech quality by 0.1 points compared with the LSTM base model and also requires fewer parameters for training. The performance of the proposed model is outstanding even at low Signal-to-Noise Ratio (SNR).
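A minimal sketch of the first-stage mask-based LSTM is given below: the noisy waveform is transformed to a spectrogram, an LSTM predicts a sigmoid time-frequency mask, and the masked spectrogram is inverted back to a waveform. The FFT size, hop length, and layer sizes are assumptions; the second-stage convolutional encoder-decoder is omitted.

```python
import torch
import torch.nn as nn

class MaskLSTM(nn.Module):
    """Hypothetical first-stage mask estimator: an LSTM predicts a [0, 1]
    time-frequency mask over the noisy STFT magnitude."""
    def __init__(self, n_fft=512, hidden=256):
        super().__init__()
        self.n_fft = n_fft
        n_bins = n_fft // 2 + 1
        self.lstm = nn.LSTM(n_bins, hidden, num_layers=2, batch_first=True)
        self.out = nn.Linear(hidden, n_bins)

    def forward(self, wav):                          # wav: (batch, samples)
        window = torch.hann_window(self.n_fft, device=wav.device)
        spec = torch.stft(wav, self.n_fft, hop_length=self.n_fft // 4,
                          window=window, return_complex=True)
        mag = spec.abs().transpose(1, 2)             # (batch, frames, bins)
        mask, _ = self.lstm(mag)
        mask = torch.sigmoid(self.out(mask)).transpose(1, 2)   # (batch, bins, frames)
        enhanced = spec * mask                       # suppress noisy bins
        return torch.istft(enhanced, self.n_fft, hop_length=self.n_fft // 4,
                           window=window, length=wav.shape[-1])

noisy = torch.randn(2, 16000)                        # two 1-second clips at 16 kHz
clean_estimate = MaskLSTM()(noisy)                   # same length as the input
```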
Purpose: To optimize train operations, dispatchers currently rely on experience for quick adjustments when delays occur. However, delay predictions often involve imprecise shifts based on known delay times. Real-time and accurate train delay predictions, facilitated by data-driven neural network models, can significantly reduce dispatcher stress and improve adjustment plans. Leveraging current train operation data, these models enable swift and precise predictions, addressing challenges posed by train delays in high-speed rail networks during unforeseen events. Design/methodology/approach: This paper proposes CBLA-net, a neural network architecture for predicting late arrival times. It combines CNN, Bi-LSTM, and attention mechanisms to extract features, handle time-series data, and enhance information utilization. Trained on operational data from the Beijing-Tianjin line, it predicts the late arrival time of a target train at the next station using multidimensional input data from the target and preceding trains. Findings: This study evaluates the model's predictive performance using two data approaches: one considering full data and another focusing only on late arrivals. Results show precise and rapid predictions. Training with full data achieves an MAE of approximately 0.54 minutes and an RMSE of 0.65 minutes, surpassing the model trained solely on delay data (MAE: about 1.02 min; RMSE: about 1.52 min). Despite superior overall performance with full data, the model excels at predicting delays exceeding 15 minutes when trained exclusively on late arrivals. For enhanced adaptability to real-world train operations, training with full data is recommended. Originality/value: This paper introduces a novel neural network model, CBLA-net, for predicting train delay times. It innovatively compares and analyzes the model's performance using both full data and delay-only data formats. Additionally, the evaluation of the network's predictive capabilities considers different scenarios, providing a comprehensive demonstration of the model's predictive performance.
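A compact sketch in the spirit of CBLA-net is shown below, assuming hypothetical input dimensions (ten records of eight features per sample): a 1D convolution extracts local features, a Bi-LSTM models the sequence, and a learned attention weighting summarizes the steps before a linear head outputs the predicted late-arrival time.

```python
import torch
import torch.nn as nn

class CBLARegressor(nn.Module):
    """Hypothetical CNN + Bi-LSTM + attention regressor in the spirit of
    CBLA-net: convolution over the feature axis, Bi-LSTM over the record
    sequence, and an attention-weighted sum feeding a delay-time head."""
    def __init__(self, n_features=8, hidden=64):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv1d(n_features, 32, kernel_size=3, padding=1), nn.ReLU())
        self.bilstm = nn.LSTM(32, hidden, batch_first=True, bidirectional=True)
        self.attn = nn.Linear(2 * hidden, 1)
        self.head = nn.Linear(2 * hidden, 1)

    def forward(self, x):                              # x: (batch, steps, n_features)
        h = self.conv(x.transpose(1, 2)).transpose(1, 2)   # (batch, steps, 32)
        h, _ = self.bilstm(h)                              # (batch, steps, 2*hidden)
        w = torch.softmax(self.attn(h), dim=1)             # attention weights over steps
        context = (w * h).sum(dim=1)                       # weighted summary vector
        return self.head(context)                          # predicted late-arrival time

# 16 samples, each with 10 target/preceding-train records of 8 features.
pred_minutes = CBLARegressor()(torch.rand(16, 10, 8))
```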
Accurately predicting motion responses is a crucial component of the design process for floating offshore structures. This study introduces a hybrid model that integrates a convolutional neural network (CNN), a bidirectional long short-term memory (BiLSTM) neural network, and an attention mechanism for forecasting the short-term motion responses of a semi-submersible. First, the motions are processed through the CNN for feature extraction. The extracted features are subsequently utilized by the BiLSTM network to forecast future motions. To enhance the predictive capability of the neural networks, an attention mechanism is integrated. In addition to the hybrid model, the BiLSTM is independently employed to forecast the motion responses of the semi-submersible, serving as benchmark results for comparison. Furthermore, both 1D and 2D convolutions are conducted to check the influence of the convolutional dimensionality on the predicted results. The results demonstrate that the hybrid 1D CNN-BiLSTM network with an attention mechanism outperforms all other models in accurately predicting motion responses.
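The 1D-versus-2D convolution comparison mentioned above mainly changes how the motion record is shaped before the CNN stage; the short sketch below illustrates the two input layouts with assumed dimensions (six degrees of freedom, 200 time steps).

```python
import torch
import torch.nn as nn

# A motion record with 6 degrees of freedom sampled over 200 time steps.
record = torch.rand(4, 6, 200)                   # (batch, dof, time)

# 1D convolution: each DOF is a channel and the kernel slides along time only.
conv1d = nn.Conv1d(in_channels=6, out_channels=16, kernel_size=5, padding=2)
feat_1d = conv1d(record)                         # (4, 16, 200)

# 2D convolution: the record becomes a single-channel "image" of shape
# (dof x time), so the kernel also mixes neighbouring DOFs.
conv2d = nn.Conv2d(in_channels=1, out_channels=16, kernel_size=(3, 5),
                   padding=(1, 2))
feat_2d = conv2d(record.unsqueeze(1))            # (4, 16, 6, 200)

print(feat_1d.shape, feat_2d.shape)
```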
Electrodeposition is one of the leading causes of performance deterioration in electrochemical cells. Predicting electrodeposition growth demands a good understanding of the complex physics involved, which can lead to the fabrication of a probabilistic mathematical model. As an alternative, an image analysis approach based on a convolutional long short-term memory architecture is presented herein. This technique can predict the electrodeposition growth of the electrolytes without prior detailed knowledge of the system. The captured images of the electrodeposition from the experiments are used to train and test the model. A pixel-level comparison between the expected and predicted output images, the percentage mean squared error, the absolute percentage error, and the pattern density of the electrodeposit are investigated to assess model accuracy. The randomness of the electrodeposition growth is outlined by investigating the fractal dimension and the interfacial length of the electrodeposits. The trained model's predictions show significant promise, with good agreement between all the experimentally obtained relevant parameters and the predicted ones. It is expected that this deep learning-based approach for predicting random electrodeposition growth will be of immense help for designing and optimizing the relevant experimental schemes in the near future without performing multiple experiments.
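A minimal convolutional LSTM cell, sketched below with assumed channel counts and frame sizes, conveys the core idea: the LSTM gates are computed by convolutions so that the recurrence operates directly on image-shaped states, which is what allows next-frame prediction of the electrodeposit images.

```python
import torch
import torch.nn as nn

class ConvLSTMCell(nn.Module):
    """Minimal convolutional LSTM cell: the gates are computed with a
    convolution over the concatenated input frame and hidden state, so the
    recurrence preserves the spatial layout of the electrodeposit images."""
    def __init__(self, in_ch, hid_ch, k=3):
        super().__init__()
        self.hid_ch = hid_ch
        self.gates = nn.Conv2d(in_ch + hid_ch, 4 * hid_ch, k, padding=k // 2)

    def forward(self, x, state):                    # x: (batch, in_ch, H, W)
        h, c = state
        i, f, o, g = self.gates(torch.cat([x, h], dim=1)).chunk(4, dim=1)
        i, f, o, g = torch.sigmoid(i), torch.sigmoid(f), torch.sigmoid(o), torch.tanh(g)
        c = f * c + i * g
        h = o * torch.tanh(c)
        return h, c

# Roll the cell over a sequence of grayscale frames and predict the next one.
cell, to_frame = ConvLSTMCell(1, 16), nn.Conv2d(16, 1, 1)
frames = torch.rand(2, 10, 1, 64, 64)               # (batch, time, ch, H, W)
h = torch.zeros(2, 16, 64, 64)
c = torch.zeros_like(h)
for t in range(frames.shape[1]):
    h, c = cell(frames[:, t], (h, c))
next_frame = torch.sigmoid(to_frame(h))             # predicted deposit image
```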
Hand gestures are a natural way for human-robot interaction. Vision-based dynamic hand gesture recognition has become a hot research topic due to its various applications. This paper presents a novel deep learning network for hand gesture recognition. The network integrates several well-proven modules to learn both short-term and long-term features from video inputs while avoiding intensive computation. To learn short-term features, each video input is segmented into a fixed number of frame groups. A frame is randomly selected from each group and represented as an RGB image as well as an optical flow snapshot. These two entities are fused and fed into a convolutional neural network (ConvNet) for feature extraction. The ConvNets for all groups share parameters. To learn long-term features, outputs from all ConvNets are fed into a long short-term memory (LSTM) network, by which a final classification result is predicted. The new model has been tested with two popular hand gesture datasets, namely the Jester dataset and the Nvidia dataset. Compared with other models, our model produced very competitive results. The robustness of the new model has also been demonstrated with an augmented dataset with enhanced diversity of hand gestures.
In the electricity market, fluctuations in real-time prices are unstable, and changes in short-term load are determined by many factors. By studying the timing of charging and discharging, as well as the economic benefits of energy storage in the process of participating in the power market, this paper treats energy storage scheduling as one factor affecting short-term power load, which shapes the short-term load time series together with time-of-use price, holidays, and temperature. A deep learning network is used to predict the short-term load: a convolutional neural network (CNN) extracts the features, and a long short-term memory (LSTM) network learns the temporal characteristics of the load values, which can effectively improve prediction accuracy. Taking the load data of a certain region as an example, the CNN-LSTM prediction model is compared with a single LSTM prediction model. The experimental results show that, with energy storage participating in dispatch, the CNN-LSTM deep learning network achieves high prediction accuracy for short-term power load forecasting.
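A hedged sketch of how such a forecasting problem is typically framed is given below: the load series and its exogenous factors (time-of-use price, holiday flag, temperature, and the storage schedule) are cut into sliding windows, and a small CNN-LSTM maps each window to a future load value. The look-back length, horizon, and layer sizes are illustrative assumptions.

```python
import numpy as np
import torch
import torch.nn as nn

def make_windows(features, horizon=24, lookback=96):
    """Frame a multivariate series (time, n_features) into supervised samples:
    `lookback` past steps as input, the load `horizon` steps ahead as target.
    Column 0 is assumed to be the load; the rest are exogenous factors such as
    time-of-use price, holiday flag, temperature, and the storage schedule."""
    xs, ys = [], []
    for t in range(lookback, len(features) - horizon):
        xs.append(features[t - lookback:t])
        ys.append(features[t + horizon - 1, 0])
    return np.stack(xs), np.array(ys)

series = np.random.rand(3000, 5).astype(np.float32)      # stand-in regional data
x, y = make_windows(series)                               # x: (N, 96, 5)

class CNNLSTM(nn.Module):
    def __init__(self, n_features=5, hidden=64):
        super().__init__()
        self.conv = nn.Conv1d(n_features, 32, kernel_size=3, padding=1)
        self.lstm = nn.LSTM(32, hidden, batch_first=True)
        self.fc = nn.Linear(hidden, 1)

    def forward(self, x):                                 # (batch, lookback, features)
        h = torch.relu(self.conv(x.transpose(1, 2))).transpose(1, 2)
        out, _ = self.lstm(h)
        return self.fc(out[:, -1]).squeeze(-1)

pred = CNNLSTM()(torch.from_numpy(x[:8]))                 # forecasts for 8 windows
```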
A precise and timely forecast of short-term rail transit passenger flow provides data support for traffic management and operation, assisting rail operators in efficiently allocating resources and relieving pressure on passenger safety and operation in a timely manner. First, the passenger flow sequences in the study are decomposed using variational mode decomposition (VMD) for noise reduction. Objective environmental features are then added to the characteristic factors that affect passenger flow. An additional spatial feature for the target station is mined concurrently using the KNN algorithm. By setting up BP, CNN, and LSTM reference experiments, it is shown that the hybrid VMD-CLSMT model has higher prediction accuracy. All models' second-order prediction results are superior to their first-order results, showing that the residual network can significantly raise model prediction accuracy. The results also confirm the efficacy of the supplementary and objective environmental features.
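The KNN-based mining of spatial features can be illustrated with scikit-learn, as in the hedged sketch below: each station's flow history is treated as a vector, the nearest neighbours of the target station are retrieved, and their series are used as auxiliary spatial inputs. The station count, neighbour count, and data are placeholders, not values from the study.

```python
import numpy as np
from sklearn.neighbors import NearestNeighbors

# Hypothetical inbound-flow history: 40 stations x 500 time steps.
flows = np.random.rand(40, 500)
target_station = 7

# Treat each station's flow history as a feature vector and find the
# stations most similar to the target; their series become additional
# spatial features for the forecasting model.
knn = NearestNeighbors(n_neighbors=4).fit(flows)
_, idx = knn.kneighbors(flows[target_station:target_station + 1])
related = [i for i in idx[0] if i != target_station][:3]
spatial_features = flows[related]                 # (3, 500) auxiliary inputs
print("stations used as spatial features:", related)
```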
The identification of individuals through ear images is a prominent area of study in the biometric sector. Facial recognition systems have faced challenges during the COVID-19 pandemic due to mask-wearing, prompting the exploration of supplementary biometric measures such as ear biometrics. This research proposes a Deep Learning (DL) framework, termed DeepBio, using ear biometrics for human identification. It employs two DL models and five datasets, including IIT Delhi (IITD-I and IITD-II), Annotated Web Ears (AWE), Mathematical Analysis of Images (AMI), and EARVN1. Data augmentation techniques such as flipping, translation, and Gaussian noise are applied to enhance model performance and mitigate overfitting. Feature extraction and human identification are conducted using a hybrid approach combining Convolutional Neural Networks (CNN) and Bidirectional Long Short-Term Memory (Bi-LSTM). The DeepBio framework achieves high recognition rates of 97.97%, 99.37%, 98.57%, 94.5%, and 96.87% on the respective datasets. Comparative analysis with existing techniques demonstrates improvements of 0.41%, 0.47%, 12%, and 9.75% on the IITD-II, AMI, AWE, and EARVN1 datasets, respectively.
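The augmentation step (flipping, translation, Gaussian noise) can be sketched with torchvision transforms as below; the translation range and noise level are assumptions, not the values used in DeepBio.

```python
import torch
from torchvision import transforms

class AddGaussianNoise:
    """Add zero-mean Gaussian noise to a tensor image."""
    def __init__(self, std=0.02):
        self.std = std

    def __call__(self, img):
        return (img + torch.randn_like(img) * self.std).clamp(0.0, 1.0)

# Flip, small translations, and Gaussian noise, applied to ear images before
# they reach the CNN + Bi-LSTM feature extractor.
augment = transforms.Compose([
    transforms.RandomHorizontalFlip(p=0.5),
    transforms.RandomAffine(degrees=0, translate=(0.1, 0.1)),
    transforms.ToTensor(),
    AddGaussianNoise(std=0.02),
])

# `augment` would normally be passed as the `transform` of an image dataset,
# e.g. torchvision.datasets.ImageFolder("ears/train", transform=augment).
```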
Data-driven approaches such as neural networks are increasingly used for deep excavations due to the growing amount of monitoring data available in practical projects. However, most neural network models use data from only a single monitoring point and neglect the spatial relationships between multiple monitoring points. In addition, most models lack the flexibility to provide predictions for multiple days after the monitoring activity. This study proposes a sequence-to-sequence (seq2seq) two-dimensional (2D) convolutional long short-term memory neural network (S2SCL2D) for predicting the spatiotemporal wall deflections induced by deep excavations. The model utilizes the data from all monitoring points on the entire wall and extracts spatiotemporal features from the data by combining 2D convolutional layers and long short-term memory (LSTM) layers. The S2SCL2D model achieves long-term prediction of wall deflections through a recursive seq2seq structure. The excavation depth, which has a significant impact on wall deflections, is also considered using a feature fusion method. An excavation project in Hangzhou, China, is used to illustrate the proposed model. The results demonstrate that the S2SCL2D model has superior prediction accuracy and robustness compared with the LSTM and S2SCL1D (one-dimensional) models. The prediction model also demonstrates strong generalizability when applied to an adjacent excavation. Based on the long-term prediction results, practitioners can plan and allocate resources in advance to address potential engineering issues.
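The recursive seq2seq idea, in which a one-step predictor is rolled forward by feeding its own outputs back while fusing the planned excavation depth, can be sketched as follows; the window length, number of wall monitoring points, and network sizes are assumptions, and the stand-in model below is untrained.

```python
import torch
import torch.nn as nn

class OneStepPredictor(nn.Module):
    """Stand-in for a trained network: maps a window of wall-deflection
    profiles plus the excavation depth to the next day's profile."""
    def __init__(self, n_points=20, hidden=64):
        super().__init__()
        self.lstm = nn.LSTM(n_points + 1, hidden, batch_first=True)
        self.out = nn.Linear(hidden, n_points)

    def forward(self, window, depth):            # window: (b, days, points), depth: (b, days, 1)
        h, _ = self.lstm(torch.cat([window, depth], dim=-1))   # fuse depth as an extra feature
        return self.out(h[:, -1])                # next-day deflection profile (b, points)

def recursive_forecast(model, window, depth_plan, horizon=7):
    """Roll the one-step model forward: each prediction is appended to the
    input window so the next step conditions on it (seq2seq style)."""
    preds = []
    for d in range(horizon):
        depth = depth_plan[:, d:d + window.shape[1]].unsqueeze(-1)
        nxt = model(window, depth)
        preds.append(nxt)
        window = torch.cat([window[:, 1:], nxt.unsqueeze(1)], dim=1)
    return torch.stack(preds, dim=1)             # (b, horizon, points)

model = OneStepPredictor()
history = torch.rand(2, 10, 20)                  # 10 monitored days, 20 wall points
planned_depth = torch.rand(2, 30)                # planned excavation depth per day
future = recursive_forecast(model, history, planned_depth)
```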
Geochemical survey data analysis is recognized as an implemented and feasible way to support lithological mapping and assist mineral exploration. Among available approaches, recent methodological advances have focused on deep learning algorithms, which can learn and extract information directly from geochemical survey data through multi-level networks and output end-to-end classifications. Accordingly, this study developed a lithological mapping framework with the joint application of a convolutional neural network (CNN) and a long short-term memory (LSTM) network. The CNN-LSTM model excels at correlation extraction through its CNN layers and coupling-interaction learning through its LSTM layers. This hybrid approach was demonstrated by mapping leucogranites in the Himalayan orogen based on stream sediment geochemical survey data, where the targeted leucogranites are expected to be potential resources of rare metals such as Li, Be, and W mineralization. Three comparative case studies were carried out from both visual and quantitative perspectives to illustrate the superiority of the proposed model. A guided spatial distribution map of leucogranites in the Himalayan orogen, divided into high-, moderate-, and low-potential areas, was delineated using the success-rate curve, which further improves the efficiency of identifying unmapped leucogranites through geological mapping. In light of these results, this study provides an alternative solution for lithological mapping using geochemical survey data at a regional scale and reduces the risk of decision making associated with mineral exploration.
Electric motor-driven systems are core components across industries, yet they are susceptible to bearing faults. Manual fault diagnosis poses safety risks and economic instability, necessitating an automated approach. This study proposes FTCNNLSTM (Fine-Tuned TabNet Convolutional Neural Network Long Short-Term Memory), an algorithm combining Convolutional Neural Networks, Long Short-Term Memory networks, and Attentive Interpretable Tabular Learning. The model preprocesses the CWRU (Case Western Reserve University) bearing dataset using segmentation, normalization, feature scaling, and label encoding. Its architecture comprises multiple 1D convolutional layers, batch normalization, max-pooling, and LSTM blocks with dropout, followed by batch normalization, dense layers, and appropriate activation and loss functions. Fine-tuning techniques prevent overfitting. Evaluations were conducted on 10 fault classes from the CWRU dataset. FTCNNLSTM was benchmarked against four approaches: CNN, LSTM, CNN-LSTM with random forest, and CNN-LSTM with gradient boosting, all using 460 instances. The FTCNNLSTM model, augmented with TabNet, achieved 96% accuracy, outperforming the other methods. This establishes it as a reliable and effective approach for automating bearing fault detection in electric motor-driven systems.
The classification of infrasound events is of considerable importance for improving the capability to identify types of natural disasters. Traditional infrasound classification mainly relies on machine learning algorithms applied after manual feature extraction; however, guaranteeing the effectiveness of the extracted features is difficult. The current trend focuses on using a convolutional neural network to extract features for classification automatically. This approach can extract the spatial features of a signal automatically through convolution kernels; however, infrasound signals, as time series, contain not only spatial information but also temporal information, and these temporal features are also crucial. If only a convolutional neural network is used, the time dependence of the infrasound sequence is missed. Using long short-term memory networks can compensate for the missing time-series features but loses spatial feature information of the infrasound signal. To address these problems, this study proposes a fusion model for infrasound event classification that combines multiscale squeeze-and-excitation, a convolutional neural network, and a bidirectional long short-term memory network. The model automatically extracts temporal and spatial features, adaptively selects features, and fuses the two types of features. Experimental results showed that the classification accuracy of the model was more than 98%, verifying the effectiveness and superiority of the proposed model.
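The squeeze-and-excitation component, which performs the adaptive feature selection mentioned above, can be sketched for 1D feature maps as follows; the channel count and reduction ratio are assumptions.

```python
import torch
import torch.nn as nn

class SEBlock1D(nn.Module):
    """Squeeze-and-excitation over the channels of a 1D feature map: global
    average pooling ("squeeze") followed by a two-layer gate ("excitation")
    that reweights each channel, i.e. adaptive feature selection."""
    def __init__(self, channels, reduction=8):
        super().__init__()
        self.fc = nn.Sequential(
            nn.Linear(channels, channels // reduction), nn.ReLU(),
            nn.Linear(channels // reduction, channels), nn.Sigmoid())

    def forward(self, x):                        # x: (batch, channels, length)
        w = self.fc(x.mean(dim=-1))              # (batch, channels) channel weights
        return x * w.unsqueeze(-1)               # recalibrated feature map

# Reweight CNN features extracted from an infrasound waveform before they
# are passed on to a bidirectional LSTM for temporal modelling.
features = torch.rand(4, 64, 500)                # (batch, channels, time)
recalibrated = SEBlock1D(64)(features)
bilstm = nn.LSTM(64, 128, batch_first=True, bidirectional=True)
temporal, _ = bilstm(recalibrated.transpose(1, 2))
```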
Financial time series prediction, whether for classification or regression, has been a heated research topic over the last decade. While traditional machine learning algorithms have achieved mediocre results, deep learning has largely contributed to the elevation of prediction performance. Currently, an up-to-date review of advanced machine learning techniques for financial time series prediction is still lacking, making it challenging for finance domain experts and relevant practitioners to determine which model potentially performs better, what techniques and components are involved, and how the model can be designed and implemented. This review article provides an overview of techniques, components, and frameworks for financial time series prediction, with an emphasis on state-of-the-art deep learning models in the literature from 2015 to 2023, including stand-alone models like convolutional neural networks (CNN), which are capable of extracting spatial dependencies within data, and long short-term memory (LSTM), which is designed for handling temporal dependencies, as well as hybrid models integrating CNN, LSTM, attention mechanisms (AM), and other techniques. For illustration and comparison purposes, models proposed in recent studies are mapped to relevant elements of a generalized framework comprising input, output, feature extraction, prediction, and related processes. Among the state-of-the-art models, hybrid models like CNN-LSTM and CNN-LSTM-AM have in general been reported to be superior in performance to stand-alone models like the CNN-only model. Some remaining challenges are discussed, including non-friendliness for finance domain experts, delayed prediction, neglect of domain knowledge, lack of standards, and the inability to make real-time and high-frequency predictions. The principal contributions of this paper are to provide a one-stop guide for both academia and industry to review, compare, and summarize technologies and recent advances in this area, to facilitate smooth and informed implementation, and to highlight future research directions.
Credit card fraud is a major issue for financial organizations and individuals. As fraudulent actions become more complex, the demand for better fraud detection systems is rising. Deep learning approaches have shown promise in several fields, including the detection of credit card fraud. However, the efficacy of these models is heavily dependent on the careful selection of appropriate hyperparameters. This paper introduces models that integrate deep learning with hyperparameter tuning techniques to learn the patterns and relationships within credit card transaction data, thereby improving fraud detection. Three deep learning models, an AutoEncoder (AE), a Convolutional Neural Network (CNN), and a Long Short-Term Memory (LSTM) network, are proposed to investigate how hyperparameter adjustment impacts the efficacy of deep learning models used to identify credit card fraud. Experiments conducted on a European credit card fraud dataset using different hyperparameters and the three deep learning models demonstrate that the proposed models achieve a trade-off between detection rate and precision, making them effective at accurately predicting credit card fraud. The results show that the LSTM significantly outperformed the AE and CNN in terms of accuracy (99.2%), detection rate (93.3%), and area under the curve (96.3%). The proposed models surpass those of existing studies and are expected to make a significant contribution to the field of credit card fraud detection.
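As a hedged illustration of the autoencoder variant, the sketch below trains a small AE on (stand-in) legitimate transactions and scores new transactions by reconstruction error; the feature count, bottleneck size, learning rate, and threshold are illustrative hyperparameters of the kind that would be tuned.

```python
import torch
import torch.nn as nn

class TransactionAE(nn.Module):
    """Hypothetical autoencoder over scaled transaction features; trained on
    legitimate transactions only, so a large reconstruction error at test time
    is treated as a fraud signal."""
    def __init__(self, n_features=30, bottleneck=8):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(n_features, 16), nn.ReLU(),
                                     nn.Linear(16, bottleneck), nn.ReLU())
        self.decoder = nn.Sequential(nn.Linear(bottleneck, 16), nn.ReLU(),
                                     nn.Linear(16, n_features))

    def forward(self, x):
        return self.decoder(self.encoder(x))

model = TransactionAE()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)      # lr is a tunable hyperparameter
legit = torch.randn(256, 30)                             # stand-in for normal transactions

for _ in range(5):                                       # a few illustrative epochs
    recon = model(legit)
    loss = nn.functional.mse_loss(recon, legit)
    opt.zero_grad(); loss.backward(); opt.step()

# Score new transactions: per-sample reconstruction error as a fraud score.
new_tx = torch.randn(10, 30)
scores = ((model(new_tx) - new_tx) ** 2).mean(dim=1)
flagged = scores > scores.mean() + 2 * scores.std()      # illustrative threshold
```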
When employing penetration ammunition to strike multi-story buildings, detection methods using acceleration sensors suffer from signal aliasing, while magnetic detection methods are susceptible to interference from ferromagnetic materials, posing challenges in accurately determining the number of penetrated layers. To address this issue, this research proposes a layer-counting method for penetration fuzes that incorporates multi-source information fusion, utilizing both a temporal convolutional network (TCN) and a long short-term memory (LSTM) recurrent network. By leveraging the strengths of these two network structures, the method extracts temporal and high-dimensional features from the multi-source physical field during the penetration process, establishing a relationship between the multi-source physical field and the distance between the fuze and the target plate. A simulation model is developed to simulate the overload and magnetic field of a projectile penetrating multiple layers of target plates, capturing the multi-source physical field signals and their patterns during the penetration process. The analysis reveals that the proposed multi-source fusion layer-counting method reduces errors by 60% and 50% compared with single-overload layer counting and single magnetic-anomaly-signal layer counting, respectively. The model's predictive performance is evaluated under various operating conditions, including different ratios of noise added at random sample positions, penetration speeds, and spacings between target plates. The maximum errors in fuze penetration time predicted by the three modes are 0.08 ms, 0.12 ms, and 0.16 ms, respectively, confirming the robustness of the proposed model. Moreover, the model's predictions indicate that the goodness of fit for large interlayer spacings is superior to that for small interlayer spacings due to the influence of stress waves.
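A single dilated causal convolution block of the kind used in a TCN is sketched below, applied to a fused overload-plus-magnetic input; the channel counts, dilation schedule, and signal length are assumptions, and the LSTM head is only indicated in a comment.

```python
import torch
import torch.nn as nn

class CausalBlock(nn.Module):
    """One dilated causal convolution block of the kind used in a TCN:
    left-side padding keeps the convolution causal, and stacking blocks with
    growing dilation widens the receptive field over the penetration signals."""
    def __init__(self, channels, dilation, k=3):
        super().__init__()
        self.pad = (k - 1) * dilation
        self.conv = nn.Conv1d(channels, channels, k, dilation=dilation)

    def forward(self, x):                        # x: (batch, channels, time)
        y = nn.functional.pad(x, (self.pad, 0))  # pad only on the left (the past)
        return torch.relu(self.conv(y)) + x      # residual connection

# Fused multi-source input: overload (acceleration) and magnetic channels.
signals = torch.rand(2, 2, 4096)                 # (batch, sources, samples)
stem = nn.Conv1d(2, 32, kernel_size=1)
tcn = nn.Sequential(*[CausalBlock(32, d) for d in (1, 2, 4, 8)])
features = tcn(stem(signals))                    # (2, 32, 4096), causal features
# These features could then be passed to an LSTM head, mirroring the
# TCN + LSTM fusion described above.
```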
Funding (battery state-of-health estimation study): National Key Research and Development Program of China (Grant No. 2022YFE0102700); National Natural Science Foundation of China (Grant No. 52102420); research project "Safe Da Batt" (03EMF0409A) funded by the German Federal Ministry of Digital and Transport (BMDV); China Postdoctoral Science Foundation (Grant No. 2023T160085); Sichuan Science and Technology Program (Grant No. 2024NSFSC0938).
Funding (OCR payable-document study): The researchers would like to thank the Deanship of Scientific Research, Qassim University, for funding the publication of this project.
Funding (arrhythmia classification study): Fundamental Research Funds for the Central Universities (Grant No. FRF-TP-19-006A3).
Funding (single-image HDR reconstruction study): This study was supported by the Basic Science Research Program through the National Research Foundation of Korea (NRF), funded by the Ministry of Education (NRF-2018R1D1A1B07049932).
Funding (train delay prediction study): Supported in part by the National Natural Science Foundation of China under Grant 62203468; in part by the Technological Research and Development Program of China State Railway Group Co., Ltd. under Grant Q2023X011; in part by the Young Elite Scientist Sponsorship Program of the China Association for Science and Technology (CAST) under Grant 2022QNRC001; in part by the Youth Talent Program supported by the China Railway Society; and in part by the Research Program of China Academy of Railway Sciences Corporation Limited under Grant 2023YJ112.
Funding (semi-submersible motion response study): The first author gratefully acknowledges the National Natural Science Foundation of China (Grant No. 52301322) and the Jiangsu Provincial Natural Science Foundation (Grant No. BK20220653); the second author gratefully acknowledges the National Science Fund for Distinguished Young Scholars (Grant No. 52025112) and the Key Projects of the National Natural Science Foundation of China (Grant No. 52331011).
Funding (short-term power load forecasting study): Supported by a State Grid Zhejiang Electric Power Co., Ltd. Economic and Technical Research Institute Project (Key Technologies and Empirical Research of Diversified Integrated Operation of User-Side Energy Storage in Power Market Environment, No. 5211JY19000W) and by the National Natural Science Foundation of China (Research on Power Market Management to Promote Large-Scale New Energy Consumption, No. 71804045).
Funding (rail transit passenger flow study): The Major Projects of the National Social Science Fund of China (21&ZD127).
Funding (deep excavation wall deflection study): Supported by the National Natural Science Foundation of China (Grant No. 42307218); the Foundation of the Key Laboratory of Soft Soils and Geoenvironmental Engineering (Zhejiang University), Ministry of Education (Grant No. 2022P08); and the Natural Science Foundation of Zhejiang Province (Grant No. LTZ21E080001).
Funding (lithological mapping study): Supported by the National Natural Science Foundation of China (Nos. 41972303 and 42102332) and the Natural Science Foundation of Hubei Province, China (Nos. 2023AFA001 and 2023AFD232).
Funding (bearing fault diagnosis study): Supported by King Abdulaziz University, Deanship of Scientific Research, Jeddah, Saudi Arabia, under grant no. GWV-8053-2022.
Funding (infrasound event classification study): Supported by the Shaanxi Province Natural Science Basic Research Plan Project (2023-JC-YB-244).
Funding (financial time series prediction review): Funded by the Natural Science Foundation of Fujian Province, China (Grant No. 2022J05291) and the Xiamen Scientific Research Funding for Overseas Chinese Scholars.