Journal Articles
67 articles found
1. State-of-health estimation for fast-charging lithium-ion batteries based on a short charge curve using graph convolutional and long short-term memory networks
Authors: Yvxin He, Zhongwei Deng, Jue Chen, Weihan Li, Jingjing Zhou, Fei Xiang, Xiaosong Hu. Journal of Energy Chemistry (SCIE, EI, CAS, CSCD), 2024, Issue 11, pp. 1-11.
Abstract: A fast-charging policy is widely employed to alleviate the inconvenience caused by the extended charging time of electric vehicles. However, fast charging exacerbates battery degradation and shortens battery lifespan. In addition, there is still a lack of tailored health estimations for fast-charging batteries; most existing methods are applicable only at lower charging rates. This paper proposes a novel method for estimating the health of lithium-ion batteries, tailored to multi-stage constant current-constant voltage fast-charging policies. Initially, short charging segments are extracted by monitoring current switches, followed by deriving voltage sequences using interpolation techniques. Subsequently, a graph generation layer transforms the voltage sequence into graphical data. Furthermore, the integration of a graph convolutional network with a long short-term memory network enables the extraction of inter-node message-transmission information, capturing the key local and temporal features of the battery degradation process. Finally, the method is validated using aging data from 185 cells and 81 distinct fast-charging policies. A 4-minute charging duration achieves a balance between high state-of-health estimation accuracy and low data requirements, with a mean absolute error of 0.34% and a root mean square error of 0.66%.
Keywords: lithium-ion battery; state-of-health estimation; feature extraction; graph convolutional network; long short-term memory network
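The preprocessing described above (resampling a short, irregularly logged charge segment onto a fixed-length voltage sequence) and the reported error metrics can be sketched as follows. The function names, the 8-point grid, and the sample data are illustrative assumptions, not taken from the paper:

```python
import numpy as np

def resample_voltage(time_s, voltage_v, n_points=64):
    """Interpolate an irregularly sampled charge segment onto a
    fixed-length, evenly spaced time grid (hypothetical helper)."""
    grid = np.linspace(time_s[0], time_s[-1], n_points)
    return np.interp(grid, time_s, voltage_v)

def mae_rmse(y_true, y_pred):
    """Mean absolute error and root mean square error."""
    err = np.asarray(y_true) - np.asarray(y_pred)
    return np.abs(err).mean(), np.sqrt((err ** 2).mean())

# Example: a 4-minute segment logged at uneven intervals (made-up values)
t = np.array([0.0, 30.0, 70.0, 130.0, 180.0, 240.0])
v = np.array([3.60, 3.68, 3.74, 3.81, 3.86, 3.90])
seq = resample_voltage(t, v, n_points=8)
```

A fixed-length sequence like `seq` is what a downstream graph-generation layer would consume; the metrics match the MAE/RMSE figures the abstract reports.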
2. Audiovisual speech recognition based on a deep convolutional neural network
Authors: Shashidhar Rudregowda, Sudarshan Patilkulkarni, Vinayakumar Ravi, Gururaj H.L., Moez Krichen. Data Science and Management, 2024, Issue 1, pp. 25-34.
Abstract: Audiovisual speech recognition is an emerging research topic. Lipreading is the recognition of what someone is saying using visual information, primarily lip movements. In this study, we created a custom dataset for Indian English linguistics and categorized it into three main categories: (1) audio recognition, (2) visual feature extraction, and (3) combined audio and visual recognition. Audio features were extracted using the mel-frequency cepstral coefficient, and classification was performed using a one-dimensional convolutional neural network. Visual feature extraction uses Dlib, and visual speech is then classified using a long short-term memory recurrent neural network. Finally, integration was performed using a deep convolutional network. The audio speech of Indian English was successfully recognized with training and testing accuracies of 93.67% and 91.53%, respectively, after 200 epochs. The training accuracy for visual speech recognition on the Indian English dataset was 77.48% and the test accuracy was 76.19% after 60 epochs. After integration, the training and testing accuracies of audiovisual speech recognition on the Indian English dataset were 94.67% and 91.75%, respectively.
Keywords: audiovisual speech recognition; custom dataset; 1D convolutional neural network (CNN); deep CNN (DCNN); long short-term memory (LSTM); lipreading; Dlib; mel-frequency cepstral coefficient (MFCC)
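MFCC extraction starts by splitting the waveform into short overlapping frames before any spectral analysis. A minimal sketch of that framing step (the frame length, hop size, and stand-in signal are assumptions for illustration; real MFCC pipelines add windowing, FFT, mel filterbanks, and a DCT):

```python
import numpy as np

def frame_signal(x, frame_len, hop):
    """Split a 1-D waveform into overlapping frames, the first step
    of an MFCC pipeline (illustrative only)."""
    n = 1 + (len(x) - frame_len) // hop
    return np.stack([x[i * hop : i * hop + frame_len] for i in range(n)])

x = np.arange(16, dtype=float)   # stand-in for 16 audio samples
frames = frame_signal(x, frame_len=8, hop=4)
```

Each row of `frames` would then feed the spectral stages that produce one MFCC vector per frame, giving the 1D CNN a time-by-coefficient input.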
3. Recurrent Convolutional Neural Network MSER-Based Approach for Payable Document Processing (cited 1)
Authors: Suliman Aladhadh, Hidayat Ur Rehman, Ali Mustafa Qamar, Rehan Ullah Khan. Computers, Materials & Continua (SCIE, EI), 2021, Issue 12, pp. 3399-3411.
Abstract: A tremendous number of vendor invoices are generated in the corporate sector. To automate the manual data entry in payable documents, highly accurate optical character recognition (OCR) is required. This paper proposes an end-to-end OCR system that performs both localization and recognition and serves as a single unit to automate the processing of payable documents such as cheques and cash disbursements. For text localization, the maximally stable extremal region (MSER) is used, which extracts a word or digit chunk from an invoice. This chunk is then passed to the deep learning model, which performs text recognition. The deep learning model combines convolutional neural networks and long short-term memory (LSTM): the convolutional layers extract features, which are fed to the LSTM. The model integrates feature extraction, sequence modeling, and transcription into a unified network. It handles sequences of unconstrained length, independent of character segmentation or horizontal scale normalization. Furthermore, it applies to both lexicon-free and lexicon-based text recognition and produces a comparatively small model that can be implemented in practical applications. The overall superior performance in the experimental evaluation demonstrates the usefulness of the proposed model. The model is thus generic and can be used for other similar recognition scenarios.
Keywords: character recognition; text spotting; long short-term memory; recurrent convolutional neural networks
4. Classification of Arrhythmia Based on Convolutional Neural Networks and Encoder-Decoder Model
Authors: Jian Liu, Xiaodong Xia, Chunyang Han, Jiao Hui, Jim Feng. Computers, Materials & Continua (SCIE, EI), 2022, Issue 10, pp. 265-278.
Abstract: As a common and high-risk disease, heart disease seriously threatens people's health. At the same time, in the era of the Internet of Things (IoT), smart medical devices have strong practical significance for medical workers and patients because of their ability to assist in the diagnosis of diseases. Therefore, research on real-time diagnosis and classification algorithms for arrhythmia can help improve diagnostic efficiency. In this paper, we design an automatic arrhythmia classification model based on a convolutional neural network (CNN) and an encoder-decoder model. The model uses long short-term memory (LSTM) to account for the influence of time-series features on classification results. It is trained and tested on the MIT-BIH arrhythmia database. In addition, a generative adversarial network (GAN) is adopted as a data equalization method to address the class imbalance problem. The simulation results show that, for inter-patient arrhythmia classification, the hybrid model combining the CNN and the encoder-decoder model achieves the best classification accuracy, reaching 94.05%. It is particularly advantageous for classifying supraventricular ectopic beats (class S) and fusion beats (class F).
Keywords: electroencephalography; convolutional neural network; long short-term memory; encoder-decoder model; generative adversarial network
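The data-equalization goal mentioned above (the paper synthesizes minority-class beats with a GAN) can be illustrated with the simplest baseline, random oversampling; this sketch is not the paper's method, only the idea of balancing class counts, and all names are hypothetical:

```python
import random

def oversample(records, labels, seed=0):
    """Naive random oversampling: duplicate minority-class samples
    until every class matches the majority count. (A GAN would
    synthesize new samples instead of duplicating.)"""
    rng = random.Random(seed)
    by_class = {}
    for rec, lab in zip(records, labels):
        by_class.setdefault(lab, []).append(rec)
    target = max(len(v) for v in by_class.values())
    out = []
    for lab, recs in sorted(by_class.items()):
        picks = recs + [rng.choice(recs) for _ in range(target - len(recs))]
        out.extend((r, lab) for r in picks)
    return out

# Toy beat dataset: 3 normal beats, 1 supraventricular beat
balanced = oversample(["n1", "n2", "n3", "s1"], ["N", "N", "N", "S"])
```

After balancing, each class contributes equally to the training loss, which is what makes minority classes such as S and F learnable.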
5. Use of Local Region Maps on Convolutional LSTM for Single-Image HDR Reconstruction
Authors: Seungwook Oh, GyeongIk Shin, Hyunki Hong. Computers, Materials & Continua (SCIE, EI), 2022, Issue 6, pp. 4555-4572.
Abstract: Low dynamic range (LDR) images captured by consumer cameras have a limited luminance range. Because the conventional method for generating high dynamic range (HDR) images involves merging multiple-exposure LDR images of the same scene (assuming a stationary scene), we introduce a learning-based model for single-image HDR reconstruction. An input LDR image is sequentially segmented into local region maps based on the cumulative histogram of the input brightness distribution. Using the local region maps, SParam-Net estimates the parameters of an inverse tone mapping function to generate a pseudo-HDR image. The segmented region maps are processed as input sequences to a long short-term memory network. Finally, a fast super-resolution convolutional neural network is used for HDR image reconstruction. The proposed method was trained and tested on the HDR-Real, LDR-HDR-pair, and HDR-Eye datasets. The experimental results revealed that HDR images can be generated more reliably than with contemporary end-to-end approaches.
Keywords: low dynamic range; high dynamic range; deep learning; convolutional long short-term memory; inverse tone mapping function
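Segmenting an image into brightness-based local region maps via the cumulative histogram can be sketched by splitting pixels at brightness quantiles, so each region holds roughly equal pixel counts. This is a simplified reading of the paper's segmentation step; the region count and quantile scheme are assumptions:

```python
import numpy as np

def region_maps(img, n_regions=4):
    """Partition pixels into n_regions brightness bands with roughly
    equal pixel counts, using quantiles of the brightness distribution
    (equivalently, equal steps of the cumulative histogram)."""
    qs = np.quantile(img, np.linspace(0, 1, n_regions + 1))
    qs[-1] += 1e-6                       # include the max pixel in the last band
    maps = [(img >= lo) & (img < hi) for lo, hi in zip(qs[:-1], qs[1:])]
    return np.stack(maps)                # (n_regions, H, W) boolean masks

img = np.linspace(0.0, 1.0, 16).reshape(4, 4)   # toy 4x4 brightness image
maps = region_maps(img, n_regions=4)
```

The resulting stack of masks is the kind of ordered sequence (dark to bright) that can be fed step by step into an LSTM.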
6. Real-Time Speech Enhancement Based on Convolutional Recurrent Neural Network
Authors: S. Girirajan, A. Pandian. Intelligent Automation & Soft Computing (SCIE), 2023, Issue 2, pp. 1987-2001.
Abstract: Speech enhancement is the task of taking a noisy speech input and producing an enhanced speech output. In recent years, the need for speech enhancement has increased due to challenges arising in applications such as hearing aids, automatic speech recognition (ASR), and mobile speech communication systems. Most speech enhancement research has been carried out for English, Chinese, and other European languages; only a few works address speech enhancement in Indian regional languages. In this paper, we propose a two-fold architecture to perform speech enhancement for Tamil speech signals based on a convolutional recurrent network (CRN), addressing real-time, single-channel speech enhancement. In the first stage, a mask-based long short-term memory (LSTM) network is used for noise suppression along with a loss function, and in the second stage, a convolutional encoder-decoder (CED) is used for speech restoration. The proposed model is evaluated on various speakers and noisy environments, including babble noise, car noise, and white Gaussian noise. The proposed CRN model improves speech quality by 0.1 points compared with the LSTM base model and also requires fewer training parameters. The performance of the proposed model is outstanding even at low signal-to-noise ratios (SNR).
Keywords: speech enhancement; convolutional encoder-decoder; long short-term memory; noise suppression; speech restoration
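Mask-based noise suppression means predicting, per time-frequency bin, what fraction of the noisy magnitude is speech, then multiplying the noisy spectrum by that mask. The sketch below computes the oracle ratio mask directly from known clean and noise magnitudes; in the paper's setting an LSTM would be trained to predict such a mask from the noisy input alone, and the toy values are assumptions:

```python
import numpy as np

def ideal_ratio_mask(clean_mag, noise_mag):
    """Oracle ratio mask: the speech fraction of each bin."""
    return clean_mag / (clean_mag + noise_mag + 1e-12)

clean = np.array([1.0, 0.5, 0.0])   # toy magnitude spectrum bins
noise = np.array([0.0, 0.5, 1.0])
noisy = clean + noise
enhanced = ideal_ratio_mask(clean, noise) * noisy   # recovers ~clean
```

Bins dominated by speech pass through almost unchanged, while noise-only bins are attenuated toward zero, which is the whole suppression effect in one multiply.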
7. Short-term train arrival delay prediction: a data-driven approach
Authors: Qingyun Fu, Shuxin Ding, Tao Zhang, Rongsheng Wang, Ping Hu, Cunlai Pu. Railway Sciences, 2024, Issue 4, pp. 514-529.
Abstract: Purpose — To optimize train operations, dispatchers currently rely on experience for quick adjustments when delays occur. However, delay predictions often involve imprecise shifts based on known delay times. Real-time and accurate train delay predictions, facilitated by data-driven neural network models, can significantly reduce dispatcher stress and improve adjustment plans. Leveraging current train operation data, these models enable swift and precise predictions, addressing challenges posed by train delays in high-speed rail networks during unforeseen events.
Design/methodology/approach — This paper proposes CBLA-net, a neural network architecture for predicting late arrival times. It combines a CNN, Bi-LSTM, and attention mechanisms to extract features, handle time-series data, and enhance information utilization. Trained on operational data from the Beijing-Tianjin line, it predicts the late arrival time of a target train at the next station using multidimensional input data from the target and preceding trains.
Findings — This study evaluates the model's predictive performance using two data approaches: one considering full data and another focusing only on late arrivals. Results show precise and rapid predictions. Training with full data achieves a MAE of approximately 0.54 min and an RMSE of 0.65 min, surpassing the model trained solely on delay data (MAE about 1.02 min, RMSE about 1.52 min). Despite superior overall performance with full data, the model excels at predicting delays exceeding 15 minutes when trained exclusively on late arrivals. For enhanced adaptability to real-world train operations, training with full data is recommended.
Originality/value — This paper introduces a novel neural network model, CBLA-net, for predicting train delay times. It innovatively compares and analyzes the model's performance using both full-data and delay-data formats. Additionally, the evaluation of the network's predictive capabilities considers different scenarios, providing a comprehensive demonstration of the model's predictive performance.
Keywords: train delay prediction; intelligent dispatching command; deep learning; convolutional neural network; long short-term memory; attention mechanism
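Turning station-by-station delay records into supervised training pairs is the standard sliding-window construction: delays at the previous few stations become the features, and the next delay is the target. The window length and toy series below are assumptions, not the paper's actual feature design (which also draws on preceding trains):

```python
import numpy as np

def make_windows(series, n_lags):
    """Build (features, target) pairs from a delay time series:
    the delays at the previous n_lags stops predict the next one."""
    X = np.stack([series[i : i + n_lags] for i in range(len(series) - n_lags)])
    y = series[n_lags:]
    return X, y

delays = np.array([0.0, 1.0, 3.0, 2.0, 5.0, 4.0])  # minutes late, per station
X, y = make_windows(delays, n_lags=3)
```

Each row of `X` is one training example for a CNN/Bi-LSTM regressor; stacking extra columns (preceding-train delays, dwell times) extends the same construction to multidimensional inputs.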
8. Study of a Hybrid Deep Learning Method for Forecasting the Short-Term Motion Responses of a Semi-Submersible
Authors: Xu Sheng, Ji Chun-yan. China Ocean Engineering (CSCD), 2024, Issue 6, pp. 917-931.
Abstract: Accurately predicting motion responses is a crucial component of the design process for floating offshore structures. This study introduces a hybrid model that integrates a convolutional neural network (CNN), a bidirectional long short-term memory (BiLSTM) neural network, and an attention mechanism for forecasting the short-term motion responses of a semi-submersible. First, the motions are processed through the CNN for feature extraction. The extracted features are subsequently utilized by the BiLSTM network to forecast future motions. To enhance the predictive capability of the neural networks, an attention mechanism is integrated. In addition to the hybrid model, the BiLSTM is independently employed to forecast the motion responses of the semi-submersible, serving as benchmark results for comparison. Furthermore, both 1D and 2D convolutions are tested to examine the influence of the convolutional dimensionality on the predicted results. The results demonstrate that the hybrid 1D CNN-BiLSTM network with an attention mechanism outperforms all other models in accurately predicting motion responses.
Keywords: short-term motion responses; convolutional neural network; bidirectional long short-term memory neural network; attention mechanism; hybrid model; multi-step prediction; semi-submersible
9. Visualization-based prediction of dendritic copper growth in electrochemical cells using convolutional long short-term memory (cited 1)
Authors: Roshan Kumar, Trina Dhara, Han Hu, Monojit Chakraborty. Energy and AI, 2022, Issue 4, pp. 149-160.
Abstract: Electrodeposition in electrochemical cells is one of the leading causes of their performance deterioration. Predicting electrodeposition growth demands a good understanding of the complex physics involved, which can lead to the fabrication of a probabilistic mathematical model. As an alternative, a convolutional long short-term memory architecture-based image analysis approach is presented herein. This technique can predict the electrodeposition growth of the electrolytes without prior detailed knowledge of the system. Images of the electrodeposition captured during the experiments are used to train and test the model. To assess model accuracy, a pixel-level comparison between the expected and predicted images, the percentage mean squared error, the absolute percentage error, and the pattern density of the electrodeposit are investigated. The randomness of the electrodeposition growth is outlined by investigating the fractal dimension and the interfacial length of the electrodeposits. The trained model's predictions show significant agreement with all the experimentally obtained relevant parameters. It is expected that this deep learning-based approach to predicting random electrodeposition growth will be of immense help for designing and optimizing relevant experimental schemes in the near future without performing multiple experiments.
Keywords: electrodeposition; electrochemical cell; deep learning; data-driven modelling; convolutional long short-term memory
10. Dynamic Hand Gesture Recognition Based on Short-Term Sampling Neural Networks (cited 12)
Authors: Wenjin Zhang, Jiacun Wang, Fangping Lan. IEEE/CAA Journal of Automatica Sinica (SCIE, EI, CSCD), 2021, Issue 1, pp. 110-120.
Abstract: Hand gestures are a natural way for human-robot interaction. Vision-based dynamic hand gesture recognition has become a hot research topic due to its various applications. This paper presents a novel deep learning network for hand gesture recognition. The network integrates several well-proven modules to learn both short-term and long-term features from video inputs while avoiding intensive computation. To learn short-term features, each video input is segmented into a fixed number of frame groups. A frame is randomly selected from each group and represented as an RGB image as well as an optical flow snapshot. These two entities are fused and fed into a convolutional neural network (ConvNet) for feature extraction. The ConvNets for all groups share parameters. To learn long-term features, the outputs from all ConvNets are fed into a long short-term memory (LSTM) network, which predicts the final classification result. The new model has been tested on two popular hand gesture datasets, the Jester dataset and the Nvidia dataset. Compared with other models, our model produced very competitive results. The robustness of the new model has also been demonstrated on an augmented dataset with enhanced diversity of hand gestures.
Keywords: convolutional neural network (ConvNet); hand gesture recognition; long short-term memory (LSTM) network; short-term sampling; transfer learning
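The short-term sampling scheme above (split a video into a fixed number of frame groups, draw one random frame per group) reduces to a few lines of index arithmetic. The group count and frame count below are assumptions for illustration:

```python
import random

def sample_frames(n_frames, n_groups, seed=0):
    """Split frame indices into n_groups contiguous groups and draw
    one random index from each group."""
    rng = random.Random(seed)
    bounds = [round(i * n_frames / n_groups) for i in range(n_groups + 1)]
    return [rng.randrange(bounds[i], bounds[i + 1]) for i in range(n_groups)]

# e.g., a 40-frame clip sampled into 8 representative frames
picks = sample_frames(n_frames=40, n_groups=8)
```

Because one frame per group is drawn regardless of clip length, every video yields a fixed-size input for the parameter-sharing ConvNets, and the random draw acts as temporal augmentation across epochs.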
11. Deep Learning Network for Energy Storage Scheduling in Power Market Environment Short-Term Load Forecasting Model
Authors: Yunlei Zhang, Ruifeng Cao, Danhuang Dong, Sha Peng, Ruoyun Du, Xiaomin Xu. Energy Engineering (EI), 2022, Issue 5, pp. 1829-1841.
Abstract: In the electricity market, fluctuations in real-time prices are unstable, and changes in short-term load are determined by many factors. By studying the timing of charging and discharging, as well as the economic benefits of energy storage participating in the power market, this paper treats energy storage scheduling as one factor affecting short-term power load, which shapes the short-term load time series along with time-of-use price, holidays, and temperature. A deep learning network is used to predict the short-term load: a convolutional neural network (CNN) extracts the features, and a long short-term memory (LSTM) network learns the temporal characteristics of the load values, which can effectively improve prediction accuracy. Taking the load data of a certain region as an example, the CNN-LSTM prediction model is compared with a single-LSTM prediction model. The experimental results show that the CNN-LSTM deep learning network, with energy storage participating in dispatching, achieves high prediction accuracy for short-term power load forecasting.
Keywords: energy storage scheduling; short-term load forecasting; deep learning network; convolutional neural network (CNN); long short-term memory network (LSTM)
12. Hybrid Model for Short-Term Passenger Flow Prediction in Rail Transit
Authors: Yinghua Song, Hairong Lyu, Wei Zhang. Journal on Big Data, 2023, Issue 1, pp. 19-40.
Abstract: A precise and timely forecast of short-term rail transit passenger flow provides data support for traffic management and operation, assisting rail operators in efficiently allocating resources and relieving pressure on passenger safety and operations in time. First, the passenger flow sequences in the study are decomposed using variational mode decomposition (VMD) for noise reduction. Objective environment features are then added to the characteristic factors that affect passenger flow. The target station serves as an additional spatial feature and is mined concurrently using the KNN algorithm. By setting up BP, CNN, and LSTM reference experiments, it is shown that the hybrid model VMD-CLSMT has higher prediction accuracy. All models' second-order prediction effects are superior to their first-order effects, showing that the residual network can significantly raise model prediction accuracy. Additionally, the efficacy of the supplementary objective environment features is confirmed.
Keywords: short-term passenger flow forecast; variational mode decomposition; long short-term memory; convolutional neural network; residual network
13. DeepBio: A Deep CNN and Bi-LSTM Learning for Person Identification Using Ear Biometrics (cited 1)
Authors: Anshul Mahajan, Sunil K. Singla. Computer Modeling in Engineering & Sciences (SCIE, EI), 2024, Issue 11, pp. 1623-1649.
Abstract: The identification of individuals through ear images is a prominent area of study in the biometric sector. Facial recognition systems faced challenges during the COVID-19 pandemic due to mask-wearing, prompting the exploration of supplementary biometric measures such as ear biometrics. This research proposes a deep learning (DL) framework, termed DeepBio, that uses ear biometrics for human identification. It employs two DL models and five datasets: IIT Delhi (IITD-I and IITD-II), annotated web images (AWI), mathematical analysis of images (AMI), and EARVN1. Data augmentation techniques such as flipping, translation, and Gaussian noise are applied to enhance model performance and mitigate overfitting. Feature extraction and human identification are conducted using a hybrid approach combining convolutional neural networks (CNN) and bidirectional long short-term memory (Bi-LSTM). The DeepBio framework achieves high recognition rates of 97.97%, 99.37%, 98.57%, 94.5%, and 96.87% on the respective datasets. Comparative analysis with existing techniques demonstrates improvements of 0.41%, 0.47%, 12%, and 9.75% on the IITD-II, AMI, AWE, and EARVN1 datasets, respectively.
Keywords: data augmentation; convolutional neural network; bidirectional long short-term memory; deep learning; ear biometrics
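The three augmentation techniques named above (flipping, translation, Gaussian noise) can each be expressed as a one-line array operation. The shift amount, noise level, and toy image are assumptions for illustration:

```python
import numpy as np

def augment(img, shift=1, noise_std=0.01, seed=0):
    """Return three augmented variants of a 2-D image: a horizontal
    flip, a right translation with zero-fill, and additive Gaussian
    noise."""
    rng = np.random.default_rng(seed)
    flipped = img[:, ::-1]
    translated = np.zeros_like(img)
    translated[:, shift:] = img[:, :-shift]
    noisy = img + rng.normal(0.0, noise_std, img.shape)
    return flipped, translated, noisy

img = np.arange(9, dtype=float).reshape(3, 3)   # toy 3x3 "ear image"
flipped, translated, noisy = augment(img)
```

Applying such label-preserving transforms multiplies the effective training set, which is how augmentation mitigates overfitting on small ear datasets.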
14. A spatiotemporal deep learning method for excavation-induced wall deflections (cited 1)
Authors: Yuanqin Tao, Shaoxiang Zeng, Honglei Sun, Yuanqiang Cai, Jinzhang Zhang, Xiaodong Pan. Journal of Rock Mechanics and Geotechnical Engineering (SCIE, CSCD), 2024, Issue 8, pp. 3327-3338.
Abstract: Data-driven approaches such as neural networks are increasingly used for deep excavations due to the growing amount of monitoring data available in practical projects. However, most neural network models use data from only a single monitoring point and neglect the spatial relationships between multiple monitoring points. Moreover, most models lack the flexibility to provide predictions for multiple days after monitoring activity. This study proposes a sequence-to-sequence (seq2seq) two-dimensional (2D) convolutional long short-term memory neural network (S2SCL2D) for predicting the spatiotemporal wall deflections induced by deep excavations. The model utilizes data from all monitoring points on the entire wall and extracts spatiotemporal features by combining 2D convolutional layers and long short-term memory (LSTM) layers. The S2SCL2D model achieves long-term prediction of wall deflections through a recursive seq2seq structure. The excavation depth, which has a significant impact on wall deflections, is also incorporated using a feature fusion method. An excavation project in Hangzhou, China is used to illustrate the proposed model. The results demonstrate that the S2SCL2D model has superior prediction accuracy and robustness compared with the LSTM and S2SCL1D (one-dimensional) models. The prediction model demonstrates strong generalizability when applied to an adjacent excavation. Based on the long-term prediction results, practitioners can plan and allocate resources in advance to address potential engineering issues.
Keywords: braced excavation; wall deflections; deep learning; convolutional layer; long short-term memory (LSTM); sequence to sequence (seq2seq)
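The recursive seq2seq idea above (reach a multi-day horizon by feeding each one-step prediction back into the input window) is model-agnostic and can be sketched with a stub predictor. The mean-of-window stub and the toy history are assumptions; the actual model replaces `one_step_model` with the trained network:

```python
def predict_recursive(history, one_step_model, horizon):
    """Recursive multi-step forecasting: append each one-step
    prediction to the input window and repeat until the horizon."""
    window = list(history)
    out = []
    for _ in range(horizon):
        nxt = one_step_model(window)
        out.append(nxt)
        window = window[1:] + [nxt]   # slide the window forward
    return out

# Stub model: predicts the mean of the current window.
mean_model = lambda w: sum(w) / len(w)
forecast = predict_recursive([1.0, 2.0, 3.0], mean_model, horizon=3)
```

The trade-off of this design is that prediction errors compound across steps, which is why the paper emphasizes robustness of the long-term predictions.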
15. Leucogranite mapping via convolutional recurrent neural networks and geochemical survey data in the Himalayan orogen
Authors: Ziye Wang, Tong Li, Renguang Zuo. Geoscience Frontiers (SCIE, CAS, CSCD), 2024, Issue 1, pp. 175-186.
Abstract: Geochemical survey data analysis is recognized as a practical and feasible way to perform lithological mapping in support of mineral exploration. Among available approaches, recent methodological advances have focused on deep learning algorithms, which can learn and extract information directly from geochemical survey data through multi-level networks and output end-to-end classifications. Accordingly, this study developed a lithological mapping framework that jointly applies a convolutional neural network (CNN) and a long short-term memory (LSTM) network. The CNN-LSTM model relies on correlation extraction from the CNN layers and coupling-interaction learning from the LSTM layers. This hybrid approach was demonstrated by mapping leucogranites in the Himalayan orogen based on stream sediment geochemical survey data, where the targeted leucogranites are expected to be potential resources of rare metals such as Li, Be, and W mineralization. Three comparative case studies were carried out from both visual and quantitative perspectives to illustrate the superiority of the proposed model. A guided spatial distribution map of leucogranites in the Himalayan orogen, divided into high-, moderate-, and low-potential areas, was delineated by the success-rate curve, which further improves the efficiency of identifying unmapped leucogranites through geological mapping. In light of these results, this study provides an alternative solution for lithological mapping using geochemical survey data at a regional scale and reduces the decision-making risk associated with mineral exploration.
Keywords: lithological mapping; deep learning; convolutional neural network; long short-term memory; leucogranites
16. Optimizing Bearing Fault Detection: CNN-LSTM with Attentive TabNet for Electric Motor Systems
Authors: Alaa U. Khawaja, Ahmad Shaf, Faisal Al Thobiani, Tariq Ali, Muhammad Irfan, Aqib Rehman Pirzada, Unza Shahkeel. Computer Modeling in Engineering & Sciences (SCIE, EI), 2024, Issue 12, pp. 2399-2420.
Abstract: Electric motor-driven systems are core components across industries, yet they are susceptible to bearing faults. Manual fault diagnosis poses safety risks and economic instability, necessitating an automated approach. This study proposes FTCNNLSTM (Fine-Tuned TabNet Convolutional Neural Network Long Short-Term Memory), an algorithm combining convolutional neural networks, long short-term memory networks, and attentive interpretable tabular learning. The model preprocesses the CWRU (Case Western Reserve University) bearing dataset using segmentation, normalization, feature scaling, and label encoding. Its architecture comprises multiple 1D convolutional layers, batch normalization, max-pooling, and LSTM blocks with dropout, followed by batch normalization, dense layers, and appropriate activation and loss functions. Fine-tuning techniques prevent overfitting. Evaluations were conducted on 10 fault classes from the CWRU dataset. FTCNNLSTM was benchmarked against four approaches: CNN, LSTM, CNN-LSTM with random forest, and CNN-LSTM with gradient boosting, all using 460 instances. The FTCNNLSTM model, augmented with TabNet, achieved 96% accuracy, outperforming the other methods. This establishes it as a reliable and effective approach for automating bearing fault detection in electric motor-driven systems.
Keywords: electric motor-driven systems; bearing faults; automation; fine-tuned convolutional neural network; long short-term memory; fault detection
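The preprocessing steps listed above (segmentation, normalization/scaling, label encoding) can be sketched in a few lines. Segment length, scaling scheme, and the toy signal and labels are assumptions, not the paper's exact settings:

```python
import numpy as np

def preprocess(signal, seg_len, labels):
    """Split a 1-D vibration signal into fixed-length segments,
    min-max scale each segment to [0, 1], and integer-encode the
    fault labels (illustrative pipeline sketch)."""
    n = len(signal) // seg_len
    segs = np.asarray(signal[: n * seg_len], dtype=float).reshape(n, seg_len)
    lo = segs.min(axis=1, keepdims=True)
    hi = segs.max(axis=1, keepdims=True)
    segs = (segs - lo) / (hi - lo + 1e-12)
    classes = sorted(set(labels))
    encoded = [classes.index(l) for l in labels]
    return segs, encoded

segs, enc = preprocess([0, 2, 4, 1, 3, 5, 9, 9], seg_len=4,
                       labels=["inner", "ball", "inner"])
```

Fixed-length, scaled segments are what the 1D convolutional layers consume, and the integer labels feed the classification loss.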
Infrasound Event Classification Fusion Model Based on Multiscale SE-CNN and BiLSTM
17
作者 Hongru Li Xihai Li +3 位作者 Xiaofeng Tan Chao Niu Jihao Liu Tianyou Liu 《Applied Geophysics》 SCIE CSCD 2024年第3期579-592,620,共15页
The classification of infrasound events has considerable importance in improving the capability to identify the types of natural disasters.The traditional infrasound classification mainly relies on machine learning al... The classification of infrasound events has considerable importance in improving the capability to identify the types of natural disasters.The traditional infrasound classification mainly relies on machine learning algorithms after artificial feature extraction.However,guaranteeing the effectiveness of the extracted features is difficult.The current trend focuses on using a convolution neural network to automatically extract features for classification.This method can be used to extract signal spatial features automatically through a convolution kernel;however,infrasound signals contain not only spatial information but also temporal information when used as a time series.These extracted temporal features are also crucial.If only a convolution neural network is used,then the time dependence of the infrasound sequence will be missed.Using long short-term memory networks can compensate for the missing time-series features but induces spatial feature information loss of the infrasound signal.A multiscale squeeze excitation–convolution neural network–bidirectional long short-term memory network infrasound event classification fusion model is proposed in this study to address these problems.This model automatically extracted temporal and spatial features,adaptively selected features,and also realized the fusion of the two types of features.Experimental results showed that the classification accuracy of the model was more than 98%,thus verifying the effectiveness and superiority of the proposed model. 展开更多
Keywords: infrasound classification; channel attention; convolution neural network; bidirectional long short-term memory network; multiscale feature fusion
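The squeeze-excitation channel attention this entry builds on can be illustrated with a minimal NumPy sketch. The feature-map shape, reduction ratio, and random weights below are illustrative assumptions, not the paper's actual configuration:

```python
import numpy as np

def squeeze_excitation(x, w1, w2):
    """Channel attention over a (channels, time) feature map.
    Squeeze: global average pool per channel; Excitation: two
    dense layers ending in a sigmoid gate; Scale: reweight channels."""
    z = x.mean(axis=1)                    # squeeze -> (C,)
    s = np.maximum(w1 @ z, 0.0)           # reduction layer + ReLU
    s = 1.0 / (1.0 + np.exp(-(w2 @ s)))   # expansion layer + sigmoid gate in (0, 1)
    return x * s[:, None]                 # reweight each channel

rng = np.random.default_rng(0)
C, T, r = 8, 100, 2                       # channels, time steps, reduction ratio
x = rng.standard_normal((C, T))
w1 = rng.standard_normal((C // r, C))
w2 = rng.standard_normal((C, C // r))
y = squeeze_excitation(x, w1, w2)
print(y.shape)
```

Because the gate is a per-channel sigmoid, each output channel is the input channel scaled by a factor in (0, 1), which is what lets the network adaptively emphasize informative channels.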
Deep Learning for Financial Time Series Prediction: A State-of-the-Art Review of Standalone and Hybrid Models
18
Authors: Weisi Chen, Walayat Hussain, Francesco Cauteruccio, Xu Zhang. Computer Modeling in Engineering & Sciences (SCIE, EI), 2024, Issue 4, pp. 187-224 (38 pages)
Financial time series prediction, whether for classification or regression, has been a heated research topic over the last decade. While traditional machine learning algorithms have experienced mediocre results, deep learning has largely contributed to the elevation of the prediction performance. Currently, the most up-to-date review of advanced machine learning techniques for financial time series prediction is still lacking, making it challenging for finance domain experts and relevant practitioners to determine which model potentially performs better, what techniques and components are involved, and how the model can be designed and implemented. This review article provides an overview of techniques, components and frameworks for financial time series prediction, with an emphasis on state-of-the-art deep learning models in the literature from 2015 to 2023, including standalone models like convolutional neural networks (CNN) that are capable of extracting spatial dependencies within data, and long short-term memory (LSTM) that is designed for handling temporal dependencies; and hybrid models integrating CNN, LSTM, attention mechanism (AM) and other techniques. For illustration and comparison purposes, models proposed in recent studies are mapped to relevant elements of a generalized framework comprised of input, output, feature extraction, prediction, and related processes. Among the state-of-the-art models, hybrid models like CNN-LSTM and CNN-LSTM-AM in general have been reported superior in performance to stand-alone models like the CNN-only model. Some remaining challenges have been discussed, including non-friendliness for finance domain experts, delayed prediction, domain knowledge negligence, lack of standards, and inability of real-time and high-frequency predictions. The principal contributions of this paper are to provide a one-stop guide for both academia and industry to review, compare and summarize technologies and recent advances in this area, to facilitate smooth and informed implementation, and to highlight future research directions.
Keywords: financial time series prediction; convolutional neural network; long short-term memory; deep learning; attention mechanism; finance
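The attention mechanism (AM) component that such hybrid models add on top of LSTM outputs can be sketched in a few lines of NumPy. The mean-state query, the sequence length, and the hidden size below are simplifying assumptions for illustration, not any specific paper's design:

```python
import numpy as np

def attention_pool(h):
    """Pool a sequence of hidden states h with shape (T, d) into one
    context vector: score each time step against a query (here simply
    the mean state), softmax the scores, and take the weighted sum."""
    q = h.mean(axis=0)                    # query vector, shape (d,)
    scores = h @ q                        # one score per time step
    w = np.exp(scores - scores.max())     # numerically stable softmax
    w /= w.sum()
    context = w @ h                       # weighted sum over time steps
    return context, w

rng = np.random.default_rng(1)
h = rng.standard_normal((30, 16))         # e.g. 30 LSTM time steps, hidden size 16
context, w = attention_pool(h)
print(context.shape, w.shape)
```

The softmax weights sum to one, so the context vector is a convex combination of the hidden states, letting the model emphasize the most predictive time steps instead of relying only on the last LSTM state.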
Credit Card Fraud Detection Using Improved Deep Learning Models
19
Authors: Sumaya S. Sulaiman, Ibraheem Nadher, Sarab M. Hameed. Computers, Materials & Continua (SCIE, EI), 2024, Issue 1, pp. 1049-1069 (21 pages)
Fraud of credit cards is a major issue for financial organizations and individuals. As fraudulent actions become more complex, a demand for better fraud detection systems is rising. Deep learning approaches have shown promise in several fields, including detecting credit card fraud. However, the efficacy of these models is heavily dependent on the careful selection of appropriate hyperparameters. This paper introduces models that integrate deep learning models with hyperparameter tuning techniques to learn the patterns and relationships within credit card transaction data, thereby improving fraud detection. Three deep learning models: AutoEncoder (AE), Convolution Neural Network (CNN), and Long Short-Term Memory (LSTM) are proposed to investigate how hyperparameter adjustment impacts the efficacy of deep learning models used to identify credit card fraud. The experiments conducted on a European credit card fraud dataset using different hyperparameters and three deep learning models demonstrate that the proposed models achieve a tradeoff between detection rate and precision, leading these models to be effective in accurately predicting credit card fraud. The results demonstrate that LSTM significantly outperformed AE and CNN in terms of accuracy (99.2%), detection rate (93.3%), and area under the curve (96.3%). These proposed models have surpassed those of existing studies and are expected to make a significant contribution to the field of credit card fraud detection.
Keywords: card fraud detection; hyperparameter tuning; deep learning; autoencoder; convolution neural network; long short-term memory; resampling
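The detection-rate/precision tradeoff reported in this abstract can be made concrete with a small helper. The confusion-matrix counts below are made-up numbers chosen to mimic a highly imbalanced fraud dataset, not the paper's results:

```python
def fraud_metrics(tp, fp, fn, tn):
    """Standard metrics from a binary confusion matrix; on the fraud
    class, "detection rate" is simply recall (sensitivity)."""
    precision = tp / (tp + fp)
    detection_rate = tp / (tp + fn)
    accuracy = (tp + tn) / (tp + fp + fn + tn)
    return precision, detection_rate, accuracy

# Hypothetical counts: 90 true frauds among 100,000 transactions
p, d, a = fraud_metrics(tp=84, fp=16, fn=6, tn=99894)
print(round(p, 3), round(d, 3), round(a, 4))  # 0.84 0.933 0.9998
```

Note how accuracy stays near 100% regardless of the classifier's quality when fraud is this rare, which is why the abstract emphasizes detection rate and precision (and why resampling appears among the keywords).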
A multi-source information fusion layer counting method for penetration fuze based on TCN-LSTM
20
Authors: Yili Wang, Changsheng Li, Xiaofeng Wang. Defence Technology (防务技术) (SCIE, EI, CAS, CSCD), 2024, Issue 3, pp. 463-474 (12 pages)
When employing penetration ammunition to strike multi-story buildings, the detection methods using acceleration sensors suffer from signal aliasing, while magnetic detection methods are susceptible to interference from ferromagnetic materials, thereby posing challenges in accurately determining the number of layers. To address this issue, this research proposes a layer counting method for penetration fuze that incorporates multi-source information fusion, utilizing both the temporal convolutional network (TCN) and the long short-term memory (LSTM) recurrent network. By leveraging the strengths of these two network structures, the method extracts temporal and high-dimensional features from the multi-source physical field during the penetration process, establishing a relationship between the multi-source physical field and the distance between the fuze and the target plate. A simulation model is developed to simulate the overload and magnetic field of a projectile penetrating multiple layers of target plates, capturing the multi-source physical field signals and their patterns during the penetration process. The analysis reveals that the proposed multi-source fusion layer counting method reduces errors by 60% and 50% compared to single overload layer counting and single magnetic anomaly signal layer counting, respectively. The model's predictive performance is evaluated under various operating conditions, including different ratios of added noise to random sample positions, penetration speeds, and spacing between target plates. The maximum errors in fuze penetration time predicted by the three modes are 0.08 ms, 0.12 ms, and 0.16 ms, respectively, confirming the robustness of the proposed model. Moreover, the model's predictions indicate that the fitting degree for large interlayer spacings is superior to that for small interlayer spacings due to the influence of stress waves.
Keywords: penetration fuze; temporal convolutional network (TCN); long short-term memory (LSTM); layer counting; multi-source fusion
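The TCN half of the TCN-LSTM pair is built from causal dilated convolutions, which can be sketched minimally in NumPy. The kernel values and dilation below are arbitrary illustrative choices, not the paper's network parameters:

```python
import numpy as np

def causal_dilated_conv(x, w, dilation=1):
    """1-D causal convolution, the building block of a TCN: the output
    at time t depends only on x[t], x[t-d], x[t-2d], ... so no future
    samples leak in. Left zero-padding keeps the output length equal
    to the input length."""
    k = len(w)
    pad = (k - 1) * dilation
    xp = np.concatenate([np.zeros(pad), np.asarray(x, dtype=float)])
    return np.array([
        sum(w[j] * xp[pad + t - j * dilation] for j in range(k))
        for t in range(len(x))
    ])

y = causal_dilated_conv([1.0, 2.0, 3.0, 4.0], w=[1.0, 1.0], dilation=1)
print(y)  # each output adds a sample to its immediate predecessor
```

Stacking such layers with growing dilations (1, 2, 4, ...) gives an exponentially large receptive field over the penetration signal, which is what makes the TCN useful alongside the LSTM for long multi-layer traces.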