Journal Articles
263 articles found
1. Landslide displacement prediction based on optimized empirical mode decomposition and deep bidirectional long short-term memory network
Authors: ZHANG Ming-yue, HAN Yang, YANG Ping, WANG Cong-ling. Journal of Mountain Science (SCIE, CSCD), 2023, No. 3: 637-656.
There are two technical challenges in predicting slope deformation. The first is the random displacement, which cannot be decomposed and predicted by numerically resolving the observed accumulated displacement and time series of a landslide. The second is the dynamic evolution of a landslide, which cannot feasibly be simulated by traditional prediction models. In this paper, a dynamic model of displacement prediction is introduced for composite landslides based on a combination of empirical mode decomposition with soft screening stop criteria (SSSC-EMD) and a deep bidirectional long short-term memory (DBi-LSTM) neural network. In the proposed model, time series analysis and SSSC-EMD are used to decompose the observed accumulated displacements of a slope into three components, viz. trend displacement, periodic displacement, and random displacement. Then, by analyzing the evolution pattern of a landslide and the key factors triggering landslides, appropriate influencing factors are selected for each displacement component, and a DBi-LSTM neural network carries out multi-data-driven dynamic prediction for each component. The accumulated displacement prediction is obtained by summing the component predictions. To verify the accuracy and engineering practicability of the model, field observations from two known landslides in China, the Xintan landslide and the Bazimen landslide, were collected for comparison and evaluation. The case study verified that the proposed model can better characterize the "stepwise" deformation characteristics of a slope. Compared with the long short-term memory (LSTM) neural network, support vector machine (SVM), and autoregressive integrated moving average (ARIMA) model, the DBi-LSTM neural network has higher accuracy in predicting the periodic displacement of slope deformation, with the mean absolute percentage error reduced by 3.063%, 14.913%, and 13.960% respectively, and the root mean square error reduced by 1.951 mm, 8.954 mm, and 7.790 mm respectively. Conclusively, this model not only has high prediction accuracy but is also more stable, which can provide new insight for practical landslide prevention and control engineering.
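A minimal sketch of the kind of deep bidirectional LSTM regressor the abstract describes, predicting one displacement component from a window of influencing factors (the SSSC-EMD decomposition step is not shown); the layer widths, window length, and feature count are illustrative assumptions, not the authors' configuration:

```python
# Sketch of a deep bidirectional LSTM (DBi-LSTM) regressor for one
# displacement component; all sizes are illustrative assumptions.
import tensorflow as tf

def build_dbi_lstm(window=12, n_features=4):
    """window: past time steps; n_features: influencing factors per step."""
    return tf.keras.Sequential([
        tf.keras.layers.Input(shape=(window, n_features)),
        # Two stacked bidirectional LSTM layers read the factor sequence
        # forwards and backwards.
        tf.keras.layers.Bidirectional(tf.keras.layers.LSTM(64, return_sequences=True)),
        tf.keras.layers.Bidirectional(tf.keras.layers.LSTM(32)),
        tf.keras.layers.Dense(1),  # predicted displacement component (mm)
    ])

model = build_dbi_lstm()
model.compile(optimizer="adam", loss="mse")
# One such model per component; the accumulated displacement is the sum of
# the trend, periodic, and random component predictions.
```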
Keywords: Landslide displacement; Empirical mode decomposition; Soft screening stop criteria; Deep bidirectional long short-term memory neural network; Xintan landslide; Bazimen landslide
2. Recurrent Convolutional Neural Network MSER-Based Approach for Payable Document Processing (Cited: 1)
Authors: Suliman Aladhadh, Hidayat Ur Rehman, Ali Mustafa Qamar, Rehan Ullah Khan. Computers, Materials & Continua (SCIE, EI), 2021, No. 12: 3399-3411.
A tremendous number of vendor invoices is generated in the corporate sector. To automate the manual data entry in payable documents, highly accurate Optical Character Recognition (OCR) is required. This paper proposes an end-to-end OCR system that performs both localization and recognition and serves as a single unit to automate payable document processing such as cheques and cash disbursement. For text localization, the maximally stable extremal region is used, which extracts a word or digit chunk from an invoice. This chunk is then passed to the deep learning model, which performs text recognition. The deep learning model utilizes both convolutional neural networks and long short-term memory (LSTM). The convolution layers extract features, which are fed to the LSTM. The model integrates feature extraction, sequence modeling, and transcription into a unified network. It handles sequences of unconstrained lengths, independent of character segmentation or horizontal scale normalization. Furthermore, it applies to both lexicon-free and lexicon-based text recognition, and finally, it produces a comparatively smaller model, which can be implemented in practical applications. The overall superior performance in the experimental evaluation demonstrates the usefulness of the proposed model. The model is thus generic and can be used for other similar recognition scenarios.
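The CNN-plus-LSTM recognizer described here follows the familiar CRNN pattern; a rough sketch is below, with the input size, character set, and layer sizes as assumptions (training would typically use a CTC loss, which is omitted):

```python
# Sketch of a CRNN text recognizer: convolutional feature extraction, then a
# bidirectional LSTM over the width axis; shapes are illustrative.
import tensorflow as tf

NUM_CLASSES = 37  # assumed charset: 26 letters + 10 digits + blank

inp = tf.keras.layers.Input(shape=(32, 128, 1))        # H x W x C word chunk
x = tf.keras.layers.Conv2D(64, 3, padding="same", activation="relu")(inp)
x = tf.keras.layers.MaxPooling2D((2, 2))(x)            # -> 16 x 64
x = tf.keras.layers.Conv2D(128, 3, padding="same", activation="relu")(x)
x = tf.keras.layers.MaxPooling2D((2, 1))(x)            # -> 8 x 64, keep width
# Make width the time axis, collapsing height into the feature vector.
x = tf.keras.layers.Permute((2, 1, 3))(x)
x = tf.keras.layers.Reshape((64, 8 * 128))(x)
x = tf.keras.layers.Bidirectional(tf.keras.layers.LSTM(128, return_sequences=True))(x)
out = tf.keras.layers.Dense(NUM_CLASSES, activation="softmax")(x)  # per-step chars

model = tf.keras.Model(inp, out)  # transcription would use a CTC-style loss
```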
Keywords: Character recognition; Text spotting; Long short-term memory; Recurrent convolutional neural networks
3. Classification of Arrhythmia Based on Convolutional Neural Networks and Encoder-Decoder Model
Authors: Jian Liu, Xiaodong Xia, Chunyang Han, Jiao Hui, Jim Feng. Computers, Materials & Continua (SCIE, EI), 2022, No. 10: 265-278.
As a common and high-risk type of disease, heart disease seriously threatens people's health. At the same time, in the era of the Internet of Things (IoT), smart medical devices have strong practical significance for medical workers and patients because of their ability to assist in the diagnosis of diseases. Therefore, research on real-time diagnosis and classification algorithms for arrhythmia can help to improve diagnostic efficiency. In this paper, we design an automatic arrhythmia classification algorithm based on a Convolutional Neural Network (CNN) and an Encoder-Decoder model. The model uses Long Short-Term Memory (LSTM) to consider the influence of time-series features on classification results. It is trained and tested on the MIT-BIH arrhythmia database. In addition, a Generative Adversarial Network (GAN) is adopted for data equalization to address the data imbalance problem. The simulation results show that for inter-patient arrhythmia classification, the hybrid model combining the CNN and Encoder-Decoder model has the best classification accuracy, reaching 94.05%. In particular, it performs better on supraventricular ectopic beats (class S) and fusion beats (class F).
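A hedged sketch of a Conv1D encoder feeding an LSTM, in the spirit of the CNN plus encoder-decoder classifier described (the GAN-based data equalization is omitted); the segment length, layer sizes, and five-class output are assumptions:

```python
# Sketch of a CNN encoder + LSTM stage for heartbeat classification on
# MIT-BIH-style single-lead segments; shapes and classes are illustrative.
import tensorflow as tf

inp = tf.keras.layers.Input(shape=(250, 1))              # one beat segment
x = tf.keras.layers.Conv1D(32, 5, padding="same", activation="relu")(inp)
x = tf.keras.layers.MaxPooling1D(2)(x)
x = tf.keras.layers.Conv1D(64, 5, padding="same", activation="relu")(x)
x = tf.keras.layers.MaxPooling1D(2)(x)                   # encoded feature sequence
x = tf.keras.layers.LSTM(64)(x)                          # temporal summarization
out = tf.keras.layers.Dense(5, activation="softmax")(x)  # e.g., AAMI N,S,V,F,Q

model = tf.keras.Model(inp, out)
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
```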
Keywords: Electroencephalography; Convolutional neural network; Long short-term memory; Encoder-decoder model; Generative adversarial network
4. Dynamic Hand Gesture Recognition Based on Short-Term Sampling Neural Networks (Cited: 12)
Authors: Wenjin Zhang, Jiacun Wang, Fangping Lan. IEEE/CAA Journal of Automatica Sinica (SCIE, EI, CSCD), 2021, No. 1: 110-120.
Hand gestures are a natural way for human-robot interaction. Vision-based dynamic hand gesture recognition has become a hot research topic due to its various applications. This paper presents a novel deep learning network for hand gesture recognition. The network integrates several well-proven modules to learn both short-term and long-term features from video inputs while avoiding intensive computation. To learn short-term features, each video input is segmented into a fixed number of frame groups. A frame is randomly selected from each group and represented as an RGB image as well as an optical flow snapshot. These two entities are fused and fed into a convolutional neural network (ConvNet) for feature extraction. The ConvNets for all groups share parameters. To learn long-term features, outputs from all ConvNets are fed into a long short-term memory (LSTM) network, by which a final classification result is predicted. The new model has been tested with two popular hand gesture datasets, namely the Jester dataset and the Nvidia dataset. Compared with other models, our model produced very competitive results. The robustness of the new model has also been verified with an augmented dataset with enhanced diversity of hand gestures.
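The short-term sampling idea, one shared ConvNet per frame group feeding an LSTM, can be sketched as follows; the group count, the fused five-channel input (RGB plus a two-channel optical flow), and the 27-class Jester output are illustrative assumptions:

```python
# Sketch of short-term sampling: one frame per group, a shared ConvNet per
# frame, then an LSTM over groups; sizes are illustrative.
import tensorflow as tf

GROUPS = 8  # fixed number of frame groups per video (assumed)

# Shared ConvNet applied to each sampled frame; RGB + optical flow are
# assumed fused into a 5-channel input.
frame_net = tf.keras.Sequential([
    tf.keras.layers.Conv2D(32, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Conv2D(64, 3, activation="relu"),
    tf.keras.layers.GlobalAveragePooling2D(),
])

inp = tf.keras.layers.Input(shape=(GROUPS, 112, 112, 5))
x = tf.keras.layers.TimeDistributed(frame_net)(inp)       # parameters shared
x = tf.keras.layers.LSTM(128)(x)                          # long-term features
out = tf.keras.layers.Dense(27, activation="softmax")(x)  # Jester has 27 classes

model = tf.keras.Model(inp, out)
```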
Keywords: Convolutional neural network (ConvNet); Hand gesture recognition; Long short-term memory (LSTM) network; Short-term sampling; Transfer learning
5. Prediction of Leakage from an Axial Piston Pump Slipper with Circular Dimples Using Deep Neural Networks (Cited: 2)
Authors: Ozkan Ozmen, Cem Sinanoglu, Abdullah Caliskan, Hasan Badem. Chinese Journal of Mechanical Engineering (SCIE, EI, CAS, CSCD), 2020, No. 2: 111-121.
Oil leakage between the slipper and swash plate of an axial piston pump has a significant effect on the efficiency of the pump. Therefore, it is extremely important that any leakage can be predicted. This study investigates the leakage, oil film thickness, and pocket pressure values of a slipper with circular dimples under different working conditions. The results reveal that flat slippers suffer less leakage than those with textured surfaces. In addition, a deep learning-based framework is proposed for modeling the slipper behavior. This framework is a long short-term memory-based deep neural network, an architecture that has been extremely successful in predicting time series. The model is compared with four conventional machine learning methods, and statistical analyses and comparisons confirm the superiority of the proposed model.
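A sketch of how such a leakage series might be framed for an LSTM, with a sliding window turning the measurements into supervised samples; the window length, layer sizes, and file name are assumptions, not the study's setup:

```python
# Sketch of sliding-window framing + an LSTM regressor for a leakage series;
# window length and sizes are illustrative assumptions.
import numpy as np
import tensorflow as tf

def make_windows(series, window=10):
    """Turn a 1-D series into (samples, window, 1) inputs and next-step targets."""
    X = np.array([series[i:i + window] for i in range(len(series) - window)])
    y = series[window:]
    return X[..., None], y

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(10, 1)),
    tf.keras.layers.LSTM(32),
    tf.keras.layers.Dense(1),   # predicted leakage at the next step
])
model.compile(optimizer="adam", loss="mse")

# leakage = np.loadtxt("leakage.csv")   # hypothetical measurement file
# X, y = make_windows(leakage); model.fit(X, y, epochs=50)
```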
Keywords: Slipper; Leakage; Circular dimpled; Long short-term memory; Deep neural network
6. Leucogranite mapping via convolutional recurrent neural networks and geochemical survey data in the Himalayan orogen
Authors: Ziye Wang, Tong Li, Renguang Zuo. Geoscience Frontiers (SCIE, CAS, CSCD), 2024, No. 1: 175-186.
Geochemical survey data analysis is recognized as an established and feasible way to support lithological mapping and assist mineral exploration. Among available approaches, recent methodological advances have focused on deep learning algorithms, which can learn and extract information directly from geochemical survey data through multi-level networks and output end-to-end classifications. Accordingly, this study developed a lithological mapping framework with the joint application of a convolutional neural network (CNN) and long short-term memory (LSTM). The CNN-LSTM model combines correlation extraction in the CNN layers with coupling-interaction learning in the LSTM layers. This hybrid approach was demonstrated by mapping leucogranites in the Himalayan orogen based on stream sediment geochemical survey data, where the targeted leucogranites are expected to be potential resources of rare-metal mineralization such as Li, Be, and W. Three comparative case studies were carried out from both visual and quantitative perspectives to illustrate the superiority of the proposed model. A guided spatial distribution map of leucogranites in the Himalayan orogen, divided into high-, moderate-, and low-potential areas, was delineated by the success-rate curve, which further improves the efficiency of identifying unmapped leucogranites through geological mapping. In light of these results, this study provides an alternative solution for lithological mapping using geochemical survey data at a regional scale and reduces the risk in decision making associated with mineral exploration.
Keywords: Lithological mapping; Deep learning; Convolutional neural network; Long short-term memory; Leucogranites
7. Deep Learning for Financial Time Series Prediction: A State-of-the-Art Review of Standalone and Hybrid Models
Authors: Weisi Chen, Walayat Hussain, Francesco Cauteruccio, Xu Zhang. Computer Modeling in Engineering & Sciences (SCIE, EI), 2024, No. 4: 187-224.
Financial time series prediction, whether for classification or regression, has been a heated research topic over the last decade. While traditional machine learning algorithms have yielded mediocre results, deep learning has largely contributed to the elevation of prediction performance. Currently, an up-to-date review of advanced machine learning techniques for financial time series prediction is still lacking, making it challenging for finance domain experts and relevant practitioners to determine which model potentially performs better, what techniques and components are involved, and how the model can be designed and implemented. This review article provides an overview of techniques, components, and frameworks for financial time series prediction, with an emphasis on state-of-the-art deep learning models in the literature from 2015 to 2023, including standalone models like convolutional neural networks (CNN), which are capable of extracting spatial dependencies within data, and long short-term memory (LSTM), which is designed for handling temporal dependencies, as well as hybrid models integrating CNN, LSTM, the attention mechanism (AM), and other techniques. For illustration and comparison purposes, models proposed in recent studies are mapped to relevant elements of a generalized framework comprising input, output, feature extraction, prediction, and related processes. Among the state-of-the-art models, hybrid models like CNN-LSTM and CNN-LSTM-AM have in general been reported superior in performance to standalone models like the CNN-only model. Some remaining challenges are discussed, including non-friendliness for finance domain experts, delayed prediction, domain knowledge negligence, lack of standards, and the inability to make real-time and high-frequency predictions. The principal contributions of this paper are to provide a one-stop guide for both academia and industry to review, compare, and summarize technologies and recent advances in this area, to facilitate smooth and informed implementation, and to highlight future research directions.
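The CNN-LSTM-AM hybrid pattern the review singles out can be sketched as below; the 60-step OHLCV input and layer sizes are assumptions for illustration, not any particular reviewed model:

```python
# Sketch of a CNN-LSTM-AM hybrid: Conv1D for local patterns, LSTM for
# temporal dependencies, and self-attention over time steps; sizes assumed.
import tensorflow as tf

inp = tf.keras.layers.Input(shape=(60, 5))   # 60 days x OHLCV features (assumed)
x = tf.keras.layers.Conv1D(32, 3, padding="causal", activation="relu")(inp)
x = tf.keras.layers.LSTM(64, return_sequences=True)(x)
# Self-attention lets the model weight informative days before pooling.
a = tf.keras.layers.Attention()([x, x])
x = tf.keras.layers.GlobalAveragePooling1D()(a)
out = tf.keras.layers.Dense(1)(x)            # next-step price or return

model = tf.keras.Model(inp, out)
model.compile(optimizer="adam", loss="mse")
```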
Keywords: Financial time series prediction; Convolutional neural network; Long short-term memory; Deep learning; Attention mechanism; Finance
8. Credit Card Fraud Detection Using Improved Deep Learning Models
Authors: Sumaya S. Sulaiman, Ibraheem Nadher, Sarab M. Hameed. Computers, Materials & Continua (SCIE, EI), 2024, No. 1: 1049-1069.
Credit card fraud is a major issue for financial organizations and individuals. As fraudulent actions become more complex, the demand for better fraud detection systems is rising. Deep learning approaches have shown promise in several fields, including the detection of credit card fraud. However, the efficacy of these models is heavily dependent on the careful selection of appropriate hyperparameters. This paper introduces models that integrate deep learning with hyperparameter tuning techniques to learn the patterns and relationships within credit card transaction data, thereby improving fraud detection. Three deep learning models, the AutoEncoder (AE), Convolutional Neural Network (CNN), and Long Short-Term Memory (LSTM), are proposed to investigate how hyperparameter adjustment impacts the efficacy of deep learning models used to identify credit card fraud. The experiments, conducted on a European credit card fraud dataset with different hyperparameters and the three deep learning models, demonstrate that the proposed models achieve a trade-off between detection rate and precision, making them effective in accurately predicting credit card fraud. The results show that LSTM significantly outperformed AE and CNN in terms of accuracy (99.2%), detection rate (93.3%), and area under the curve (96.3%). The proposed models surpass those of existing studies and are expected to make a significant contribution to the field of credit card fraud detection.
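A sketch of the hyperparameter-tuning idea applied to the LSTM detector: a small grid over layer width and learning rate, scored by validation detection rate (recall). The grid values, feature count, and the commented-out training calls are assumptions:

```python
# Sketch of grid-searching LSTM hyperparameters for fraud detection,
# selecting by validation recall; dataset handling is omitted.
import tensorflow as tf

def build_lstm(units, lr, n_features=30):
    m = tf.keras.Sequential([
        tf.keras.layers.Input(shape=(1, n_features)),    # one transaction as a step
        tf.keras.layers.LSTM(units),
        tf.keras.layers.Dense(1, activation="sigmoid"),  # fraud probability
    ])
    m.compile(optimizer=tf.keras.optimizers.Adam(lr), loss="binary_crossentropy",
              metrics=[tf.keras.metrics.Recall(name="detection_rate")])
    return m

results = []
for units in (32, 64, 128):
    for lr in (1e-2, 1e-3):
        model = build_lstm(units, lr)
        # hist = model.fit(X_tr, y_tr, validation_data=(X_va, y_va), epochs=10)
        # results.append((max(hist.history["val_detection_rate"]), units, lr))
# best_score, best_units, best_lr = max(results)
```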
Keywords: Card fraud detection; Hyperparameter tuning; Deep learning; Autoencoder; Convolutional neural network; Long short-term memory; Resampling
9. Real-time UAV path planning based on LSTM network
Authors: ZHANG Jiandong, GUO Yukun, ZHENG Lihui, YANG Qiming, SHI Guoqing, WU Yong. Journal of Systems Engineering and Electronics (SCIE, CSCD), 2024, No. 2: 374-385.
To address the shortcomings of single-step decision making in existing deep reinforcement learning based unmanned aerial vehicle (UAV) real-time path planning, a real-time UAV path planning algorithm based on a long short-term memory network (RPP-LSTM) is proposed, which combines the memory characteristics of the recurrent neural network (RNN) with a deep reinforcement learning algorithm. LSTM networks are used as the Q-value networks for the deep Q network (DQN) algorithm, which gives the Q-value network's decisions a degree of memory. Thanks to the LSTM network, the Q-value network can use previous environment and action information, which effectively avoids the problem of single-step decisions that consider only the current environment. In addition, the algorithm introduces a hierarchical reward and penalty function for the specific problem of UAV real-time path planning, so that the UAV can perform path planning more reasonably. Simulation verification shows that compared with the traditional feed-forward neural network (FNN) based UAV autonomous path planning algorithm, the proposed RPP-LSTM can adapt to more complex environments and has significantly improved robustness and accuracy when performing UAV real-time path planning.
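A sketch of the core idea, an LSTM as the DQN Q-value network reading a short history of state information, with epsilon-greedy action selection; the history length, state dimension, and action set are illustrative assumptions:

```python
# Sketch of an LSTM Q-value network for DQN-style path planning: the network
# reads a short history of state vectors and scores each candidate action.
import numpy as np
import tensorflow as tf

HISTORY, STATE_DIM, N_ACTIONS = 4, 16, 8   # e.g., 8 heading changes (assumed)

q_net = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(HISTORY, STATE_DIM)),
    tf.keras.layers.LSTM(64),               # memory over previous steps
    tf.keras.layers.Dense(N_ACTIONS),       # Q-value per action
])

def act(history, eps=0.1):
    """Epsilon-greedy action from the LSTM Q-values; history: (HISTORY, STATE_DIM)."""
    if np.random.rand() < eps:
        return np.random.randint(N_ACTIONS)
    q = q_net(history[None, ...], training=False).numpy()[0]
    return int(np.argmax(q))
```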
Keywords: Deep Q network; Path planning; Neural network; Unmanned aerial vehicle (UAV); Long short-term memory (LSTM)
10. Short-term train arrival delay prediction: a data-driven approach
Authors: Qingyun Fu, Shuxin Ding, Tao Zhang, Rongsheng Wang, Ping Hu, Cunlai Pu. Railway Sciences, 2024, No. 4: 514-529.
Purpose - To optimize train operations, dispatchers currently rely on experience for quick adjustments when delays occur. However, delay predictions often involve imprecise shifts based on known delay times. Real-time and accurate train delay predictions, facilitated by data-driven neural network models, can significantly reduce dispatcher stress and improve adjustment plans. Leveraging current train operation data, these models enable swift and precise predictions, addressing challenges posed by train delays in high-speed rail networks during unforeseen events. Design/methodology/approach - This paper proposes CBLA-net, a neural network architecture for predicting late arrival times. It combines a CNN, Bi-LSTM, and attention mechanisms to extract features, handle time series data, and enhance information utilization. Trained on operational data from the Beijing-Tianjin line, it predicts the late arrival time of a target train at the next station using multidimensional input data from the target and preceding trains. Findings - This study evaluates the model's predictive performance using two data approaches: one considering full data and another focusing only on late arrivals. Results show precise and rapid predictions. Training with full data achieves a MAE of approximately 0.54 minutes and a RMSE of 0.65 minutes, surpassing the model trained solely on delay data (MAE: about 1.02 min, RMSE: about 1.52 min). Despite superior overall performance with full data, the model excels at predicting delays exceeding 15 minutes when trained exclusively on late arrivals. For enhanced adaptability to real-world train operations, training with full data is recommended. Originality/value - This paper introduces a novel neural network model, CBLA-net, for predicting train delay times. It innovatively compares and analyzes the model's performance using both full data and delay data formats. Additionally, the evaluation of the network's predictive capabilities considers different scenarios, providing a comprehensive demonstration of the model's predictive performance.
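A hedged sketch of a CBLA-style regressor: Conv1D feature extraction, a Bi-LSTM, and attention over a sequence of feature vectors from the target train and its predecessors; the feature layout and sizes are assumptions, not the paper's exact design:

```python
# Sketch of a CNN + Bi-LSTM + attention delay regressor; shapes assumed.
import tensorflow as tf

inp = tf.keras.layers.Input(shape=(6, 12))  # 6 trains/steps x 12 features (assumed)
x = tf.keras.layers.Conv1D(32, 3, padding="same", activation="relu")(inp)
x = tf.keras.layers.Bidirectional(tf.keras.layers.LSTM(32, return_sequences=True))(x)
a = tf.keras.layers.Attention()([x, x])     # weight the most relevant trains/steps
x = tf.keras.layers.GlobalAveragePooling1D()(a)
out = tf.keras.layers.Dense(1)(x)           # late-arrival time at next station (min)

model = tf.keras.Model(inp, out)
model.compile(optimizer="adam", loss="mae")
```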
Keywords: Train delay prediction; Intelligent dispatching command; Deep learning; Convolutional neural network; Long short-term memory; Attention mechanism
11. Deep Learning Network for Energy Storage Scheduling in Power Market Environment Short-Term Load Forecasting Model
Authors: Yunlei Zhang, Ruifeng Cao, Danhuang Dong, Sha Peng, Ruoyun Du, Xiaomin Xu. Energy Engineering (EI), 2022, No. 5: 1829-1841.
In the electricity market, fluctuations in real-time prices are unstable, and changes in short-term load are determined by many factors. By studying the timing of charging and discharging, as well as the economic benefits of energy storage participating in the power market, this paper treats energy storage scheduling as one factor affecting short-term power load, alongside time-of-use price, holidays, and temperature. A deep learning network is used to predict the short-term load: a convolutional neural network (CNN) extracts the features, and a long short-term memory (LSTM) network learns the temporal characteristics of the load values, which can effectively improve prediction accuracy. Taking the load data of a certain region as an example, the CNN-LSTM prediction model is compared with a single LSTM prediction model. The experimental results show that the CNN-LSTM deep learning network, with energy storage participating in dispatching, can achieve high prediction accuracy for short-term power load forecasting.
Keywords: Energy storage scheduling; Short-term load forecasting; Deep learning network; Convolutional neural network (CNN); Long short-term memory network (LSTM)
12. Deep-fake video detection approaches using convolutional–recurrent neural networks
Authors: Shraddha Suratkar, Sayali Bhiungade, Jui Pitale, Komal Soni, Tushar Badgujar, Faruk Kazi. Journal of Control and Decision (EI), 2023, No. 2: 198-214.
Deep-Fake is an emerging technology used in synthetic media that replaces individuals in existing images and videos with someone else's likeness. This paper presents a comparative study of different deep neural networks employed for Deep-Fake video detection. In the model, features from the training data are extracted with the intended Convolutional Neural Network model to form feature vectors, which are further analysed using a dense layer, a Long Short-Term Memory, and a Gated Recurrent Unit, adopting transfer learning with fine-tuning for training the models. The model is evaluated on detecting Artificial Intelligence based deep-fake images and videos using benchmark datasets. Comparative analysis shows that the detections are largely biased towards the domain of the dataset, but there is a noteworthy improvement in the model performance parameters from using transfer learning, whereas the convolutional-recurrent neural network has benefits in sequence detection.
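The transfer-learning setup can be sketched as below, with a frozen pretrained backbone extracting per-frame features for an LSTM; MobileNetV2, the frame count, and the sizes are assumptions standing in for whichever CNN the study used:

```python
# Sketch of transfer learning for deep-fake video detection: frozen CNN
# backbone -> per-frame features -> LSTM over time; choices are illustrative.
import tensorflow as tf

FRAMES = 16
backbone = tf.keras.applications.MobileNetV2(
    include_top=False, pooling="avg", input_shape=(224, 224, 3))
backbone.trainable = False          # fine-tune later by unfreezing top layers

inp = tf.keras.layers.Input(shape=(FRAMES, 224, 224, 3))
x = tf.keras.layers.TimeDistributed(backbone)(inp)      # frame feature vectors
x = tf.keras.layers.LSTM(64)(x)                         # temporal inconsistencies
out = tf.keras.layers.Dense(1, activation="sigmoid")(x) # P(fake)

model = tf.keras.Model(inp, out)
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
```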
Keywords: Deep-fakes; Convolutional neural network (CNN); Generative adversarial network (GAN); Autoencoders; Recurrent neural network (RNN); Long short-term memory (LSTM)
13. Practical Options for Adopting Recurrent Neural Network and Its Variants on Remaining Useful Life Prediction (Cited: 1)
Authors: Youdao Wang, Yifan Zhao, Sri Addepalli. Chinese Journal of Mechanical Engineering (SCIE, EI, CAS, CSCD), 2021, No. 3: 32-51.
The remaining useful life (RUL) of a system is generally predicted by utilising data collected from sensors that continuously monitor different indicators. Recently, different deep learning (DL) techniques have been used for RUL prediction and have achieved great success. Because the data is often time-sequential, the recurrent neural network (RNN) has attracted significant interest due to its efficiency in dealing with such data. This paper systematically reviews the RNN and its variants for RUL prediction, with a specific focus on understanding how different components (e.g., types of optimisers and activation functions) or parameters (e.g., sequence length, neuron quantities) affect their performance. After that, a case study using the well-studied NASA C-MAPSS dataset is presented to quantitatively evaluate the influence of various state-of-the-art RNN structures on RUL prediction performance. The results suggest that the variant methods usually perform better than the original RNN, and among them, the Bi-directional Long Short-Term Memory generally has the best performance in terms of stability, precision, and accuracy. Certain model structures may fail to produce valid RUL predictions due to the gradient vanishing or gradient exploding problem if the parameters are not chosen appropriately. It is concluded that parameter tuning is a crucial step in achieving optimal prediction performance.
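One way to reproduce such a comparison is a single harness that swaps the recurrent cell; a sketch follows, with the 30-step window and 14 sensor channels as C-MAPSS-style assumptions:

```python
# Sketch of comparing RNN variants for RUL prediction under one harness;
# window length and sensor count are illustrative C-MAPSS-style choices.
import tensorflow as tf

def build(variant, window=30, n_sensors=14):
    cells = {
        "rnn":    tf.keras.layers.SimpleRNN(64),
        "lstm":   tf.keras.layers.LSTM(64),
        "gru":    tf.keras.layers.GRU(64),
        "bilstm": tf.keras.layers.Bidirectional(tf.keras.layers.LSTM(64)),
    }
    m = tf.keras.Sequential([
        tf.keras.layers.Input(shape=(window, n_sensors)),
        cells[variant],
        tf.keras.layers.Dense(1, activation="relu"),   # RUL is non-negative
    ])
    m.compile(optimizer="adam", loss="mse", metrics=["mae"])
    return m

models = {v: build(v) for v in ("rnn", "lstm", "gru", "bilstm")}
# for name, m in models.items(): m.fit(X_tr, y_tr, validation_data=(X_va, y_va))
```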
Keywords: Remaining useful life prediction; Deep learning; Recurrent neural network; Long short-term memory; Bi-directional long short-term memory; Gated recurrent unit
14. Time Series Forecasting with Multiple Deep Learners: Selection from a Bayesian Network
Authors: Shusuke Kobayashi, Susumu Shirayama. Journal of Data Analysis and Information Processing, 2017, No. 3: 115-130.
Considering recent developments in deep learning, it has become increasingly important to verify which methods are valid for the prediction of multivariate time-series data. In this study, we propose a novel method of time-series prediction employing multiple deep learners combined with a Bayesian network, where the training data is divided into clusters using K-means clustering. The best number of clusters for K-means is decided using the Bayesian information criterion. The multiple deep learners are then trained, one per cluster. Three types of deep learners are used: a deep neural network (DNN), a recurrent neural network (RNN), and long short-term memory (LSTM). A naive Bayes classifier is used to determine which deep learner is in charge of predicting a particular time series. The proposed method is applied to a set of financial time-series data, the Nikkei Average stock price, to assess the accuracy of the predictions made. Compared with the conventional method of employing a single deep learner to learn all the data, the proposed method improves both F-value and accuracy.
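A compact sketch of the cluster-then-route scheme using scikit-learn, with an MLP regressor standing in for the deep learners and random arrays standing in for the windowed series; the cluster count and all sizes are assumptions (the paper selects the count via the Bayesian information criterion):

```python
# Sketch: K-means splits training windows, one learner is fitted per cluster,
# and a naive Bayes classifier routes new windows to a learner.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.naive_bayes import GaussianNB
from sklearn.neural_network import MLPRegressor

X = np.random.rand(500, 10)           # stand-in for windowed series features
y = np.random.rand(500)               # next-step targets

K = 3                                 # assumed; chosen via BIC in the paper
labels = KMeans(n_clusters=K, n_init=10).fit_predict(X)

learners = []                         # MLP stands in for the DNN/RNN/LSTM learners
for k in range(K):
    learners.append(
        MLPRegressor(hidden_layer_sizes=(64, 64)).fit(X[labels == k], y[labels == k]))

router = GaussianNB().fit(X, labels)  # decides which learner predicts

x_new = X[:1]
pred = learners[router.predict(x_new)[0]].predict(x_new)
```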
Keywords: Time-series data; Deep learning; Bayesian network; Recurrent neural network; Long short-term memory; Ensemble learning; K-means
15. Deep Learning Applied to Computational Mechanics: A Comprehensive Review, State of the Art, and the Classics (Cited: 1)
Authors: Loc Vu-Quoc, Alexander Humer. Computer Modeling in Engineering & Sciences (SCIE, EI), 2023, No. 11: 1069-1343.
Three recent breakthroughs due to AI in arts and science serve as motivation: an award-winning digital image, protein folding, and fast matrix multiplication. Many recent developments in artificial neural networks, particularly deep learning (DL), applied and relevant to computational mechanics (solids, fluids, finite-element technology) are reviewed in detail. Both hybrid and pure machine learning (ML) methods are discussed. Hybrid methods combine traditional PDE discretizations with ML methods either (1) to help model complex nonlinear constitutive relations, (2) to nonlinearly reduce the model order for efficient simulation (turbulence), or (3) to accelerate the simulation by predicting certain components in the traditional integration methods. Here, methods (1) and (2) rely on the Long Short-Term Memory (LSTM) architecture, with method (3) relying on convolutional neural networks. Pure ML methods to solve (nonlinear) PDEs are represented by Physics-Informed Neural Network (PINN) methods, which can be combined with an attention mechanism to address discontinuous solutions. Both LSTM and attention architectures, together with modern and generalized classic optimizers that include stochasticity for DL networks, are extensively reviewed. Kernel machines, including Gaussian processes, are covered in sufficient depth for more advanced works such as shallow networks with infinite width. The review does not address only experts: readers are assumed to be familiar with computational mechanics, but not with DL, whose concepts and applications are built up from the basics, aiming to bring first-time learners quickly to the forefront of research. The history and limitations of AI are recounted and discussed, with particular attention to pointing out misstatements or misconceptions of the classics, even in well-known references. Positioning and pointing control of a large-deformable beam is given as an example.
Keywords: Deep learning; Breakthroughs; Network architectures; Backpropagation; Stochastic optimization methods, from classic to modern; Recurrent neural networks; Long short-term memory; Gated recurrent unit; Attention; Transformer; Kernel machines; Gaussian processes; Libraries; Physics-informed neural networks; State of the art; History; Limitations; Challenges; Applications to computational mechanics; Finite-element matrix integration; Improved Gauss quadrature; Multiscale geomechanics; Fluid-filled porous media; Fluid mechanics; Turbulence; Proper orthogonal decomposition; Nonlinear-manifold model-order reduction; Autoencoder; Hyper-reduction using gappy data; Control of large deformable beam
16. Recognition of mortar pumpability via computer vision and deep learning
Authors: Hao-Zhe Feng, Hong-Yang Yu, Wen-Yong Wang, Wen-Xuan Wang, Ming-Qian Du. Journal of Electronic Science and Technology (EI, CAS, CSCD), 2023, No. 3: 73-81.
Mortar pumpability is essential in the construction industry; estimating it manually requires much labor and often causes material waste. This paper proposes an effective method that combines a 3-dimensional convolutional neural network (3D CNN) with a 2-dimensional convolutional long short-term memory network (ConvLSTM2D) to automatically classify mortar pumpability. Experimental results show that the proposed model achieves an accuracy rate of 100% with fast convergence, based on a dataset organized by collecting the corresponding mortar image sequences. This work demonstrates the feasibility of using computer vision and deep learning for mortar pumpability classification.
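A sketch of a 3D CNN feeding a ConvLSTM2D layer for image-sequence classification, as the abstract outlines; the clip length, resolution, and binary output are assumptions:

```python
# Sketch combining Conv3D with ConvLSTM2D for image-sequence classification;
# shapes, depths, and the two-class output are illustrative.
import tensorflow as tf

inp = tf.keras.layers.Input(shape=(8, 64, 64, 3))       # 8-frame mortar clip
x = tf.keras.layers.Conv3D(16, 3, padding="same", activation="relu")(inp)
x = tf.keras.layers.MaxPooling3D((1, 2, 2))(x)          # pool space, keep time
x = tf.keras.layers.ConvLSTM2D(32, 3, padding="same")(x)  # spatio-temporal state
x = tf.keras.layers.GlobalAveragePooling2D()(x)
out = tf.keras.layers.Dense(2, activation="softmax")(x)   # pumpable / not (assumed)

model = tf.keras.Model(inp, out)
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
```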
Keywords: Classification; Computer vision; Deep learning; Pumpability; 2-dimensional convolutional long short-term memory network (ConvLSTM2D); 3-dimensional convolutional neural network (3D CNN)
17. Deep Bimodal Fusion Approach for Apparent Personality Analysis
Authors: Saman Riaz, Ali Arshad, Shahab S. Band, Amir Mosavi. Computers, Materials & Continua (SCIE, EI), 2023, No. 4: 2301-2312.
Personality distinguishes individuals' patterns of feeling, thinking, and behaving. Predicting personality from short video series is an exciting research area in computer vision. The majority of existing research draws only preliminary conclusions about extracting knowledge from the visual and audio (sound) modalities. To overcome this deficiency, we propose the Deep Bimodal Fusion (DBF) approach to predict five personality traits: agreeableness, extraversion, openness, conscientiousness, and neuroticism. In the proposed framework, for the visual modality, modified convolutional neural networks (CNN), more specifically the Descriptor Aggregator Model (DAN), are used to attain significant visual features. For the audio modality, the proposed model efficiently extracts audio representations and feeds them to a long short-term memory (LSTM) network. Moreover, employing modality-based neural networks allows this framework to determine the traits independently before combining them with weighted fusion to achieve a conclusive prediction of the given traits. The proposed approach attains an optimal mean accuracy score of 0.9183, computed as the average over the five personality traits, and is thus better than previously proposed frameworks.
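The weighted late-fusion idea can be sketched as two modality branches whose five-trait outputs are blended; the branch architectures, MFCC-style audio input, and fusion weight are all assumptions:

```python
# Sketch of weighted late fusion: each modality branch predicts the five
# traits independently, then a fixed weight blends them; sizes assumed.
import tensorflow as tf

img = tf.keras.layers.Input(shape=(224, 224, 3))        # visual modality
v = tf.keras.layers.Conv2D(64, 3, activation="relu")(img)
v = tf.keras.layers.GlobalAveragePooling2D()(v)
v = tf.keras.layers.Dense(5, activation="sigmoid")(v)   # visual trait scores

aud = tf.keras.layers.Input(shape=(100, 40))            # audio frames x MFCCs
a = tf.keras.layers.LSTM(64)(aud)
a = tf.keras.layers.Dense(5, activation="sigmoid")(a)   # audio trait scores

w = 0.6                                                 # fusion weight (assumed)
out = tf.keras.layers.Lambda(lambda t: w * t[0] + (1 - w) * t[1])([v, a])

model = tf.keras.Model([img, aud], out)                 # five traits in [0, 1]
```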
Keywords: Apparent personality analysis; Deep bimodal fusion; Convolutional neural network; Long short-term memory; Bimodal information fusion approach
18. Adaptive Deep Learning Model for Software Bug Detection and Classification
Authors: S. Sivapurnima, D. Manjula. Computer Systems Science & Engineering (SCIE, EI), 2023, No. 5: 1233-1248.
Software bugs are unavoidable in software development and maintenance. In the literature, many methods have been discussed, but they fail to achieve efficient software bug detection and classification. In this paper, an efficient Adaptive Deep Learning Model (ADLM) is developed for automatic duplicate bug report detection and classification. The proposed ADLM is a combination of Conditional Random Fields decoding with Long Short-Term Memory (CRF-LSTM) and the Dingo Optimizer (DO). In the CRF, the DO is used to choose efficient weight values in the network. The proposed automatic bug report detection proceeds in three stages: pre-processing, feature extraction, and bug detection with classification. Initially, the bug report input dataset is gathered from an online source system. In the pre-processing phase, unwanted information is removed from the input data by text cleaning, data type conversion, and null value replacement. The pre-processed data is then sent to the feature extraction phase, where four types of features are extracted: contextual, categorical, temporal, and textual. Finally, the features are sent to the proposed ADLM for automatic duplicate bug report detection and classification. The methodology proceeds in two phases, training and testing, through which bugs are detected and classified from the input data. The proposed technique is assessed using performance metrics such as accuracy, precision, recall, F-measure, and kappa.
Keywords: Software bug detection; Classification; Pre-processing; Feature extraction; Deep belief neural network; Long short-term memory
19. Effective and Efficient Video Compression by the Deep Learning Techniques
Authors: Karthick Panneerselvam, K. Mahesh, V. L. Helen Josephine, A. Ranjith Kumar. Computer Systems Science & Engineering (SCIE, EI), 2023, No. 5: 1047-1061.
Deep learning has achieved many successes in video processing. Video has become an increasingly important part of our daily digital interactions. The advancement of higher-resolution content and large data volumes poses serious challenges to the goal of receiving, distributing, compressing, and revealing high-quality video content. In this paper we propose a novel effective and efficient deep learning video compression framework based on Flask, which creatively combines deep learning techniques built on Convolutional Neural Networks (CNN) and Generative Adversarial Networks (GAN). In the compression method, the layers are divided into different groups for data processing: a CNN removes duplicate frames, a single image is repeated in place of the duplicate images by recognizing and detecting minute changes using the GAN, and the sequence is recorded with Long Short-Term Memory (LSTM). Instead of the complete image, the small changes generated using the GAN are substituted, which helps with frame-level compression. Pixel-wise comparison is performed using K-nearest neighbours (KNN) over the frames, clustered with K-means, and Singular Value Decomposition (SVD) is applied to every frame in the video for all three colour channels [Red, Green, Blue] to decrease the dimension of the utility matrix [R, G, B] by extracting its latent factors. Video frames are packed with parameters with the aid of a codec, converted to video format, and the results are compared with the original video. Repeated experiments on several videos with different sizes, durations, frames per second (FPS), and quality demonstrated a significant resampling rate. On average, the output had around a 10% deviation in quality and a size reduction of over half compared with the original video.
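The per-channel SVD step is easy to illustrate: keep only the top-r singular triplets of each colour channel of a frame; the rank r below is an illustrative choice:

```python
# Sketch of the per-channel SVD step: a rank-r approximation of each colour
# channel shrinks the frame's utility matrix; r is illustrative.
import numpy as np

def compress_frame(frame, r=20):
    """frame: (H, W, 3) uint8 array -> rank-r approximation per channel."""
    out = np.empty_like(frame, dtype=np.float64)
    for c in range(3):                                   # R, G, B
        U, s, Vt = np.linalg.svd(frame[..., c].astype(np.float64),
                                 full_matrices=False)
        out[..., c] = (U[:, :r] * s[:r]) @ Vt[:r, :]     # latent factors only
    return np.clip(out, 0, 255).astype(np.uint8)

# Storage per channel drops from H*W values to r*(H + W + 1).
```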
Keywords: Convolutional neural networks (CNN); Generative adversarial network (GAN); Singular value decomposition (SVD); K-nearest neighbours (KNN); Stochastic gradient descent (SGD); Long short-term memory (LSTM)
20. Dynamic Resource Allocation in LTE Radio Access Network Using Machine Learning Techniques
Authors: Eric Michel Deussom Djomadji, Ivan Basile Kabiena, Valery Nkemeni, Ayrton Garcia Belinga À Njere, Michael Ekonde Sone. Journal of Computer and Communications, 2023, No. 6: 73-93.
Current LTE networks are experiencing significant growth in the number of users worldwide. The use of data services for online browsing, e-learning, and online meetings, along with initiatives such as smart cities, means that subscribers stay connected for long periods, thereby saturating a number of signalling resources. One such resource is the Radio Resource Connected (RRC) parameter, which is allocated to eNodeBs with the aim of limiting the number of users connected simultaneously in the network. The fixed allocation of this parameter means that, depending on the traffic at different times of the day and on geographical position, some eNodeBs are saturated with RRC resources (overused) while others have unused RRC resources. However, as these resources are limited, there is a problem of underutilization (non-optimal utilization of resources at the eNodeB level) due to static allocation (manual configuration of resources). The objective of this paper is to design an efficient machine learning model that takes as input key performance indicators (KPIs) such as traffic data, RRC, and simultaneous users for each eNodeB per hour and per day, and accurately predicts the number of RRC resources to be dynamically allocated, in order to avoid traffic and financial losses for the mobile network operator. To reach this target, three machine learning algorithms were studied, namely linear regression, convolutional neural networks, and long short-term memory (LSTM), and three models were trained and evaluated. The model trained with the LSTM algorithm gave the best performance, with 97% accuracy, and was therefore implemented in the proposed solution for RRC resource allocation. An interconnection architecture is also proposed to embed the solution into the operation and maintenance network of a mobile network operator. In this way, the proposed solution can contribute to developing and expanding the concept of the Self-Organizing Network (SON) used in 4G and 5G networks.
Keywords: RRC resources; 4G network; Linear regression; Convolutional neural networks; Long short-term memory; Precision