Journal Literature
38,119 articles found
1. Extended Deep Learning Algorithm for Improved Brain Tumor Diagnosis System
Authors: M. Adimoolam, K. Maithili, N.M. Balamurugan, R. Rajkumar, S. Leelavathy, Raju Kannadasan, Mohd Anul Haq, Ilyas Khan, ElSayed M. Tag El Din, Arfat Ahmad Khan. 《Intelligent Automation & Soft Computing》, 2024, Issue 1, pp. 33-55 (23 pages)
At present, the prediction of brain tumors is performed using Machine Learning (ML) and Deep Learning (DL) algorithms. Although various ML and DL algorithms have been adapted to predict brain tumors to some extent, some concerns still need enhancement, particularly accuracy, sensitivity, and the false positive and false negative rates, to improve the brain tumor prediction system symmetrically. Therefore, this work proposed an Extended Deep Learning Algorithm (EDLA) and measured performance parameters such as accuracy, sensitivity, and the false positive and false negative rates. In addition, these iterated measures were analyzed by comparing the EDLA method with a Convolutional Neural Network (CNN) using the SPSS tool, and the corresponding graphical illustrations were shown. The mean performance measures for the proposed EDLA algorithm over ten iterations were accuracy (97.665%), sensitivity (97.939%), false positive rate (3.012%), and false negative rate (3.182%). In the case of the CNN, the mean accuracy was 94.287%, mean sensitivity 95.612%, mean false positive rate 5.328%, and mean false negative rate 4.756%. These results show that the proposed EDLA method outperformed existing algorithms, including the CNN, and ensures symmetrically improved parameters. The EDLA algorithm thus introduces novelty in terms of its performance and its particular activation function. The proposed method can be utilized effectively for precise and accurate brain tumor detection, and could be applied to other medical diagnoses after modification. If the quantity of dataset records is enormous, then the method's computation power has to be updated.
Keywords: brain tumor; extended deep learning algorithm; convolution neural network; tumor detection; deep learning
2. Enhancing Deep Learning Soil Moisture Forecasting Models by Integrating Physics-based Models (Cited: 1)
Authors: Lu LI, Yongjiu DAI, Zhongwang WEI, Wei SHANGGUAN, Nan WEI, Yonggen ZHANG, Qingliang LI, Xian-Xiang LI. 《Advances in Atmospheric Sciences》 (SCIE, CAS, CSCD), 2024, Issue 7, pp. 1326-1341 (16 pages)
Accurate soil moisture (SM) prediction is critical for understanding hydrological processes. Physics-based (PB) models exhibit large uncertainties in SM predictions arising from uncertain parameterizations and insufficient representation of land-surface processes. In addition to PB models, deep learning (DL) models have been widely used in SM predictions recently. However, few pure DL models have notably high success rates because they lack physical information. Thus, we developed hybrid models that effectively integrate the outputs of PB models into DL models to improve SM predictions. To this end, we first developed a hybrid model based on the attention mechanism to take advantage of PB models at each forecast time scale (attention model). We further built an ensemble model that combined the advantages of different hybrid schemes (ensemble model). We utilized SM forecasts from the Global Forecast System to enhance the convolutional long short-term memory (ConvLSTM) model for 1–16 days of SM predictions. The performances of the proposed hybrid models were investigated and compared with two existing hybrid models. The results showed that the attention model could leverage the benefits of PB models and achieved the best predictability of drought events among the different hybrid models. Moreover, the ensemble model performed best among all hybrid models at all forecast time scales and under different soil conditions. Notably, the ensemble model outperformed the pure DL model over 79.5% of in situ stations for 16-day predictions. These findings suggest that our proposed hybrid models can adequately exploit the benefits of PB model outputs to aid DL models in making SM predictions.
Keywords: soil moisture forecasting; hybrid model; deep learning; ConvLSTM; attention mechanism
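The abstract above does not spell out how the attention-based hybrid blends the physics-based (PB) forecast with the ConvLSTM forecast. The following is a minimal PyTorch sketch of one plausible gating scheme with a learned per-lead-time weight; the layer sizes, the sigmoid gate, and the 16-day horizon are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn

class PBFusionGate(nn.Module):
    """Blend a deep-learning SM forecast with a physics-based (PB) forecast.

    For each forecast lead time, a small network predicts a weight in [0, 1]
    from the two candidate forecasts; the output is their weighted combination.
    Shapes and layer sizes are illustrative assumptions.
    """
    def __init__(self, n_leads: int = 16):
        super().__init__()
        self.gate = nn.Sequential(
            nn.Linear(2 * n_leads, 64),
            nn.ReLU(),
            nn.Linear(64, n_leads),
            nn.Sigmoid(),  # one blending weight per lead time
        )

    def forward(self, dl_forecast: torch.Tensor, pb_forecast: torch.Tensor) -> torch.Tensor:
        # dl_forecast, pb_forecast: (batch, n_leads) soil-moisture series per grid cell
        w = self.gate(torch.cat([dl_forecast, pb_forecast], dim=-1))
        return w * dl_forecast + (1.0 - w) * pb_forecast

# toy usage with random tensors standing in for 16-day forecasts
model = PBFusionGate(n_leads=16)
fused = model(torch.rand(8, 16), torch.rand(8, 16))
print(fused.shape)  # torch.Size([8, 16])
```

In the paper the ensemble model then combines several such hybrid schemes; the gate above only illustrates the single-scheme case.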
3. A Deep Learning Approach for Forecasting Thunderstorm Gusts in the Beijing–Tianjin–Hebei Region (Cited: 1)
Authors: Yunqing LIU, Lu YANG, Mingxuan CHEN, Linye SONG, Lei HAN, Jingfeng XU. 《Advances in Atmospheric Sciences》 (SCIE, CAS, CSCD), 2024, Issue 7, pp. 1342-1363 (22 pages)
Thunderstorm gusts are a common form of severe convective weather in the warm season in North China, and it is of great importance to forecast them correctly. At present, the forecasting of thunderstorm gusts is mainly based on traditional subjective methods, which fail to achieve high-resolution and high-frequency gridded forecasts based on multiple observation sources. In this paper, we propose a deep learning method called Thunderstorm Gusts TransU-net (TG-TransUnet) to forecast thunderstorm gusts in North China based on multi-source gridded product data from the Institute of Urban Meteorology (IUM) with a lead time of 1 to 6 h. To determine the specific range of thunderstorm gusts, we combine three meteorological variables: radar reflectivity factor, lightning location, and 1-h maximum instantaneous wind speed from automatic weather stations (AWSs), and obtain a reasonable ground truth of thunderstorm gusts. Then, we transform the forecasting problem into an image-to-image problem in deep learning under the TG-TransUnet architecture, which is based on convolutional neural networks and a transformer. The analysis and forecast data of the enriched multi-source gridded comprehensive forecasting system for the period 2021–23 are then used as the training, validation, and testing datasets. Finally, the performance of TG-TransUnet is compared with that of other methods. The results show that TG-TransUnet gives the best prediction results at 1–6 h. The IUM is currently using this model to support the forecasting of thunderstorm gusts in North China.
Keywords: thunderstorm gusts; deep learning; weather forecasting; convolutional neural network; transformer
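As a toy illustration of the ground-truth construction described above (combining radar reflectivity, lightning location, and 1-h maximum instantaneous wind), here is a NumPy sketch of the masking idea. The 35 dBZ and 17.2 m/s thresholds and the lightning-count criterion are placeholders, not the values used in the IUM product.

```python
import numpy as np

def gust_ground_truth(reflectivity_dbz: np.ndarray,
                      lightning_count: np.ndarray,
                      max_wind_ms: np.ndarray) -> np.ndarray:
    """Label grid cells as thunderstorm-gust cells (1) or not (0).

    A cell is labelled positive when convection is present (high reflectivity
    and at least one lightning stroke) and the 1-h maximum instantaneous wind
    exceeds a gust threshold. All thresholds are illustrative placeholders.
    """
    convective = (reflectivity_dbz >= 35.0) & (lightning_count >= 1)
    gusty = max_wind_ms >= 17.2
    return (convective & gusty).astype(np.uint8)

# toy 4x4 grids standing in for the gridded analysis fields
rng = np.random.default_rng(0)
label = gust_ground_truth(rng.uniform(0, 60, (4, 4)),
                          rng.integers(0, 3, (4, 4)),
                          rng.uniform(0, 30, (4, 4)))
print(label)
```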
4. ST-LSTM-SA: A New Ocean Sound Velocity Field Prediction Model Based on Deep Learning (Cited: 1)
Authors: Hanxiao YUAN, Yang LIU, Qiuhua TANG, Jie LI, Guanxu CHEN, Wuxu CAI. 《Advances in Atmospheric Sciences》 (SCIE, CAS, CSCD), 2024, Issue 7, pp. 1364-1378 (15 pages)
The scarcity of in-situ ocean observations poses a challenge for real-time information acquisition in the ocean. Among the crucial hydroacoustic environmental parameters, ocean sound velocity exhibits significant spatial and temporal variability and is highly relevant to oceanic research. In this study, we propose a new data-driven approach, leveraging deep learning techniques, for the prediction of sound velocity fields (SVFs). Our novel spatiotemporal prediction model, ST-LSTM-SA, combines Spatiotemporal Long Short-Term Memory (ST-LSTM) with a self-attention mechanism to enable accurate and real-time prediction of SVFs. To circumvent the limited amount of observational data, we employ transfer learning by first training the model using reanalysis datasets, followed by fine-tuning using in-situ analysis data to obtain the final prediction model. By utilizing the historical 12-month SVFs as input, our model predicts the SVFs for the subsequent three months. We compare the performance of five models: Artificial Neural Networks (ANN), Long Short-Term Memory (LSTM), Convolutional LSTM (ConvLSTM), ST-LSTM, and our proposed ST-LSTM-SA model, in a test experiment spanning 2019 to 2022. Our results demonstrate that the ST-LSTM-SA model significantly improves the prediction accuracy and stability of sound velocity in both temporal and spatial dimensions. The ST-LSTM-SA model not only accurately predicts the ocean sound velocity field (SVF), but also provides valuable insights for the spatiotemporal prediction of other oceanic environmental variables.
Keywords: sound velocity field; spatiotemporal prediction; deep learning; self-attention
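The abstract describes ST-LSTM combined with self-attention over a 12-month history to predict the next three months. The sketch below is a much simplified PyTorch stand-in (a plain LSTM plus multi-head self-attention over flattened field vectors); the real ST-LSTM-SA operates on gridded fields and is pretrained on reanalysis data before fine-tuning.

```python
import torch
import torch.nn as nn

class LSTMSelfAttnForecaster(nn.Module):
    """Toy spatiotemporal forecaster: an LSTM over a 12-month history with a
    self-attention layer on the hidden states, predicting the next 3 months.

    Each time step is a flattened field vector of size `n_cells`; this flattening
    and all layer sizes are assumptions of the sketch.
    """
    def __init__(self, n_cells: int, hidden: int = 128, n_heads: int = 4, horizon: int = 3):
        super().__init__()
        self.lstm = nn.LSTM(n_cells, hidden, batch_first=True)
        self.attn = nn.MultiheadAttention(hidden, n_heads, batch_first=True)
        self.head = nn.Linear(hidden, n_cells * horizon)
        self.horizon = horizon
        self.n_cells = n_cells

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, 12, n_cells) -> hidden states for every month
        h, _ = self.lstm(x)
        # self-attention lets each month attend to the whole history
        a, _ = self.attn(h, h, h)
        out = self.head(a[:, -1])            # summary taken from the last step
        return out.view(-1, self.horizon, self.n_cells)

model = LSTMSelfAttnForecaster(n_cells=64)
print(model(torch.rand(2, 12, 64)).shape)    # torch.Size([2, 3, 64])
```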
5. Deep learning for joint channel estimation and feedback in massive MIMO systems (Cited: 1)
Authors: Jiajia Guo, Tong Chen, Shi Jin, Geoffrey Ye Li, Xin Wang, Xiaolin Hou. 《Digital Communications and Networks》 (SCIE, CSCD), 2024, Issue 1, pp. 83-93 (11 pages)
The great potential of massive Multiple-Input Multiple-Output (MIMO) in Frequency Division Duplex (FDD) mode can be fully exploited when the downlink Channel State Information (CSI) is available at base stations. However, accurate CSI is difficult to obtain due to the large amount of feedback overhead caused by massive antennas. In this paper, we propose a deep learning based joint channel estimation and feedback framework, which comprehensively realizes the estimation, compression, and reconstruction of downlink channels in FDD massive MIMO systems. Two networks are constructed to perform estimation and feedback explicitly and implicitly. The explicit network adopts a multi-Signal-to-Noise-Ratio (SNR) technique to obtain a single trained channel estimation subnet that works well at different SNRs and employs a deep residual network to reconstruct the channels, while the implicit network directly compresses pilots and sends them back to reduce network parameters. A quantization module is also designed to generate data-bearing bitstreams. Simulation results show that the two proposed networks exhibit excellent reconstruction performance and are robust to different environments and quantization errors.
Keywords: channel estimation; CSI feedback; deep learning; massive MIMO; FDD
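The quantization module mentioned above turns the compressed CSI codeword into a bitstream. A minimal uniform-quantization sketch in PyTorch is shown below; the 4-bit resolution and the [0, 1] codeword range are assumptions (practical implementations often add a straight-through estimator so the quantizer can be trained end to end).

```python
import torch

def quantize(codeword: torch.Tensor, n_bits: int = 4) -> torch.Tensor:
    """Uniformly quantize a codeword in [0, 1] to n_bits integer levels."""
    levels = 2 ** n_bits - 1
    return torch.round(codeword.clamp(0.0, 1.0) * levels).to(torch.int64)

def dequantize(symbols: torch.Tensor, n_bits: int = 4) -> torch.Tensor:
    """Map integer symbols back to the centre of their quantization bin."""
    levels = 2 ** n_bits - 1
    return symbols.to(torch.float32) / levels

codeword = torch.rand(1, 32)          # compressed CSI produced by an encoder network
bits = quantize(codeword)             # integer symbols the UE would feed back
recovered = dequantize(bits)          # what the base-station decoder sees
print((codeword - recovered).abs().max())  # error bounded by half a bin width
```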
6. A credibility-aware swarm-federated deep learning framework in internet of vehicles (Cited: 1)
Authors: Zhe Wang, Xinhang Li, Tianhao Wu, Chen Xu, Lin Zhang. 《Digital Communications and Networks》 (SCIE, CSCD), 2024, Issue 1, pp. 150-157 (8 pages)
Although Federated Deep Learning (FDL) enables distributed machine learning in the Internet of Vehicles (IoV), it requires multiple clients to upload model parameters, and thus still incurs unavoidable communication overhead and data privacy risks. The recently proposed Swarm Learning (SL) provides a decentralized machine learning approach for unit edge computing and blockchain-based coordination. This paper proposes a Swarm-Federated Deep Learning framework for the IoV system (IoV-SFDL) that integrates SL into the FDL framework. The IoV-SFDL organizes vehicles to generate local SL models with adjacent vehicles based on blockchain-empowered SL, and then aggregates the global FDL model among different SL groups with a credibility weights prediction algorithm. Extensive experimental results show that, compared with the baseline frameworks, the proposed IoV-SFDL framework reduces the overhead of client-to-server communication by 16.72%, while the model performance improves by about 5.02% for the same training iterations.
Keywords: swarm learning; federated deep learning; Internet of Vehicles; privacy; efficiency
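The credibility-weighted aggregation step can be pictured as a weighted federated average over the SL groups' parameters. Below is a minimal PyTorch sketch; the credibility scores are taken as given here, whereas the paper predicts them with a dedicated algorithm.

```python
import torch
import torch.nn as nn

def credibility_weighted_average(state_dicts, credibilities):
    """Aggregate client model parameters using normalized credibility weights.

    state_dicts: list of model.state_dict() objects from the SL groups/clients.
    credibilities: list of non-negative scores (assumed given in this sketch).
    """
    w = torch.tensor(credibilities, dtype=torch.float32)
    w = w / w.sum()                                     # normalize to sum to 1
    keys = state_dicts[0].keys()
    return {k: sum(wi * sd[k].float() for wi, sd in zip(w, state_dicts)) for k in keys}

# toy example with three clients sharing the same architecture
clients = [nn.Linear(4, 2) for _ in range(3)]
global_params = credibility_weighted_average([c.state_dict() for c in clients],
                                             credibilities=[0.9, 0.5, 0.2])
global_model = nn.Linear(4, 2)
global_model.load_state_dict(global_params)             # credibility-weighted global model
```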
7. Assessments of Data-Driven Deep Learning Models on One-Month Predictions of Pan-Arctic Sea Ice Thickness (Cited: 1)
Authors: Chentao SONG, Jiang ZHU, Xichen LI. 《Advances in Atmospheric Sciences》 (SCIE, CAS, CSCD), 2024, Issue 7, pp. 1379-1390 (12 pages)
In recent years, deep learning methods have gradually been applied to prediction tasks related to Arctic sea ice concentration, but relatively little research has been conducted at larger spatial and temporal scales, mainly due to the limited time coverage of observations and reanalysis data. Meanwhile, deep learning predictions of sea ice thickness (SIT) have yet to receive ample attention. In this study, two data-driven deep learning (DL) models are built based on the ConvLSTM and fully convolutional U-net (FC-Unet) algorithms, trained using CMIP6 historical simulations for transfer learning, and fine-tuned using reanalysis/observations. These models enable monthly predictions of Arctic SIT without considering the complex physical processes involved. Through comprehensive assessments of prediction skill by season and region, the results suggest that using a broader set of CMIP6 data for transfer learning, as well as incorporating multiple climate variables as predictors, contributes to better prediction results, although both DL models can effectively predict the spatiotemporal features of SIT anomalies. Regarding the predicted SIT anomalies of the FC-Unet model, the spatial correlations with reanalysis reach an average level of 89% over all months, while the temporal anomaly correlation coefficients are close to unity in most cases. The models also demonstrate robust performance in predicting SIT and SIE during extreme events. The effectiveness and reliability of the proposed deep transfer learning models in predicting Arctic SIT can facilitate more accurate pan-Arctic predictions, aiding climate change research and real-time business applications.
Keywords: Arctic sea ice thickness; deep learning; spatiotemporal sequence prediction; transfer learning
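The two-phase training described above (transfer learning on CMIP6 simulations, then fine-tuning on reanalysis/observations) can be sketched as two passes of the same training loop with a reduced learning rate in the second pass. The model, toy data, epoch counts, and learning rates below are placeholders, not the paper's ConvLSTM/FC-Unet setup.

```python
import torch
import torch.nn as nn
from torch.utils.data import DataLoader, TensorDataset

def train_phase(model, loader, epochs, lr):
    """One training phase: plain MSE regression over (predictors, SIT) pairs."""
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    loss_fn = nn.MSELoss()
    for _ in range(epochs):
        for x, y in loader:
            opt.zero_grad()
            loss = loss_fn(model(x), y)
            loss.backward()
            opt.step()

# placeholder network mapping climate predictors to an SIT anomaly
model = nn.Sequential(nn.Linear(8, 64), nn.ReLU(), nn.Linear(64, 1))

# random stand-ins: abundant "CMIP6" samples vs. scarce "reanalysis" samples
cmip6_loader = DataLoader(TensorDataset(torch.rand(256, 8), torch.rand(256, 1)), batch_size=32)
reanalysis_loader = DataLoader(TensorDataset(torch.rand(64, 8), torch.rand(64, 1)), batch_size=16)

train_phase(model, cmip6_loader, epochs=2, lr=1e-3)        # phase 1: transfer learning
train_phase(model, reanalysis_loader, epochs=2, lr=1e-4)   # phase 2: gentle fine-tuning
```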
8. Automatic detection of small bowel lesions with different bleeding risks based on deep learning models (Cited: 1)
Authors: Rui-Ya Zhang, Peng-Peng Qiang, Ling-Jun Cai, Tao Li, Yan Qin, Yu Zhang, Yi-Qing Zhao, Jun-Ping Wang. 《World Journal of Gastroenterology》 (SCIE, CAS), 2024, Issue 2, pp. 170-183 (14 pages)
BACKGROUND: Deep learning provides an efficient automatic image recognition method for small bowel (SB) capsule endoscopy (CE) that can assist physicians in diagnosis. However, the existing deep learning models present some unresolved challenges. AIM: To propose a novel and effective classification and detection model to automatically identify various SB lesions and their bleeding risks, and to label the lesions accurately so as to enhance the diagnostic efficiency of physicians and the ability to identify high-risk bleeding groups. METHODS: The proposed model is a two-stage method that combines image classification with object detection. First, we utilized the improved ResNet-50 classification model to classify endoscopic images into SB lesion images, normal SB mucosa images, and invalid images. Then, the improved YOLO-V5 detection model was utilized to detect the type of lesion and its risk of bleeding, and the location of the lesion was marked. We constructed training and testing sets and compared model-assisted reading with physician reading. RESULTS: The accuracy of the model constructed in this study reached 98.96%, which was higher than the accuracy of other systems using only a single module. The sensitivity, specificity, and accuracy of model-assisted reading detection of all images were 99.17%, 99.92%, and 99.86%, which were significantly higher than those of the endoscopists' diagnoses. The image processing time of the model was 48 ms/image, while the image processing time of the physicians was 0.40±0.24 s/image (P<0.001). CONCLUSION: The deep learning model combining image classification with object detection exhibits a satisfactory diagnostic effect on a variety of SB lesions and their bleeding risks in CE images, which enhances the diagnostic efficiency of physicians and improves their ability to identify high-risk bleeding groups.
Keywords: artificial intelligence; deep learning; capsule endoscopy; image classification; object detection; bleeding risk
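The two-stage reading strategy (a classification gate followed by lesion detection) can be sketched as below. The torchvision ResNet-50 head, the class-index convention, and the dummy detector are assumptions for illustration; the paper uses an improved ResNet-50 and an improved YOLO-V5, which are not reproduced here.

```python
import torch
import torch.nn as nn
from torchvision.models import resnet50

def two_stage_read(image: torch.Tensor, classifier: nn.Module, detector) -> list:
    """Stage 1: triage a capsule-endoscopy frame; Stage 2: localize lesions.

    classifier: 3-class network (lesion / normal mucosa / invalid frame).
    detector: any callable returning boxes with lesion type and bleeding risk;
    it is only invoked for frames classified as containing a lesion.
    """
    with torch.no_grad():
        cls = classifier(image.unsqueeze(0)).argmax(dim=1).item()
    if cls != 0:              # 0 = lesion class (label ordering is an assumption)
        return []             # normal or invalid frames skip the detector
    return detector(image)

# stand-in components: an untrained ResNet-50 re-headed for 3 classes and a dummy detector
classifier = resnet50(weights=None)
classifier.fc = nn.Linear(classifier.fc.in_features, 3)
classifier.eval()
dummy_detector = lambda img: [{"box": [10, 10, 50, 50], "lesion": "ulcer", "bleeding_risk": "high"}]

print(two_stage_read(torch.rand(3, 224, 224), classifier, dummy_detector))
```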
9. A gated recurrent unit model to predict Poisson's ratio using deep learning (Cited: 1)
Authors: Fahd Saeed Alakbari, Mysara Eissa Mohyaldinn, Mohammed Abdalla Ayoub, Ibnelwaleed A. Hussein, Ali Samer Muhsan, Syahrir Ridha, Abdullah Abduljabbar Salih. 《Journal of Rock Mechanics and Geotechnical Engineering》 (SCIE, CSCD), 2024, Issue 1, pp. 123-135 (13 pages)
Static Poisson's ratio (vs) is crucial for determining geomechanical properties in petroleum applications, namely sand production. Some models have been used to predict vs; however, the published models were limited to specific data ranges with an average absolute percentage relative error (AAPRE) of more than 10%. The published gated recurrent unit (GRU) models do not consider trend analysis to show physical behaviors. In this study, we aim to develop a GRU model using trend analysis and three inputs for predicting vs based on a broad range of data: vs (0.1627-0.4492), bulk formation density (RHOB) (0.315-2.994 g/mL), compressional time (DTc) (44.43-186.9 μs/ft), and shear time (DTs) (72.9-341.2 μs/ft). The GRU model was evaluated using different approaches, including statistical error analyses. The GRU model showed the proper trends, and the model data ranges were wider than previous ones. The GRU model has the largest correlation coefficient (R) of 0.967 and the lowest AAPRE, average percent relative error (APRE), root mean square error (RMSE), and standard deviation (SD) of 3.228%, 1.054%, 4.389, and 0.013, respectively, compared to other models. The GRU model has high accuracy for the different datasets: training, validation, testing, and the whole dataset, with R and AAPRE values of 0.981 and 2.601%, 0.966 and 3.274%, 0.967 and 3.228%, and 0.977 and 2.861%, respectively. The group error analyses of all inputs show that the GRU model has less than 5% AAPRE for all input ranges, which is superior to other models that have AAPRE values of more than 10% at various ranges of the inputs.
Keywords: static Poisson's ratio; deep learning; gated recurrent unit (GRU); sand control; trend analysis; geomechanical properties
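A minimal PyTorch sketch of a GRU regressor with the three inputs named above (RHOB, DTc, DTs) is given below. Treating the well log as a depth-ordered sequence, and all layer sizes, are assumptions of this sketch rather than the authors' published architecture.

```python
import torch
import torch.nn as nn

class GRUPoissonModel(nn.Module):
    """Toy GRU regressor: maps a depth-ordered sequence of (RHOB, DTc, DTs)
    log readings to a static Poisson's ratio estimate per depth sample."""
    def __init__(self, hidden: int = 32):
        super().__init__()
        self.gru = nn.GRU(input_size=3, hidden_size=hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)

    def forward(self, logs: torch.Tensor) -> torch.Tensor:
        # logs: (batch, depth_steps, 3) with columns RHOB, DTc, DTs
        h, _ = self.gru(logs)
        return self.head(h).squeeze(-1)      # (batch, depth_steps) Poisson's ratio

model = GRUPoissonModel()
logs = torch.rand(4, 20, 3)                  # 4 wells, 20 depth samples each
print(model(logs).shape)                      # torch.Size([4, 20])
```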
10. A spatiotemporal deep learning method for excavation-induced wall deflections (Cited: 1)
Authors: Yuanqin Tao, Shaoxiang Zeng, Honglei Sun, Yuanqiang Cai, Jinzhang Zhang, Xiaodong Pan. 《Journal of Rock Mechanics and Geotechnical Engineering》 (SCIE, CSCD), 2024, Issue 8, pp. 3327-3338 (12 pages)
Data-driven approaches such as neural networks are increasingly used for deep excavations due to the growing amount of available monitoring data in practical projects. However, most neural network models only use the data from a single monitoring point and neglect the spatial relationships between multiple monitoring points. Besides, most models lack flexibility in providing predictions for multiple days after the monitoring activity. This study proposes a sequence-to-sequence (seq2seq) two-dimensional (2D) convolutional long short-term memory neural network (S2SCL2D) for predicting the spatiotemporal wall deflections induced by deep excavations. The model utilizes the data from all monitoring points on the entire wall and extracts spatiotemporal features from the data by combining 2D convolutional layers and long short-term memory (LSTM) layers. The S2SCL2D model achieves long-term prediction of wall deflections through a recursive seq2seq structure. The excavation depth, which has a significant impact on wall deflections, is also considered using a feature fusion method. An excavation project in Hangzhou, China, is used to illustrate the proposed model. The results demonstrate that the S2SCL2D model has superior prediction accuracy and robustness compared with the LSTM and S2SCL1D (one-dimensional) models. The prediction model also demonstrates strong generalizability when applied to an adjacent excavation. Based on the long-term prediction results, practitioners can plan and allocate resources in advance to address potential engineering issues.
Keywords: braced excavation; wall deflections; deep learning; convolutional layer; long short-term memory (LSTM); sequence to sequence (seq2seq)
11. Flood Velocity Prediction Using Deep Learning Approach (Cited: 1)
Authors: LUO Shaohua, DING Linfang, TEKLE Gebretsadik Mulubirhan, BRULAND Oddbjørn, FAN Hongchao. 《Journal of Geodesy and Geoinformation Science》 (CSCD), 2024, Issue 1, pp. 59-73 (15 pages)
Floods are one of the most serious natural disasters and can cause huge societal and economic losses. Extensive research has been conducted on topics like flood monitoring, prediction, and loss estimation. In these research fields, flood velocity plays a crucial role and is an important factor that influences the reliability of the outcomes. Traditional methods rely on physical models for flood simulation and prediction and can generate accurate results, but they often take a long time. Deep learning technology has recently shown significant potential in the same field, especially in terms of efficiency, helping to overcome the time consumption associated with traditional methods. This study explores the potential of deep learning models in predicting flood velocity. More specifically, we use a Multi-Layer Perceptron (MLP) model, a specific type of Artificial Neural Network (ANN), to predict the velocity in the test area of the Lundesokna River in Norway, which has diverse terrain conditions. Geographic data and flood velocity simulated based on a physical hydraulic model are used in the study for the pre-training, optimization, and testing of the MLP model. Our experiment indicates that the MLP model has the potential to predict flood velocity under the diverse terrain conditions of the river with acceptable accuracy against simulated velocity results, but with a significant decrease in training time and testing time. Meanwhile, we discuss the limitations and directions for improvement in future work.
Keywords: flood velocity prediction; geographic data; MLP; deep learning
12. Early identification of stroke through deep learning with multi-modal human speech and movement data
Authors: Zijun Ou, Haitao Wang, Bin Zhang, Haobang Liang, Bei Hu, Longlong Ren, Yanjuan Liu, Yuhu Zhang, Chengbo Dai, Hejun Wu, Weifeng Li, Xin Li. 《Neural Regeneration Research》 (SCIE, CAS), 2025, Issue 1, pp. 234-241 (8 pages)
Early identification and treatment of stroke can greatly improve patient outcomes and quality of life. Although clinical tests such as the Cincinnati Pre-hospital Stroke Scale (CPSS) and the Face Arm Speech Test (FAST) are commonly used for stroke screening, accurate administration is dependent on specialized training. In this study, we propose a novel multimodal deep learning approach, based on the FAST, for assessing suspected stroke patients exhibiting symptoms such as limb weakness, facial paresis, and speech disorders in acute settings. We collected a dataset comprising videos and audio recordings of emergency room patients performing designated limb movements, facial expressions, and speech tests based on the FAST. We compared the constructed deep learning model, which was designed to process multi-modal datasets, with six prior models that achieved good action classification performance, including I3D, SlowFast, X3D, TPN, TimeSformer, and MViT. We found that the findings of our deep learning model had a higher clinical value compared with the other approaches. Moreover, the multi-modal model outperformed its single-module variants, highlighting the benefit of utilizing multiple types of patient data, such as action videos and speech audio. These results indicate that a multi-modal deep learning model combined with the FAST could greatly improve the accuracy and sensitivity of early identification of stroke, thus providing a practical and powerful tool for assessing stroke patients in an emergency clinical setting.
Keywords: artificial intelligence; deep learning; diagnosis; early detection; FAST; screening; stroke
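The abstract does not detail how the video and audio streams are fused. The sketch below shows a generic late-fusion arrangement in PyTorch (separate encoders whose embeddings are concatenated before a binary classifier); the feature dimensions and encoder depths are placeholders, and the paper's actual encoders are far larger video/speech backbones.

```python
import torch
import torch.nn as nn

class LateFusionStrokeNet(nn.Module):
    """Toy late-fusion model: separate encoders for a video-action embedding and
    a speech-audio embedding, concatenated before a binary stroke classifier."""
    def __init__(self, video_dim: int = 512, audio_dim: int = 128):
        super().__init__()
        self.video_enc = nn.Sequential(nn.Linear(video_dim, 128), nn.ReLU())
        self.audio_enc = nn.Sequential(nn.Linear(audio_dim, 128), nn.ReLU())
        self.classifier = nn.Linear(256, 2)   # stroke vs. non-stroke

    def forward(self, video_feat: torch.Tensor, audio_feat: torch.Tensor) -> torch.Tensor:
        fused = torch.cat([self.video_enc(video_feat), self.audio_enc(audio_feat)], dim=-1)
        return self.classifier(fused)

model = LateFusionStrokeNet()
logits = model(torch.rand(8, 512), torch.rand(8, 128))
print(logits.shape)   # torch.Size([8, 2])
```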
13. A Case Study Applying Mesoscience to Deep Learning
Authors: Li Guo, Fanyong Meng, Pengfei Qin, Zhaojie Xia, Qi Chang, Jianhua Chen, Jinghai Li. 《Engineering》 (SCIE, EI, CAS, CSCD), 2024, Issue 8, pp. 84-93 (10 pages)
In this paper, we propose mesoscience-guided deep learning (MGDL), a deep learning modeling approach guided by mesoscience, to study complex systems. When establishing the sample dataset based on the same system evolution data, MGDL differs from conventional deep learning methods in that it introduces the treatment of the dominant mechanisms of the complex system and the interactions between them, according to the principle of compromise in competition (CIC) in mesoscience. Mesoscience constraints are then integrated into the loss function to guide the deep learning training. Two methods are proposed for adding the mesoscience constraints. The physical interpretability of the model-training process is improved by MGDL because guidance and constraints based on physical principles are provided. MGDL was evaluated using a bubbling bed modeling case and compared with traditional techniques. With a much smaller training dataset, the results indicate that mesoscience-constraint-based model training has distinct advantages in terms of convergence stability and prediction accuracy, and it can be widely applied to various neural network configurations. The MGDL approach proposed in this paper is a novel method for utilizing physical background information during deep learning model training. Further exploration of MGDL will continue in the future.
Keywords: mesoscience; deep learning; complex system; gas-solid system; bubbling bed
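Integrating a mesoscience constraint into the loss function can be pictured as a penalty term added to the ordinary data-fit loss. The sketch below uses a stand-in constraint purely for illustration; the actual compromise-in-competition relations used by MGDL are problem specific and not reproduced here.

```python
import torch
import torch.nn as nn

def constrained_loss(pred: torch.Tensor,
                     target: torch.Tensor,
                     constraint_residual: torch.Tensor,
                     lam: float = 0.1) -> torch.Tensor:
    """Data-fit loss plus a penalty on how strongly the prediction violates a
    physical (here, stand-in) constraint, in the spirit of adding mesoscience
    constraints to the training objective."""
    data_term = nn.functional.mse_loss(pred, target)
    physics_term = constraint_residual.pow(2).mean()
    return data_term + lam * physics_term

# toy example: the "constraint" below just asks predictions to sum to 1 per
# sample, standing in for a real compromise-in-competition relation
pred = torch.rand(16, 4, requires_grad=True)
target = torch.rand(16, 4)
residual = pred.sum(dim=1) - 1.0
loss = constrained_loss(pred, target, residual)
loss.backward()
print(float(loss))
```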
14. Modeling Geometrically Nonlinear FG Plates: A Fast and Accurate Alternative to IGA Method Based on Deep Learning
Authors: Se Li, Tiantang Yu, Tinh Quoc Bui. 《Computer Modeling in Engineering & Sciences》 (SCIE, EI), 2024, Issue 3, pp. 2793-2808 (16 pages)
Isogeometric analysis (IGA) is known to show advanced features compared to traditional finite element approaches. Using IGA, one may accurately obtain the geometrically nonlinear bending behavior of plates with functional grading (FG). However, the procedure is usually complex and often time-consuming. We thus put forward a deep learning method to model the geometrically nonlinear bending behavior of FG plates, bypassing the complex IGA simulation process. A long bidirectional short-term memory (BLSTM) recurrent neural network is trained using the load and gradient index as inputs and the displacement responses as outputs. The nonlinear relationship between the outputs and the inputs is constructed using machine learning so that the displacements can be directly estimated by the deep learning network. To provide enough training data, we use S-FSDT Von-Karman IGA and obtain the displacement responses for different loads and gradient indexes. Results show that the recognition error is low, and they demonstrate the feasibility of the deep learning technique as a fast and accurate alternative to IGA for modeling the geometrically nonlinear bending behavior of FG plates.
Keywords: FG plates; geometric nonlinearity; deep learning; BLSTM; IGA; S-FSDT
15. An Efficient Modelling of Oversampling with Optimal Deep Learning Enabled Anomaly Detection in Streaming Data
Authors: R. Rajakumar, S. Sathiya Devi. 《China Communications》 (SCIE, CSCD), 2024, Issue 5, pp. 249-260 (12 pages)
Recently, anomaly detection (AD) in streaming data has gained significant attention among research communities due to its applicability in finance, business, healthcare, education, etc. Recent developments in deep learning (DL) models have proven helpful in the detection and classification of anomalies. This article designs an oversampling with optimal deep learning-based streaming data classification (OS-ODLSDC) model. The aim of the OS-ODLSDC model is to recognize and classify the presence of anomalies in streaming data. The proposed OS-ODLSDC model initially undergoes a preprocessing step. Since streaming data is unbalanced, the support vector machine based Synthetic Minority Over-sampling Technique (SVM-SMOTE) is applied for the oversampling process. Besides, the OS-ODLSDC model employs a bidirectional long short-term memory (BiLSTM) network for AD and classification. Finally, the root mean square propagation (RMSProp) optimizer is applied for optimal hyperparameter tuning of the BiLSTM model. To ensure the promising performance of the OS-ODLSDC model, a wide-ranging experimental analysis is performed using three benchmark datasets: CICIDS 2018, KDD-Cup 1999, and NSL-KDD.
Keywords: anomaly detection; deep learning; hyperparameter optimization; oversampling; SMOTE; streaming data
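A minimal sketch of the two key ingredients named above, SVM-SMOTE oversampling followed by a BiLSTM classifier trained with RMSProp, is shown below using imbalanced-learn and PyTorch. The toy data, window length, and layer sizes are assumptions; the paper additionally performs hyperparameter tuning.

```python
import numpy as np
import torch
import torch.nn as nn
from imblearn.over_sampling import SVMSMOTE   # SVM-guided SMOTE oversampling

# toy imbalanced stream windows: 200 normal vs. 20 anomalous sequences of length 10
rng = np.random.default_rng(1)
X = rng.normal(size=(220, 10))
y = np.array([0] * 200 + [1] * 20)

X_res, y_res = SVMSMOTE(random_state=1).fit_resample(X, y)   # balance the classes

class BiLSTMDetector(nn.Module):
    """Toy BiLSTM anomaly classifier over 1-D windows of streaming data."""
    def __init__(self, hidden: int = 16):
        super().__init__()
        self.lstm = nn.LSTM(1, hidden, batch_first=True, bidirectional=True)
        self.head = nn.Linear(2 * hidden, 2)

    def forward(self, x):
        h, _ = self.lstm(x.unsqueeze(-1))      # (batch, time, 2*hidden)
        return self.head(h[:, -1])             # classify from the final step

model = BiLSTMDetector()
opt = torch.optim.RMSprop(model.parameters(), lr=1e-3)   # RMSProp, as in the paper
loss_fn = nn.CrossEntropyLoss()

xb = torch.from_numpy(X_res.astype("float32"))
yb = torch.from_numpy(y_res).long()
for _ in range(3):                             # a few illustrative epochs
    opt.zero_grad()
    loss = loss_fn(model(xb), yb)
    loss.backward()
    opt.step()
```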
16. Classification of Sailboat Tell Tail Based on Deep Learning
Authors: CHANG Xiaofeng, YU Jintao, GAO Ying, DING Hongchen, LIU Yulong, YU Huaming. 《Journal of Ocean University of China》 (SCIE, CAS, CSCD), 2024, Issue 3, pp. 710-720 (11 pages)
The tell tail is usually placed on the triangular sail to display the running state of the air flow on the sail surface. Accurately judging the drift of the tell tail during sailing is of great significance for achieving the best sailing effect. Normally, it is difficult for sailors, affected by strong sunlight and visual fatigue, to keep an eye on the tell tail for a long time and accurately judge its changes. In this case, we adopt computer vision technology in the hope of helping sailors judge the changes of the tell tail with ease. This paper proposes, for the first time, a method to classify sailboat tell tails based on deep learning and an expert guidance system, supported by a sailboat tell tail classification dataset and expert guidance on interpreting the tell tail states under different sea wind conditions, and it examines the feature extraction performance. Considering that the expressive capability of computational features varies across visual tasks, the paper focuses on five tell tail computing features, which are recoded by an automatic encoder and classified by an SVM classifier. All experimental samples were randomly divided into five groups; four groups were selected as the training set to train the classifier, and the remaining group was used as the test set for testing. In the experiments, the highest recognition value, 80.26%, was obtained with the ResNet network, whose deep computing features were used to achieve better operational results. The method can be used to assist sailors in making better judgements about tell tail changes during sailing.
Keywords: tell tail; sailboat; classification; deep learning
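The feature-recoding step described above (hand-crafted tell tail features re-encoded by an autoencoder and classified by an SVM) can be sketched as follows. The five-dimensional features, the latent size, and the random labels are placeholders standing in for the paper's dataset.

```python
import torch
import torch.nn as nn
from sklearn.svm import SVC

# toy stand-in for tell-tail feature vectors (five features per image) and
# drift-state class labels
X = torch.rand(120, 5)
y = torch.randint(0, 3, (120,))

class AutoEncoder(nn.Module):
    """Tiny autoencoder used only to re-encode the raw features."""
    def __init__(self, dim_in: int = 5, dim_latent: int = 3):
        super().__init__()
        self.enc = nn.Sequential(nn.Linear(dim_in, 8), nn.ReLU(), nn.Linear(8, dim_latent))
        self.dec = nn.Sequential(nn.Linear(dim_latent, 8), nn.ReLU(), nn.Linear(8, dim_in))

    def forward(self, x):
        z = self.enc(x)
        return self.dec(z), z

ae = AutoEncoder()
opt = torch.optim.Adam(ae.parameters(), lr=1e-2)
for _ in range(200):                      # train the autoencoder to reconstruct the features
    opt.zero_grad()
    recon, _ = ae(X)
    loss = nn.functional.mse_loss(recon, X)
    loss.backward()
    opt.step()

with torch.no_grad():
    _, Z = ae(X)                          # re-encoded (latent) features
clf = SVC(kernel="rbf").fit(Z.numpy(), y.numpy())   # SVM classifier on the codes
print("training accuracy:", clf.score(Z.numpy(), y.numpy()))
```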
17. Working condition recognition of sucker rod pumping system based on 4-segment time-frequency signature matrix and deep learning
Authors: Yun-Peng He, Hai-Bo Cheng, Peng Zeng, Chuan-Zhi Zang, Qing-Wei Dong, Guang-Xi Wan, Xiao-Ting Dong. 《Petroleum Science》 (SCIE, EI, CAS, CSCD), 2024, Issue 1, pp. 641-653 (13 pages)
High-precision and real-time diagnosis of the sucker rod pumping system (SRPS) is important for quickly mastering oil well operations. Deep learning-based classification of the dynamometer card (DC) of oil wells is an efficient diagnosis method. However, feeding the DC as a two-dimensional image into a deep learning framework suffers from low feature utilization and high computational effort. Additionally, different SRPSs in an oil field have various system parameters, and the same SRPS generates different DCs at different moments. Thus, there is heterogeneity in field data, which can dramatically impair the diagnostic accuracy. To solve the above problems, a working condition recognition method based on a 4-segment time-frequency signature matrix (4S-TFSM) and deep learning is presented in this paper. First, the 4-segment time-frequency signature (4S-TFS) method, which can reduce the computing power requirements, is proposed for feature extraction from DC data. Subsequently, the 4S-TFSM is constructed by relative normalization and matrix calculation to synthesize the features of multiple data and solve the problem of data heterogeneity. Finally, a convolutional neural network (CNN), one of the deep learning frameworks, is used to determine the working conditions based on the 4S-TFSM. Experiments on field data verify that the proposed diagnostic method based on 4S-TFSM and CNN (4S-TFSM-CNN) can significantly improve the accuracy of working condition recognition with lower computational cost. To the best of our knowledge, this is the first work to discuss the effect of data heterogeneity on the working condition recognition performance of the SRPS.
Keywords: sucker-rod pumping system; dynamometer card; working condition recognition; deep learning; time-frequency signature; time-frequency signature matrix
18. Social Media-Based Surveillance Systems for Health Informatics Using Machine and Deep Learning Techniques: A Comprehensive Review and Open Challenges
Authors: Samina Amin, Muhammad Ali Zeb, Hani Alshahrani, Mohammed Hamdi, Mohammad Alsulami, Asadullah Shaikh. 《Computer Modeling in Engineering & Sciences》 (SCIE, EI), 2024, Issue 5, pp. 1167-1202 (36 pages)
Social media (SM) based surveillance systems, combined with machine learning (ML) and deep learning (DL) techniques, have shown potential for the early detection of epidemic outbreaks. This review discusses the current state of SM-based surveillance methods for early epidemic outbreaks and the role of ML and DL in enhancing their performance. Every year, a large amount of data related to epidemic outbreaks, particularly Twitter data, is generated on SM. This paper outlines the theme of SM analysis for tracking health-related issues and detecting epidemic outbreaks in SM, along with the ML and DL techniques that have been configured for the detection of epidemic outbreaks. DL has emerged as a promising ML technique that adapts multiple layers of representations or features of the data and yields state-of-the-art extrapolation results. In recent years, along with the success of ML and DL in many other application domains, both ML and DL have also become popular in SM analysis. This paper aims to provide an overview of epidemic outbreaks in SM and then outlines a comprehensive analysis of ML and DL approaches and their existing applications in SM analysis. Finally, this review serves the purpose of offering suggestions, ideas, and proposals, along with highlighting the ongoing challenges in the field of early outbreak detection that still need to be addressed.
Keywords: social media; epidemic; machine learning; deep learning; health informatics; pandemic
19. A Deep Learning Approach for Landmines Detection Based on Airborne Magnetometry Imaging and Edge Computing
Authors: Ahmed Barnawi, Krishan Kumar, Neeraj Kumar, Bander Alzahrani, Amal Almansour. 《Computer Modeling in Engineering & Sciences》 (SCIE, EI), 2024, Issue 5, pp. 2117-2137 (21 pages)
Landmines continue to pose an ongoing threat in various regions around the world, with countless buried landmines affecting numerous human lives. The detonation of these landmines results in thousands of casualties reported worldwide annually. Therefore, there is a pressing need to employ diverse landmine detection techniques for their removal. One effective approach for landmine detection is UAV (Unmanned Aerial Vehicle)-based airborne magnetometry, which identifies magnetic anomalies in the local terrestrial magnetic field. It can generate a contour plot or heat map that visually represents the magnetic field strength. Despite the effectiveness of this approach, landmine removal remains a challenging and resource-intensive task, fraught with risks. Edge computing, on the other hand, can play a crucial role in critical drone monitoring applications like landmine detection. By processing data locally on a nearby edge server, edge computing can reduce communication latency and bandwidth requirements, allowing real-time analysis of magnetic field data. It enables faster decision-making and more efficient landmine detection, potentially saving lives and minimizing the risks involved in the process. Furthermore, edge computing can provide enhanced security and privacy by keeping sensitive data close to the source, reducing the chances of data exposure during transmission. This paper introduces the MAGnetometry Imaging based Classification System (MAGICS), a fully automated UAV-based system designed for landmine and buried object detection and localization. We have developed an efficient deep learning-based strategy for automatic image classification using magnetometry dataset traces. By simulating the proposal in various network scenarios, we have successfully detected landmine signatures present in the magnetometry images. The trained models exhibit significant performance improvements, achieving a maximum mean average precision value of 97.8%.
Keywords: CNN; deep learning; landmine detection; magnetometer; mean average precision; UAV
20. Delineating homogeneous domains of fractured rocks using topological manifolds and deep learning
Authors: Yongqiang Liu, Jianping Chen, Fujun Zhou, Jiewei Zhan, Wanglai Xu, Jianhua Yan. 《Journal of Rock Mechanics and Geotechnical Engineering》 (SCIE, CSCD), 2024, Issue 8, pp. 2996-3013 (18 pages)
Determining homogeneous domains statistically is helpful for engineering geological modeling and rock mass stability evaluation. In this study, a technique that can integrate lithology, geotechnical, and structural information is proposed to delineate homogeneous domains. This technique is then applied to a high and steep slope along a road. First, geological and geotechnical domains were described based on lithology, faults, and shear zones. Next, topological manifolds were used to eliminate the incompatibility between orientations and other parameters (i.e., trace length and roughness) so that the data concerning the various properties of each discontinuity could be matched and characterized in the same Euclidean space. Thus, the influence of the implicit combined effect among parameter sequences on the homogeneous domains could be considered. A deep learning technique was employed to quantify abstract features of the characterization images of discontinuity properties and to assess the similarity of rock mass structures. The results show that the technique can effectively distinguish structural variations and outperforms conventional methods. It can handle multi-source engineering geological information and multiple discontinuity parameters. This technique can also minimize the interference of human factors and delineate homogeneous domains based on orientations or multiple parameters with arbitrary distributions to satisfy different engineering requirements.
Keywords: homogeneous domain; geological domain; geotechnical domain; structural domain; topological manifold; deep learning