Journal Articles
113,459 articles found
A DeepLabv3+-Based Method for Corrosion Detection in Hull Structures
1
Authors: 向林浩, 方昊昱, 周健, 张瑜, 李位星 《船海工程》 (Ship & Ocean Engineering), PKU Core Journal, 2024, Issue 2, pp. 30-34.
Applying image recognition to real-time images captured by drones and robots for hull structure corrosion detection can markedly improve inspection efficiency and the level of digitalization and intelligence. It has great application value and potential, and stands to change traditional hull structure inspection practice. This paper proposes a DeepLabv3+-based hull corrosion detection model: image samples are collected and annotated with segmentation masks for three corrosion categories, a DeepLabv3+ semantic segmentation network is trained on them, and the trained model predicts the class and region of corroded pixels in an image. The model reaches a precision of 52.92% on the test set, demonstrating the feasibility of detecting hull corrosion defects with DeepLabv3+.
Keywords: hull structure, corrosion detection, deep learning, DeepLabv3+
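The 52.92% figure above is a per-pixel precision over the predicted corrosion classes. As a minimal sketch of how such a metric can be computed (the class labels and toy arrays below are invented for the example, not taken from the paper):

```python
# Per-class pixel precision for a semantic-segmentation prediction.
# Labels: 0 = background, 1..3 = the three corrosion categories (illustrative).
def pixel_precision(pred, truth, cls):
    """Precision for one class: TP / (TP + FP) over all pixels."""
    tp = sum(1 for p, t in zip(pred, truth) if p == cls and t == cls)
    fp = sum(1 for p, t in zip(pred, truth) if p == cls and t != cls)
    return tp / (tp + fp) if tp + fp else 0.0

# Toy flattened label maps (illustrative only).
pred  = [0, 1, 1, 2, 2, 3, 0, 1]
truth = [0, 1, 2, 2, 2, 3, 0, 0]
print(round(pixel_precision(pred, truth, 1), 3))  # → 0.333
```

In practice the label maps come from the network's per-pixel argmax over an entire test set rather than toy lists.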
Optimizing Deep Learning for Computer-Aided Diagnosis of Lung Diseases: An Automated Method Combining Evolutionary Algorithm, Transfer Learning, and Model Compression
2
Authors: Hassen Louati, Ali Louati, Elham Kariri, Slim Bechikh 《Computer Modeling in Engineering & Sciences》 SCIE EI 2024, Issue 3, pp. 2519-2547.
Recent developments in computer vision have presented novel opportunities to tackle complex healthcare issues, particularly in the field of lung disease diagnosis. One promising avenue involves the use of chest X-rays, which are commonly utilized in radiology. To fully exploit their potential, researchers have suggested utilizing deep learning methods to construct computer-aided diagnostic systems. However, constructing and compressing these systems presents a significant challenge, as it relies heavily on the expertise of data scientists. To tackle this issue, we propose an automated approach that utilizes an evolutionary algorithm (EA) to optimize the design and compression of a convolutional neural network (CNN) for X-ray image classification. Our approach accurately classifies radiography images and detects potential chest abnormalities and infections, including COVID-19. Furthermore, our approach incorporates transfer learning, where a CNN model pre-trained on a vast dataset of chest X-ray images is fine-tuned for the specific task of detecting COVID-19. This method can help reduce the amount of labeled data required for the task and enhance the overall performance of the model. We have validated our method via a series of experiments against state-of-the-art architectures.
Keywords: computer-aided diagnosis, deep learning, evolutionary algorithms, deep compression, transfer learning
Hyperspectral image super resolution using deep internal and self-supervised learning
3
Authors: Zhe Liu, Xian-Hua Han 《CAAI Transactions on Intelligence Technology》 SCIE EI 2024, Issue 1, pp. 128-141.
By automatically learning the priors embedded in images with powerful modelling capabilities, deep learning-based algorithms have recently made considerable progress in reconstructing high-resolution hyperspectral (HR-HS) images. With previously collected large amounts of external data, these methods are intuitively realised under full supervision of the ground-truth data. Thus, database construction in the research paradigm that merges low-resolution hyperspectral (LR-HS) and high-resolution multispectral (HR-MS) or RGB images, commonly named HSI SR, requires collecting the corresponding training triplets (HR-MS/RGB, LR-HS and HR-HS images) simultaneously, and this often faces difficulties in reality. Models learned from training datasets collected under controlled conditions may significantly degrade super-resolution performance on real images captured in diverse environments. To handle these limitations, the authors propose to leverage deep internal and self-supervised learning to solve the HSI SR problem. The authors advocate that it is possible to train a specific CNN model at test time, called deep internal learning (DIL), by preparing the training triplet samples on-line from the observed LR-HS/HR-MS (or RGB) images and the down-sampled LR-HS version. However, the number of training triplets extracted solely from the transformed data of the observation itself is extremely small, particularly for HSI SR tasks with large spatial upscale factors, which would result in limited reconstruction performance. To solve this problem, the authors further exploit deep self-supervised learning (DSL) by considering the observations as unlabelled training samples. Specifically, degradation modules inside the network are elaborated to realise the spatial and spectral down-sampling procedures that transform the generated HR-HS estimation into the high-resolution RGB/LR-HS approximation, and the reconstruction errors of the observations are formulated to measure the network modelling performance. By consolidating DIL and DSL into a unified deep framework, the authors construct a more robust HSI SR method that requires no prior training and has great potential for flexible adaptation to different settings per observation. To verify the effectiveness of the proposed approach, extensive experiments were conducted on two benchmark HS datasets, the CAVE and Harvard datasets, and demonstrate a large performance gain of the proposed method over state-of-the-art methods.
Keywords: computer vision, deep learning, deep neural networks, hyperspectral image enhancement
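The DIL step above prepares training pairs by degrading the observation itself. A minimal sketch of the two degradation operators, spatial and spectral down-sampling, on plain Python lists (the pooling factor, cube sizes and spectral response weights are illustrative assumptions, not the paper's settings):

```python
# DIL-style on-line training data: degrade the observed image itself.
# hs: hyperspectral cube indexed as hs[band][row][col].

def spatial_downsample(hs, factor):
    """Average-pool each band by `factor` to obtain an LR version."""
    out = []
    for band in hs:
        rows, cols = len(band) // factor, len(band[0]) // factor
        out.append([[sum(band[r * factor + i][c * factor + j]
                         for i in range(factor) for j in range(factor))
                     / factor ** 2
                     for c in range(cols)] for r in range(rows)])
    return out

def spectral_downsample(hs, srf):
    """Project bands to fewer channels using a spectral response
    matrix srf[channel][band] (e.g. 3 rows for RGB)."""
    return [[[sum(w * hs[b][r][c] for b, w in enumerate(weights))
              for c in range(len(hs[0][0]))]
             for r in range(len(hs[0]))]
            for weights in srf]

# Toy 2-band 2x2 cube and a 1-channel response (illustrative only).
hs = [[[1.0, 3.0], [5.0, 7.0]], [[2.0, 2.0], [2.0, 2.0]]]
lr = spatial_downsample(hs, 2)            # one pixel per band
gray = spectral_downsample(hs, [[0.5, 0.5]])
print(lr, gray)
```

In the paper these operators sit inside the network so the generated HR-HS estimate can be compared, after degradation, against the actual observations.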
Dendritic Deep Learning for Medical Segmentation
4
Authors: Zhipeng Liu, Zhiming Zhang, Zhenyu Lei, Masaaki Omura, Rong-Long Wang, Shangce Gao 《IEEE/CAA Journal of Automatica Sinica》 SCIE EI CSCD 2024, Issue 3, pp. 803-805.
Dear Editor, this letter presents a novel segmentation approach that leverages dendritic neurons to tackle the challenges of medical imaging segmentation. In this study, we enhance segmentation accuracy based on a SegNet variant including an encoder-decoder structure, an upsampling index, and a deep supervision method. Furthermore, we introduce a dendritic neuron-based convolutional block to enable nonlinear feature mapping, thereby further improving the effectiveness of our approach.
Keywords: thereby, deep, enable
MAUN:Memory-Augmented Deep Unfolding Network for Hyperspectral Image Reconstruction
5
Authors: Qian Hu, Jiayi Ma, Yuan Gao, Junjun Jiang, Yixuan Yuan 《IEEE/CAA Journal of Automatica Sinica》 SCIE EI CSCD 2024, Issue 5, pp. 1139-1150.
Spectral compressive imaging has emerged as a powerful technique to collect 3D spectral information as 2D measurements. The algorithm for restoring the original 3D hyperspectral images (HSIs) from compressive measurements is pivotal in the imaging process. Early approaches painstakingly designed networks to directly map compressive measurements to HSIs, resulting in a lack of interpretability and not exploiting the imaging priors. While some recent works have introduced the deep unfolding framework for explainable reconstruction, the performance of these methods is still limited by the weak information transmission between iterative stages. In this paper, we propose a Memory-Augmented deep Unfolding Network, termed MAUN, for explainable and accurate HSI reconstruction. Specifically, MAUN implements a novel CNN scheme to facilitate a better extrapolation step of the fast iterative shrinkage-thresholding algorithm, introducing an extra momentum incorporation step for each iteration to alleviate information loss. Moreover, to exploit the high correlation of intermediate images from neighboring iterations, we customize a cross-stage transformer (CSFormer) as the deep denoiser to simultaneously capture self-similarity from both in-stage and cross-stage features, which is the first attempt to model long-distance dependencies between iteration stages. Extensive experiments demonstrate that the proposed MAUN is superior to other state-of-the-art methods both visually and metrically. Our code is publicly available at https://github.com/HuQ1an/MAUN.
Keywords: deep unfolding, iteration
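MAUN builds on the extrapolation (momentum) step of the fast iterative shrinkage-thresholding algorithm (FISTA). A plain FISTA iteration on a toy quadratic shows the step the paper augments with learned CNNs; the objective and step size here are illustrative, not from the paper:

```python
import math

# FISTA on f(x) = 0.5 * (x - 3)^2: a gradient step followed by the
# extrapolation (momentum) step that deep unfolding networks learn to improve.
def fista(x0, lr=0.5, iters=50):
    x_prev, z, t = x0, x0, 1.0
    for _ in range(iters):
        grad = z - 3.0                               # f'(z)
        x = z - lr * grad                            # gradient/proximal step
        t_next = (1 + math.sqrt(1 + 4 * t * t)) / 2  # momentum schedule
        z = x + ((t - 1) / t_next) * (x - x_prev)    # extrapolation step
        x_prev, t = x, t_next
    return x_prev

print(round(fista(0.0), 6))  # the minimiser of f is 3.0
```

In MAUN the scalar extrapolation coefficient is replaced by a CNN scheme, and an extra momentum term carries information between iterations.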
Constrained Multi-Objective Optimization With Deep Reinforcement Learning Assisted Operator Selection
6
Authors: Fei Ming, Wenyin Gong, Ling Wang, Yaochu Jin 《IEEE/CAA Journal of Automatica Sinica》 SCIE EI CSCD 2024, Issue 4, pp. 919-931.
Solving constrained multi-objective optimization problems with evolutionary algorithms has attracted considerable attention. Various constrained multi-objective optimization evolutionary algorithms (CMOEAs) have been developed with different algorithmic strategies, evolutionary operators, and constraint-handling techniques. The performance of CMOEAs may depend heavily on the operators used; however, it is usually difficult to select suitable operators for the problem at hand. Hence, improving operator selection is promising and necessary for CMOEAs. This work proposes an online operator selection framework assisted by deep reinforcement learning. The dynamics of the population, including convergence, diversity, and feasibility, are regarded as the state; the candidate operators are considered as actions; and the improvement of the population state is treated as the reward. By using a Q-network to learn a policy that estimates the Q-values of all actions, the proposed approach can adaptively select the operator that maximizes the improvement of the population according to the current state, and thereby improve algorithmic performance. The framework is embedded into four popular CMOEAs and assessed on 42 benchmark problems. The experimental results reveal that the proposed deep reinforcement learning-assisted operator selection significantly improves the performance of these CMOEAs, and the resulting algorithm obtains better versatility compared with nine state-of-the-art CMOEAs.
Keywords: constrained multi-objective optimization, deep Q-learning, deep reinforcement learning (DRL), evolutionary algorithms, evolutionary operator selection
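The state/action/reward design above maps naturally onto Q-learning. The paper trains a deep Q-network; the sketch below substitutes a small Q-table so it stays self-contained, with invented state and operator names and synthetic rewards:

```python
import random

# Toy stand-in for the operator-selection loop: states summarise population
# status, actions are candidate evolutionary operators, and the reward is the
# improvement the chosen operator produced. All names/rewards are invented.
random.seed(0)
STATES = ["low_diversity", "high_diversity"]
OPERATORS = ["DE/rand/1", "GA_crossover"]   # illustrative operator names
Q = {(s, a): 0.0 for s in STATES for a in OPERATORS}

def select(state, eps=0.1):
    """Epsilon-greedy choice of the operator with the highest Q-value."""
    if random.random() < eps:
        return random.choice(OPERATORS)
    return max(OPERATORS, key=lambda a: Q[(state, a)])

def update(state, op, reward, next_state, alpha=0.5, gamma=0.9):
    """Standard Q-learning update toward reward + discounted best next value."""
    target = reward + gamma * max(Q[(next_state, a)] for a in OPERATORS)
    Q[(state, op)] += alpha * (target - Q[(state, op)])

# Pretend DE/rand/1 helps when diversity is low (synthetic rewards).
for _ in range(200):
    s = random.choice(STATES)
    op = select(s)
    r = 1.0 if (s == "low_diversity" and op == "DE/rand/1") else 0.1
    update(s, op, r, random.choice(STATES))

print(select("low_diversity", eps=0.0))
```

With a deep Q-network the table lookup becomes a forward pass over the continuous population-state features, but the selection and update logic is the same.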
Machine Learning Techniques Using Deep Instinctive Encoder-Based Feature Extraction for Optimized Breast Cancer Detection
7
Authors: Vaishnawi Priyadarshni, Sanjay Kumar Sharma, Mohammad Khalid Imam Rahmani, Baijnath Kaushik, Rania Almajalid 《Computers, Materials & Continua》 SCIE EI 2024, Issue 2, pp. 2441-2468.
Breast cancer (BC) is one of the leading causes of death among women worldwide, as it has emerged as the most commonly diagnosed malignancy in women. Early detection and effective treatment of BC can help save women's lives. Developing an efficient technology-based detection system can lead to non-destructive and preliminary cancer detection techniques. This paper proposes a comprehensive framework that can effectively distinguish cancerous cells from benign cells using the Curated Breast Imaging Subset of the Digital Database for Screening Mammography (CBIS-DDSM) dataset. The novelty of the proposed framework lies in the integration of various techniques: the fusion of deep learning (DL), traditional machine learning (ML) techniques, and enhanced classification models deployed on the curated dataset. The analysis shows that the proposed enhanced random forest (ERF), enhanced decision tree (EDT) and enhanced logistic regression (ELR) models for BC detection outperformed most existing models with impressive results.
Keywords: autoencoder, breast cancer, deep neural network, convolutional neural network, image processing, machine learning, deep learning
ASLP-DL—A Novel Approach Employing Lightweight Deep Learning Framework for Optimizing Accident Severity Level Prediction
8
Authors: Saba Awan, Zahid Mehmood 《Computers, Materials & Continua》 SCIE EI 2024, Issue 2, pp. 2535-2555.
Highway safety researchers focus on crash injury severity, utilizing deep learning, specifically deep neural networks (DNN), deep convolutional neural networks (D-CNN), and deep recurrent neural networks (D-RNN), as the preferred method for modeling accident severity. Deep learning's strength lies in handling intricate relationships within extensive datasets, making it popular for accident severity level (ASL) prediction and classification. Despite prior success, there is a need for an efficient system recognizing ASL in diverse road conditions. To address this, we present an innovative Accident Severity Level Prediction Deep Learning (ASLP-DL) framework incorporating DNN, D-CNN, and D-RNN models fine-tuned through iterative hyperparameter selection with stochastic gradient descent. The framework optimizes hidden layers and integrates data augmentation, Gaussian noise, and dropout regularization for improved generalization. Sensitivity and factor contribution analyses identify influential predictors. Evaluated on three diverse crash record databases, NCDB 2018-2019, UK 2015-2020, and US 2016-2021, the D-RNN model excels with an accuracy of 89.0281%, a ROC area of 0.751, an F-measure of 0.941, and a Kappa score of 0.0629 on the NCDB dataset. The proposed framework consistently outperforms traditional methods and existing machine learning and deep learning techniques.
Keywords: injury, severity, prediction, deep learning, feature
DeepOCL:A deep neural network for Object Constraint Language generation from unrestricted nature language
9
Authors: Yilong Yang, Yibo Liu, Tianshu Bao, Weiru Wang, Nan Niu, Yongfeng Yin 《CAAI Transactions on Intelligence Technology》 SCIE EI 2024, Issue 1, pp. 250-263.
Object Constraint Language (OCL) is a kind of lightweight formal specification widely used for software verification and validation in NASA and Object Management Group projects. Although OCL provides a simple expressive syntax, it is hard for developers to write correctly due to a lack of knowledge of the mathematical foundations of first-order logic; OCL statements are only approximately half accurate at the first stage of development. A deep neural network named DeepOCL is proposed, which takes unrestricted natural language as input and automatically outputs the best-scored OCL candidates without requiring a domain conceptual model, which is compulsorily required in existing rule-based generation approaches. To demonstrate the validity of the proposed approach, ablation experiments were conducted on a new sentence-aligned dataset named OCLPairs. The experiments show that the proposed DeepOCL achieves the state of the art for OCL statement generation, scoring 74.30 on BLEU and greatly outperforming experienced developers by 35.19%. The proposed approach is the first deep learning approach to generate OCL expressions from natural language. It can be further developed as a CASE tool for the software industry.
Keywords: deep learning, OCL, software engineering
Survey of Indoor Localization Based on Deep Learning
10
Authors: Khaldon Azzam Kordi, Mardeni Roslee, Mohamad Yusoff Alias, Abdulraqeb Alhammadi, Athar Waseem, Anwar Faizd Osman 《Computers, Materials & Continua》 SCIE EI 2024, Issue 5, pp. 3261-3298.
This study comprehensively examines the current state of deep learning (DL) usage in indoor positioning. It emphasizes the significance and efficiency of convolutional neural networks (CNNs) and recurrent neural networks (RNNs). Unlike prior studies focused on single sensor modalities like Wi-Fi or Bluetooth, this research explores the integration of multiple sensor modalities (e.g., Wi-Fi, Bluetooth, Ultra-Wideband, ZigBee) to expand indoor localization methods, particularly in obstructed environments. It addresses the challenge of precise object localization, introducing a novel hybrid DL approach using received signal information (RSI), received signal strength (RSS), and channel state information (CSI) data to enhance accuracy and stability. Moreover, the study introduces a device-free indoor localization algorithm, offering a significant advancement with potential object or individual tracking applications. It recognizes the increasing importance of indoor positioning for location-based services. It anticipates future developments while acknowledging challenges such as multipath interference, noise, data standardization, and scarcity of labeled data. This research contributes significantly to indoor localization technology, offering adaptability, device independence, and multifaceted DL-based solutions for real-world challenges and future advancements. Thus, the proposed work addresses challenges in object localization precision and introduces a novel hybrid deep learning approach, contributing to advancing location-centric services. While deep learning-based indoor localization techniques have improved accuracy, challenges like data noise, standardization, and availability of training data persist; however, ongoing developments are expected to enhance indoor positioning systems to meet real-world demands.
Keywords: deep learning, indoor localization, wireless-based localization
UAV-Assisted Dynamic Avatar Task Migration for Vehicular Metaverse Services: A Multi-Agent Deep Reinforcement Learning Approach
11
Authors: Jiawen Kang, Junlong Chen, Minrui Xu, Zehui Xiong, Yutao Jiao, Luchao Han, Dusit Niyato, Yongju Tong, Shengli Xie 《IEEE/CAA Journal of Automatica Sinica》 SCIE EI CSCD 2024, Issue 2, pp. 430-445.
Avatars, as promising digital representations and service assistants of users in Metaverses, can enable drivers and passengers to immerse themselves in 3D virtual services and spaces of UAV-assisted vehicular Metaverses. However, avatar tasks include a multitude of human-to-avatar and avatar-to-avatar interactive applications, e.g., augmented reality navigation, which consume intensive computing resources. It is inefficient and impractical for vehicles to process avatar tasks locally. Fortunately, migrating avatar tasks to the nearest roadside units (RSUs) or unmanned aerial vehicles (UAVs) for execution is a promising solution to decrease computation overhead and reduce task processing latency, while the high mobility of vehicles makes it challenging for vehicles to independently make avatar migration decisions depending on current and future vehicle status. To address these challenges, in this paper we propose a novel avatar task migration system based on multi-agent deep reinforcement learning (MADRL) to execute immersive vehicular avatar tasks dynamically. Specifically, we first formulate the problem of avatar task migration from vehicles to RSUs/UAVs as a partially observable Markov decision process that can be solved by MADRL algorithms. We then design the multi-agent proximal policy optimization (MAPPO) approach as the MADRL algorithm for the avatar task migration problem. To overcome slow convergence resulting from the curse of dimensionality and non-stationarity caused by shared parameters in MAPPO, we further propose a transformer-based MAPPO approach via sequential decision-making models for the efficient representation of relationships among agents. Finally, to motivate terrestrial or non-terrestrial edge servers (e.g., RSUs or UAVs) to share computation resources and ensure traceability of the sharing records, we apply smart contracts and blockchain technologies to achieve secure sharing management. Numerical results demonstrate that the proposed approach outperforms the MAPPO approach by around 2% and reduces the latency of avatar task execution by approximately 20% in UAV-assisted vehicular Metaverses.
Keywords: avatar, blockchain, Metaverses, multi-agent deep reinforcement learning, transformer, UAVs
Modeling Geometrically Nonlinear FG Plates: A Fast and Accurate Alternative to IGA Method Based on Deep Learning
12
Authors: Se Li, Tiantang Yu, Tinh Quoc Bui 《Computer Modeling in Engineering & Sciences》 SCIE EI 2024, Issue 3, pp. 2793-2808.
Isogeometric analysis (IGA) is known to show advanced features compared with traditional finite element approaches. Using IGA one may accurately obtain the geometrically nonlinear bending behavior of plates with functional grading (FG). However, the procedure is usually complex and often time-consuming. We thus put forward a deep learning method to model the geometrically nonlinear bending behavior of FG plates, bypassing the complex IGA simulation process. A bidirectional long short-term memory (BLSTM) recurrent neural network is trained using the load and gradient index as inputs and the displacement responses as outputs. The nonlinear relationship between the outputs and the inputs is constructed using machine learning so that the displacements can be directly estimated by the deep learning network. To provide enough training data, we use S-FSDT Von-Karman IGA and obtain the displacement responses for different loads and gradient indexes. Results show that the recognition error is low, and demonstrate the feasibility of the deep learning technique as a fast and accurate alternative to IGA for modeling the geometrically nonlinear bending behavior of FG plates.
Keywords: FG plates, geometric nonlinearity, deep learning, BLSTM, IGA, S-FSDT
QoS Routing Optimization Based on Deep Reinforcement Learning in SDN
13
Authors: Yu Song, Xusheng Qian, Nan Zhang, Wei Wang, Ao Xiong 《Computers, Materials & Continua》 SCIE EI 2024, Issue 5, pp. 3007-3021.
To enhance the efficiency and expediency of issuing e-licenses within the power sector, we must confront the challenge of managing the surging demand for data traffic. Within this realm, the network imposes stringent Quality of Service (QoS) requirements, revealing the inadequacies of traditional routing allocation mechanisms in accommodating such extensive data flows. In response to the imperative of handling a substantial influx of data requests promptly and alleviating the constraints of existing technologies and network congestion, we present an architecture for QoS routing optimization within Software-Defined Networking (SDN), leveraging deep reinforcement learning. This approach separates SDN control and transmission functionalities, centralizing control over data forwarding while integrating deep reinforcement learning for informed routing decisions. By factoring in delay, bandwidth, jitter rate, and packet loss rate, we design a reward function to guide the Deep Deterministic Policy Gradient (DDPG) algorithm in learning the optimal routing strategy to furnish superior QoS provision. In our empirical investigations, we compare the performance of deep reinforcement learning (DRL) against Shortest Path (SP) algorithms in terms of data packet transmission delay. The experimental simulation results show that our proposed algorithm significantly reduces network delay and improves overall transmission efficiency, outperforming traditional methods.
Keywords: deep reinforcement learning, SDN, route optimization, QoS
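The abstract lists delay, bandwidth, jitter rate, and packet loss rate as reward ingredients but does not give the exact formula. A hedged sketch of one plausible weighted reward (all weights and normalisation ceilings below are assumptions, not the paper's values):

```python
# Illustrative QoS reward in the spirit of the paper: reward rises with
# available bandwidth and falls with delay, jitter and packet loss.
def qos_reward(delay_ms, bandwidth_mbps, jitter_ms, loss_rate,
               w_d=0.4, w_b=0.3, w_j=0.2, w_l=0.1):
    # Normalise each metric to [0, 1] against assumed ceilings.
    d = min(delay_ms / 100.0, 1.0)
    b = min(bandwidth_mbps / 1000.0, 1.0)
    j = min(jitter_ms / 50.0, 1.0)
    return w_b * b - w_d * d - w_j * j - w_l * loss_rate

good = qos_reward(delay_ms=10, bandwidth_mbps=800, jitter_ms=2, loss_rate=0.0)
bad = qos_reward(delay_ms=90, bandwidth_mbps=100, jitter_ms=40, loss_rate=0.2)
print(good > bad)   # → True
```

In the DDPG loop this scalar would be the per-step reward the critic learns to predict, so routing actions that improve QoS are reinforced.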
Deep neural network-enabled battery open-circuit voltage estimation based on partial charging data
14
Authors: Ziyou Zhou, Yonggang Liu, Chengming Zhang, Weixiang Shen, Rui Xiong 《Journal of Energy Chemistry》 SCIE EI CAS CSCD 2024, Issue 3, pp. 120-132, I0005.
Battery management systems (BMSs) play a vital role in ensuring efficient and reliable operation of lithium-ion batteries. The main function of BMSs is to estimate battery states and diagnose battery health using the battery open-circuit voltage (OCV). However, acquiring complete OCV data online can be challenging due to the time-consuming measurement process or the specific operating conditions required by OCV estimation models. To address these concerns, this study introduces a deep neural network-combined framework for accurate and robust OCV estimation, utilizing partial daily charging data. We incorporate a generative deep learning model to extract aging-related features from data and generate high-fidelity OCV curves. Correlation analysis is employed to identify the optimal partial charging data, optimizing OCV estimation precision while preserving exceptional flexibility. Validation results, using data from nickel-cobalt-manganese (NCM) batteries, illustrate accurate estimation of the complete OCV-capacity curve, with an average root mean square error (RMSE) of less than 3 mAh. Achieving this level of precision requires only around 50 s of partial charging data. Further validations on diverse battery types operating under various conditions confirm the effectiveness of the proposed method. Additional cases of precise health diagnosis based on OCV highlight the significance of conducting online OCV estimation. Our method provides a flexible approach to complete OCV estimation and holds promise for generalization to other tasks in BMSs.
Keywords: lithium-ion battery, open-circuit voltage, health diagnosis, deep learning
Securing Cloud-Encrypted Data:Detecting Ransomware-as-a-Service(RaaS)Attacks through Deep Learning Ensemble
15
Authors: Amardeep Singh, Hamad Ali Abosaq, Saad Arif, Zohaib Mushtaq, Muhammad Irfan, Ghulam Abbas, Arshad Ali, Alanoud AlMazroa 《Computers, Materials & Continua》 SCIE EI 2024, Issue 4, pp. 857-873.
Data security assurance is crucial due to the increasing prevalence of cloud computing and its widespread use across different industries, especially in light of the growing number of cybersecurity threats. A major and ever-present threat is Ransomware-as-a-Service (RaaS) attacks, which enable even individuals with minimal technical knowledge to conduct ransomware operations. This study provides a new approach for RaaS attack detection which uses an ensemble of deep learning models. For this purpose, the network intrusion detection dataset UNSW-NB15 from the Intelligent Security Group of the University of New South Wales, Australia is analyzed. In the initial phase, three separate Multi-Layer Perceptron (MLP) models based on the rectified linear unit, the scaled exponential linear unit, and the exponential linear unit are developed. Then, using the combined predictive power of these three MLPs, the RansoDetect Fusion ensemble model is introduced. The proposed ensemble technique outperforms previous studies with impressive performance metrics, including 98.79% accuracy and recall, 98.85% precision, and 98.80% F1-score. The empirical results validate the ensemble model's ability to improve cybersecurity defenses by showing that it outperforms the individual MLP models. In expanding the field of cybersecurity strategy, this research highlights the significance of combined deep learning models in strengthening intrusion detection systems against sophisticated cyber threats.
Keywords: cloud encryption, RaaS, ensemble, threat detection, deep learning, cybersecurity
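The fusion step, combining the predictions of the three MLPs, can be illustrated with simple soft voting. The probability values and two-class setup below are invented for the example, and the real models are full MLPs rather than these stand-ins:

```python
# Soft-voting fusion of several classifiers' class probabilities, the
# combination idea behind an ensemble like RansoDetect Fusion.
def soft_vote(prob_lists):
    """Average per-class probabilities across models, return (argmax, averages)."""
    n_models = len(prob_lists)
    n_classes = len(prob_lists[0])
    avg = [sum(p[c] for p in prob_lists) / n_models for c in range(n_classes)]
    return max(range(n_classes), key=lambda c: avg[c]), avg

# Toy probabilities for classes [benign, ransomware] from three models.
preds = [[0.40, 0.60], [0.55, 0.45], [0.30, 0.70]]
label, avg = soft_vote(preds)
print(label, [round(a, 4) for a in avg])   # → 1 [0.4167, 0.5833]
```

Soft voting lets a confident minority model outvote a weakly confident majority, which is one reason probability averaging often beats hard majority voting.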
Cybernet Model:A New Deep Learning Model for Cyber DDoS Attacks Detection and Recognition
16
Authors: Azar Abid Salih, Maiwan Bahjat Abdulrazaq 《Computers, Materials & Continua》 SCIE EI 2024, Issue 1, pp. 1275-1295.
Cyberspace is extremely dynamic, with new attacks arising daily. Protecting cybersecurity controls is vital for network security. Deep learning (DL) models find widespread use across various fields, with cybersecurity being one of the most crucial due to their rapid cyberattack detection capabilities on networks and hosts. The capabilities of DL in feature learning and analyzing extensive data volumes lead to the recognition of network traffic patterns. This study presents novel lightweight DL models, known as Cybernet models, for the detection and recognition of various cyber Distributed Denial of Service (DDoS) attacks. These models were constructed to have a reasonable number of learnable parameters, i.e., fewer than 225,000, hence the name "lightweight." This not only reduces the number of computations required but also results in faster training and inference times. Additionally, these models were designed to extract features in parallel from 1D Convolutional Neural Networks (CNN) and Long Short-Term Memory (LSTM), which makes them unique compared with earlier architectures and yields better performance measures. To validate their robustness and effectiveness, they were tested on the CIC-DDoS2019 dataset, a large and imbalanced dataset that contains different types of DDoS attacks. Experimental results revealed that both models yielded promising results, with 99.99% for the detection model and 99.76% for the recognition model in terms of accuracy, precision, recall, and F1 score. Furthermore, they outperformed existing state-of-the-art models proposed for the same task. Thus, the proposed models can be used in cybersecurity research domains to successfully identify different types of attacks with a high detection and recognition rate.
Keywords: deep learning, CNN, LSTM, Cybernet model, DDoS recognition
ThyroidNet:A Deep Learning Network for Localization and Classification of Thyroid Nodules
17
作者 Lu Chen Huaqiang Chen +6 位作者 Zhikai Pan Sheng Xu Guangsheng Lai Shuwen Chen Shuihua Wang Xiaodong Gu Yudong Zhang 《Computer Modeling in Engineering & Sciences》 SCIE EI 2024年第4期361-382,共22页
Aim:This study aims to establish an artificial intelligence model,ThyroidNet,to diagnose thyroid nodules using deep learning techniques accurately.Methods:A novel method,ThyroidNet,is introduced and evaluated based on... Aim:This study aims to establish an artificial intelligence model,ThyroidNet,to diagnose thyroid nodules using deep learning techniques accurately.Methods:A novel method,ThyroidNet,is introduced and evaluated based on deep learning for the localization and classification of thyroid nodules.First,we propose the multitask TransUnet,which combines the TransUnet encoder and decoder with multitask learning.Second,we propose the DualLoss function,tailored to the thyroid nodule localization and classification tasks.It balances the learning of the localization and classification tasks to help improve the model’s generalization ability.Third,we introduce strategies for augmenting the data.Finally,we submit a novel deep learning model,ThyroidNet,to accurately detect thyroid nodules.Results:ThyroidNet was evaluated on private datasets and was comparable to other existing methods,including U-Net and TransUnet.Experimental results show that ThyroidNet outperformed these methods in localizing and classifying thyroid nodules.It achieved improved accuracy of 3.9%and 1.5%,respectively.Conclusion:ThyroidNet significantly improves the clinical diagnosis of thyroid nodules and supports medical image analysis tasks.Future research directions include optimization of the model structure,expansion of the dataset size,reduction of computational complexity and memory requirements,and exploration of additional applications of ThyroidNet in medical image analysis. 展开更多
Keywords: ThyroidNet; deep learning; TransUnet; multitask learning; medical image analysis
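The abstract describes DualLoss only as a balance between the localization and classification objectives. A minimal sketch of such a two-task loss is given below; the function names, the single mixing weight `alpha`, and the choice of a soft Dice term for localization are assumptions for illustration, not the paper's actual formulation.

```python
import math

# Hypothetical two-task objective in the spirit of the paper's "DualLoss":
# a weighted sum of a localization term (soft Dice) and a classification
# term (cross-entropy). All names and the weighting scheme are assumed.

def dice_loss(pred, target, eps=1e-6):
    """Soft Dice loss over flattened probability maps (lists of floats)."""
    inter = sum(p * t for p, t in zip(pred, target))
    denom = sum(pred) + sum(target)
    return 1.0 - (2.0 * inter + eps) / (denom + eps)

def cross_entropy(probs, label):
    """Negative log-likelihood of the true class index."""
    return -math.log(max(probs[label], 1e-12))

def dual_loss(seg_pred, seg_target, cls_probs, cls_label, alpha=0.5):
    """Balance localization and classification with one mixing weight."""
    return (alpha * dice_loss(seg_pred, seg_target)
            + (1.0 - alpha) * cross_entropy(cls_probs, cls_label))
```

With `alpha` near 1 the model prioritizes segmentation quality; near 0 it prioritizes classification. Tuning this trade-off is one plausible way a single network can serve both tasks.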
Transparent and Accurate COVID-19 Diagnosis: Integrating Explainable AI with Advanced Deep Learning in CT Imaging
18
Authors: Mohammad Mehedi Hassan, Salman A. AlQahtani, Mabrook S. AlRakhami, Ahmed Zohier Elhendi. Computer Modeling in Engineering & Sciences (SCIE, EI), 2024, No. 6, pp. 3101-3123 (23 pages)
In the current landscape of the COVID-19 pandemic, the utilization of deep learning in medical imaging, especially in chest computed tomography (CT) scan analysis for virus detection, has become increasingly significant. Despite its potential, deep learning's "black box" nature has been a major impediment to its broader acceptance in clinical environments, where transparency in decision-making is imperative. To bridge this gap, our research integrates Explainable AI (XAI) techniques, specifically the Local Interpretable Model-Agnostic Explanations (LIME) method, with advanced deep learning models. This integration forms a sophisticated and transparent framework for COVID-19 identification, enhancing the capability of standard Convolutional Neural Network (CNN) models through transfer learning and data augmentation. Our approach leverages the refined DenseNet201 architecture for superior feature extraction and employs data augmentation strategies to foster robust model generalization. The pivotal element of our methodology is the use of LIME, which demystifies the AI decision-making process, providing clinicians with clear, interpretable insights into the AI's reasoning. This unique combination of an optimized Deep Neural Network (DNN) with LIME not only elevates the precision in detecting COVID-19 cases but also equips healthcare professionals with a deeper understanding of the diagnostic process. Our method, validated on the SARS-COV-2 CT-Scan dataset, demonstrates exceptional diagnostic accuracy, with performance metrics that reinforce its potential for seamless integration into modern healthcare systems. This innovative approach marks a significant advancement in creating explainable and trustworthy AI tools for medical decision-making in the ongoing battle against COVID-19.
Keywords: Explainable AI; COVID-19; CT images; deep learning
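LIME explains a prediction by perturbing the input and fitting a local surrogate over many random maskings. The toy sketch below uses a deliberately simpler one-at-a-time occlusion variant of the same perturb-and-observe principle; the scoring function and its weights are invented stand-ins, not the paper's DenseNet201 pipeline.

```python
# Toy perturbation-based attribution in the spirit of LIME: mask each
# region of the input, observe the drop in the model's score, and treat
# the drop as that region's importance. Real LIME instead fits a linear
# surrogate over many random maskings; this occlusion variant is a
# simplified stand-in, and model_score is a hypothetical classifier.

def model_score(features):
    """Stand-in classifier confidence for the positive class."""
    w = [0.6, 0.1, 0.3]  # fixed illustrative weights, one per region
    return sum(wi * fi for wi, fi in zip(w, features))

def occlusion_importance(features, baseline=0.0):
    """Importance of each region = score drop when it is masked out."""
    full = model_score(features)
    drops = []
    for i in range(len(features)):
        masked = list(features)
        masked[i] = baseline
        drops.append(full - model_score(masked))
    return drops
```

For a CT slice, the "regions" would be superpixels rather than scalar features, and the clinician would see the high-importance regions highlighted on the image.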
Deep learning-based inpainting of saturation artifacts in optical coherence tomography images
19
Authors: Muyun Hu, Zhuoqun Yuan, Di Yang, Jingzhu Zhao, Yanmei Liang. Journal of Innovative Optical Health Sciences (SCIE, EI, CSCD), 2024, No. 3, pp. 1-10 (10 pages)
Limited by the dynamic range of the detector, saturation artifacts usually occur in optical coherence tomography (OCT) imaging of highly scattering media. Existing methods struggle to remove saturation artifacts and completely restore texture in OCT images. In this paper, we propose a deep learning-based method for inpainting saturation artifacts. The generation mechanism of saturation artifacts was analyzed, and experimental and simulated datasets were built based on this mechanism. Enhanced super-resolution generative adversarial networks were trained on the clear-saturated phantom image pairs. The reconstruction results on experimental zebrafish and thyroid OCT images demonstrate the method's feasibility, strong generalization, and robustness.
Keywords: optical coherence tomography; saturation artifacts; deep learning; image inpainting
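Saturation artifacts of this kind appear as whole A-lines (image columns) pinned at the detector ceiling. A plausible preprocessing step, sketched below under assumed names and an illustrative threshold, is to locate those columns and mask them so an inpainting network can fill them in; the abstract does not spell out this step.

```python
# Hypothetical preprocessing for saturation-artifact inpainting:
# flag columns that contain ceiling-valued pixels, then mask them out
# to produce the network's input. Threshold and helpers are assumed.

def find_saturated_columns(image, ceiling=255):
    """Return sorted indices of columns containing any ceiling-valued pixel."""
    cols = set()
    for row in image:
        for j, v in enumerate(row):
            if v >= ceiling:
                cols.add(j)
    return sorted(cols)

def mask_columns(image, cols, fill=0):
    """Replace saturated columns with a fill value (the region to inpaint)."""
    colset = set(cols)
    return [[fill if j in colset else v for j, v in enumerate(row)]
            for row in image]
```

The clear-saturated phantom pairs mentioned in the abstract would then supply ground truth for the masked regions during training.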
Exploring Deep Learning Methods for Computer Vision Applications across Multiple Sectors: Challenges and Future Trends
20
Authors: Narayanan Ganesh, Rajendran Shankar, Miroslav Mahdal, Janakiraman SenthilMurugan, Jasgurpreet Singh Chohan, Kanak Kalita. Computer Modeling in Engineering & Sciences (SCIE, EI), 2024, No. 4, pp. 103-141 (39 pages)
Computer vision (CV) was developed to enable computers and other systems to act or make recommendations based on visual inputs, such as digital photos, videos, and other media. Deep learning (DL) methods are more successful than traditional machine learning (ML) methods in CV. DL techniques can produce state-of-the-art results for difficult CV problems such as image classification, object detection, and face recognition. In this review, a structured discussion of the history, methods, and applications of DL methods for CV problems is presented. The sector-wise presentation of applications in this paper may be particularly useful for researchers in niche fields who have limited or introductory knowledge of DL methods and CV. This review provides readers with context and examples of how these techniques can be applied to specific areas. A curated list of popular datasets, with brief descriptions, is also included for the benefit of readers.
Keywords: neural network; machine vision; classification; object detection; deep learning