Journal Articles: 49 results found
1. Social Media-Based Surveillance Systems for Health Informatics Using Machine and Deep Learning Techniques: A Comprehensive Review and Open Challenges
Authors: Samina Amin, Muhammad Ali Zeb, Hani Alshahrani, Mohammed Hamdi, Mohammad Alsulami, Asadullah Shaikh. Computer Modeling in Engineering & Sciences, SCIE EI, 2024, Issue 5, pp. 1167-1202 (36 pages).
Social media (SM) based surveillance systems, combined with machine learning (ML) and deep learning (DL) techniques, have shown potential for early detection of epidemic outbreaks. This review discusses the current state of SM-based surveillance methods for early epidemic outbreaks and the role of ML and DL in enhancing their performance. Every year, SM generates a large amount of data related to epidemic outbreaks, particularly Twitter data. This paper outlines the theme of SM analysis for tracking health-related issues and detecting epidemic outbreaks in SM, along with the ML and DL techniques that have been configured for the detection of epidemic outbreaks. DL has emerged as a promising ML technique that adapts multiple layers of representations or features of the data and yields state-of-the-art extrapolation results. In recent years, along with their success in many other application domains, both ML and DL have also become popular in SM analysis. This paper aims to provide an overview of epidemic outbreaks in SM and then outlines a comprehensive analysis of ML and DL approaches and their existing applications in SM analysis. Finally, this review offers suggestions, ideas, and proposals, and highlights the ongoing challenges in the field of early outbreak detection that still need to be addressed.
Keywords: social media; epidemic; machine learning; deep learning; health informatics; pandemic
2. A Review and Analysis of Localization Techniques in Underwater Wireless Sensor Networks (Cited by 1)
Authors: Seema Rani, Anju, Anupma Sangwan, Krishna Kumar, Kashif Nisar, Tariq Rahim Soomro, Ag. Asri Ag. Ibrahim, Manoj Gupta, Laxmi Chand, Sadiq Ali Khan. Computers, Materials & Continua, SCIE EI, 2023, Issue 6, pp. 5697-5715 (19 pages).
In recent years, there has been rapid growth in Underwater Wireless Sensor Networks (UWSNs). The focus of research in this area is now on solving the problems associated with large-scale UWSNs. One of the major issues in such a network is the localization of underwater nodes. Localization is required for tracking objects and detecting targets. It is also considered tagging of data, since sensed contents are of little use to an application until their position is confirmed. The major goal of this article is to review and analyze underwater node localization in order to solve the localization issues in UWSNs. The paper describes various existing localization schemes and broadly categorizes them as centralized and distributed underwater localization schemes, together with a detailed subdivision of these schemes. Further, the localization schemes are compared from different perspectives, and a detailed analysis in terms of certain performance metrics is discussed. At the end, the paper addresses several future directions for potential research on improving localization in UWSNs.
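As an illustration of the ranging-based family of schemes surveyed here, the sketch below (not taken from the paper) estimates a node position from noisy distance measurements to known anchor nodes via linearized least squares; the anchor coordinates and ranges are synthetic.

```python
# Illustrative range-based localization by linearized least squares,
# the core computation behind many ranging schemes in UWSNs.
import numpy as np

def multilaterate(anchors: np.ndarray, dists: np.ndarray) -> np.ndarray:
    """Estimate an unknown node position from >= 3 anchors and ranged distances.

    Subtracting the first anchor's sphere equation from the others yields a
    linear system A x = b that is solved in the least-squares sense.
    """
    x0, d0 = anchors[0], dists[0]
    A = 2.0 * (anchors[1:] - x0)
    b = (d0**2 - dists[1:]**2
         + np.sum(anchors[1:]**2, axis=1) - np.sum(x0**2))
    pos, *_ = np.linalg.lstsq(A, b, rcond=None)
    return pos

if __name__ == "__main__":
    anchors = np.array([[0.0, 0.0], [100.0, 0.0], [0.0, 100.0], [100.0, 100.0]])
    true_pos = np.array([40.0, 70.0])
    dists = np.linalg.norm(anchors - true_pos, axis=1) + np.random.normal(0, 0.5, 4)
    print("estimated position:", multilaterate(anchors, dists))
```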
Keywords: underwater wireless sensor networks; localization schemes; node localization; ranging algorithms; estimation based; prediction based
3. Effectiveness of Deep Learning Models for Brain Tumor Classification and Segmentation
Authors: Muhammad Irfan, Ahmad Shaf, Tariq Ali, Umar Farooq, Saifur Rahman, Salim Nasar Faraj Mursal, Mohammed Jalalah, Samar M. Alqhtani, Omar AlShorman. Computers, Materials & Continua, SCIE EI, 2023, Issue 7, pp. 711-729 (19 pages).
A brain tumor is a mass or growth of abnormal cells in the brain. In children and adults, brain tumors are considered one of the leading causes of death. There are several types of brain tumors, including benign (non-cancerous) and malignant (cancerous) tumors. Diagnosing brain tumors as early as possible is essential, as this can improve the chances of successful treatment and survival. Considering this problem, we bring forth a hybrid intelligent deep learning technique that uses several pre-trained models (ResNet50, VGG16, VGG19, U-Net) and their integration for computer-aided detection and localization of brain tumors. These pre-trained and integrated deep learning models have been used on the publicly available dataset from The Cancer Genome Atlas, which consists of 120 patients. The pre-trained models are used to classify tumor or no-tumor images, while the integrated models are applied to segment the tumor region correctly. We have evaluated their performance in terms of loss, accuracy, intersection over union, Jaccard distance, Dice coefficient, and Dice coefficient loss. Among the pre-trained models, the U-Net model achieves higher performance than the others, obtaining 95% accuracy. Among the integrated pre-trained models, U-Net with ResNet-50 outperforms all other models and correctly classifies and segments the tumor region.
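For readers unfamiliar with the transfer-learning pattern this entry builds on, a minimal Keras sketch follows, assuming 224x224 RGB slices and a binary tumor/no-tumor label; it illustrates the general frozen-backbone approach, not the authors' exact models or data pipeline.

```python
# Minimal transfer-learning sketch: pre-trained VGG16 backbone + small head.
import tensorflow as tf
from tensorflow.keras import layers, models
from tensorflow.keras.applications import VGG16

base = VGG16(weights="imagenet", include_top=False, input_shape=(224, 224, 3))
base.trainable = False  # freeze the pre-trained convolutional backbone

model = models.Sequential([
    base,
    layers.GlobalAveragePooling2D(),
    layers.Dense(128, activation="relu"),
    layers.Dropout(0.3),
    layers.Dense(1, activation="sigmoid"),  # tumor vs. no tumor
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
# model.fit(train_images, train_labels, validation_split=0.2, epochs=10)
```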
Keywords: brain tumor; deep learning; ensemble; detection; healthcare
4. A U-Net-Based CNN Model for Detection and Segmentation of Brain Tumor
Authors: Rehana Ghulam, Sammar Fatima, Tariq Ali, Nazir Ahmad Zafar, Abdullah A. Asiri, Hassan A. Alshamrani, Samar M. Alqhtani, Khlood M. Mehdar. Computers, Materials & Continua, SCIE EI, 2023, Issue 1, pp. 1333-1349 (17 pages).
The human brain consists of millions of cells that control the overall structure of the human body. When these cells start behaving abnormally, brain tumors occur. Precise and initial-stage brain tumor detection has always been an issue for medical experts. To handle this issue, various deep learning techniques for brain tumor detection and segmentation have been developed, which work on different datasets and obtain fruitful results, but the problem still exists for the initial stage of detection of brain tumors to save human lives. For this purpose, we proposed a novel U-Net-based Convolutional Neural Network (CNN) technique to detect and segment brain tumors in Magnetic Resonance Imaging (MRI). Moreover, a 2-dimensional publicly available Multimodal Brain Tumor Image Segmentation (BRATS2020) dataset with 1840 MRI images of brain tumors, each of size 240x240 pixels, has been used. After initial dataset preprocessing, the proposed model is trained by dividing the dataset into three parts, i.e., training, validation, and testing. Our model attained an accuracy of 0.98 on the BRATS2020 dataset, which is the highest compared to the already existing techniques.
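A compact two-level U-Net sketch in Keras is shown below to illustrate the encoder-decoder-with-skip-connections idea; the 240x240 single-channel input matches the abstract, but the layer widths and training setup are assumptions, not the authors' configuration.

```python
# Compact two-level U-Net sketch for binary tumor-mask segmentation.
import tensorflow as tf
from tensorflow.keras import layers, Model

def conv_block(x, filters):
    x = layers.Conv2D(filters, 3, padding="same", activation="relu")(x)
    return layers.Conv2D(filters, 3, padding="same", activation="relu")(x)

inputs = layers.Input((240, 240, 1))
c1 = conv_block(inputs, 32)
p1 = layers.MaxPooling2D()(c1)
c2 = conv_block(p1, 64)
p2 = layers.MaxPooling2D()(c2)
bn = conv_block(p2, 128)                                  # bottleneck
u2 = layers.Conv2DTranspose(64, 2, strides=2, padding="same")(bn)
c3 = conv_block(layers.concatenate([u2, c2]), 64)         # skip connection
u1 = layers.Conv2DTranspose(32, 2, strides=2, padding="same")(c3)
c4 = conv_block(layers.concatenate([u1, c1]), 32)         # skip connection
outputs = layers.Conv2D(1, 1, activation="sigmoid")(c4)   # per-pixel tumor mask

unet = Model(inputs, outputs)
unet.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
```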
Keywords: U-Net; brain tumor; magnetic resonance images; convolutional neural network; segmentation
5. Detection and Classification of Hemorrhages in Retinal Images
Authors: Ghassan Ahmed Ali, Thamer Mitib Ahmad Al Sariera, Muhammad Akram, Adel Sulaiman, Fekry Olayah. Computer Systems Science & Engineering, SCIE EI, 2023, Issue 2, pp. 1601-1616 (16 pages).
Damage to the blood vessels in the retina due to diabetes is called diabetic retinopathy (DR). Hemorrhages are the first clinically visible symptoms of DR. This paper presents a new technique to extract and classify the hemorrhages in fundus images. The normal objects inside retinal images, such as blood vessels, the fovea, and the optic disc, are masked to distinguish them from hemorrhages. For masking blood vessels, thresholding that separates blood vessels from the background intensity is used, followed by a new filter that extracts the borders of vessels based on the orientations of the vessels. For masking the optic disc, the image is divided into sub-images and the brightest window with maximum variance in intensity is selected. Then the candidate dark regions are extracted based on adaptive thresholding and top-hat morphological techniques. Features are extracted from each candidate region based on ophthalmologist selection, such as color and size, and on pattern recognition techniques, such as texture and wavelet features. Three types of Support Vector Machine (SVM) classifiers, Linear SVM, Quadratic SVM, and Cubic SVM, are applied to classify the candidate dark regions as either hemorrhages or healthy. The efficacy of the proposed method is demonstrated using the standard benchmark DIARETDB1 database and by comparing the results with methods in silico. The performance of the method is measured in terms of average sensitivity, specificity, F-score, and accuracy. Experimental results show that the Linear SVM classifier gives better results than the Cubic SVM and Quadratic SVM with respect to sensitivity and accuracy, while with respect to specificity the Quadratic SVM gives better results than the other SVMs.
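The comparison of linear, quadratic, and cubic SVMs can be sketched with scikit-learn as below; the feature matrix is a random placeholder standing in for the color, size, texture, and wavelet features described above, not data from DIARETDB1.

```python
# Sketch of the Linear / Quadratic / Cubic SVM comparison on candidate-region features.
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

X = np.random.rand(200, 16)          # 200 candidate regions, 16 features each (placeholder)
y = np.random.randint(0, 2, 200)     # 1 = hemorrhage, 0 = healthy (placeholder)

classifiers = {
    "Linear SVM": SVC(kernel="linear"),
    "Quadratic SVM": SVC(kernel="poly", degree=2),
    "Cubic SVM": SVC(kernel="poly", degree=3),
}
for name, clf in classifiers.items():
    pipe = make_pipeline(StandardScaler(), clf)
    scores = cross_val_score(pipe, X, y, cv=5, scoring="accuracy")
    print(f"{name}: mean accuracy {scores.mean():.3f}")
```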
Keywords: diabetic retinopathy; hemorrhages; adaptive thresholding; support vector machine
6. A Secure and Efficient Cluster-Based Authentication Scheme for Internet of Things (IoTs)
Authors: Kanwal Imran, Nasreen Anjum, Abdullah Alghamdi, Asadullah Shaikh, Mohammed Hamdi, Saeed Mahfooz. Computers, Materials & Continua, SCIE EI, 2022, Issue 1, pp. 1033-1052 (20 pages).
IPv6 over Low Power Wireless Personal Area Network (6LoWPAN) provides IP connectivity to the highly constrained nodes in the Internet of Things (IoT). 6LoWPAN allows nodes with limited battery power and storage capacity to carry IPv6 datagrams over the lossy and error-prone radio links offered by the IEEE 802.15.4 standard, thus acting as an adaptation layer between the IPv6 protocol and the IEEE 802.15.4 network. The data link layer of IEEE 802.15.4 in 6LoWPAN is based on AES (Advanced Encryption Standard), but the 6LoWPAN standard lacks and has omitted the security and privacy requirements at higher layers. Sensor nodes in 6LoWPAN can join the network without any authentication procedure. Therefore, from a security perspective, 6LoWPAN is vulnerable to many attacks such as replay attacks, man-in-the-middle attacks, impersonation attacks, and modification attacks. This paper proposes a secure and efficient cluster-based authentication scheme (CBAS) for highly constrained sensor nodes in 6LoWPAN. In this approach, sensor nodes are organized into clusters and communicate with the central network through a dedicated sensor node. The main objective of CBAS is to provide efficient and authentic communication among 6LoWPAN nodes. To ensure low signaling overhead, we also introduce lightweight and efficient registration, de-registration, initial authentication, and handover procedures for when a sensor node or group of sensor nodes joins or leaves a cluster. Our security analysis shows that the proposed CBAS approach protects against various security attacks, including identity confidentiality attacks, modification attacks, replay attacks, man-in-the-middle attacks, and impersonation attacks. Our simulation experiments show that CBAS reduces the registration delay by 11%, the handoff authentication delay by 32%, and the signaling cost by 37% compared to SGMS (Secure Group Mobility Scheme) and LAMS (Light-Weight Authentication & Mobility Scheme).
Keywords: IoT; cyber security; security attacks; authentication delay; handover delay; signaling cost; 6LoWPAN
7. Intelligent Machine Learning Enabled Retinal Blood Vessel Segmentation and Classification
Authors: Nora Abdullah Alkhaldi, Hanan T. Halawani. Computers, Materials & Continua, SCIE EI, 2023, Issue 1, pp. 399-414 (16 pages).
Automated segmentation of blood vessels in retinal fundus images is essential for medical image analysis. The segmentation of retinal vessels is assumed to be essential to the progress of decision support systems for the initial analysis and treatment of retinal disease. This article develops a new Grasshopper Optimization with Fuzzy Edge Detection based Retinal Blood Vessel Segmentation and Classification (GOFED-RBVSC) model. The proposed GOFED-RBVSC model initially employs a contrast enhancement process. Besides, the GOAFED approach is employed to detect the edges in the retinal fundus images, in which the use of GOA adjusts the membership functions. The ORB (Oriented FAST and Rotated BRIEF) feature extractor is exploited to generate feature vectors. Finally, an Improved Conditional Variational Auto Encoder (ICAVE) is utilized for retinal image classification, which shows the novelty of the work. The performance of the GOFED-RBVSC model is validated using a benchmark dataset, and the comparative study highlights the improvement of the GOFED-RBVSC model over recent approaches.
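A minimal OpenCV sketch of the ORB feature-extraction step mentioned above is given below; the image path is a placeholder, and the pooling of descriptors into fixed-length vectors is only hinted at in a comment.

```python
# Minimal ORB (Oriented FAST + Rotated BRIEF) feature-extraction sketch.
import cv2

img = cv2.imread("fundus.png", cv2.IMREAD_GRAYSCALE)   # placeholder path
orb = cv2.ORB_create(nfeatures=500)
keypoints, descriptors = orb.detectAndCompute(img, None)
print(f"{len(keypoints)} keypoints, descriptor shape: {descriptors.shape}")
# The 32-byte binary descriptors can then be pooled into a fixed-length
# feature vector and passed to the downstream classifier.
```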
Keywords: edge detection; blood vessel segmentation; retinal fundus images; image classification; deep learning
8. Offshore Software Maintenance Outsourcing Process Model Validation: A Case Study Approach
Authors: Atif Ikram, Masita Abdul Jalil, Amir Bin Ngah, Adel Sulaiman, Muhammad Akram, Ahmad Salman Khan. Computers, Materials & Continua, SCIE EI, 2023, Issue 3, pp. 5035-5048 (14 pages).
The successful execution and management of Offshore Software Maintenance Outsourcing (OSMO) can be very beneficial for OSMO vendors and OSMO clients. Although a lot of research on software outsourcing is going on, most of the existing literature on offshore outsourcing deals with the outsourcing of software development only. Several frameworks have been developed to guide software system managers concerning offshore software outsourcing. However, none of these studies delivers comprehensive guidelines for managing the whole OSMO process, and there is a considerable lack of research on managing OSMO from a vendor's perspective. Therefore, to find the best practices for managing an OSMO process, it is necessary to further investigate such complex and multifaceted phenomena from the vendor's perspective. This study validated the preliminary OSMO process model via a case study research approach. The results showed that the OSMO process model is applicable in an industrial setting with few changes. The industrial data collected during the case study enabled this paper to extend the preliminary OSMO process model. The refined version of the OSMO process model has four major phases: (i) Project Assessment, (ii) SLA, (iii) Execution, and (iv) Risk.
Keywords: offshore outsourcing; process model; model validation; vendor challenges; case study
9. Robust Image Watermarking Using LWT and Stochastic Gradient Firefly Algorithm
Authors: Sachin Sharma, Meena Malik, Chander Prabha, Amal Al-Rasheed, Mona Alduailij, Sultan Almakdi. Computers, Materials & Continua, SCIE EI, 2023, Issue 4, pp. 393-407 (15 pages).
Watermarking of digital images is required in diversified applications ranging from medical imaging to commercial images used over the web. Usually, the copyright information is embossed over the image in the form of a logo at the corner or diagonal text in the background. However, this form of visible watermarking is not suitable for a large class of applications. In all such cases, a hidden watermark is embedded inside the original image as proof of ownership. A large number of techniques and algorithms have been proposed by researchers for invisible watermarking. In this paper, we focus on issues that are critical for security aspects in the most common domains, such as digital photography copyrighting and online image stores. The requirements of this class of application include robustness (resistance to attack), blindness (direct extraction without the original image), high embedding capacity, high Peak Signal to Noise Ratio (PSNR), and high Structural Similarity Index (SSIM). Most of these requirements are conflicting, which means that an attempt to maximize one requirement harms another. In this paper, a blind image watermarking scheme is proposed using the Lifting Wavelet Transform (LWT) as the baseline. Using this technique, custom binary watermarks in the form of a binary string can be embedded. Hu's invariant moment coefficients are used as a key to extract the watermark. A stochastic variant of the Firefly Algorithm (FA) is used for optimization of the technique. Under a prespecified size of embedding data, high PSNR and SSIM are obtained using the stochastic gradient variant of the firefly technique. The simulation is done using the Matrix Laboratory (MATLAB) tool, and it is shown that the proposed technique outperforms benchmark watermarking techniques considering PSNR and SSIM as quality metrics.
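The PSNR and SSIM quality metrics discussed above can be computed with scikit-image as in the following sketch; the cover and watermarked images here are synthetic stand-ins rather than the LWT-embedded output.

```python
# Sketch of the PSNR / SSIM quality check applied after watermark embedding.
import numpy as np
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

cover = np.random.randint(0, 256, (512, 512), dtype=np.uint8)   # stand-in original
watermarked = cover.copy()
watermarked[::8, ::8] ^= 1                                       # toy 1-bit perturbation

psnr = peak_signal_noise_ratio(cover, watermarked, data_range=255)
ssim = structural_similarity(cover, watermarked, data_range=255)
print(f"PSNR: {psnr:.2f} dB, SSIM: {ssim:.4f}")
```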
Keywords: image watermarking; lifting wavelet transform; discrete wavelet transform (DWT); firefly technique; invariant moments
10. Efficient Energy and Delay Reduction Model for Wireless Sensor Networks
Authors: Arslan Iftikhar, M.A. Elmagzoub, Ansar Munir, Hamad Abosaq Al Salem, Mahmood ul Hassan, Jarallah Alqahtani, Asadullah Shaikh. Computer Systems Science & Engineering, SCIE EI, 2023, Issue 7, pp. 1153-1168 (16 pages).
In every network, delay and energy are crucial for communication and network lifetime. In wireless sensor networks, many tiny nodes create networks with high energy consumption and compute routes for better communication. A Wireless Sensor Network (WSN) is a very complex scenario in which to compute minimal delay with data aggregation and energy efficiency. In this research, we compute minimal delay and energy efficiency to improve the quality of service of any WSN. The proposed work is based on energy and distance parameters, taken as dependent variables, with data aggregation. Data aggregation is performed with different models, namely Hybrid Low Energy Adaptive Clustering Hierarchy (H-LEACH), Low Energy Adaptive Clustering Hierarchy (LEACH), and Multi-Aggregator-based Multi-Cast (MAMC). The main contribution of this research is a reduction in delay and an optimized energy solution: a novel hybrid model is designed that ensures quality of service in WSN. This model includes a whale optimization technique that involves heterogeneous functions and performs optimization to reach optimized results. For cluster head selection, the Stable Election Protocol (SEP) is used, and Power-Efficient Gathering in Sensor Information Systems (PEGASIS) is used for the driven path in routing. Simulation results show that H-LEACH provides minimal delay and energy consumption by sensor nodes. Compared with existing approaches, H-LEACH provides energy and delay reduction and an improvement in quality of service. MATLAB 2019 is used for the simulation work.
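For context on the LEACH family compared above, the following sketch implements the classic LEACH cluster-head election threshold; the election probability and round number are illustrative values, and this is not the paper's H-LEACH or whale-optimization code.

```python
# Classic LEACH cluster-head election threshold T(n) = p / (1 - p * (r mod 1/p)).
import random

def leach_threshold(p: float, current_round: int) -> float:
    """Threshold for nodes that have not served as cluster head in this epoch."""
    return p / (1 - p * (current_round % int(1 / p)))

def elect_cluster_heads(node_ids, p=0.1, current_round=0):
    t = leach_threshold(p, current_round)
    # Each eligible node draws a uniform random number; below the threshold -> head.
    return [n for n in node_ids if random.random() < t]

heads = elect_cluster_heads(range(100), p=0.1, current_round=3)
print("cluster heads this round:", heads)
```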
Keywords: data aggregation; wireless sensor network; energy efficiency; quality of service; delay reduction
11. Enhanced Adaptive Brain-Computer Interface Approach for Intelligent Assistance to Disabled Peoples
Authors: Ali Usman, Javed Ferzund, Ahmad Shaf, Muhammad Aamir, Samar Alqhtani, Khlood M. Mehdar, Hanan Talal Halawani, Hassan A. Alshamrani, Abdullah A. Asiri, Muhammad Irfan. Computer Systems Science & Engineering, SCIE EI, 2023, Issue 8, pp. 1355-1369 (15 pages).
Assistive devices for disabled people based on Brain-Computer Interaction (BCI) technology are becoming vital in biomedical engineering. People with physical disabilities need assistive devices to perform their daily tasks, and in these devices higher latency factors need to be addressed appropriately. Therefore, the main goal of this research is to implement a real-time BCI architecture with minimum latency for command actuation. The proposed architecture is capable of communicating between different modules of the system by adopting an automotive, intelligent data processing and classification approach. The NeuroSky MindWave device has been used to transfer the data to our implemented server for command propulsion. The Think-Net Convolutional Neural Network (TN-CNN) architecture has been proposed to recognize the brain signals and classify them into six primary mental states. Data collection and processing are the responsibility of the central integrated server for system load minimization. Testing of the implemented architecture and the deep learning model shows excellent results: the proposed system exhibits minimal data loss and an accurate command processing mechanism. The training and testing accuracies are 99% and 93%, respectively, for the custom model implementation based on TN-CNN. The proposed real-time architecture is capable of intelligent data processing with fewer errors, and it will benefit assistive devices working on local servers and cloud servers.
Keywords: disabled person; electroencephalogram; convolutional neural network; brain signal classification
12. Detection of Left Ventricular Cavity from Cardiac MRI Images Using Faster R-CNN
Authors: Zakarya Farea Shaaf, Muhammad Mahadi Abdul Jamil, Radzi Ambar, Ahmed Abdu Alattab, Anwar Ali Yahya, Yousef Asiri. Computers, Materials & Continua, SCIE EI, 2023, Issue 1, pp. 1819-1835 (17 pages).
The automatic localization of the left ventricle (LV) in short-axis magnetic resonance (MR) images is a required step before processing cardiac images with convolutional neural networks for the extraction of a region of interest (ROI). The precise extraction of the LV's ROI from cardiac MRI images is crucial for detecting heart disorders via cardiac segmentation or registration. Nevertheless, this task appears to be intricate due to the diversity in the size and shape of the LV and the scattering of surrounding tissues across different slices. Thus, this study proposed a region-based convolutional network (Faster R-CNN) for LV localization from short-axis cardiac MRI images, using a region proposal network (RPN) integrated with deep feature classification and regression. The model was trained using images with corresponding bounding boxes (labels) around the LV, and various experiments were applied to select the appropriate layers and set suitable hyper-parameters. The experimental findings show that the proposed model was adequate, with accuracy, precision, recall, and F1 score values of 0.91, 0.94, 0.95, and 0.95, respectively. This model also allows cropping of the detected LV area, which is vital in reducing the computational cost and time during segmentation and classification procedures. Therefore, it would be an ideal model, clinically applicable for diagnosing cardiac diseases.
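A hedged sketch of fine-tuning a torchvision Faster R-CNN for one foreground class (the LV) is shown below; the two-class head (background plus left ventricle) and the dummy input size are assumptions, and the dataset/training loop is omitted.

```python
# Sketch: torchvision Faster R-CNN with a replaced box predictor for LV detection.
import torch
import torchvision
from torchvision.models.detection.faster_rcnn import FastRCNNPredictor

model = torchvision.models.detection.fasterrcnn_resnet50_fpn(weights="DEFAULT")
in_features = model.roi_heads.box_predictor.cls_score.in_features
model.roi_heads.box_predictor = FastRCNNPredictor(in_features, num_classes=2)

model.eval()
with torch.no_grad():
    dummy_slice = [torch.rand(3, 256, 256)]          # one pseudo-MRI slice
    prediction = model(dummy_slice)[0]               # dict with boxes, labels, scores
print(prediction["boxes"].shape, prediction["scores"].shape)
```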
Keywords: cardiac short-axis MRI images; automatic left ventricle localization; deep learning models; Faster R-CNN
13. Liver Ailment Prediction Using Random Forest Model
Authors: Fazal Muhammad, Bilal Khan, Rashid Naseem, Abdullah A. Asiri, Hassan A. Alshamrani, Khalaf A. Alshamrani, Samar M. Alqhtani, Muhammad Irfan, Khlood M. Mehdar, Hanan Talal Halawani. Computers, Materials & Continua, SCIE EI, 2023, Issue 1, pp. 1049-1067 (19 pages).
Today, liver disease, or any deterioration in one's ability to survive, is extremely common all around the world. Previous research has indicated that liver disease is more frequent in younger people than in older ones. When the liver's capability begins to deteriorate, life can be shortened to one or two days, and early prediction of such diseases is difficult. Using several machine learning (ML) approaches, researchers have analyzed a variety of models for predicting liver disorders in their early stages. This research looks at using the Random Forest (RF) classifier to diagnose liver disease early on. The dataset was picked from the University of California, Irvine repository. RF's results are contrasted with those of the Multi-Layer Perceptron (MLP), Average One Dependency Estimator (A1DE), Support Vector Machine (SVM), Credal Decision Tree (CDT), Composite Hypercube on Iterated Random Projection (CHIRP), K-Nearest Neighbor (KNN), Naive Bayes (NB), J48 Decision Tree (J48), and Forest by Penalizing Attributes (Forest-PA). The assessment measures used to evaluate each classifier include Root Relative Squared Error (RRSE), Root Mean Squared Error (RMSE), accuracy, recall, precision, specificity, Matthew's Correlation Coefficient (MCC), F-measure, and G-measure. RF has an RRSE of 87.6766 and an RMSE of 0.4328, while its percentage accuracy is 72.1739. The results of this work can be used as a starting point for subsequent research, so that any claim that a new model, framework, or method enhances prediction may be benchmarked and demonstrated.
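A minimal scikit-learn Random Forest sketch for a tabular liver dataset is shown below; the CSV path and the "Selector" label column are hypothetical placeholders rather than the paper's exact preprocessing of the UCI records.

```python
# Minimal Random Forest sketch for tabular liver-disease prediction.
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score, matthews_corrcoef
from sklearn.model_selection import train_test_split

df = pd.read_csv("liver_dataset.csv")                   # placeholder path
X, y = df.drop(columns=["Selector"]), df["Selector"]    # "Selector" = assumed label column

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=42, stratify=y)
rf = RandomForestClassifier(n_estimators=200, random_state=42)
rf.fit(X_tr, y_tr)
pred = rf.predict(X_te)
print("accuracy:", accuracy_score(y_te, pred))
print("MCC:", matthews_corrcoef(y_te, pred))
```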
Keywords: liver ailment; random forest; machine learning
14. Automated Leukemia Screening and Sub-types Classification Using Deep Learning
Authors: Chaudhary Hassan Abbas Gondal, Muhammad Irfan, Sarmad Shafique, Muhammad Salman Bashir, Mansoor Ahmed, Osama M. Alshehri, Hassan H. Almasoudi, Samar M. Alqhtani, Mohammed M. Jalal, Malik A. Altayar, Khalaf F. Alsharif. Computer Systems Science & Engineering, SCIE EI, 2023, Issue 9, pp. 3541-3558 (18 pages).
Leukemia is a kind of blood cancer that damages the cells in the blood and bone marrow of the human body. It produces cancerous blood cells that disturb the human immune system and significantly affect the bone marrow's ability to effectively create different types of blood cells, such as red blood cells (RBCs), white blood cells (WBCs), and platelets. Leukemia can be diagnosed manually by taking a complete blood count test of the patient's blood, from which medical professionals can investigate the signs of leukemia cells. Furthermore, two other methods, microscopic inspection of blood smears and bone marrow aspiration, are also utilized while examining the patient for leukemia. However, all these methods are labor-intensive, slow, inaccurate, and require a lot of human experience and dedication. To overcome these limitations, different authors have proposed automated detection systems for leukemia diagnosis, deploying digital image processing and machine learning algorithms to classify the cells into normal and blast cells. These systems are more efficient, reliable, and fast than the manual diagnosing methods, yet more work is required to classify leukemia-affected cells due to the complex characteristics of blood images and the high intra-class variability and inter-class similarity of leukemia cells. In this paper, we propose a robust automated system to diagnose leukemia and its sub-types. We classify ALL into its sub-types based on the FAB classification, i.e., L1, L2, and L3 types, with better performance. We achieved 96.06% accuracy for sub-type classification, which is better than state-of-the-art methodologies.
Keywords: healthcare; cancer detection; deep learning; convolutional neural network
15. Securing Cloud-Encrypted Data: Detecting Ransomware-as-a-Service (RaaS) Attacks through Deep Learning Ensemble
Authors: Amardeep Singh, Hamad Ali Abosaq, Saad Arif, Zohaib Mushtaq, Muhammad Irfan, Ghulam Abbas, Arshad Ali, Alanoud AlMazroa. Computers, Materials & Continua, SCIE EI, 2024, Issue 4, pp. 857-873 (17 pages).
Data security assurance is crucial due to the increasing prevalence of cloud computing and its widespread use across different industries, especially in light of the growing number of cybersecurity threats. A major and ever-present threat is Ransomware-as-a-Service (RaaS) attacks, which enable even individuals with minimal technical knowledge to conduct ransomware operations. This study provides a new approach for RaaS attack detection that uses an ensemble of deep learning models. For this purpose, the network intrusion detection dataset "UNSW-NB15" from the Intelligent Security Group of the University of New South Wales, Australia, is analyzed. In the initial phase, three separate Multi-Layer Perceptron (MLP) models are developed, based on the rectified linear unit, the scaled exponential linear unit, and the exponential linear unit, respectively. Later, using the combined predictive power of these three MLPs, the RansoDetect Fusion ensemble model is introduced in the suggested methodology. The proposed ensemble technique outperforms previous studies with impressive performance metrics, including 98.79% accuracy and recall, 98.85% precision, and 98.80% F1-score. The empirical results of this study validate the ensemble model's ability to improve cybersecurity defenses by showing that it outperforms the individual MLP models. In expanding the field of cybersecurity strategy, this research highlights the significance of combined deep learning models in strengthening intrusion detection systems against sophisticated cyber threats.
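The three-activation MLP ensemble idea can be sketched in Keras as follows; the 42-column synthetic matrix only mimics UNSW-NB15-style tabular features, and simple probability averaging stands in for the RansoDetect fusion step described in the paper.

```python
# Sketch: three MLPs (ReLU / SELU / ELU) fused by averaging their probabilities.
import numpy as np
import tensorflow as tf
from tensorflow.keras import layers, models

def make_mlp(activation: str, n_features: int) -> tf.keras.Model:
    model = models.Sequential([
        layers.Input(shape=(n_features,)),
        layers.Dense(128, activation=activation),
        layers.Dense(64, activation=activation),
        layers.Dense(1, activation="sigmoid"),     # ransomware vs. benign flow
    ])
    model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
    return model

X = np.random.rand(1000, 42).astype("float32")     # synthetic tabular features
y = np.random.randint(0, 2, 1000)                  # synthetic labels

members = [make_mlp(act, 42) for act in ("relu", "selu", "elu")]
for m in members:
    m.fit(X, y, epochs=3, batch_size=64, verbose=0)

# Fusion step: average the member probabilities (soft voting).
fused = np.mean([m.predict(X, verbose=0) for m in members], axis=0)
print("ensemble predictions shape:", fused.shape)
```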
Keywords: cloud encryption; RaaS; ensemble; threat detection; deep learning; cybersecurity
16. Week Ahead Electricity Power and Price Forecasting Using Improved DenseNet-121 Method (Cited by 2)
Authors: Muhammad Irfan, Ali Raza, Faisal Althobiani, Nasir Ayub, Muhammad Idrees, Zain Ali, Kashif Rizwan, Abdullah Saeed Alwadie, Saleh Mohammed Ghonaim, Hesham Abdushkour, Saifur Rahman, Omar Alshorman, Samar Alqhtani. Computers, Materials & Continua, SCIE EI, 2022, Issue 9, pp. 4249-4265 (17 pages).
In the Smart Grid (SG) residential environment, consumers change their power consumption routine according to the price and incentives announced by the utility, which causes the prices to deviate from the initial pattern. Therefore, electricity demand and price forecasting play a significant role and can help in terms of reliability and sustainability. Due to the massive amount of data, big data analytics for forecasting has become a hot topic in the SG domain. In this paper, the changing and non-linear complex data of consumer consumption patterns are taken as input. To minimize the computational cost and complexity of the data, the average of several feature engineering approaches, including the Recursive Feature Eliminator (RFE), Extreme Gradient Boosting (XGBoost), and Random Forest (RF), is used to extract the most relevant and significant features. To this end, we have proposed a DenseNet-121 network and Support Vector Machine (SVM) ensemble with the Aquila Optimizer (AO) to ensure adaptability and handle the complexity of data in classification. Further, the AO method helps to tune the parameters of DenseNet (121 layers) and SVM, which achieves lower training loss, reduced computational time, fewer overfitting problems, and higher training/test accuracy. Performance evaluation metrics and statistical analysis validate that the proposed model's results are better than the benchmark schemes. Our proposed method has achieved a minimal Mean Average Percentage Error (MAPE) of 8% with DenseNet-AO and 6% with SVM-AO, and maximum accuracy rates of 92% and 95%, respectively.
Keywords: smart grid; deep neural networks; consumer demand; big data analytics; load forecasting; price forecasting
17. Machine Learning Empowered Security and Privacy Architecture for IoT Networks with the Integration of Blockchain
Authors: Sohaib Latif, M. Saad Bin Ilyas, Azhar Imran, Hamad Ali Abosaq, Abdulaziz Alzubaidi, Vincent Karovic Jr. Intelligent Automation & Soft Computing, 2024, Issue 2, pp. 353-379 (27 pages).
The Internet of Things (IoT) is growing rapidly and impacting almost every aspect of our lives, from wearables and healthcare to security, traffic management, and fleet management systems. This has generated massive volumes of data, and security and data privacy risks are increasing with the advancement of technology and network connections. Traditional access control solutions are inadequate for establishing access control in IoT systems to provide data protection, owing to their vulnerability to single points of failure. Additionally, conventional privacy preservation methods have high latency costs and overhead for resource-constrained devices, and previous machine learning approaches were unable to detect denial-of-service (DoS) attacks. This study introduces a novel decentralized and secure framework for blockchain integration. To avoid a single point of failure, an accredited access control scheme is incorporated, combining blockchain with local peers to record each transaction and verify the signature for access. Blockchain-based attribute-based cryptography is implemented to protect data storage privacy by generating threshold parameters, managing keys, and revoking users on the blockchain. An innovative contract-based DoS attack mitigation method is also incorporated to effectively validate devices with smart contracts as trusted or untrusted, preventing the server from becoming overwhelmed. The proposed framework effectively controls access, safeguards data privacy, and reduces the risk of cyberattacks. The results show that the suggested framework outperforms existing approaches in terms of accuracy, precision, sensitivity, recall, and F-measure, at 96.9%, 98.43%, 98.8%, 98.43%, and 98.4%, respectively.
Keywords: machine learning; internet of things; blockchain; data privacy; security; Industry 4.0
18. Theoretical study of reactive melt infiltration to fabricate Co-Si/C composites
Authors: Saqib Shahzad, Khurram Iqbal, Zaheer Uddin. Chinese Physics B, SCIE EI CAS CSCD, 2021, Issue 11, pp. 434-439 (6 pages).
Cobalt-silicon based carbon composites (Co-Si/C) have received noteworthy consideration in recent years as a replacement for conventional materials in the automotive and aerospace industries. To produce the composite, a reactive melt infiltration (RMI) process is used, in which a melt impregnates a porous preform by capillary force. This method promises a high volume fraction of reinforcement and can be steered in such a way as to obtain good "near-net" shaped components. A mathematical model is developed using a reaction-formed Co-Si alloy/C composite as a prototype system for this process. The wetting behavior and contact angle are discussed; surface tension and viscosity are calculated by Wang's and Egry's equations, respectively. Pore radii of 5 μm and 10 μm are set as a reference on highly oriented pyrolytic graphite. Graphs are plotted using the model to study some aspects of the infiltration dynamics, highlighting the possible connections among the various processes. In this attempt, the maximum infiltration of the Co-Si (62.5 at.% silicon) alloy at 5 μm and 10 μm radii is found to be 0.05668 m at 125 s and 0.22674 m at 250 s, respectively.
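The abstract does not reproduce the model's equations; as a reference point only (not the paper's full reactive-infiltration model), capillary-driven infiltration depth into a porous preform is classically approximated by the Washburn relation, with pore radius r, surface tension γ, contact angle θ, and viscosity η:

```latex
% Classical Washburn relation for capillary infiltration depth h after time t
% (a textbook reference point, not the paper's reactive-infiltration model):
h(t) = \sqrt{\frac{\gamma\, r \cos\theta}{2\,\eta}\, t}
```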
Keywords: cobalt-silicon/carbon composites; Co-Si alloy; reactive melt infiltration (RMI); carbon preforms
19. Short Text Mining for Classifying Educational Objectives and Outcomes
Authors: Yousef Asiri. Computer Systems Science & Engineering, SCIE EI, 2022, Issue 4, pp. 35-50 (16 pages).
Most of the international accreditation bodies in engineering education (e.g., ABET) and outcome-based educational systems have based their assessments on learning outcomes and program educational objectives. However, mapping program educational objectives (PEOs) to student outcomes (SOs) is a challenging and time-consuming task, especially for a new program applying for ABET-EAC (Accreditation Board for Engineering and Technology, Engineering Accreditation Commission) accreditation. In addition, ABET needs to automatically ensure that the mapping (classification) is reasonable and correct, and the classification also plays a vital role in the assessment of students' learning. Since PEOs are expressed as short text, they do not contain enough semantic meaning and information, and consequently they suffer from high sparseness, multidimensionality, and the curse of dimensionality. In this work, a novel associative short text classification technique is proposed to map PEOs to SOs. The datasets are extracted from 152 self-study reports (SSRs) that were produced in operational settings in an engineering program accredited by ABET-EAC. The datasets are processed and transformed into a representational form appropriate for association rule mining. The extracted rules are utilized as delegate classifiers to map PEOs to SOs. The proposed associative classification for mapping PEOs to SOs has shown promising results, which can simplify the classification of short text and avoid many problems caused by enriching short text from external resources that are not related or relevant to the dataset.
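The association-rule-mining step can be sketched with the mlxtend library as below; the toy transactions are invented placeholders, not terms from the 152 self-study reports, and the rule thresholds are illustrative.

```python
# Sketch of mining association rules from term / outcome co-occurrence.
import pandas as pd
from mlxtend.preprocessing import TransactionEncoder
from mlxtend.frequent_patterns import apriori, association_rules

transactions = [
    ["design", "teamwork", "SO5"],
    ["design", "analysis", "SO1"],
    ["communication", "teamwork", "SO5"],
    ["analysis", "experiment", "SO1"],
]
te = TransactionEncoder()
onehot = pd.DataFrame(te.fit(transactions).transform(transactions), columns=te.columns_)

frequent = apriori(onehot, min_support=0.25, use_colnames=True)
rules = association_rules(frequent, metric="confidence", min_threshold=0.6)
# Rules whose consequent is a student outcome can act as delegate classifiers.
print(rules[["antecedents", "consequents", "support", "confidence"]])
```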
Keywords: ABET accreditation; association rule mining; educational data mining; engineering education; program educational objectives; student outcomes; associative classification
20. Droid-IoT: Detect Android IoT Malicious Applications Using ML and Blockchain
Authors: Hani Mohammed Alshahrani. Computers, Materials & Continua, SCIE EI, 2022, Issue 1, pp. 739-766 (28 pages).
One of the most rapidly growing areas in the last few years is the Internet of Things (IoT), which has been used in widespread fields such as healthcare, smart homes, and industries. Android is one of the most popular operating systems (OS) used by IoT devices for communication and data exchange, and it captured more than 70 percent of the market share in 2021. Because of its popularity, the Android OS has been targeted by cybercriminals, who have introduced a number of issues, such as stealing private information. As reported by one recent study, Android malware is developed almost every 10 seconds. Therefore, due to this huge exploitation, an accurate and secure detection system is needed to secure the communication and data exchange in Android IoT devices. This paper introduces Droid-IoT, a collaborative framework to detect Android IoT malicious applications by using blockchain technology. Droid-IoT consists of four main engines: (i) a collaborative reporting engine, (ii) a static analysis engine, (iii) a detection engine, and (iv) a blockchain engine. Each engine contributes to the detection and minimization of the risk of malicious applications and the reporting of any malicious activities. All features are extracted automatically from the inspected applications, classified by the machine learning model, and the results are stored in the blockchain. The performance of Droid-IoT was evaluated by analyzing more than 6000 Android applications and comparing its detection rate with state-of-the-art tools. Droid-IoT achieved a detection rate of 97.74% with a low false positive rate by using an extreme gradient boosting (XGBoost) classifier.
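The XGBoost classification stage can be sketched as follows; the binary feature vectors are synthetic stand-ins for whatever per-application features the static analysis engine extracts (the exact feature set is not specified in the abstract), and the hyperparameters are illustrative.

```python
# Sketch of the XGBoost classification step on extracted application features.
import numpy as np
from xgboost import XGBClassifier
from sklearn.metrics import classification_report
from sklearn.model_selection import train_test_split

X = np.random.randint(0, 2, size=(2000, 150))   # synthetic binary feature vectors per APK
y = np.random.randint(0, 2, size=2000)          # 1 = malicious, 0 = benign (synthetic)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
clf = XGBClassifier(n_estimators=300, max_depth=6, learning_rate=0.1,
                    eval_metric="logloss")
clf.fit(X_tr, y_tr)
print(classification_report(y_te, clf.predict(X_te)))
```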
Keywords: Android; blockchain; analysis; malware