Journal Articles
602 articles found
1. An Effective and Secure Quality Assurance System for a Computer Science Program (Cited by 1)
Authors: Mohammad Alkhatib. Computer Systems Science & Engineering (SCIE, EI), 2022, Issue 6, pp. 975-995 (21 pages)
Improving the quality assurance (QA) processes and acquiring accreditation are top priorities for academic programs. The learning outcomes (LOs) assessment and continuous quality improvement represent core components of the quality assurance system (QAS). Current assessment methods suffer deficiencies related to accuracy and reliability, and they lack well-organized processes for continuous improvement planning. Moreover, the absence of automation and integration in QA processes forms a major obstacle towards developing an efficient quality system. There is also a pressing need to adopt security protocols that provide the required security services to safeguard the valuable information processed by the QAS. This research proposes an effective methodology for LOs assessment and continuous improvement processes. The proposed approach ensures more accurate and reliable LOs assessment results and provides a systematic way for utilizing those results in continuous quality improvement. These systematic and well-specified QA processes were then utilized to model and implement an automated and secure QAS that efficiently performs quality-related processes. The proposed system adopts two security protocols that provide confidentiality, integrity, and authentication for quality data and reports. The security protocols also prevent source repudiation, which is important in the quality reporting system; this is achieved through implementing powerful cryptographic algorithms. The QAS enables the efficient data collection and processing required for analysis and interpretation. It also prepares for the development of datasets that can be used in future artificial intelligence (AI) research to support decision making and improve the quality of academic programs. The proposed approach is implemented in a successful real case study for a computer science program. The current study serves scientific programs struggling to achieve academic accreditation, and points the way towards fully automating and integrating QA processes and adopting modern AI and security technologies to develop an effective QAS.
Keywords: Quality assurance, information security, cryptographic algorithms, education programs
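To illustrate the non-repudiation idea mentioned in this abstract, the sketch below signs a quality report with Ed25519 using the Python cryptography package; the key handling and report content are illustrative assumptions, not the paper's actual protocol.
```python
# A minimal sketch of signing a quality report so recipients can check integrity,
# authenticity, and non-repudiation of source. Report content is hypothetical.
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
from cryptography.exceptions import InvalidSignature

# The QA coordinator holds a private signing key; reviewers hold the public key.
signing_key = Ed25519PrivateKey.generate()
verify_key = signing_key.public_key()

report = b"CS Program LO assessment report, term 2022-2: CLO attainment 78%"
signature = signing_key.sign(report)          # signer cannot later deny authorship

# Any recipient can check that the report is authentic and unmodified.
try:
    verify_key.verify(signature, report)
    print("report verified: authentic and intact")
except InvalidSignature:
    print("report rejected: tampered or forged")
```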
2. Human-Computer Interaction Using Deep Fusion Model-Based Facial Expression Recognition System
Authors: Saiyed Umer, Ranjeet Kumar Rout, Shailendra Tiwari, Ahmad Ali AlZubi, Jazem Mutared Alanazi, Kulakov Yurii. Computer Modeling in Engineering & Sciences (SCIE, EI), 2023, Issue 5, pp. 1165-1185 (21 pages)
A deep fusion model is proposed for a facial expression-based human-computer interaction system. Initially, image preprocessing, i.e., the extraction of the facial region from the input image, is performed. Thereafter, more discriminative and distinctive deep learning features are extracted from the facial regions. To prevent overfitting, in-depth features of the facial images are extracted and assigned to the proposed convolutional neural network (CNN) models, and various CNN models are then trained. Finally, the predictions of the individual CNN models are fused to obtain the final decision over the seven basic classes of facial expressions, i.e., fear, disgust, anger, surprise, sadness, happiness, and neutral. For experimental purposes, three benchmark datasets, i.e., SFEW, CK+, and KDEF, are utilized. The performance of the proposed system is compared with some state-of-the-art methods on each dataset. Extensive performance analysis reveals that the proposed system outperforms the competitive methods in terms of various performance metrics. Finally, the proposed deep fusion model is utilized to control a music player using the recognized emotions of the users.
Keywords: Deep learning, facial expression, emotions, recognition, CNN
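As a rough illustration of the fusion step described above, the sketch below averages the softmax outputs of several already-trained Keras CNNs; the model handles, class list, and weighting rule are assumptions, not the paper's exact fusion scheme.
```python
# A minimal decision-fusion sketch: blend softmax probabilities from several CNNs.
import numpy as np

CLASSES = ["fear", "disgust", "anger", "surprise", "sadness", "happiness", "neutral"]

def fuse_predictions(models, face_batch, weights=None):
    """Average (optionally weighted) softmax outputs of several trained CNNs."""
    weights = np.ones(len(models)) if weights is None else np.asarray(weights, float)
    weights = weights / weights.sum()
    probs = [w * m.predict(face_batch, verbose=0) for m, w in zip(models, weights)]
    fused = np.sum(probs, axis=0)              # shape: (batch, 7)
    return [CLASSES[i] for i in fused.argmax(axis=1)]

# Usage (hypothetical models): emotions = fuse_predictions([cnn_a, cnn_b, cnn_c], faces)
```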
3. Enhanced Adaptive Brain-Computer Interface Approach for Intelligent Assistance to Disabled Peoples
Authors: Ali Usman, Javed Ferzund, Ahmad Shaf, Muhammad Aamir, Samar Alqhtani, Khlood M. Mehdar, Hanan Talal Halawani, Hassan A. Alshamrani, Abdullah A. Asiri, Muhammad Irfan. Computer Systems Science & Engineering (SCIE, EI), 2023, Issue 8, pp. 1355-1369 (15 pages)
Assistive devices for disabled people built on Brain-Computer Interaction (BCI) technology are becoming a vital area of bio-medical engineering. People with physical disabilities need assistive devices to perform their daily tasks, and in these devices higher latency factors need to be addressed appropriately. Therefore, the main goal of this research is to implement a real-time BCI architecture with minimum latency for command actuation. The proposed architecture is capable of communicating between the different modules of the system by adopting an automotive, intelligent data processing and classification approach. A NeuroSky MindWave device has been used to transfer the data to our implemented server for command propulsion. A Think-Net Convolutional Neural Network (TN-CNN) architecture has been proposed to recognize the brain signals and classify them into six primary mental states. Data collection and processing are the responsibility of the central integrated server to minimize the system load. Testing of the implemented architecture and the deep learning model shows excellent results: the proposed system achieved minimal data loss and an accurate command processing mechanism. The training and testing accuracies are 99% and 93%, respectively, for the custom model implementation based on TN-CNN. The proposed real-time architecture is capable of intelligent data processing with fewer errors, and it will benefit assistive devices working on local servers and cloud servers.
Keywords: Disabled person, electroencephalogram, convolutional neural network, brain signal classification
4. Modeling of Computer Virus Propagation with Fuzzy Parameters
Authors: Reemah M. Alhebshi, Nauman Ahmed, Dumitru Baleanu, Umbreen Fatima, Fazal Dayan, Muhammad Rafiq, Ali Raza, Muhammad Ozair Ahmad, Emad E. Mahmoud. Computers, Materials & Continua (SCIE, EI), 2023, Issue 3, pp. 5663-5678 (16 pages)
Typically, a computer has infectivity as soon as it is infected. It is a reality that no antivirus program can identify and eliminate all kinds of viruses, suggesting that infections will persist on the Internet. To understand the dynamics of virus propagation in a better way, a computer virus spread model with fuzzy parameters is presented in this work. It is assumed that not all infected computers contribute equally to the virus transmission process and that each computer has a different degree of infectivity, which depends on the quantity of virus. Considering this, the parameters β and γ, being functions of the computer virus load, are considered fuzzy numbers. Using fuzzy theory helps us understand the spread of computer viruses more realistically, as these parameters have fixed values in classical models. The essential features of the model, like the reproduction number and equilibrium analysis, are discussed in the fuzzy sense. Moreover, with fuzziness, two numerical methods, the forward Euler technique and a nonstandard finite difference (NSFD) scheme, are developed and analyzed. In the evidence of the numerical simulations, the proposed NSFD method preserves the main features of the dynamic system, and it can be considered a reliable tool to predict such types of solutions.
Keywords: SIR model, fuzzy parameters, computer virus, NSFD scheme, stability
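The sketch below shows one common positivity-preserving NSFD discretization of a normalized SIR-type virus model with crisp β and γ; the paper's contribution is to treat these parameters as fuzzy numbers depending on virus load, which is not reproduced here.
```python
# NSFD sketch for dS/dt = mu - beta*S*I - mu*S, dI/dt = beta*S*I - (gamma+mu)*I,
# dR/dt = gamma*I - mu*R, with illustrative crisp parameter values.
def nsfd_sir(beta=0.6, gamma=0.2, mu=0.05, h=0.5, steps=200, S0=0.95, I0=0.05, R0=0.0):
    S, I, R = S0, I0, R0
    traj = [(S, I, R)]
    for _ in range(steps):
        # Loss terms are treated implicitly and gain terms explicitly, so that
        # S, I, R remain non-negative for any step size h.
        S_next = (S + h * mu) / (1 + h * (beta * I + mu))
        I_next = (I + h * beta * S_next * I) / (1 + h * (gamma + mu))
        R_next = (R + h * gamma * I_next) / (1 + h * mu)
        S, I, R = S_next, I_next, R_next
        traj.append((S, I, R))
    return traj

final_S, final_I, final_R = nsfd_sir()[-1]
print(f"endemic-state estimate: S={final_S:.3f}, I={final_I:.3f}, R={final_R:.3f}")
```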
5. Enhancing IoT Data Security with Lightweight Blockchain and Okamoto Uchiyama Homomorphic Encryption (Cited by 1)
Authors: Mohanad A. Mohammed, Hala B. Abdul Wahab. Computer Modeling in Engineering & Sciences (SCIE, EI), 2024, Issue 2, pp. 1731-1748 (18 pages)
Blockchain technology has garnered significant attention from global organizations and researchers due to its potential as a solution for the challenges of centralized systems. Concurrently, the Internet of Things (IoT) has revolutionized the Fourth Industrial Revolution by enabling interconnected devices to offer innovative services, ultimately enhancing human lives. This paper presents a new approach utilizing lightweight blockchain technology, effectively reducing the computational burden typically associated with conventional blockchain systems. By integrating this lightweight blockchain with IoT systems, substantial reductions in implementation time and computational complexity can be achieved. Moreover, the paper proposes the utilization of the Okamoto-Uchiyama encryption algorithm, renowned for its homomorphic characteristics, to reinforce the privacy and security of IoT-generated data. The integration of homomorphic encryption and blockchain technology establishes a secure and decentralized platform for storing and analyzing sensitive supply chain data. This platform facilitates the development of business models and empowers decentralized applications to perform computations on encrypted data while maintaining data privacy. The results validate the robust security of the proposed system, comparable to standard blockchain implementations, leveraging the distinctive homomorphic attributes of the Okamoto-Uchiyama algorithm and the lightweight blockchain paradigm.
Keywords: Blockchain, IoT, integration of IoT and blockchain, consensus algorithm, Okamoto-Uchiyama homomorphic encryption, lightweight blockchain
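The additive homomorphism of the Okamoto-Uchiyama scheme mentioned above can be demonstrated with toy parameters; the sketch below uses tiny hard-coded primes purely for illustration and is in no way a secure or production implementation.
```python
# Toy Okamoto-Uchiyama: multiplying ciphertexts adds the plaintexts (mod p).
import random

p, q = 313, 311                       # tiny primes for illustration only
n = p * p * q                         # modulus n = p^2 * q
g = next(x for x in range(2, 50) if pow(x, p - 1, p * p) != 1)
h = pow(g, n, n)

def encrypt(m, r=None):
    """E(m) = g^m * h^r mod n, with random blinding factor r."""
    r = r if r is not None else random.randrange(1, n)
    return (pow(g, m, n) * pow(h, r, n)) % n

def L(x):
    """L(x) = (x - 1) / p, defined on values congruent to 1 mod p."""
    return (x - 1) // p

def decrypt(c):
    a = L(pow(c, p - 1, p * p))
    b = L(pow(g, p - 1, p * p))
    return (a * pow(b, -1, p)) % p

m1, m2 = 17, 25
c1, c2 = encrypt(m1), encrypt(m2)
assert decrypt((c1 * c2) % n) == (m1 + m2) % p   # homomorphic addition
print("homomorphic sum:", decrypt((c1 * c2) % n))
```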
6. KurdSet: A Kurdish Handwritten Characters Recognition Dataset Using Convolutional Neural Network
Authors: Sardar Hasen Ali, Maiwan Bahjat Abdulrazzaq. Computers, Materials & Continua (SCIE, EI), 2024, Issue 4, pp. 429-448 (20 pages)
Handwritten character recognition (HCR) involves identifying characters in images, documents, and various sources such as forms, surveys, questionnaires, and signatures, and transforming them into a machine-readable format for subsequent processing. Successfully recognizing complex and intricately shaped handwritten characters remains a significant obstacle. The use of convolutional neural networks (CNN) in recent developments has notably advanced HCR, leveraging the ability to extract discriminative features from extensive sets of raw data. Because of the absence of pre-existing datasets for the Kurdish language, we created a Kurdish handwritten dataset called KurdSet. The dataset consists of Kurdish characters, digits, texts, and symbols, was collected from 1,560 participants, and contains 45,240 characters. In this study, we chose characters only from our dataset and utilized it for handwritten character recognition. The study also utilizes various models, including InceptionV3, Xception, DenseNet121, and a custom CNN model. To show the performance of the KurdSet dataset, we compared it to the Arabic handwritten character recognition dataset (AHCD), applying the models to both datasets. Additionally, the performance of the models is evaluated using test accuracy, which measures the percentage of correctly classified characters in the evaluation phase. All models performed well in the training phase; DenseNet121 exhibited the highest accuracy among the models, achieving 99.80% on the Kurdish dataset, and the Xception model achieved 98.66% on the Arabic dataset.
Keywords: CNN models, Kurdish handwritten recognition, KurdSet dataset, Arabic handwritten recognition, DenseNet121 model, InceptionV3 model, Xception model
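A minimal transfer-learning sketch in the spirit of the DenseNet121 experiment is shown below; the image size, class count, and training settings are assumptions for illustration, not the paper's configuration.
```python
# Freeze an ImageNet-pretrained DenseNet121 and train a small classification head.
import tensorflow as tf

NUM_CLASSES = 35          # hypothetical number of Kurdish character classes
IMG_SIZE = (64, 64)

base = tf.keras.applications.DenseNet121(
    include_top=False, weights="imagenet", input_shape=IMG_SIZE + (3,))
base.trainable = False    # keep ImageNet features; fine-tune later if desired

model = tf.keras.Sequential([
    base,
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dropout(0.3),
    tf.keras.layers.Dense(NUM_CLASSES, activation="softmax"),
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
# model.fit(train_ds, validation_data=val_ds, epochs=20)   # datasets not shown
```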
7. Credit Card Fraud Detection Using Improved Deep Learning Models
Authors: Sumaya S. Sulaiman, Ibraheem Nadher, Sarab M. Hameed. Computers, Materials & Continua (SCIE, EI), 2024, Issue 1, pp. 1049-1069 (21 pages)
Credit card fraud is a major issue for financial organizations and individuals. As fraudulent actions become more complex, the demand for better fraud detection systems is rising. Deep learning approaches have shown promise in several fields, including detecting credit card fraud. However, the efficacy of these models is heavily dependent on the careful selection of appropriate hyperparameters. This paper introduces models that integrate deep learning with hyperparameter tuning techniques to learn the patterns and relationships within credit card transaction data, thereby improving fraud detection. Three deep learning models, an AutoEncoder (AE), a Convolutional Neural Network (CNN), and a Long Short-Term Memory (LSTM) network, are proposed to investigate how hyperparameter adjustment impacts the efficacy of deep learning models used to identify credit card fraud. Experiments conducted on a European credit card fraud dataset using different hyperparameters and the three deep learning models demonstrate that the proposed models achieve a trade-off between detection rate and precision, making them effective in accurately predicting credit card fraud. The results show that the LSTM significantly outperformed the AE and CNN in terms of accuracy (99.2%), detection rate (93.3%), and area under the curve (96.3%). The proposed models surpass those of existing studies and are expected to make a significant contribution to the field of credit card fraud detection.
Keywords: Card fraud detection, hyperparameter tuning, deep learning, autoencoder, convolutional neural network, long short-term memory, resampling
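The sketch below illustrates a small hyperparameter search over an LSTM fraud classifier; the grid values, feature count, and scoring rule are assumptions rather than the tuned settings reported in the paper.
```python
# Grid-search an LSTM over units, learning rate, and dropout for fraud detection.
import itertools
import tensorflow as tf

N_FEATURES = 30          # e.g., the anonymized features of the European dataset

def build_lstm(units, learning_rate, dropout):
    model = tf.keras.Sequential([
        tf.keras.layers.Input(shape=(1, N_FEATURES)),   # one timestep per transaction
        tf.keras.layers.LSTM(units),
        tf.keras.layers.Dropout(dropout),
        tf.keras.layers.Dense(1, activation="sigmoid"),
    ])
    model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate),
                  loss="binary_crossentropy",
                  metrics=[tf.keras.metrics.Recall(name="detection_rate"),
                           tf.keras.metrics.Precision(name="precision")])
    return model

grid = {"units": [32, 64], "learning_rate": [1e-3, 1e-4], "dropout": [0.2, 0.4]}

def search(X_train, y_train, X_val, y_val):
    best = None
    for units, lr, dr in itertools.product(*grid.values()):
        model = build_lstm(units, lr, dr)
        model.fit(X_train, y_train, epochs=5, batch_size=256, verbose=0)
        loss, recall, precision = model.evaluate(X_val, y_val, verbose=0)
        score = 2 * recall * precision / (recall + precision + 1e-9)  # F1 trade-off
        if best is None or score > best[0]:
            best = (score, {"units": units, "learning_rate": lr, "dropout": dr})
    return best
```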
8. CL2ES-KDBC: A Novel Covariance Embedded Selection Based on Kernel Distributed Bayes Classifier for Detection of Cyber-Attacks in IoT Systems
Authors: Talal Albalawi, P. Ganeshkumar. Computers, Materials & Continua (SCIE, EI), 2024, Issue 3, pp. 3511-3528 (18 pages)
The Internet of Things (IoT) is a growing technology that allows the sharing of data with other devices across wireless networks. IoT systems are particularly vulnerable to cyberattacks due to their openness. The proposed work intends to implement a new security framework for detecting the most specific and harmful intrusions in IoT networks. In this framework, a Covariance Linear Learning Embedding Selection (CL2ES) methodology is used first to extract the features highly associated with IoT intrusions. Then, the Kernel Distributed Bayes Classifier (KDBC) is created to forecast attacks precisely based on the probability distribution value. In addition, a unique Mongolian Gazellas Optimization (MGO) algorithm is used to optimize the weight values for the learning of the classifier. The effectiveness of the proposed CL2ES-KDBC framework has been assessed using several IoT cyber-attack datasets, and the obtained results are compared with current classification methods regarding accuracy (97%), precision (96.5%), and other factors. A computational analysis of the CL2ES-KDBC system on IoT intrusion datasets is also performed, which provides valuable insight into its performance, efficiency, and suitability for securing IoT networks.
Keywords: IoT security, attack detection, covariance linear learning embedding selection, kernel distributed Bayes classifier, Mongolian Gazellas optimization
9. Enabling Efficient Data Transmission in Wireless Sensor Networks-Based IoT Application
Authors: Ibraheem Al-Hejri, Farag Azzedin, Sultan Almuhammadi, Naeem Firdous Syed. Computers, Materials & Continua (SCIE, EI), 2024, Issue 6, pp. 4197-4218 (22 pages)
The use of the Internet of Things (IoT) is expanding at an unprecedented scale in many critical applications due to the ability to interconnect and utilize a plethora of devices. In critical infrastructure domains like oil and gas supply, intelligent transportation, power grids, and autonomous agriculture, it is essential to guarantee the confidentiality, integrity, and authenticity of the data collected and exchanged. However, the limited resources, coupled with the heterogeneity of IoT devices, make it inefficient or sometimes infeasible to achieve secure data transmission using traditional cryptographic techniques. Consequently, designing a lightweight secure data transmission scheme is becoming essential. In this article, we propose a lightweight secure data transmission (LSDT) scheme for IoT environments. LSDT consists of three phases and utilizes an effective combination of symmetric keys and the Elliptic Curve Menezes-Qu-Vanstone asymmetric key agreement protocol. We design the simulation environment and experiments to evaluate the performance of the LSDT scheme in terms of communication and computation costs. Security and performance analysis indicates that the LSDT scheme is secure, suitable for IoT applications, and performs better in comparison to other related security schemes.
Keywords: IoT, lightweight, computation complexity, communication overhead, cybersecurity threats, threat prevention, secure data transmission, Wireless Sensor Networks (WSNs), elliptic curve cryptography
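Since ECMQV is not exposed by the common Python cryptography package, the sketch below uses plain ECDH plus HKDF as a simplified stand-in for the key-agreement phase described above; the curve choice and derived key length are illustrative assumptions.
```python
# Derive a shared symmetric session key from an elliptic-curve key agreement.
from cryptography.hazmat.primitives.asymmetric import ec
from cryptography.hazmat.primitives.kdf.hkdf import HKDF
from cryptography.hazmat.primitives import hashes

# Each side (sensor node and gateway) generates an ephemeral EC key pair.
node_priv = ec.generate_private_key(ec.SECP256R1())
gateway_priv = ec.generate_private_key(ec.SECP256R1())

# Each side combines its private key with the peer's public key.
node_shared = node_priv.exchange(ec.ECDH(), gateway_priv.public_key())
gateway_shared = gateway_priv.exchange(ec.ECDH(), node_priv.public_key())
assert node_shared == gateway_shared

# Derive a 128-bit symmetric key for lightweight bulk encryption of sensor data.
session_key = HKDF(algorithm=hashes.SHA256(), length=16, salt=None,
                   info=b"LSDT session").derive(node_shared)
print("session key:", session_key.hex())
```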
10. Arrhythmia Detection by Using Chaos Theory with Machine Learning Algorithms
Authors: Maie Aboghazalah, Passent El-kafrawy, Abdelmoty M. Ahmed, Rasha Elnemr, Belgacem Bouallegue, Ayman El-sayed. Computers, Materials & Continua (SCIE, EI), 2024, Issue 6, pp. 3855-3875 (21 pages)
Heart monitoring improves quality of life. Electrocardiograms (ECGs or EKGs) detect heart irregularities, and machine learning algorithms can support several ECG diagnosis processing methods. The first method uses raw ECG and time-series data. The second method classifies the ECG by patient experience. The third technique translates ECG impulses into Q-wave, R-wave and S-wave (QRS) features, using richer information. Because ECG signals vary naturally between humans and activities, we combine the three feature selection methods to improve classification accuracy and diagnosis; classification using all three approaches together has not been examined until now. Several researchers have found that machine learning (ML) techniques can improve ECG classification. This study compares popular machine learning techniques to evaluate ECG features. Four algorithms, Support Vector Machine (SVM), Decision Tree, Naive Bayes, and Neural Network, are compared on categorization results. SVM plus prior knowledge has the highest accuracy (99%) of the four ML methods, while QRS characteristics failed to identify signals without chaos theory. With 99.8% classification accuracy, the Decision Tree technique outperformed all previous experiments.
Keywords: ECG extraction, ECG leads, time series, prior knowledge and arrhythmia, chaos theory, QRS complex analysis, machine learning, ECG classification
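A minimal evaluation loop over the four classifier families named in the abstract is sketched below on synthetic feature vectors; it only illustrates the comparison setup, not the paper's ECG features or reported results.
```python
# Compare SVM, Decision Tree, Naive Bayes, and a small neural network by cross-validation.
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier
from sklearn.naive_bayes import GaussianNB
from sklearn.neural_network import MLPClassifier

X, y = make_classification(n_samples=2000, n_features=20, n_informative=10,
                           n_classes=4, random_state=0)   # placeholder ECG features

models = {
    "SVM": SVC(kernel="rbf", C=10),
    "Decision Tree": DecisionTreeClassifier(max_depth=10),
    "Naive Bayes": GaussianNB(),
    "Neural Network": MLPClassifier(hidden_layer_sizes=(64,), max_iter=500),
}
for name, model in models.items():
    acc = cross_val_score(model, X, y, cv=5, scoring="accuracy").mean()
    print(f"{name:>14s}: {acc:.3f}")
```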
11. Adaptive Cloud Intrusion Detection System Based on Pruned Exact Linear Time Technique
Authors: Widad Elbakri, Maheyzah Md. Siraj, Bander Ali Saleh Al-rimy, Sultan Noman Qasem, Tawfik Al-Hadhrami. Computers, Materials & Continua (SCIE, EI), 2024, Issue 6, pp. 3725-3756 (32 pages)
Cloud computing environments, characterized by dynamic scaling, distributed architectures, and complex workloads, are increasingly targeted by malicious actors. These threats encompass unauthorized access, data breaches, denial-of-service attacks, and evolving malware variants. Traditional security solutions often struggle with the dynamic nature of cloud environments, highlighting the need for robust adaptive Cloud Intrusion Detection Systems (CIDS). Existing adaptive CIDS solutions, while offering improved detection capabilities, often face limitations such as reliance on approximations for change point detection, hindering their precision in identifying anomalies. This can lead to missed attacks or an abundance of false alarms, impacting overall security effectiveness. To address these challenges, we propose ACIDS-PELT (Adaptive Cloud Intrusion Detection System with Pruned Exact Linear Time). This novel adaptive CIDS framework leverages the Pruned Exact Linear Time (PELT) algorithm and a Support Vector Machine (SVM) for enhanced accuracy and efficiency. ACIDS-PELT comprises four key components: (1) Feature selection: a hybrid harmony search algorithm and the symmetrical uncertainty filter (HSO-SU) identify the most relevant features that effectively differentiate between normal and anomalous network traffic in the cloud environment. (2) Surveillance: the PELT algorithm detects change points within the network traffic data, enabling the identification of anomalies and potential security threats with improved precision compared to existing approaches. (3) Training set: labeled network traffic data forms the training set used to train the SVM classifier to distinguish between normal and anomalous behaviour patterns. (4) Testing set: the testing set evaluates ACIDS-PELT's performance by measuring its accuracy, precision, and recall in detecting security threats within the cloud environment. We evaluate the performance of ACIDS-PELT using the NSL-KDD benchmark dataset. The results demonstrate that ACIDS-PELT outperforms existing cloud intrusion detection techniques in terms of accuracy, precision, and recall. This superiority stems from ACIDS-PELT's ability to overcome the limitations associated with approximation and imprecision in change point detection, while offering a more accurate and precise approach to detecting security threats in dynamic cloud environments.
Keywords: Adaptive cloud IDS, harmony search, distributed denial of service (DDoS), PELT, machine learning, SVM, ISOTCID, NSL-KDD
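The surveillance idea above can be prototyped with the ruptures implementation of PELT; in the sketch below the synthetic traffic series and penalty value are illustrative choices, not the ACIDS-PELT pipeline.
```python
# Detect change points in a synthetic traffic-volume series with PELT.
import numpy as np
import ruptures as rpt

rng = np.random.default_rng(0)
# Synthetic per-second request counts: normal load, an anomalous burst, recovery.
signal = np.concatenate([
    rng.normal(100, 5, 300),    # baseline traffic
    rng.normal(400, 20, 120),   # anomalous surge
    rng.normal(110, 5, 300),    # back to normal
]).reshape(-1, 1)

algo = rpt.Pelt(model="rbf", min_size=10).fit(signal)
change_points = algo.predict(pen=10)    # indices where the traffic statistics shift
print("detected change points:", change_points)
# Each detected segment can then be handed to the SVM stage for labeling.
```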
12. Evolution and Prospects of Foundation Models: From Large Language Models to Large Multimodal Models
Authors: Zheyi Chen, Liuchang Xu, Hongting Zheng, Luyao Chen, Amr Tolba, Liang Zhao, Keping Yu, Hailin Feng. Computers, Materials & Continua (SCIE, EI), 2024, Issue 8, pp. 1753-1808 (56 pages)
Since the 1950s, when the Turing Test was introduced, there has been notable progress in machine language intelligence. Language modeling, crucial for AI development, has evolved from statistical to neural models over the last two decades. Recently, transformer-based Pre-trained Language Models (PLM) have excelled in Natural Language Processing (NLP) tasks by leveraging large-scale training corpora. Increasing the scale of these models enhances performance significantly, introducing abilities like in-context learning that smaller models lack. The advancement in large language models, exemplified by the development of ChatGPT, has made significant impacts both academically and industrially, capturing widespread societal interest. This survey provides an overview of the development and prospects from Large Language Models (LLM) to Large Multimodal Models (LMM). It first discusses the contributions and technological advancements of LLMs in the field of natural language processing, especially in text generation and language understanding. Then, it turns to the discussion of LMMs, which integrate various data modalities such as text, images, and sound, demonstrating advanced capabilities in understanding and generating cross-modal content and paving new pathways for the adaptability and flexibility of AI systems. Finally, the survey highlights the prospects of LMMs in terms of technological development and application potential, while also pointing out challenges in data integration and cross-modal understanding accuracy, providing a comprehensive perspective on the latest developments in this field.
Keywords: Artificial intelligence, large language models, large multimodal models, foundation models
13. Improving Prediction Efficiency of Machine Learning Models for Cardiovascular Disease in IoST-Based Systems through Hyperparameter Optimization
Authors: Tajim Md. Niamat Ullah Akhund, Waleed M. Al-Nuwaiser. Computers, Materials & Continua (SCIE, EI), 2024, Issue 9, pp. 3485-3506 (22 pages)
This study explores the impact of hyperparameter optimization on machine learning models for predicting cardiovascular disease using data from an IoST (Internet of Sensing Things) device. Ten distinct machine learning approaches were implemented and systematically evaluated before and after hyperparameter tuning. Significant improvements were observed across various models, with SVM and Neural Networks consistently showing enhanced performance metrics such as F1-score, recall, and precision. The study underscores the critical role of tailored hyperparameter tuning in optimizing these models, revealing diverse outcomes among algorithms. Decision Trees and Random Forests exhibited stable performance throughout the evaluation. While enhancing accuracy, hyperparameter optimization also led to increased execution time. Visual representations and comprehensive results support the findings, confirming the hypothesis that optimizing parameters can effectively enhance predictive capabilities for cardiovascular disease. This research contributes to advancing the understanding and application of machine learning in healthcare, particularly in improving predictive accuracy for cardiovascular disease management and intervention strategies.
Keywords: Internet of Sensing Things (IoST), machine learning, hyperparameter optimization, cardiovascular disease prediction, execution time analysis, performance analysis, Wilcoxon signed-rank test
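The before/after comparison described above can be prototyped with GridSearchCV and a Wilcoxon signed-rank test on per-fold scores; the synthetic data and parameter grid below are assumptions, not the study's IoST dataset or tuned values.
```python
# Compare default vs tuned SVM with cross-validated F1 and a Wilcoxon signed-rank test.
from scipy.stats import wilcoxon
from sklearn.datasets import make_classification
from sklearn.model_selection import GridSearchCV, cross_val_score
from sklearn.svm import SVC

X, y = make_classification(n_samples=1000, n_features=12, random_state=1)

baseline = SVC()                                      # default hyperparameters
base_scores = cross_val_score(baseline, X, y, cv=10, scoring="f1")

grid = GridSearchCV(SVC(), {"C": [0.1, 1, 10, 100], "gamma": ["scale", 0.01, 0.1]},
                    cv=5, scoring="f1")
grid.fit(X, y)
tuned_scores = cross_val_score(grid.best_estimator_, X, y, cv=10, scoring="f1")

print("best params:", grid.best_params_)
print(f"mean F1 before {base_scores.mean():.3f} vs after {tuned_scores.mean():.3f}")
if (tuned_scores == base_scores).all():
    print("tuning did not change per-fold scores")
else:
    stat, p_value = wilcoxon(tuned_scores, base_scores)
    print(f"Wilcoxon signed-rank p-value: {p_value:.4f}")
```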
14. A Measurement Study of the Ethereum Underlying P2P Network
Authors: Mohammad Z. Masoud, Yousef Jaradat, Ahmad Manasrah, Mohammad Alia, Khaled Suwais, Sally Almanasra. Computers, Materials & Continua (SCIE, EI), 2024, Issue 1, pp. 515-532 (18 pages)
This work carried out a measurement study of the Ethereum Peer-to-Peer (P2P) network to gain a better understanding of the underlying nodes. Ethereum was chosen because it pioneered distributed applications, smart contracts, and Web3, and its application layer language "Solidity" is widely used in smart contracts across different public and private blockchains. To this end, we wrote a new Ethereum client based on Geth to collect Ethereum node information. Moreover, various web scrapers were written to collect nodes' historical data from the Internet Archive and the Wayback Machine project. The collected data has been compared with two other services that harvest the number of Ethereum nodes; our method collected more than 30% more nodes than the other services. The data was used to train a neural network time-series model to predict the number of online nodes in the future. Our findings show that fewer than 20% of the nodes are the same from day to day, indicating that most nodes in the network change frequently and raising a question about the stability of the network. Furthermore, historical data shows that the top ten countries hosting Ethereum clients have not changed since 2016. The popular operating system of the underlying nodes has shifted from Windows to Linux over time, increasing node security. The results have also shown that the number of Middle East and North Africa (MENA) Ethereum nodes is negligible compared with nodes recorded from other regions, which opens the door for developing new mechanisms to encourage users from these regions to contribute to this technology. Finally, the trained model demonstrated an accuracy of 92% in predicting the future number of nodes in the Ethereum network.
Keywords: Ethereum, measurement, Ethereum client, neural network, time series forecasting, web scraping, Wayback Machine, blockchain
15. A Hybrid Machine Learning Framework for Security Intrusion Detection
Authors: Fatimah Mudhhi Alanazi, Bothina Abdelmeneem Elsobky, Shaimaa Aly Elmorsy. Computer Systems Science & Engineering, 2024, Issue 3, pp. 835-851 (17 pages)
The proliferation of technology, coupled with networking growth, has catapulted cybersecurity to the forefront of modern security concerns. In this landscape, the precise detection of cyberattacks and anomalies within networks is crucial, necessitating the development of efficient intrusion detection systems (IDS). This article introduces a framework utilizing the fusion of fuzzy sets with support vector machines (SVM), named FSVM. The core strategy of FSVM lies in calculating the significance of network features to determine their relative importance. Features with minimal significance are prudently disregarded, a method akin to feature selection. This process not only curtails the computational burden of the classification algorithm but also ensures the preservation of high accuracy levels. To ascertain the efficacy of the FSVM model, we employed a publicly available dataset from Kaggle, which encompasses two distinct decision labels. Our evaluation methodology involves a comprehensive comparison of the classification accuracy on the processed dataset against four contemporary models in the field, with key performance metric scores meticulously calculated for each model. The comparative analysis reveals that the FSVM model demonstrates a marked superiority over its counterparts, enhancing classification accuracy by a minimum of 3%. These findings underscore the FSVM model's robustness and reliability, positioning it as a highly effective tool in the realm of cybersecurity.
Keywords: Cybersecurity, fuzzy sets, classification, Internet of Things
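A rough sketch of significance-based feature pruning followed by an SVM is given below; mutual information stands in here for the fuzzy significance measure, and the data is synthetic, purely for illustration.
```python
# Score features, drop low-significance ones, then train an SVM on the reduced set.
from sklearn.datasets import make_classification
from sklearn.feature_selection import mutual_info_classif
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC
from sklearn.metrics import accuracy_score

X, y = make_classification(n_samples=3000, n_features=40, n_informative=12,
                           random_state=7)          # placeholder network features

significance = mutual_info_classif(X, y, random_state=7)
keep = significance > 0.01                          # discard near-irrelevant features
X_reduced = X[:, keep]
print(f"kept {keep.sum()} of {X.shape[1]} features")

X_tr, X_te, y_tr, y_te = train_test_split(X_reduced, y, test_size=0.3, random_state=7)
clf = SVC(kernel="rbf").fit(X_tr, y_tr)
print("accuracy on reduced feature set:", accuracy_score(y_te, clf.predict(X_te)))
```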
16. Privacy-preserved learning from non-i.i.d data in fog-assisted IoT: A federated learning approach
Authors: Mohamed Abdel-Basset, Hossam Hawash, Nour Moustafa, Imran Razzak, Mohamed Abd Elfattah. Digital Communications and Networks (SCIE, CSCD), 2024, Issue 2, pp. 404-415 (12 pages)
With the prevalence of Internet of Things (IoT) systems, smart cities comprise complex networks including sensors, actuators, appliances, and cyber services. The complexity and heterogeneity of smart cities have made them vulnerable to sophisticated cyber-attacks, especially privacy-related attacks such as inference and data poisoning. Federated Learning (FL) has been regarded as a promising method to enable distributed learning with privacy-preserved intelligence in IoT applications. Even though developing privacy-preserving FL has drawn great research interest, current research only concentrates on FL with independent identically distributed (i.i.d) data, and few studies have addressed the non-i.i.d setting. FL is also known to be vulnerable to Generative Adversarial Network (GAN) attacks, where an adversary can pose as a contributor participating in the training process to acquire the private data of other contributors. This paper proposes an innovative Privacy Protection-based Federated Deep Learning (PP-FDL) framework, which accomplishes data protection against privacy-related GAN attacks along with high classification rates from non-i.i.d data. PP-FDL is designed to enable fog nodes to cooperate to train the FDL model in a way that ensures contributors have no access to each other's data, where class probabilities are protected using a private identifier generated for each class. The PP-FDL framework is evaluated for image classification using simple convolutional networks trained on the MNIST and CIFAR-10 datasets. The empirical results reveal that PP-FDL can achieve data protection, and the framework outperforms the other three state-of-the-art models with 3%-8% accuracy improvements.
Keywords: Privacy preservation, federated learning, deep learning, fog computing, smart cities
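The sketch below shows plain federated averaging between two fog nodes with non-i.i.d data, using a tiny logistic-regression model; it illustrates only the weight-sharing idea and omits the PP-FDL privacy protections described in the abstract.
```python
# Federated averaging: nodes train locally and share only model weights, never raw data.
import numpy as np

def local_update(weights, X, y, lr=0.1, epochs=5):
    """A few epochs of logistic-regression gradient descent on one node's data."""
    w = weights.copy()
    for _ in range(epochs):
        p = 1.0 / (1.0 + np.exp(-X @ w))
        w -= lr * X.T @ (p - y) / len(y)
    return w

def fed_avg(client_weights, client_sizes):
    """Server step: weight each node's model by its local dataset size."""
    sizes = np.asarray(client_sizes, dtype=float)
    return np.average(np.stack(client_weights), axis=0, weights=sizes)

rng = np.random.default_rng(3)
# Two fog nodes with non-i.i.d data: each sees mostly one class.
X1, y1 = rng.normal(1.0, 1.0, (200, 4)), np.ones(200)
X2, y2 = rng.normal(-1.0, 1.0, (300, 4)), np.zeros(300)

global_w = np.zeros(4)
for round_ in range(10):
    w1 = local_update(global_w, X1, y1)
    w2 = local_update(global_w, X2, y2)
    global_w = fed_avg([w1, w2], [len(y1), len(y2)])
print("global model after 10 rounds:", np.round(global_w, 3))
```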
17. Detection of Real-Time Distributed Denial-of-Service (DDoS) Attacks on Internet of Things (IoT) Networks Using Machine Learning Algorithms
Authors: Zaed Mahdi, Nada Abdalhussien, Naba Mahmood, Rana Zaki. Computers, Materials & Continua (SCIE, EI), 2024, Issue 8, pp. 2139-2159 (21 pages)
The primary concern of modern technology is cyber attacks targeting the Internet of Things, as it is one of the most widely used networks today and is vulnerable to attacks. Modern cyber attacks pose real-time threats and great danger to Internet of Things (IoT) networks, as devices can be monitored or isolated from their services, affecting users in one way or another. Securing Internet of Things networks is therefore an important matter: it requires modern technologies and methods, and real, up-to-date data to design and train systems that keep pace with the techniques attackers use. One of the most common types of attacks against IoT devices is the Distributed Denial-of-Service (DDoS) attack. Our paper makes a unique contribution that differs from existing studies in that we use recent data containing real traffic and real attacks on IoT networks, a hybrid method for selecting relevant features, and a careful choice of highly efficient algorithms, which gives the model a high ability to detect distributed denial-of-service attacks. The proposed model is based on a two-stage process: selecting essential features and constructing a detection model using the K-neighbors algorithm with two classifier algorithms, logistic regression and the Stochastic Gradient Descent classifier (SGD), combining these classifiers through ensemble machine learning (stacking), and optimizing parameters through Grid Search-CV to enhance system accuracy. Experiments were conducted to evaluate the effectiveness of the proposed model using the CIC-IoT2023 and CIC-DDoS2019 datasets. Performance evaluation demonstrated the potential of our model for robust intrusion detection in IoT networks, achieving an accuracy of 99.965% and a detection time of 0.20 s for the CIC-IoT2023 dataset, and 99.968% accuracy with a detection time of 0.23 s for the CIC-DDoS2019 dataset. Furthermore, a comparative analysis with recent related works highlighted the superiority of our methodology in intrusion detection, showing improvements in accuracy, recall, and detection time.
Keywords: DDoS, service, networks
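The two-stage idea above (K-neighbors with logistic regression and an SGD classifier, combined by stacking and tuned with GridSearchCV) can be sketched with scikit-learn; the data and parameter grid below are placeholders, not the CIC datasets or the paper's tuned configuration.
```python
# Stack KNN and SGD base learners under a logistic-regression meta-learner, then grid-search.
from sklearn.datasets import make_classification
from sklearn.ensemble import StackingClassifier
from sklearn.linear_model import LogisticRegression, SGDClassifier
from sklearn.model_selection import GridSearchCV, train_test_split
from sklearn.neighbors import KNeighborsClassifier

X, y = make_classification(n_samples=5000, n_features=25, weights=[0.9, 0.1],
                           random_state=0)          # placeholder flow features
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

stack = StackingClassifier(
    estimators=[("knn", KNeighborsClassifier()),
                ("sgd", SGDClassifier(loss="log_loss", max_iter=2000))],
    final_estimator=LogisticRegression(max_iter=1000))

param_grid = {"knn__n_neighbors": [3, 5, 7],
              "sgd__alpha": [1e-4, 1e-3]}
search = GridSearchCV(stack, param_grid, cv=3, scoring="accuracy", n_jobs=-1)
search.fit(X_tr, y_tr)
print("best params:", search.best_params_)
print("held-out accuracy:", search.score(X_te, y_te))
```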
18. A Hybrid Classification and Identification of Pneumonia Using African Buffalo Optimization and CNN from Chest X-Ray Images
Authors: Nasser Alalwan, Ahmed I. Taloba, Amr Abozeid, Ahmed Ibrahim Alzahrani, Ali H. Al-Bayatti. Computer Modeling in Engineering & Sciences (SCIE, EI), 2024, Issue 3, pp. 2497-2517 (21 pages)
An illness known as pneumonia causes inflammation in the lungs. Since there is so much information available from various X-ray images, diagnosing pneumonia has typically proven challenging. To improve image quality and speed up the diagnosis of pneumonia, numerous approaches have been devised, and several methods have been employed to identify pneumonia to date. The convolutional neural network (CNN) has achieved outstanding success in identifying and diagnosing diseases in the fields of medicine and radiology. However, existing methods are complex, inefficient, and imprecise when analyzing large datasets. In this paper, a new hybrid method for the automatic classification and identification of pneumonia from chest X-ray images is proposed. The proposed method (ABO-CNN) utilizes the African Buffalo Optimization (ABO) algorithm to enhance CNN performance and accuracy. The Weinmed filter is employed for pre-processing to eliminate unwanted noise from chest X-ray images, followed by feature extraction using the Grey Level Co-occurrence Matrix (GLCM) approach. Relevant features are then selected from the dataset using the ABO algorithm, and ultimately, high-performance deep learning using the CNN approach is introduced for the classification and identification of pneumonia. Experimental results on various datasets showed that, when contrasted with other approaches, the ABO-CNN outperforms them all on the classification tasks. The proposed method exhibits superior values of 96.95%, 88%, 86%, and 86% for accuracy, precision, recall, and F1-score, respectively.
Keywords: African buffalo optimization, convolutional neural network, pneumonia, X-ray
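A minimal GLCM texture-feature extraction step, of the kind named in the abstract, is sketched below with scikit-image; the synthetic image and the chosen distances and angles are illustrative assumptions.
```python
# Compute a small GLCM texture descriptor for a (placeholder) preprocessed X-ray.
import numpy as np
from skimage.feature import graycomatrix, graycoprops

rng = np.random.default_rng(0)
xray = rng.integers(0, 256, size=(128, 128), dtype=np.uint8)   # placeholder image

glcm = graycomatrix(xray, distances=[1, 2], angles=[0, np.pi / 2],
                    levels=256, symmetric=True, normed=True)

features = {prop: graycoprops(glcm, prop).mean()
            for prop in ["contrast", "homogeneity", "energy", "correlation"]}
print(features)   # texture descriptor to feed the feature-selection stage
```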
19. Enhancing Skin Cancer Diagnosis with Deep Learning: A Hybrid CNN-RNN Approach
Authors: Syeda Shamaila Zareen, Guangmin Sun, Mahwish Kundi, Syed Furqan Qadri, Salman Qadri. Computers, Materials & Continua (SCIE, EI), 2024, Issue 4, pp. 1497-1519 (23 pages)
Skin cancer diagnosis is difficult due to variability in lesion presentation. Conventional methods struggle to manually extract features and to capture the spatial and temporal variations of lesions. This study introduces a deep learning-based Convolutional and Recurrent Neural Network (CNN-RNN) model with a ResNet-50 architecture used as the feature extractor to enhance skin cancer classification. Leveraging synergistic spatial feature extraction and temporal sequence learning, the model demonstrates robust performance on a dataset of 9000 skin lesion photos from nine cancer types. Using pre-trained ResNet-50 for spatial feature extraction and Long Short-Term Memory (LSTM) for temporal dependencies, the model achieves a high average recognition accuracy, surpassing previous methods. The comprehensive evaluation, including accuracy, precision, recall, and F1-score, underscores the model's competence in categorizing skin cancer types. This research contributes a sophisticated model and valuable guidance for deep learning-based diagnostics; the model excels in overcoming spatial and temporal complexities, offering a refined solution for dermatological diagnostics research.
Keywords: Skin cancer classification, deep learning, Convolutional Neural Network (CNN), RNN, ResNet-50
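The CNN-RNN pairing described above can be sketched by feeding frozen ResNet-50 features for a short image sequence into an LSTM; the sequence length, image size, and class count below are assumptions for illustration.
```python
# ResNet-50 as a per-image spatial feature extractor, LSTM over the resulting sequence.
import tensorflow as tf

SEQ_LEN, IMG_SIZE, NUM_CLASSES = 4, (224, 224), 9

backbone = tf.keras.applications.ResNet50(
    include_top=False, weights="imagenet", pooling="avg",
    input_shape=IMG_SIZE + (3,))
backbone.trainable = False                       # use ResNet-50 purely as extractor

inputs = tf.keras.Input(shape=(SEQ_LEN,) + IMG_SIZE + (3,))
features = tf.keras.layers.TimeDistributed(backbone)(inputs)   # (batch, seq, 2048)
x = tf.keras.layers.LSTM(128)(features)                        # temporal modelling
outputs = tf.keras.layers.Dense(NUM_CLASSES, activation="softmax")(x)

model = tf.keras.Model(inputs, outputs)
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.summary()
```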
20. Metaheuristic-Driven Two-Stage Ensemble Deep Learning for Lung/Colon Cancer Classification
Authors: Pouyan Razmjouei, Elaheh Moharamkhani, Mohamad Hasanvand, Maryam Daneshfar, Mohammad Shokouhifar. Computers, Materials & Continua (SCIE, EI), 2024, Issue 9, pp. 3855-3880 (26 pages)
This study investigates the application of deep learning, ensemble learning, metaheuristic optimization, and image processing techniques for detecting lung and colon cancers, aiming to enhance treatment efficacy and improve survival rates. We introduce a metaheuristic-driven two-stage ensemble deep learning model for efficient lung/colon cancer classification. The diagnosis of lung and colon cancers is attempted using several unique indicators, with different versions of deep Convolutional Neural Networks (CNNs) used for feature extraction and model construction, and the power of various Machine Learning (ML) algorithms utilized for final classification. Specifically, we consider different scenarios consisting of two-class colon cancer, three-class lung cancer, and five-class combined lung/colon cancer, and conduct feature extraction using four CNNs. The extracted features are then integrated to create a comprehensive feature set. In the next step, feature selection is optimized using a metaheuristic algorithm based on Electric Eel Foraging Optimization (EEFO). This optimized feature subset is subsequently employed in various ML algorithms to determine the most effective ones through a rigorous evaluation process. The top-performing algorithms are refined using the High-Performance Filter (HPF) and integrated into an ensemble learning framework employing weighted averaging. Our findings indicate that the proposed ensemble learning model significantly surpasses existing methods in classification accuracy across all datasets, achieving accuracies of 99.85% for the two-class, 98.70% for the three-class, and 98.96% for the five-class datasets.
Keywords: Lung cancer, colon cancer, feature selection, electric eel foraging optimization, deep learning, ensemble learning
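The weighted-averaging ensemble stage can be sketched as blending classifier probabilities with weights proportional to validation accuracy; the models and synthetic features below are placeholders, not the paper's CNN features or HPF-refined classifiers.
```python
# Weighted-average ensemble over the predicted class probabilities of several classifiers.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC
from sklearn.metrics import accuracy_score

X, y = make_classification(n_samples=2000, n_features=64, n_informative=20,
                           n_classes=5, random_state=0)  # placeholder CNN features
X_tr, X_val, y_tr, y_val = train_test_split(X, y, test_size=0.3, random_state=0)

models = [LogisticRegression(max_iter=2000),
          RandomForestClassifier(n_estimators=200, random_state=0),
          SVC(probability=True, random_state=0)]

probas, weights = [], []
for m in models:
    m.fit(X_tr, y_tr)
    weights.append(accuracy_score(y_val, m.predict(X_val)))  # weight by validation accuracy
    probas.append(m.predict_proba(X_val))

weights = np.array(weights) / np.sum(weights)
ensemble_proba = np.tensordot(weights, np.stack(probas), axes=1)  # weighted average
print("ensemble accuracy:", accuracy_score(y_val, ensemble_proba.argmax(axis=1)))
```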