Improving the quality assurance (QA) processes and acquiring accreditation are top priorities for academic programs. Learning outcomes (LOs) assessment and continuous quality improvement represent core components of the quality assurance system (QAS). Current assessment methods suffer deficiencies related to accuracy and reliability, and they lack well-organized processes for continuous improvement planning. Moreover, the absence of automation and integration in QA processes forms a major obstacle toward developing an efficient quality system. There is also a pressing need to adopt security protocols that provide the required security services to safeguard the valuable information processed by the QAS. This research proposes an effective methodology for LOs assessment and continuous improvement processes. The proposed approach ensures more accurate and reliable LOs assessment results and provides a systematic way to utilize those results in continuous quality improvement. These systematic and well-specified QA processes were then used to model and implement an automated and secure QAS that efficiently performs quality-related processes. The proposed system adopts two security protocols that provide confidentiality, integrity, and authentication for quality data and reports. The security protocols also prevent source repudiation, which is important in the quality reporting system; this is achieved by implementing powerful cryptographic algorithms. The QAS enables efficient data collection and processing required for analysis and interpretation. It also prepares for the development of datasets that can be used in future artificial intelligence (AI) research to support decision making and improve the quality of academic programs. The proposed approach is implemented in a successful real case study for a computer science program. The current study serves scientific programs struggling to achieve academic accreditation, and paves the way for fully automating and integrating the QA processes and adopting modern AI and security technologies to develop an effective QAS.
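The abstract does not name its two security protocols or the cryptographic algorithms used. As a hedged illustration only, the sketch below shows one conventional sign-then-encrypt construction that yields the four services claimed (confidentiality, integrity, authentication, and non-repudiation of source) for a quality report, using the Python `cryptography` package; the key sizes, report contents, and message layout are assumptions.

```python
# Hedged sketch: sign-then-encrypt for a quality report (illustrative only).
from cryptography.fernet import Fernet
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import rsa, padding

# Reporter's long-term signing key pair; only the reporter holds the
# private key, so a valid signature proves the report's origin.
signing_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
verify_key = signing_key.public_key()

report = b"Course CS101: LO-3 attainment 78%, target 70% -- met."

# Sign (integrity + authentication + non-repudiation of source).
pss = padding.PSS(mgf=padding.MGF1(hashes.SHA256()),
                  salt_length=padding.PSS.MAX_LENGTH)
signature = signing_key.sign(report, pss, hashes.SHA256())

# Encrypt report plus signature under a shared symmetric key (confidentiality).
sym_key = Fernet.generate_key()
token = Fernet(sym_key).encrypt(report + signature)

# Receiver: decrypt, split off the fixed-length signature, verify.
blob = Fernet(sym_key).decrypt(token)
plain, sig = blob[:-256], blob[-256:]   # 2048-bit RSA PSS signature = 256 bytes
verify_key.verify(sig, plain, pss, hashes.SHA256())  # raises if tampered
print("verified report:", plain.decode())
```

Because only the reporter holds the signing key, a verified signature ties the report to its source, which is the non-repudiation property the abstract emphasizes.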
A deep fusion model is proposed for a facial expression-based human-computer interaction system. Initially, image preprocessing, i.e., the extraction of the facial region from the input image, is performed. Thereafter, more discriminative and distinctive deep learning features are extracted from the facial regions. To prevent overfitting, in-depth features of facial images are extracted and assigned to the proposed convolutional neural network (CNN) models. Various CNN models are then trained. Finally, the outputs of the CNN models are fused to obtain the final decision for the seven basic classes of facial expressions, i.e., fear, disgust, anger, surprise, sadness, happiness, and neutral. For experimental purposes, three benchmark datasets, i.e., SFEW, CK+, and KDEF, are utilized. The performance of the proposed system is compared with some state-of-the-art methods on each dataset. Extensive performance analysis reveals that the proposed system outperforms the competitive methods in terms of various performance metrics. Finally, the proposed deep fusion model is used to control a music player using the recognized emotions of the users.
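The fusion step, combining per-model outputs into one decision, is not detailed in the abstract; a minimal sketch of one common decision-level rule, averaging the class-probability vectors of the trained CNNs, is shown below with illustrative probability values.

```python
import numpy as np

CLASSES = ["fear", "disgust", "anger", "surprise", "sadness", "happiness", "neutral"]

def fuse_predictions(prob_vectors):
    """Average the softmax outputs of several models; return the winning class."""
    fused = np.mean(prob_vectors, axis=0)          # element-wise mean per class
    return CLASSES[int(np.argmax(fused))], fused

# Illustrative outputs from three independently trained CNNs for one image.
p1 = np.array([0.05, 0.02, 0.03, 0.60, 0.10, 0.15, 0.05])
p2 = np.array([0.10, 0.05, 0.05, 0.45, 0.10, 0.20, 0.05])
p3 = np.array([0.04, 0.01, 0.05, 0.70, 0.05, 0.10, 0.05])

label, fused = fuse_predictions([p1, p2, p3])
print(label, fused.round(3))   # -> "surprise", the consensus of the ensemble
```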
Assistive devices for disabled people based on Brain-Computer Interaction (BCI) technology are becoming a vital area of biomedical engineering. People with physical disabilities need assistive devices to perform their daily tasks, and higher latency factors in these devices need to be addressed appropriately. Therefore, the main goal of this research is to implement a real-time BCI architecture with minimum latency for command actuation. The proposed architecture can communicate between the different modules of the system by adopting an automated, intelligent data processing and classification approach. A NeuroSky MindWave device has been used to transfer the data to our implemented server for command actuation. A Think-Net Convolutional Neural Network (TN-CNN) architecture has been proposed to recognize the brain signals and classify them into six primary mental states. Data collection and processing are the responsibility of the central integrated server to minimize system load. Testing of the implemented architecture and the deep learning model shows excellent results: the system maintains integrity with minimal data loss and an accurate command-processing mechanism. The training and testing accuracies of the custom TN-CNN model are 99% and 93%, respectively. The proposed real-time architecture provides intelligent data processing with fewer errors, and it will benefit assistive devices working on local and cloud servers.
Typically, a computer has infectivity as soon as it is infected. It is a reality that no antivirus program can identify and eliminate all kinds of viruses, suggesting that infections will persist on the Internet. To better understand the dynamics of virus propagation, a computer virus spread model with fuzzy parameters is presented in this work. It is assumed that not all infected computers contribute equally to the virus transmission process and that each computer has a different degree of infectivity, which depends on the quantity of the virus. Considering this, the parameters β and γ, being functions of the computer virus load, are treated as fuzzy numbers. Using fuzzy theory helps us understand the spread of computer viruses more realistically, as these parameters have fixed values in classical models. The essential features of the model, like the reproduction number and equilibrium analysis, are discussed in the fuzzy sense. Moreover, under fuzziness, two numerical methods, the forward Euler technique and a nonstandard finite difference (NSFD) scheme, are developed and analyzed. As evidenced by the numerical simulations, the proposed NSFD method preserves the main features of the dynamic system and can be considered a reliable tool to predict such solutions.
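To make the Euler/NSFD contrast concrete, the sketch below discretizes a generic SIR-type virus model, dS/dt = μ − βSI − μS and dI/dt = βSI − (γ + μ)I, with both schemes, using crisp illustrative parameters rather than the paper's fuzzy ones (the exact model equations are not given in the abstract). The NSFD update treats loss terms implicitly, so S and I stay positive at any step size, which is the structure-preserving property the abstract highlights.

```python
# Hedged sketch: forward Euler vs. a nonstandard finite difference (NSFD)
# update for a generic SIR-type virus model (illustrative parameters).
mu, beta, gamma = 0.1, 0.9, 0.3   # assumed birth/death, infection, removal rates
h, steps = 2.0, 50                # deliberately large step to stress Euler

def euler(S, I):
    S1 = S + h * (mu - beta * S * I - mu * S)
    I1 = I + h * (beta * S * I - (gamma + mu) * I)
    return S1, I1

def nsfd(S, I):
    # Loss terms evaluated at the new time level -> positive denominators,
    # so S and I can never become negative, regardless of h.
    S1 = (S + h * mu) / (1.0 + h * (beta * I + mu))
    I1 = (I + h * beta * S1 * I) / (1.0 + h * (gamma + mu))
    return S1, I1

Se, Ie = Sn, In = 0.9, 0.1
for _ in range(steps):
    Se, Ie = euler(Se, Ie)
    Sn, In = nsfd(Sn, In)

print(f"Euler: S={Se:.3f} I={Ie:.3f}")   # can oscillate or go negative at h=2
print(f"NSFD : S={Sn:.3f} I={In:.3f}")   # stays positive by construction
```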
Blockchain technology has garnered significant attention from global organizations and researchers due to its potential as a solution to the challenges of centralized systems. Concurrently, the Internet of Things (IoT) has revolutionized the Fourth Industrial Revolution by enabling interconnected devices to offer innovative services, ultimately enhancing human lives. This paper presents a new approach utilizing lightweight blockchain technology, effectively reducing the computational burden typically associated with conventional blockchain systems. By integrating this lightweight blockchain with IoT systems, substantial reductions in implementation time and computational complexity can be achieved. Moreover, the paper proposes the utilization of the Okamoto-Uchiyama encryption algorithm, renowned for its homomorphic characteristics, to reinforce the privacy and security of IoT-generated data. The integration of homomorphic encryption and blockchain technology establishes a secure and decentralized platform for storing and analyzing sensitive supply chain data. This platform facilitates the development of new business models and empowers decentralized applications to perform computations on encrypted data while maintaining data privacy. The results validate the robust security of the proposed system, comparable to standard blockchain implementations, leveraging the distinctive homomorphic attributes of the Okamoto-Uchiyama algorithm and the lightweight blockchain paradigm.
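Okamoto-Uchiyama encryption is additively homomorphic: multiplying two ciphertexts modulo n yields an encryption of the sum of the plaintexts, which is what lets such a platform compute on encrypted supply chain data. A textbook-style toy sketch with tiny, insecure primes (for illustration only; the paper's implementation parameters are not given) follows.

```python
# Toy Okamoto-Uchiyama sketch: additively homomorphic encryption.
# Textbook construction with tiny, INSECURE primes -- illustration only.
import random
from math import gcd

def L(x, p):                       # L(x) = (x - 1) / p, for x = 1 (mod p)
    return (x - 1) // p

def keygen(p, q):
    n = p * p * q                  # modulus n = p^2 * q
    while True:                    # need g^(p-1) mod p^2 to have order p
        g = random.randrange(2, n)
        if gcd(g, n) == 1 and pow(g, p - 1, p * p) != 1:
            break
    h = pow(g, n, n)
    return (n, g, h), p            # public key, private key

def encrypt(pub, m):               # requires 0 <= m < p
    n, g, h = pub
    r = random.randrange(1, n)
    return (pow(g, m, n) * pow(h, r, n)) % n

def decrypt(pub, p, c):
    n, g, _ = pub
    a = L(pow(c, p - 1, p * p), p)
    b = L(pow(g, p - 1, p * p), p)
    return (a * pow(b, -1, p)) % p # pow(b, -1, p): modular inverse (Py >= 3.8)

pub, priv = keygen(883, 887)       # toy primes; real keys use ~1024-bit primes
c1, c2 = encrypt(pub, 120), encrypt(pub, 45)
assert decrypt(pub, priv, (c1 * c2) % pub[0]) == 165   # E(120)*E(45) -> 120+45
```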
Handwritten character recognition (HCR) involves identifying characters in images, documents, and various sources such as forms, surveys, questionnaires, and signatures, and transforming them into a machine-readable format for subsequent processing. Successfully recognizing complex and intricately shaped handwritten characters remains a significant obstacle. The use of convolutional neural networks (CNNs) in recent developments has notably advanced HCR, leveraging the ability to extract discriminative features from extensive sets of raw data. Because of the absence of pre-existing datasets in the Kurdish language, we created a Kurdish handwritten dataset called KurdSet. The dataset consists of Kurdish characters, digits, texts, and symbols, covers 1,560 participants, and contains 45,240 characters. In this study, only the character subset of KurdSet is used for handwritten character recognition. The study utilizes various models, including InceptionV3, Xception, DenseNet121, and a custom CNN model. To demonstrate the quality of the KurdSet dataset, we compared it to the Arabic handwritten character recognition dataset (AHCD), applying the models to both datasets. The performance of the models is evaluated using test accuracy, which measures the percentage of correctly classified characters in the evaluation phase. All models performed well in the training phase; DenseNet121 exhibited the highest accuracy among the models, achieving 99.80% on the Kurdish dataset, and the Xception model achieved 98.66% on the Arabic dataset.
Credit card fraud is a major issue for financial organizations and individuals. As fraudulent actions become more complex, the demand for better fraud detection systems is rising. Deep learning approaches have shown promise in several fields, including detecting credit card fraud. However, the efficacy of these models is heavily dependent on the careful selection of appropriate hyperparameters. This paper introduces models that integrate deep learning with hyperparameter tuning techniques to learn the patterns and relationships within credit card transaction data, thereby improving fraud detection. Three deep learning models, an AutoEncoder (AE), a Convolutional Neural Network (CNN), and a Long Short-Term Memory (LSTM) network, are proposed to investigate how hyperparameter adjustment impacts the efficacy of deep learning models used to identify credit card fraud. Experiments conducted on a European credit card fraud dataset using different hyperparameters and the three deep learning models demonstrate that the proposed models achieve a tradeoff between detection rate and precision, making them effective at accurately predicting credit card fraud. The results show that LSTM significantly outperformed AE and CNN in terms of accuracy (99.2%), detection rate (93.3%), and area under the curve (96.3%). These proposed models surpass those of existing studies and are expected to make a significant contribution to the field of credit card fraud detection.
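As a hedged sketch of the tuning loop the paper describes for its LSTM model, the Keras code below searches a small hyperparameter grid on synthetic stand-in data; the grid values, layer sizes, and data shapes are illustrative assumptions, not the paper's configuration.

```python
# Hedged sketch: grid-searching LSTM hyperparameters for fraud detection.
# Tabular transactions are treated as length-1 sequences so the LSTM layer
# can consume them; all values are illustrative.
import numpy as np
import tensorflow as tf

rng = np.random.default_rng(0)
X = rng.normal(size=(2000, 1, 30)).astype("float32")   # (samples, timesteps, features)
y = (rng.random(2000) < 0.05).astype("float32")        # ~5% synthetic "fraud" labels

def build_model(units, lr):
    model = tf.keras.Sequential([
        tf.keras.Input(shape=(1, 30)),
        tf.keras.layers.LSTM(units),
        tf.keras.layers.Dense(1, activation="sigmoid"),
    ])
    model.compile(optimizer=tf.keras.optimizers.Adam(lr),
                  loss="binary_crossentropy",
                  metrics=[tf.keras.metrics.Recall(name="detection_rate")])
    return model

best = None
for units in (32, 64):                 # small illustrative grid
    for lr in (1e-3, 1e-4):
        m = build_model(units, lr)
        hist = m.fit(X, y, epochs=3, batch_size=64,
                     validation_split=0.2, verbose=0)
        score = hist.history["val_detection_rate"][-1]
        if best is None or score > best[0]:
            best = (score, units, lr)

print("best val detection rate %.3f with units=%d lr=%g" % best)
```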
The Internet of Things (IoT) is a growing technology that allows the sharing of data with other devices across wireless networks. However, IoT systems are vulnerable to cyberattacks due to their openness. The proposed work intends to implement a new security framework for detecting the most specific and harmful intrusions in IoT networks. In this framework, a Covariance Linear Learning Embedding Selection (CL2ES) methodology is used first to extract the features highly associated with IoT intrusions. Then, a Kernel Distributed Bayes Classifier (KDBC) is created to precisely forecast attacks based on the probability distribution value. In addition, a unique Mongolian Gazellas Optimization (MGO) algorithm is used to optimize the weight values for the learning of the classifier. The effectiveness of the proposed CL2ES-KDBC framework has been assessed using several IoT cyber-attack datasets, and the obtained results are compared with current classification methods regarding accuracy (97%), precision (96.5%), and other factors. A computational analysis of the CL2ES-KDBC system on IoT intrusion datasets is also performed, which provides valuable insight into its performance, efficiency, and suitability for securing IoT networks.
The use of the Internet of Things (IoT) is expanding at an unprecedented scale in many critical applications due to the ability to interconnect and utilize a wide range of devices. In critical infrastructure domains like oil and gas supply, intelligent transportation, power grids, and autonomous agriculture, it is essential to guarantee the confidentiality, integrity, and authenticity of the data collected and exchanged. However, the limited resources coupled with the heterogeneity of IoT devices make it inefficient, or sometimes infeasible, to achieve secure data transmission using traditional cryptographic techniques. Consequently, designing a lightweight secure data transmission scheme is becoming essential. In this article, we propose a lightweight secure data transmission (LSDT) scheme for IoT environments. LSDT consists of three phases and utilizes an effective combination of symmetric keys and the Elliptic Curve Menezes-Qu-Vanstone (ECMQV) asymmetric key agreement protocol. We design the simulation environment and experiments to evaluate the performance of the LSDT scheme in terms of communication and computation costs. Security and performance analysis indicates that the LSDT scheme is secure, suitable for IoT applications, and performs better than other related security schemes.
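The abstract gives the general pattern, an elliptic-curve key agreement bootstrapping symmetric-key encryption, without protocol details. The sketch below illustrates that pattern with the Python `cryptography` package, substituting plain ECDH for ECMQV (which common Python libraries do not implement), followed by HKDF key derivation and AES-GCM authenticated encryption.

```python
# Hedged sketch of the general LSDT-style pattern: an EC key agreement
# derives a symmetric session key for cheap authenticated encryption.
# ECDH stands in for ECMQV, which standard Python libraries lack.
import os
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import ec
from cryptography.hazmat.primitives.kdf.hkdf import HKDF
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

# Each party generates an EC key pair and exchanges public keys.
device_key = ec.generate_private_key(ec.SECP256R1())
gateway_key = ec.generate_private_key(ec.SECP256R1())

# Both sides compute the same shared secret from their private key and
# the peer's public key, then stretch it into a 256-bit session key.
shared = device_key.exchange(ec.ECDH(), gateway_key.public_key())
session_key = HKDF(algorithm=hashes.SHA256(), length=32,
                   salt=None, info=b"lsdt-session").derive(shared)

# Symmetric phase: AES-GCM gives confidentiality + integrity per message.
aead = AESGCM(session_key)
nonce = os.urandom(12)
reading = b'{"sensor": "pressure-17", "value": 4.2}'
ct = aead.encrypt(nonce, reading, b"header")   # associated data is authenticated
assert aead.decrypt(nonce, ct, b"header") == reading
```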
Heart monitoring improves quality of life. Electrocardiograms (ECGs or EKGs) detect heart irregularities, and machine learning algorithms support several ECG diagnosis processing methods. The first method uses raw ECG and time-series data. The second method classifies the ECG by patient experience. The third technique translates ECG impulses into Q-wave, R-wave, and S-wave (QRS) features, using richer information. Because ECG signals vary naturally between humans and activities, we combine the three feature selection methods to improve classification accuracy and diagnosis; classifications using all three approaches together have not been examined until now. Several researchers have found that machine learning (ML) techniques can improve ECG classification. This study compares popular machine learning techniques for evaluating ECG features. Four algorithms are compared on categorization results: Support Vector Machine (SVM), Decision Tree, Naive Bayes, and Neural Network. SVM plus prior knowledge has the highest accuracy (99%) of the four ML methods, while QRS characteristics failed to identify signals without chaos theory. With 99.8% classification accuracy, the Decision Tree technique outperformed all previous experiments.
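A minimal sketch of the kind of side-by-side comparison the study describes, using scikit-learn with a synthetic stand-in for the engineered ECG/QRS features (which are not reproduced here):

```python
# Hedged sketch: comparing the four classifiers named in the study on a
# synthetic stand-in for engineered ECG features (scikit-learn).
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier
from sklearn.naive_bayes import GaussianNB
from sklearn.neural_network import MLPClassifier

X, y = make_classification(n_samples=1000, n_features=12, n_informative=8,
                           random_state=0)        # placeholder for QRS features

models = {
    "SVM": SVC(kernel="rbf", C=1.0),
    "Decision Tree": DecisionTreeClassifier(max_depth=8, random_state=0),
    "Naive Bayes": GaussianNB(),
    "Neural Network": MLPClassifier(hidden_layer_sizes=(64,), max_iter=500,
                                    random_state=0),
}
for name, model in models.items():
    acc = cross_val_score(model, X, y, cv=5, scoring="accuracy").mean()
    print(f"{name:>14}: {acc:.3f}")    # 5-fold mean accuracy per classifier
```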
Cloud computing environments, characterized by dynamic scaling, distributed architectures, and complex workloads, are increasingly targeted by malicious actors. These threats encompass unauthorized access, data breaches, denial-of-service attacks, and evolving malware variants. Traditional security solutions often struggle with the dynamic nature of cloud environments, highlighting the need for robust adaptive Cloud Intrusion Detection Systems (CIDS). Existing adaptive CIDS solutions, while offering improved detection capabilities, often face limitations such as reliance on approximations for change point detection, hindering their precision in identifying anomalies. This can lead to missed attacks or an abundance of false alarms, impacting overall security effectiveness. To address these challenges, we propose ACIDS-PELT, a novel adaptive CIDS framework that leverages the Pruned Exact Linear Time (PELT) algorithm and a Support Vector Machine (SVM) for enhanced accuracy and efficiency. ACIDS-PELT comprises four key components: (1) Feature selection: a hybrid harmony search algorithm with a symmetrical uncertainty filter (HSO-SU) identifies the most relevant features that effectively differentiate between normal and anomalous network traffic in the cloud environment. (2) Surveillance: the PELT algorithm detects change points within the network traffic data, enabling the identification of anomalies and potential security threats with improved precision compared to existing approaches. (3) Training set: labeled network traffic data is used to train the SVM classifier to distinguish between normal and anomalous behaviour patterns. (4) Testing set: the testing set evaluates ACIDS-PELT's performance by measuring its accuracy, precision, and recall in detecting security threats within the cloud environment. We evaluate the performance of ACIDS-PELT using the NSL-KDD benchmark dataset. The results demonstrate that ACIDS-PELT outperforms existing cloud intrusion detection techniques in terms of accuracy, precision, and recall. This superiority stems from its ability to overcome the approximation and imprecision limitations of change point detection while offering a more accurate and precise approach to detecting security threats in dynamic cloud environments.
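Exact PELT change point detection is available off the shelf; the sketch below applies it with the `ruptures` package to a synthetic stand-in for a network traffic statistic (the penalty value and signal are illustrative, not the paper's settings).

```python
# Hedged sketch: exact change point detection with PELT via `ruptures`,
# on a synthetic stand-in for a network traffic statistic.
import numpy as np
import ruptures as rpt

rng = np.random.default_rng(1)
normal = rng.normal(loc=10.0, scale=1.0, size=300)    # baseline traffic level
attack = rng.normal(loc=25.0, scale=3.0, size=100)    # injected anomaly segment
signal = np.concatenate([normal, attack, normal]).reshape(-1, 1)

algo = rpt.Pelt(model="rbf").fit(signal)   # "rbf" cost handles mean+variance shifts
breakpoints = algo.predict(pen=10)         # penalty controls sensitivity
print(breakpoints)                         # e.g. [300, 400, 700] -- segment ends
```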
Since the 1950s, when the Turing Test was introduced, there has been notable progress in machine language intelligence. Language modeling, crucial for AI development, has evolved from statistical to neural models over the last two decades. Recently, transformer-based Pre-trained Language Models (PLMs) have excelled in Natural Language Processing (NLP) tasks by leveraging large-scale training corpora. Increasing the scale of these models enhances performance significantly, introducing abilities like in-context learning that smaller models lack. The advancement in Large Language Models, exemplified by the development of ChatGPT, has made significant impacts both academically and industrially, capturing widespread societal interest. This survey provides an overview of the development and prospects from Large Language Models (LLMs) to Large Multimodal Models (LMMs). It first discusses the contributions and technological advancements of LLMs in the field of natural language processing, especially in text generation and language understanding. It then turns to LMMs, which integrate various data modalities such as text, images, and sound, demonstrating advanced capabilities in understanding and generating cross-modal content and paving new pathways for the adaptability and flexibility of AI systems. Finally, the survey highlights the prospects of LMMs in terms of technological development and application potential, while also pointing out challenges in data integration and cross-modal understanding accuracy, providing a comprehensive perspective on the latest developments in this field.
This study explores the impact of hyperparameter optimization on machine learning models for predicting cardiovascular disease using data from an IoST (Internet of Sensing Things) device. Ten distinct machine learning approaches were implemented and systematically evaluated before and after hyperparameter tuning. Significant improvements were observed across various models, with SVM and Neural Networks consistently showing enhanced performance metrics such as F1-score, recall, and precision. The study underscores the critical role of tailored hyperparameter tuning in optimizing these models, revealing diverse outcomes among algorithms. Decision Trees and Random Forests exhibited stable performance throughout the evaluation. While enhancing accuracy, hyperparameter optimization also led to increased execution time. Visual representations and comprehensive results support the findings, confirming the hypothesis that optimizing parameters can effectively enhance predictive capabilities for cardiovascular disease. This research contributes to advancing the understanding and application of machine learning in healthcare, particularly in improving predictive accuracy for cardiovascular disease management and intervention strategies.
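The before/after tuning comparison follows a standard pattern; the sketch below applies scikit-learn's GridSearchCV to an SVM on synthetic stand-in data (the grid and dataset are illustrative assumptions, not the study's setup).

```python
# Hedged sketch: measuring an SVM before and after hyperparameter tuning
# (scikit-learn); the grid and synthetic data are illustrative.
from sklearn.datasets import make_classification
from sklearn.model_selection import GridSearchCV, train_test_split
from sklearn.metrics import f1_score
from sklearn.svm import SVC

X, y = make_classification(n_samples=1500, n_features=13, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

baseline = SVC().fit(X_tr, y_tr)                      # default hyperparameters
print("before tuning F1:", f1_score(y_te, baseline.predict(X_te)).round(3))

grid = GridSearchCV(
    SVC(),
    param_grid={"C": [0.1, 1, 10, 100], "gamma": ["scale", 0.01, 0.001],
                "kernel": ["rbf", "poly"]},
    scoring="f1", cv=5, n_jobs=-1,
)
grid.fit(X_tr, y_tr)
print("best params:", grid.best_params_)
print("after tuning F1:", f1_score(y_te, grid.predict(X_te)).round(3))
```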
This work carried out a measurement study of the Ethereum peer-to-peer (P2P) network to gain a better understanding of the underlying nodes. Ethereum was chosen because it pioneered distributed applications, smart contracts, and Web3; moreover, its application layer language, Solidity, is widely used in smart contracts across different public and private blockchains. To this end, we wrote a new Ethereum client based on Geth to collect Ethereum node information. In addition, various web scrapers were written to collect nodes' historical data from the Internet Archive and the Wayback Machine project. The collected data was compared with two other services that harvest the number of Ethereum nodes; our method collected more than 30% more nodes than those services. The data was used to train a time-series neural network model to predict the number of online nodes in the future. Our findings show that fewer than 20% of nodes persist from day to day, indicating that most nodes in the network change frequently, which raises a question about the stability of the network. Furthermore, historical data shows that the top ten countries hosting Ethereum clients have not changed since 2016. The prevailing operating system of the underlying nodes has shifted from Windows to Linux over time, increasing node security. The results also show that the number of Middle East and North Africa (MENA) Ethereum nodes is negligible compared with nodes recorded from other regions, which opens the door for developing new mechanisms to encourage users from these regions to contribute to this technology. Finally, the trained model demonstrated an accuracy of 92% in predicting the future number of nodes in the Ethereum network.
The proliferation of technology, coupled with networking growth, has catapulted cybersecurity to the forefront of modern security concerns. In this landscape, the precise detection of cyberattacks and anomalies within networks is crucial, necessitating the development of efficient intrusion detection systems (IDS). This article introduces a framework utilizing the fusion of fuzzy sets with support vector machines (SVM), named FSVM. The core strategy of FSVM lies in calculating the significance of network features to determine their relative importance. Features with minimal significance are prudently disregarded, a method akin to feature selection. This process not only curtails the computational burden of the classification algorithm but also preserves high accuracy levels. To ascertain the efficacy of the FSVM model, we employed a publicly available dataset from Kaggle, which encompasses two distinct decision labels. Our evaluation methodology involves a comprehensive comparison of the classification accuracy of the processed dataset against four contemporary models in the field, with key performance metric scores calculated for each model. The comparative analysis reveals that the FSVM model demonstrates a marked superiority over its counterparts, enhancing classification accuracy by at least 3%. These findings underscore the FSVM model's robustness and reliability, positioning it as a highly effective tool in the realm of cybersecurity.
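The core FSVM idea, scoring feature significance and discarding low-scoring features before an SVM, can be sketched as follows. Mutual information is used here as a stand-in significance measure, since the abstract does not specify the fuzzy scoring function.

```python
# Hedged sketch: rank features by a significance score, drop the weakest,
# then classify with an SVM. Mutual information stands in for the paper's
# fuzzy significance measure, which the abstract does not specify.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.feature_selection import mutual_info_classif
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

X, y = make_classification(n_samples=2000, n_features=30, n_informative=10,
                           random_state=0)     # synthetic network-traffic features

scores = mutual_info_classif(X, y, random_state=0)  # per-feature significance
keep = scores >= np.quantile(scores, 0.5)           # discard the weakest half
print(f"kept {keep.sum()} of {len(scores)} features")

for label, data in [("all features", X), ("selected", X[:, keep])]:
    acc = cross_val_score(SVC(), data, y, cv=5).mean()
    print(f"{label:>12}: accuracy {acc:.3f}")   # similar accuracy, fewer features
```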
With the prevalence of Internet of Things (IoT) systems, smart cities comprise complex networks including sensors, actuators, appliances, and cyber services. The complexity and heterogeneity of smart cities have made them vulnerable to sophisticated cyber-attacks, especially privacy-related attacks such as inference and data poisoning. Federated Learning (FL) has been regarded as a promising method to enable distributed learning with privacy-preserved intelligence in IoT applications. Even though developing privacy-preserving FL has drawn great research interest, current research concentrates mainly on FL with independent identically distributed (i.i.d.) data, and few studies have addressed the non-i.i.d. setting. FL is known to be vulnerable to Generative Adversarial Network (GAN) attacks, where an adversary can pose as a contributor participating in the training process to acquire the private data of other contributors. This paper proposes an innovative Privacy Protection-based Federated Deep Learning (PP-FDL) framework, which accomplishes data protection against privacy-related GAN attacks along with high classification rates on non-i.i.d. data. PP-FDL is designed to enable fog nodes to cooperate in training the FDL model in a way that ensures contributors have no access to each other's data, where class probabilities are protected using a private identifier generated for each class. The PP-FDL framework is evaluated for image classification using simple convolutional networks trained on the MNIST and CIFAR-10 datasets. The empirical results reveal that PP-FDL achieves data protection and outperforms the other three state-of-the-art models with accuracy improvements of 3%-8%.
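The cooperative training loop underlying such frameworks typically follows federated averaging, in which only model weights leave each node. A minimal numpy sketch of that aggregation step (without the paper's privacy protections, which are not specified in the abstract) is shown below.

```python
# Hedged sketch: the federated averaging (FedAvg) aggregation step that
# underlies FL frameworks -- each node trains locally, only model weights
# are shared, and the server averages them weighted by local data size.
import numpy as np

rng = np.random.default_rng(0)

def local_update(weights, X, y, lr=0.1, epochs=5):
    """One node's local training: logistic regression via gradient descent."""
    w = weights.copy()
    for _ in range(epochs):
        p = 1.0 / (1.0 + np.exp(-X @ w))         # sigmoid predictions
        w -= lr * X.T @ (p - y) / len(y)         # gradient step; data never leaves
    return w

# Three nodes with differently sized (non-identical) local datasets.
nodes = [(rng.normal(size=(n, 4)), rng.integers(0, 2, n).astype(float))
         for n in (200, 500, 300)]

global_w = np.zeros(4)
for _ in range(10):                              # communication rounds
    updates = [local_update(global_w, X, y) for X, y in nodes]
    sizes = np.array([len(y) for _, y in nodes], dtype=float)
    global_w = np.average(updates, axis=0, weights=sizes)  # FedAvg

print("global weights after 10 rounds:", global_w.round(3))
```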
The primary concern of modern technology is cyber attacks targeting the Internet of Things (IoT), as it is one of the most widely used networks today and is vulnerable to attacks. Modern cyber attacks pose real-time threats to IoT networks: devices can be monitored or isolated from service, affecting users in one way or another. Securing IoT networks is therefore important, and it requires modern technologies and methods, together with real and up-to-date data, to design and train systems that keep pace with the techniques attackers use. One of the most common types of attacks against IoT devices is the Distributed Denial-of-Service (DDoS) attack. Our paper makes a unique contribution that differs from existing studies: we use recent data that contains real traffic and real attacks on IoT networks, a hybrid method for selecting relevant features, and carefully chosen, highly efficient algorithms, which together give the model a high ability to detect DDoS attacks. The proposed model is based on a two-stage process: selecting essential features, and constructing a detection model using the K-neighbors algorithm with two classifier algorithms, logistic regression and the Stochastic Gradient Descent (SGD) classifier, combining these classifiers through ensemble machine learning (stacking) and optimizing parameters through Grid Search-CV to enhance system accuracy. Experiments were conducted to evaluate the effectiveness of the proposed model using the CIC-IoT2023 and CIC-DDoS2019 datasets. Performance evaluation demonstrated the potential of our model for robust intrusion detection in IoT networks, achieving an accuracy of 99.965% and a detection time of 0.20 s on the CIC-IoT2023 dataset, and 99.968% accuracy with a detection time of 0.23 s on the CIC-DDoS2019 dataset. Furthermore, a comparative analysis with recent related works highlighted the superiority of our methodology in intrusion detection, showing improvements in accuracy, recall, and detection time.
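A minimal sketch of the described design, KNN, logistic regression, and an SGD classifier combined via stacking and tuned with GridSearchCV, is shown below using scikit-learn; the parameter grid and synthetic data are illustrative.

```python
# Hedged sketch: stacking KNN, logistic regression, and an SGD classifier,
# with GridSearchCV over a small illustrative parameter grid (scikit-learn).
from sklearn.datasets import make_classification
from sklearn.ensemble import StackingClassifier
from sklearn.linear_model import LogisticRegression, SGDClassifier
from sklearn.model_selection import GridSearchCV, train_test_split
from sklearn.neighbors import KNeighborsClassifier

X, y = make_classification(n_samples=3000, n_features=20, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)

stack = StackingClassifier(
    estimators=[
        ("knn", KNeighborsClassifier()),
        ("lr", LogisticRegression(max_iter=1000)),
        ("sgd", SGDClassifier(loss="log_loss", random_state=0)),
    ],
    final_estimator=LogisticRegression(),   # meta-learner over base predictions
    cv=5,
)
grid = GridSearchCV(
    stack,
    param_grid={"knn__n_neighbors": [3, 5, 7], "sgd__alpha": [1e-4, 1e-3]},
    cv=3, n_jobs=-1,
)
grid.fit(X_tr, y_tr)
print("best params:", grid.best_params_)
print("test accuracy:", grid.score(X_te, y_te))
```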
Pneumonia is an illness that causes inflammation in the lungs. Since there is so much information available from various X-ray images, diagnosing pneumonia has typically proven challenging, and numerous approaches have been devised to improve image quality and speed up the diagnosis. The convolutional neural network (CNN) has achieved outstanding success in identifying and diagnosing diseases in medicine and radiology; however, existing methods are complex, inefficient, and imprecise when analyzing large numbers of datasets. In this paper, a new hybrid method for the automatic classification and identification of pneumonia from chest X-ray images is proposed. The proposed method (ABO-CNN) utilizes the African Buffalo Optimization (ABO) algorithm to enhance CNN performance and accuracy. The Weinmed filter is employed for pre-processing to eliminate unwanted noise from chest X-ray images, followed by feature extraction using the Grey Level Co-occurrence Matrix (GLCM) approach. Relevant features are then selected from the dataset using the ABO algorithm, and finally, high-performance deep learning using the CNN approach is introduced for the classification and identification of pneumonia. Experimental results on various datasets show that the ABO-CNN outperforms other approaches on the classification tasks, achieving 96.95%, 88%, 86%, and 86% for accuracy, precision, recall, and F1-score, respectively.
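GLCM texture features of the kind this pipeline extracts can be computed with scikit-image; a minimal sketch on a synthetic grayscale patch standing in for a preprocessed chest X-ray follows.

```python
# Hedged sketch: GLCM texture features with scikit-image, on a synthetic
# grayscale patch standing in for a preprocessed chest X-ray.
import numpy as np
from skimage.feature import graycomatrix, graycoprops  # skimage >= 0.19

rng = np.random.default_rng(0)
patch = rng.integers(0, 256, size=(64, 64), dtype=np.uint8)  # fake X-ray patch

# Co-occurrence of gray levels at distance 1, in four directions.
glcm = graycomatrix(patch, distances=[1],
                    angles=[0, np.pi / 4, np.pi / 2, 3 * np.pi / 4],
                    levels=256, symmetric=True, normed=True)

features = {prop: graycoprops(glcm, prop).mean()        # average over angles
            for prop in ("contrast", "homogeneity", "energy", "correlation")}
print(features)   # feature vector fed to the selection / classification stages
```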
Skin cancer diagnosis is difficult due to variability in lesion presentation, and conventional methods struggle to manually extract features and capture lesions' spatial and temporal variations. This study introduces a deep learning-based Convolutional and Recurrent Neural Network (CNN-RNN) model with a ResNet-50 architecture used as the feature extractor to enhance skin cancer classification. Leveraging synergistic spatial feature extraction and temporal sequence learning, the model demonstrates robust performance on a dataset of 9,000 skin lesion photos spanning nine cancer types. Using a pre-trained ResNet-50 for spatial feature extraction and Long Short-Term Memory (LSTM) for temporal dependencies, the model achieves a high average recognition accuracy, surpassing previous methods. The comprehensive evaluation, including accuracy, precision, recall, and F1-score, underscores the model's competence in categorizing skin cancer types. This research contributes a sophisticated model and valuable guidance for deep learning-based diagnostics; the model excels at handling spatial and temporal complexities, offering a sophisticated solution for dermatological diagnostics research.
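The abstract does not specify how ResNet-50 features reach the LSTM; one plausible wiring, reading the 7x7 grid of ResNet-50 features as a 49-step sequence, is sketched below in Keras. The layer sizes and classification head are assumptions, not the paper's architecture.

```python
# Hedged sketch: ResNet-50 spatial features fed to an LSTM by reading the
# 7x7 feature grid as a 49-step sequence. Sizes are assumptions.
from tensorflow.keras import layers, Model
from tensorflow.keras.applications import ResNet50

base = ResNet50(include_top=False, weights="imagenet",
                input_shape=(224, 224, 3))
base.trainable = False                      # use pre-trained spatial features

x = layers.Reshape((49, 2048))(base.output) # 7*7 spatial positions -> sequence
x = layers.LSTM(128)(x)                     # sequential dependencies over the grid
x = layers.Dropout(0.3)(x)
out = layers.Dense(9, activation="softmax")(x)   # nine lesion classes

model = Model(base.input, out)
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.summary()
# model.fit(train_images, train_labels, epochs=..., validation_data=...)
```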
This study investigates the application of deep learning, ensemble learning, metaheuristic optimization, and image processing techniques for detecting lung and colon cancers, aiming to enhance treatment efficacy and improve survival rates. We introduce a metaheuristic-driven two-stage ensemble deep learning model for efficient lung/colon cancer classification. The diagnosis of lung and colon cancers is attempted using several unique indicators: different versions of deep convolutional neural networks (CNNs) are used for feature extraction and model construction, and various machine learning (ML) algorithms are used for final classification. Specifically, we consider different scenarios consisting of two-class colon cancer, three-class lung cancer, and five-class combined lung/colon cancer, conducting feature extraction using four CNNs. These extracted features are then integrated to create a comprehensive feature set. In the next step, feature selection is optimized using a metaheuristic algorithm based on Electric Eel Foraging Optimization (EEFO). This optimized feature subset is subsequently employed in various ML algorithms to determine the most effective ones through a rigorous evaluation process. The top-performing algorithms are refined using the High-Performance Filter (HPF) and integrated into an ensemble learning framework employing weighted averaging. Our findings indicate that the proposed ensemble learning model significantly surpasses existing methods in classification accuracy across all datasets, achieving accuracies of 99.85% for the two-class, 98.70% for the three-class, and 98.96% for the five-class datasets.
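The final weighted-averaging stage can be sketched independently of the upstream CNNs; below, each model's validation accuracy serves as its weight over class-probability outputs, an assumed rule since the abstract does not give the exact weighting.

```python
# Hedged sketch: weighted-average ensembling of per-model class probabilities,
# with weights taken from each model's validation accuracy (an assumption --
# the paper's exact weighting rule is not given in the abstract).
import numpy as np

# Illustrative validation accuracies of the top-performing models.
val_acc = np.array([0.981, 0.975, 0.968])
weights = val_acc / val_acc.sum()            # normalize to sum to 1

# Illustrative probability outputs of three models for 2 samples x 5 classes.
probs = np.array([
    [[0.70, 0.10, 0.10, 0.05, 0.05], [0.05, 0.80, 0.05, 0.05, 0.05]],
    [[0.60, 0.20, 0.10, 0.05, 0.05], [0.10, 0.70, 0.10, 0.05, 0.05]],
    [[0.55, 0.15, 0.20, 0.05, 0.05], [0.05, 0.75, 0.10, 0.05, 0.05]],
])  # shape: (models, samples, classes)

ensemble = np.tensordot(weights, probs, axes=1)   # weighted sum over models
print("fused probabilities:\n", ensemble.round(3))
print("predicted classes:", ensemble.argmax(axis=1))   # -> [0 1]
```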
基金Author extends his appreciation to the Deanship of Scientific Research at Imam Mohammad Ibn Saud Islamic University for funding and supporting this work through Graduate Student Research Support Program.
文摘Improving the quality assurance (QA) processes and acquiring accreditation are top priorities for academic programs. The learning outcomes (LOs)assessment and continuous quality improvement represent core components ofthe quality assurance system (QAS). Current assessment methods suffer deficiencies related to accuracy and reliability, and they lack well-organized processes forcontinuous improvement planning. Moreover, the absence of automation, andintegration in QA processes forms a major obstacle towards developing efficientquality system. There is a pressing need to adopt security protocols that providerequired security services to safeguard the valuable information processed byQAS as well. This research proposes an effective methodology for LOs assessment and continuous improvement processes. The proposed approach ensuresmore accurate and reliable LOs assessment results and provides systematic wayfor utilizing those results in the continuous quality improvement. This systematicand well-specified QA processes were then utilized to model and implement automated and secure QAS that efficiently performs quality-related processes. Theproposed system adopts two security protocols that provide confidentiality, integrity, and authentication for quality data and reports. The security protocols avoidthe source repudiation, which is important in the quality reporting system. This isachieved through implementing powerful cryptographic algorithms. The QASenables efficient data collection and processing required for analysis and interpretation. It also prepares for the development of datasets that can be used in futureartificial intelligence (AI) researches to support decision making and improve thequality of academic programs. The proposed approach is implemented in a successful real case study for a computer science program. The current study servesscientific programs struggling to achieve academic accreditation, and gives rise tofully automating and integrating the QA processes and adopting modern AI andsecurity technologies to develop effective QAS.
基金supported by the Researchers Supporting Project (No.RSP-2021/395),King Saud University,Riyadh,Saudi Arabia.
文摘A deep fusion model is proposed for facial expression-based human-computer Interaction system.Initially,image preprocessing,i.e.,the extraction of the facial region from the input image is utilized.Thereafter,the extraction of more discriminative and distinctive deep learning features is achieved using extracted facial regions.To prevent overfitting,in-depth features of facial images are extracted and assigned to the proposed convolutional neural network(CNN)models.Various CNN models are then trained.Finally,the performance of each CNN model is fused to obtain the final decision for the seven basic classes of facial expressions,i.e.,fear,disgust,anger,surprise,sadness,happiness,neutral.For experimental purposes,three benchmark datasets,i.e.,SFEW,CK+,and KDEF are utilized.The performance of the proposed systemis compared with some state-of-the-artmethods concerning each dataset.Extensive performance analysis reveals that the proposed system outperforms the competitive methods in terms of various performance metrics.Finally,the proposed deep fusion model is being utilized to control a music player using the recognized emotions of the users.
基金Authors would like to acknowledge the support of the Deputy for Research and Innovation-Ministry of Education,Kingdom of Saudi Arabia for funding this research through a project(NU/IFC/ENT/01/014)under the institutional funding committee at Najran University,Kingdom of Saudi Arabia.
文摘Assistive devices for disabled people with the help of Brain-Computer Interaction(BCI)technology are becoming vital bio-medical engineering.People with physical disabilities need some assistive devices to perform their daily tasks.In these devices,higher latency factors need to be addressed appropriately.Therefore,the main goal of this research is to implement a real-time BCI architecture with minimum latency for command actuation.The proposed architecture is capable to communicate between different modules of the system by adopting an automotive,intelligent data processing and classification approach.Neuro-sky mind wave device has been used to transfer the data to our implemented server for command propulsion.Think-Net Convolutional Neural Network(TN-CNN)architecture has been proposed to recognize the brain signals and classify them into six primary mental states for data classification.Data collection and processing are the responsibility of the central integrated server for system load minimization.Testing of implemented architecture and deep learning model shows excellent results.The proposed system integrity level was the minimum data loss and the accurate commands processing mechanism.The training and testing results are 99%and 93%for custom model implementation based on TN-CNN.The proposed real-time architecture is capable of intelligent data processing unit with fewer errors,and it will benefit assistive devices working on the local server and cloud server.
文摘Typically,a computer has infectivity as soon as it is infected.It is a reality that no antivirus programming can identify and eliminate all kinds of viruses,suggesting that infections would persevere on the Internet.To understand the dynamics of the virus propagation in a better way,a computer virus spread model with fuzzy parameters is presented in this work.It is assumed that all infected computers do not have the same contribution to the virus transmission process and each computer has a different degree of infectivity,which depends on the quantity of virus.Considering this,the parametersβandγbeing functions of the computer virus load,are considered fuzzy numbers.Using fuzzy theory helps us understand the spread of computer viruses more realistically as these parameters have fixed values in classical models.The essential features of the model,like reproduction number and equilibrium analysis,are discussed in fuzzy senses.Moreover,with fuzziness,two numerical methods,the forward Euler technique,and a nonstandard finite difference(NSFD)scheme,respectively,are developed and analyzed.In the evidence of the numerical simulations,the proposed NSFD method preserves the main features of the dynamic system.It can be considered a reliable tool to predict such types of solutions.
文摘Blockchain technology has garnered significant attention from global organizations and researchers due to its potential as a solution for centralized system challenges.Concurrently,the Internet of Things(IoT)has revolutionized the Fourth Industrial Revolution by enabling interconnected devices to offer innovative services,ultimately enhancing human lives.This paper presents a new approach utilizing lightweight blockchain technology,effectively reducing the computational burden typically associated with conventional blockchain systems.By integrating this lightweight blockchain with IoT systems,substantial reductions in implementation time and computational complexity can be achieved.Moreover,the paper proposes the utilization of the Okamoto Uchiyama encryption algorithm,renowned for its homomorphic characteristics,to reinforce the privacy and security of IoT-generated data.The integration of homomorphic encryption and blockchain technology establishes a secure and decentralized platformfor storing and analyzing sensitive data of the supply chain data.This platformfacilitates the development of some business models and empowers decentralized applications to perform computations on encrypted data while maintaining data privacy.The results validate the robust security of the proposed system,comparable to standard blockchain implementations,leveraging the distinctive homomorphic attributes of the Okamoto Uchiyama algorithm and the lightweight blockchain paradigm.
文摘Handwritten character recognition(HCR)involves identifying characters in images,documents,and various sources such as forms surveys,questionnaires,and signatures,and transforming them into a machine-readable format for subsequent processing.Successfully recognizing complex and intricately shaped handwritten characters remains a significant obstacle.The use of convolutional neural network(CNN)in recent developments has notably advanced HCR,leveraging the ability to extract discriminative features from extensive sets of raw data.Because of the absence of pre-existing datasets in the Kurdish language,we created a Kurdish handwritten dataset called(KurdSet).The dataset consists of Kurdish characters,digits,texts,and symbols.The dataset consists of 1560 participants and contains 45,240 characters.In this study,we chose characters only from our dataset.We utilized a Kurdish dataset for handwritten character recognition.The study also utilizes various models,including InceptionV3,Xception,DenseNet121,and a customCNNmodel.To show the performance of the KurdSet dataset,we compared it to Arabic handwritten character recognition dataset(AHCD).We applied the models to both datasets to show the performance of our dataset.Additionally,the performance of the models is evaluated using test accuracy,which measures the percentage of correctly classified characters in the evaluation phase.All models performed well in the training phase,DenseNet121 exhibited the highest accuracy among the models,achieving a high accuracy of 99.80%on the Kurdish dataset.And Xception model achieved 98.66%using the Arabic dataset.
文摘Fraud of credit cards is a major issue for financial organizations and individuals.As fraudulent actions become more complex,a demand for better fraud detection systems is rising.Deep learning approaches have shown promise in several fields,including detecting credit card fraud.However,the efficacy of these models is heavily dependent on the careful selection of appropriate hyperparameters.This paper introduces models that integrate deep learning models with hyperparameter tuning techniques to learn the patterns and relationships within credit card transaction data,thereby improving fraud detection.Three deep learning models:AutoEncoder(AE),Convolution Neural Network(CNN),and Long Short-Term Memory(LSTM)are proposed to investigate how hyperparameter adjustment impacts the efficacy of deep learning models used to identify credit card fraud.The experiments conducted on a European credit card fraud dataset using different hyperparameters and three deep learning models demonstrate that the proposed models achieve a tradeoff between detection rate and precision,leading these models to be effective in accurately predicting credit card fraud.The results demonstrate that LSTM significantly outperformed AE and CNN in terms of accuracy(99.2%),detection rate(93.3%),and area under the curve(96.3%).These proposed models have surpassed those of existing studies and are expected to make a significant contribution to the field of credit card fraud detection.
文摘The Internet of Things(IoT)is a growing technology that allows the sharing of data with other devices across wireless networks.Specifically,IoT systems are vulnerable to cyberattacks due to its opennes The proposed work intends to implement a new security framework for detecting the most specific and harmful intrusions in IoT networks.In this framework,a Covariance Linear Learning Embedding Selection(CL2ES)methodology is used at first to extract the features highly associated with the IoT intrusions.Then,the Kernel Distributed Bayes Classifier(KDBC)is created to forecast attacks based on the probability distribution value precisely.In addition,a unique Mongolian Gazellas Optimization(MGO)algorithm is used to optimize the weight value for the learning of the classifier.The effectiveness of the proposed CL2ES-KDBC framework has been assessed using several IoT cyber-attack datasets,The obtained results are then compared with current classification methods regarding accuracy(97%),precision(96.5%),and other factors.Computational analysis of the CL2ES-KDBC system on IoT intrusion datasets is performed,which provides valuable insight into its performance,efficiency,and suitability for securing IoT networks.
基金support of the Interdisciplinary Research Center for Intelligent Secure Systems(IRC-ISS)Internal Fund Grant#INSS2202.
文摘The use of the Internet of Things(IoT)is expanding at an unprecedented scale in many critical applications due to the ability to interconnect and utilize a plethora of wide range of devices.In critical infrastructure domains like oil and gas supply,intelligent transportation,power grids,and autonomous agriculture,it is essential to guarantee the confidentiality,integrity,and authenticity of data collected and exchanged.However,the limited resources coupled with the heterogeneity of IoT devices make it inefficient or sometimes infeasible to achieve secure data transmission using traditional cryptographic techniques.Consequently,designing a lightweight secure data transmission scheme is becoming essential.In this article,we propose lightweight secure data transmission(LSDT)scheme for IoT environments.LSDT consists of three phases and utilizes an effective combination of symmetric keys and the Elliptic Curve Menezes-Qu-Vanstone asymmetric key agreement protocol.We design the simulation environment and experiments to evaluate the performance of the LSDT scheme in terms of communication and computation costs.Security and performance analysis indicates that the LSDT scheme is secure,suitable for IoT applications,and performs better in comparison to other related security schemes.
基金The authors extend their appreciation to the Deanship of Scientific Research at King Khalid University for funding this work through Large Groups(Grant Number RGP.2/246/44),B.B.,and https://www.kku.edu.sa/en.
文摘Heart monitoring improves life quality.Electrocardiograms(ECGs or EKGs)detect heart irregularities.Machine learning algorithms can create a few ECG diagnosis processing methods.The first method uses raw ECG and time-series data.The second method classifies the ECG by patient experience.The third technique translates ECG impulses into Q waves,R waves and S waves(QRS)features using richer information.Because ECG signals vary naturally between humans and activities,we will combine the three feature selection methods to improve classification accuracy and diagnosis.Classifications using all three approaches have not been examined till now.Several researchers found that Machine Learning(ML)techniques can improve ECG classification.This study will compare popular machine learning techniques to evaluate ECG features.Four algorithms—Support Vector Machine(SVM),Decision Tree,Naive Bayes,and Neural Network—compare categorization results.SVM plus prior knowledge has the highest accuracy(99%)of the four ML methods.QRS characteristics failed to identify signals without chaos theory.With 99.8%classification accuracy,the Decision Tree technique outperformed all previous experiments.
基金funded by the Deanship of Scientific Research at Imam Mohammad Ibn Saud Islamic University(IMSIU)through Research Partnership Program No.RP-21-07-09.
文摘Cloud computing environments,characterized by dynamic scaling,distributed architectures,and complex work-loads,are increasingly targeted by malicious actors.These threats encompass unauthorized access,data breaches,denial-of-service attacks,and evolving malware variants.Traditional security solutions often struggle with the dynamic nature of cloud environments,highlighting the need for robust Adaptive Cloud Intrusion Detection Systems(CIDS).Existing adaptive CIDS solutions,while offering improved detection capabilities,often face limitations such as reliance on approximations for change point detection,hindering their precision in identifying anomalies.This can lead to missed attacks or an abundance of false alarms,impacting overall security effectiveness.To address these challenges,we propose ACIDS(Adaptive Cloud Intrusion Detection System)-PELT.This novel Adaptive CIDS framework leverages the Pruned Exact Linear Time(PELT)algorithm and a Support Vector Machine(SVM)for enhanced accuracy and efficiency.ACIDS-PELT comprises four key components:(1)Feature Selection:Utilizing a hybrid harmony search algorithm and the symmetrical uncertainty filter(HSO-SU)to identify the most relevant features that effectively differentiate between normal and anomalous network traffic in the cloud environment.(2)Surveillance:Employing the PELT algorithm to detect change points within the network traffic data,enabling the identification of anomalies and potential security threats with improved precision compared to existing approaches.(3)Training Set:Labeled network traffic data forms the training set used to train the SVM classifier to distinguish between normal and anomalous behaviour patterns.(4)Testing Set:The testing set evaluates ACIDS-PELT’s performance by measuring its accuracy,precision,and recall in detecting security threats within the cloud environment.We evaluate the performance of ACIDS-PELT using the NSL-KDD benchmark dataset.The results demonstrate that ACIDS-PELT outperforms existing cloud intrusion detection techniques in terms of accuracy,precision,and recall.This superiority stems from ACIDS-PELT’s ability to overcome limitations associated with approximation and imprecision in change point detection while offering a more accurate and precise approach to detecting security threats in dynamic cloud environments.
基金We acknowledge funding from NSFC Grant 62306283.
文摘Since the 1950s,when the Turing Test was introduced,there has been notable progress in machine language intelligence.Language modeling,crucial for AI development,has evolved from statistical to neural models over the last two decades.Recently,transformer-based Pre-trained Language Models(PLM)have excelled in Natural Language Processing(NLP)tasks by leveraging large-scale training corpora.Increasing the scale of these models enhances performance significantly,introducing abilities like context learning that smaller models lack.The advancement in Large Language Models,exemplified by the development of ChatGPT,has made significant impacts both academically and industrially,capturing widespread societal interest.This survey provides an overview of the development and prospects from Large Language Models(LLM)to Large Multimodal Models(LMM).It first discusses the contributions and technological advancements of LLMs in the field of natural language processing,especially in text generation and language understanding.Then,it turns to the discussion of LMMs,which integrates various data modalities such as text,images,and sound,demonstrating advanced capabilities in understanding and generating cross-modal content,paving new pathways for the adaptability and flexibility of AI systems.Finally,the survey highlights the prospects of LMMs in terms of technological development and application potential,while also pointing out challenges in data integration,cross-modal understanding accuracy,providing a comprehensive perspective on the latest developments in this field.
基金supported and funded by the Deanship of Scientific Research at Imam Mohammad Ibn Saud Islamic University(IMSIU),Grant Number IMSIU-RG23151.
文摘This study explores the impact of hyperparameter optimization on machine learning models for predicting cardiovascular disease using data from an IoST(Internet of Sensing Things)device.Ten distinct machine learning approaches were implemented and systematically evaluated before and after hyperparameter tuning.Significant improvements were observed across various models,with SVM and Neural Networks consistently showing enhanced performance metrics such as F1-Score,recall,and precision.The study underscores the critical role of tailored hyperparameter tuning in optimizing these models,revealing diverse outcomes among algorithms.Decision Trees and Random Forests exhibited stable performance throughout the evaluation.While enhancing accuracy,hyperparameter optimization also led to increased execution time.Visual representations and comprehensive results support the findings,confirming the hypothesis that optimizing parameters can effectively enhance predictive capabilities in cardiovascular disease.This research contributes to advancing the understanding and application of machine learning in healthcare,particularly in improving predictive accuracy for cardiovascular disease management and intervention strategies.
基金the Arab Open University for Funding this work through AOU Research Fund No.(AOURG-2023-006).
文摘This work carried out a measurement study of the Ethereum Peer-to-Peer(P2P)network to gain a better understanding of the underlying nodes.Ethereum was applied because it pioneered distributed applications,smart contracts,and Web3.Moreover,its application layer language“Solidity”is widely used in smart contracts across different public and private blockchains.To this end,we wrote a new Ethereum client based on Geth to collect Ethereum node information.Moreover,various web scrapers have been written to collect nodes’historical data fromthe Internet Archive and the Wayback Machine project.The collected data has been compared with two other services that harvest the number of Ethereumnodes.Ourmethod has collectedmore than 30% more than the other services.The data trained a neural network model regarding time series to predict the number of online nodes in the future.Our findings show that there are less than 20% of the same nodes daily,indicating thatmost nodes in the network change frequently.It poses a question of the stability of the network.Furthermore,historical data shows that the top ten countries with Ethereum clients have not changed since 2016.The popular operating system of the underlying nodes has shifted from Windows to Linux over time,increasing node security.The results have also shown that the number of Middle East and North Africa(MENA)Ethereum nodes is neglected compared with nodes recorded from other regions.It opens the door for developing new mechanisms to encourage users from these regions to contribute to this technology.Finally,the model has been trained and demonstrated an accuracy of 92% in predicting the future number of nodes in the Ethereum network.
Abstract: The proliferation of technology, coupled with networking growth, has catapulted cybersecurity to the forefront of modern security concerns. In this landscape, the precise detection of cyberattacks and anomalies within networks is crucial, necessitating the development of efficient intrusion detection systems (IDS). This article introduces a framework, named FSVM, that fuses fuzzy sets with support vector machines (SVM). The core strategy of FSVM lies in calculating the significance of network features to determine their relative importance; features with minimal significance are prudently disregarded, a method akin to feature selection. This process not only curtails the computational burden of the classification algorithm but also preserves high accuracy levels. To ascertain the efficacy of the FSVM model, we employed a publicly available dataset from Kaggle that encompasses two distinct decision labels. Our evaluation methodology involves a comprehensive comparison of the classification accuracy of the processed dataset against four contemporary models in the field, with key performance metric scores calculated for each model. The comparative analysis reveals that the FSVM model demonstrates a marked superiority over its counterparts, enhancing classification accuracy by at least 3%. These findings underscore the FSVM model's robustness and reliability, positioning it as a highly effective tool in the realm of cybersecurity.
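The following minimal sketch illustrates the FSVM idea of scoring feature significance, discarding low-significance features, and training an SVM on the remainder; mutual information is used as a stand-in for the paper's fuzzy significance measure, which is an assumption on our part.

```python
# Significance-based feature pruning followed by SVM classification.
# Mutual information stands in for the paper's fuzzy significance score.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.feature_selection import mutual_info_classif
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

X, y = make_classification(n_samples=2000, n_features=30,
                           n_informative=8, random_state=0)

significance = mutual_info_classif(X, y, random_state=0)
keep = significance > np.quantile(significance, 0.5)  # drop the least significant half
X_reduced = X[:, keep]

print("Features kept:", int(keep.sum()), "of", X.shape[1])
print("CV accuracy:", cross_val_score(SVC(), X_reduced, y, cv=5).mean().round(3))
```

Halving the feature set this way shrinks the SVM's training cost while, on informative features, keeping accuracy close to the full-feature baseline, which mirrors the trade-off the abstract describes.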
Abstract: With the prevalence of Internet of Things (IoT) systems, smart cities comprise complex networks including sensors, actuators, appliances, and cyber services. The complexity and heterogeneity of smart cities have made them vulnerable to sophisticated cyber-attacks, especially privacy-related attacks such as inference and data-poisoning attacks. Federated Learning (FL) has been regarded as a promising method to enable distributed learning with privacy-preserved intelligence in IoT applications. Although developing privacy-preserving FL has drawn great research interest, current research concentrates on FL with independent identically distributed (i.i.d.) data, and few studies have addressed the non-i.i.d. setting. FL is known to be vulnerable to Generative Adversarial Network (GAN) attacks, in which an adversary can pose as a contributor participating in the training process to acquire the private data of other contributors. This paper proposes an innovative Privacy Protection-based Federated Deep Learning (PP-FDL) framework, which accomplishes data protection against privacy-related GAN attacks along with high classification rates on non-i.i.d. data. PP-FDL is designed to enable fog nodes to cooperate in training the FDL model in a way that ensures contributors have no access to each other's data, where class probabilities are protected using a private identifier generated for each class. The PP-FDL framework is evaluated for image classification using simple convolutional networks trained on the MNIST and CIFAR-10 datasets. The empirical results reveal that PP-FDL achieves data protection and outperforms the other three state-of-the-art models with accuracy improvements of 3%–8%.
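For intuition, the toy sketch below shows the federated averaging step that underlies frameworks of this kind: each node trains locally and only model weights, never raw data, are shared. The per-class private identifiers and GAN-attack defenses specific to PP-FDL are omitted, so this is a generic illustration, not the paper's protocol.

```python
# Toy federated averaging: local training on private data, weight-only sharing.
# This is generic FedAvg, not the PP-FDL protocol itself.
import numpy as np

def local_update(weights, X, y, lr=0.1):
    """One logistic-regression gradient step on a node's private data."""
    preds = 1.0 / (1.0 + np.exp(-X @ weights))
    grad = X.T @ (preds - y) / len(y)
    return weights - lr * grad

def fed_avg(client_weights, client_sizes):
    """Aggregate client models weighted by local dataset size."""
    total = sum(client_sizes)
    return sum(w * (n / total) for w, n in zip(client_weights, client_sizes))

rng = np.random.default_rng(0)
global_w = np.zeros(10)
# Three fog nodes, each holding private (X, y) that never leaves the node.
clients = [(rng.normal(size=(100, 10)), rng.integers(0, 2, 100)) for _ in range(3)]

for _ in range(20):  # communication rounds
    updates = [local_update(global_w.copy(), X, y) for X, y in clients]
    global_w = fed_avg(updates, [len(y) for _, y in clients])

print("Global weight norm after training:", np.linalg.norm(global_w).round(3))
```

The privacy question the paper addresses is precisely what this sketch leaves exposed: shared weight updates can still leak information to a GAN-equipped adversary, motivating the protected class probabilities in PP-FDL.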
Abstract: A primary concern of modern technology is cyber attacks targeting the Internet of Things (IoT), one of the most widely used networks today and one that remains vulnerable to attack. Modern cyber attacks pose real-time threats to IoT networks: devices can be monitored or cut off from service, affecting users in various ways. Securing IoT networks is therefore critical, requiring modern technologies and methods together with real, up-to-date data to design and train systems that keep pace with the techniques attackers use. One of the most common types of attacks against IoT devices is the Distributed Denial-of-Service (DDoS) attack. Our paper makes a contribution that differs from existing studies in that we use recent data containing real traffic and real attacks on IoT networks, a hybrid method for selecting relevant features, and a principled choice of highly efficient algorithms, which together give the model a high ability to detect DDoS attacks. The proposed model is based on a two-stage process: selecting essential features, then constructing a detection model using the K-neighbors algorithm with two classifier algorithms, logistic regression and the Stochastic Gradient Descent classifier (SGD), combining these classifiers through ensemble machine learning (stacking) and optimizing parameters through Grid Search-CV to enhance system accuracy. Experiments were conducted to evaluate the effectiveness of the proposed model using the CIC-IoT2023 and CIC-DDoS2019 datasets. Performance evaluation demonstrated the potential of our model for robust intrusion detection in IoT networks, achieving an accuracy of 99.965% with a detection time of 0.20 s on the CIC-IoT2023 dataset, and an accuracy of 99.968% with a detection time of 0.23 s on the CIC-DDoS2019 dataset. Furthermore, a comparative analysis with recent related works highlighted the superiority of our methodology in intrusion detection, showing improvements in accuracy, recall, and detection time.
Funding: Researchers Supporting Project Number (RSP2023R157), King Saud University, Riyadh, Saudi Arabia.
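A minimal sketch of the stacked detector described in the abstract above follows, using scikit-learn's StackingClassifier with KNN and SGD as base learners and logistic regression as the meta-learner; this particular arrangement of the three classifiers, the parameter grid, and the synthetic data are our assumptions, not the paper's exact pipeline or its CIC dataset preprocessing.

```python
# Stacking ensemble (KNN + SGD base learners, logistic-regression meta-learner)
# tuned with GridSearchCV. Arrangement and grid are illustrative assumptions.
from sklearn.datasets import make_classification
from sklearn.ensemble import StackingClassifier
from sklearn.linear_model import LogisticRegression, SGDClassifier
from sklearn.model_selection import GridSearchCV
from sklearn.neighbors import KNeighborsClassifier

X, y = make_classification(n_samples=5000, n_features=20, random_state=1)

stack = StackingClassifier(
    estimators=[
        ("knn", KNeighborsClassifier()),
        ("sgd", SGDClassifier(loss="log_loss", random_state=1)),
    ],
    final_estimator=LogisticRegression(max_iter=1000),
)
search = GridSearchCV(
    stack,
    param_grid={"knn__n_neighbors": [3, 5, 7], "sgd__alpha": [1e-4, 1e-3]},
    cv=3,
    scoring="accuracy",
)
search.fit(X, y)
print("Best params:", search.best_params_)
print("CV accuracy:", round(search.best_score_, 4))
```

In the full system this classifier would be fed only the essential features chosen in the first stage, which is where most of the reported detection-time savings come from.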
Abstract: Pneumonia is an illness that causes inflammation in the lungs. Because so much information is available from various X-ray images, diagnosing pneumonia has typically proven challenging, and numerous approaches have been devised to improve image quality and speed up diagnosis. The Convolutional Neural Network (CNN) has achieved outstanding success in identifying and diagnosing diseases in medicine and radiology; however, existing methods are complex, inefficient, and imprecise when analyzing large numbers of datasets. In this paper, a new hybrid method for the automatic classification and identification of pneumonia from chest X-ray images is proposed. The proposed method (ABO-CNN) utilizes the African Buffalo Optimization (ABO) algorithm to enhance CNN performance and accuracy. The Weinmed filter is employed in pre-processing to eliminate unwanted noise from chest X-ray images, followed by feature extraction using the Grey Level Co-Occurrence Matrix (GLCM) approach. Relevant features are then selected from the dataset using the ABO algorithm, and finally a high-performance deep learning CNN is applied for the classification and identification of pneumonia. Experimental results on various datasets showed that the ABO-CNN outperforms other approaches on the classification tasks, achieving 96.95% accuracy, 88% precision, 86% recall, and an 86% F1-score.
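To illustrate the GLCM feature-extraction step described above, the sketch below computes standard texture features with scikit-image; a Wiener filter stands in for the paper's Weinmed pre-processing filter (an assumption on our part), and the ABO selection and CNN classification stages are omitted.

```python
# De-noise an image, then extract GLCM texture features with scikit-image.
# Random data stands in for a chest X-ray; the Wiener filter is an assumed
# substitute for the paper's Weinmed filter.
import numpy as np
from scipy.signal import wiener
from skimage.feature import graycomatrix, graycoprops

rng = np.random.default_rng(0)
xray = rng.integers(0, 256, size=(128, 128)).astype(float)  # placeholder image

denoised = wiener(xray, mysize=5)
img = np.clip(denoised, 0, 255).astype(np.uint8)

glcm = graycomatrix(img, distances=[1], angles=[0, np.pi / 2],
                    levels=256, symmetric=True, normed=True)
features = {prop: float(graycoprops(glcm, prop).mean())
            for prop in ("contrast", "homogeneity", "energy", "correlation")}
print(features)
```

Feature vectors like this one, computed per image, form the candidate pool that the ABO algorithm would then prune before CNN training.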
Abstract: Skin cancer diagnosis is difficult due to variability in lesion presentation, and conventional methods struggle to manually extract features and capture lesions' spatial and temporal variations. This study introduces a deep learning-based Convolutional and Recurrent Neural Network (CNN-RNN) model with a ResNet-50 architecture used as the feature extractor to enhance skin cancer classification. Leveraging synergistic spatial feature extraction and temporal sequence learning, the model demonstrates robust performance on a dataset of 9000 skin lesion photos spanning nine cancer types. Using a pre-trained ResNet-50 for spatial feature extraction and Long Short-Term Memory (LSTM) for temporal dependencies, the model achieves a high average recognition accuracy, surpassing previous methods. A comprehensive evaluation, including accuracy, precision, recall, and F1-score, underscores the model's competence in categorizing skin cancer types. This research contributes a well-designed model and valuable guidance for deep learning-based diagnostics; by overcoming spatial and temporal complexities, it offers a robust solution for dermatological diagnostics research.
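A minimal PyTorch sketch of the CNN-RNN design described above follows: a ResNet-50 backbone extracts per-image spatial features that an LSTM then aggregates. The input shape, hidden size, and randomly initialized weights are illustrative assumptions rather than the paper's exact configuration.

```python
# CNN-RNN sketch: ResNet-50 spatial features fed to an LSTM classifier.
# Shapes and sizes are illustrative; swap weights=None for
# models.ResNet50_Weights.DEFAULT to use a pre-trained backbone.
import torch
import torch.nn as nn
from torchvision import models

class CNNRNN(nn.Module):
    def __init__(self, num_classes=9, hidden=256):
        super().__init__()
        resnet = models.resnet50(weights=None)
        self.backbone = nn.Sequential(*list(resnet.children())[:-1])  # drop FC head
        self.lstm = nn.LSTM(input_size=2048, hidden_size=hidden, batch_first=True)
        self.head = nn.Linear(hidden, num_classes)

    def forward(self, x):                       # x: (batch, seq, 3, 224, 224)
        b, s = x.shape[:2]
        feats = self.backbone(x.flatten(0, 1))  # (b*s, 2048, 1, 1)
        feats = feats.flatten(1).view(b, s, 2048)
        out, _ = self.lstm(feats)
        return self.head(out[:, -1])            # classify from the last step

logits = CNNRNN()(torch.randn(2, 4, 3, 224, 224))
print(logits.shape)  # torch.Size([2, 9])
```

The nine output classes mirror the nine cancer types in the dataset; treating multiple views or crops of a lesion as a "sequence" is one plausible way the temporal dimension can be realized.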
Abstract: This study investigates the application of deep learning, ensemble learning, metaheuristic optimization, and image processing techniques for detecting lung and colon cancers, aiming to enhance treatment efficacy and improve survival rates. We introduce a metaheuristic-driven, two-stage ensemble deep learning model for efficient lung/colon cancer classification. The diagnosis of lung and colon cancers is attempted using several unique indicators produced by different versions of deep Convolutional Neural Networks (CNNs) for feature extraction and model construction, while various Machine Learning (ML) algorithms perform the final classification. Specifically, we consider scenarios consisting of two-class colon cancer, three-class lung cancer, and five-class combined lung/colon cancer, conducting feature extraction with four CNNs. The extracted features are then integrated to create a comprehensive feature set. Next, feature selection is optimized using a metaheuristic algorithm based on Electric Eel Foraging Optimization (EEFO). The optimized feature subset is subsequently fed to various ML algorithms, and the most effective ones are determined through a rigorous evaluation process. The top-performing algorithms are refined using the High-Performance Filter (HPF) and integrated into an ensemble learning framework employing weighted averaging. Our findings indicate that the proposed ensemble learning model significantly surpasses existing methods in classification accuracy across all datasets, achieving accuracies of 99.85% for the two-class, 98.70% for the three-class, and 98.96% for the five-class datasets.
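The weighted-averaging step of the final ensemble can be sketched as follows; validation-accuracy-based weights stand in for the paper's High-Performance Filter, and the CNN feature extraction and EEFO selection stages are omitted, so this is an assumption-laden illustration rather than the authors' exact pipeline.

```python
# Weighted-average soft-voting ensemble: each model's class probabilities
# are weighted by its validation accuracy. Weighting scheme and data are
# illustrative assumptions.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

X, y = make_classification(n_samples=3000, n_features=40, n_classes=3,
                           n_informative=10, random_state=7)
X_tr, X_val, y_tr, y_val = train_test_split(X, y, random_state=7)

models = [RandomForestClassifier(random_state=7),
          LogisticRegression(max_iter=2000),
          SVC(probability=True)]
probas, weights = [], []
for m in models:
    m.fit(X_tr, y_tr)
    weights.append(m.score(X_val, y_val))   # weight = validation accuracy
    probas.append(m.predict_proba(X_val))

weights = np.array(weights) / sum(weights)
ensemble = sum(w * p for w, p in zip(weights, probas))
print("Ensemble accuracy:", (ensemble.argmax(1) == y_val).mean().round(4))
```

Because stronger models contribute proportionally more to the averaged probabilities, the ensemble typically matches or exceeds its best single member, which is the effect the reported two-, three-, and five-class accuracies reflect.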