For future networks, many research projects around the globe have proposed different architectures; Software-Defined Network (SDN) architectures, by separating the data and control layers, offer a crucial structure for them. With a worldwide view and centralized control, an SDN provides flexible and reliable network management that improves network throughput and increases link utilization. In addition, it supports an innovative flow-scheduling system that helps advance Traffic Engineering (TE). For medium- and large-scale networks, migrating directly from a legacy network to an SDN network is complicated and often impractical, owing to substantial challenges: technical, financial, and security challenges, a shortage of standards, and quality-of-service degradation. These challenges paved the way for hybrid SDN networks, in which SDN devices coexist with traditional network devices. This study explores the traffic engineering and quality-of-service issues of a hybrid SDN network. Quality of service is described by network characteristics such as latency, jitter, loss, bandwidth, and network link utilization, using industry standards and mechanisms in a hybrid SDN network. We have organized the related studies so that quality of service may gain the most benefit from the concept of hybrid SDN networks using different algorithms and mechanisms: Deep Reinforcement Learning (DRL), heuristic algorithms, the K-path-partition algorithm, genetic algorithms, the SOTE algorithm, the ROAR method, and routing optimization with different optimization mechanisms that help ensure high-quality performance in a hybrid SDN network.
A comprehensive understanding of human intelligence is still an ongoing process, i.e., human and information security are not yet perfectly matched. By understanding cognitive processes, designers can design humanized cognitive information systems (CIS). The need for this research is justified because today's business decision makers are faced with questions they cannot answer in a given amount of time without the use of cognitive information systems. The researchers aim to better strengthen cognitive information systems with more pronounced cognitive thresholds by demonstrating the resilience of cognitive resonant frequencies, revealing possible responses that improve the efficiency of human-computer interaction (HCI). A practice-oriented research approach included research analysis and a review of existing articles to pursue a comparative research model; thereafter, a model development paradigm was used to observe and monitor the progression of CIS during HCI. The scope of our research provides a broader perspective on how different disciplines affect HCI and how human cognitive models can be enhanced to enrich their complements. We have identified a significant gap in the current literature on mental processing resulting from a wide range of theory and practice.
Wheat rust diseases are one of the major types of fungal diseases and cause substantial yield quality losses of 15%–20% every year. Wheat rust diseases are identified either by experienced evaluators or by computer-assisted techniques. Identification by experienced evaluators is time-consuming, laborious, and costly. If wheat rust diseases are predicted at the development stages, fungicides can be sprayed earlier, which helps to increase wheat yield quality. To address the shortcomings of manual evaluation, a combined region-extraction and cross-entropy support vector machine (CE-SVM) model is proposed for wheat rust disease identification. In the proposed system, a total of 2300 secondary-source images were augmented through flipping, cropping, and rotation techniques. The augmented images are preprocessed by histogram equalization. The preprocessed images are then applied to region-extraction convolutional neural networks (RCNN): Fast-RCNN, Faster-RCNN, and Mask-RCNN models for wheat plant patch extraction. Different layers of the region-extraction models construct a feature vector that is later passed to the CE-SVM model. As a result, the Gaussian kernel function in CE-SVM achieves a high F1-score (88.43%) and accuracy (93.60%) for wheat stripe rust disease classification.
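To make the preprocessing stage concrete, the sketch below shows flip/rotation/crop augmentation followed by histogram equalization. It is a minimal illustration only: the library choice (OpenCV), the rotation angle, the crop margin, and the input file name are assumptions, not the authors' exact pipeline.

```python
import cv2
import numpy as np

def augment(img: np.ndarray) -> list:
    # Flip, rotate, and crop variants of one wheat-leaf image (parameters assumed).
    h, w = img.shape[:2]
    rot = cv2.getRotationMatrix2D((w / 2, h / 2), 15, 1.0)  # 15-degree rotation
    m_h, m_w = max(1, h // 10), max(1, w // 10)             # 10% crop margin
    return [
        cv2.flip(img, 1),                  # horizontal flip
        cv2.flip(img, 0),                  # vertical flip
        cv2.warpAffine(img, rot, (w, h)),  # rotated copy
        img[m_h:h - m_h, m_w:w - m_w],     # central crop
    ]

def equalize(img: np.ndarray) -> np.ndarray:
    # Histogram-equalize the luma channel so colour balance is preserved.
    ycrcb = cv2.cvtColor(img, cv2.COLOR_BGR2YCrCb)
    ycrcb[:, :, 0] = cv2.equalizeHist(ycrcb[:, :, 0])
    return cv2.cvtColor(ycrcb, cv2.COLOR_YCrCb2BGR)

image = cv2.imread("wheat_leaf.jpg")  # hypothetical input image
assert image is not None, "image not found"
preprocessed = [equalize(a) for a in augment(image)]
```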
In today's Internet routing infrastructure, designers have addressed scaling concerns in routing-constrained multiobjective optimization problems, examining latency and mobility concerns as secondary constraints. In a tactical Mobile Ad-hoc Network (MANET), hubs can function based on the work plan in various groups, and the internally connected hubs largely share related mobility patterns, so the topology between one hub and another is tightly coupled in steady support, considering hub touchstones such as self-organization, self-healing, and self-administration. Clustering in the routing process is one of the key aspects of increasing MANET performance by coordinating the pathways using multiple criteria and analytics. We present a Group Adaptive Hybrid Routing Algorithm (GAHRA) for gathering portability, which pursues a table-driven routing methodology in stable accumulations and an on-demand routing strategy for mobile situations. Based on this aspect, the research demonstrates an adjustable framework for commuting between the table-driven approach and the on-demand approach, with the objective of enhancing the output of MANET routing computation in each hub. Simulation analysis and replication results reveal that the proposed method is more promising than single well-known existing routing approaches and is well-suited for sensitive MANET applications.
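As a toy illustration of the hybrid policy, the sketch below switches between proactive (table-driven) and reactive (on-demand) routing based on an observed mobility signal. The metric (link breaks per minute) and the threshold are illustrative assumptions, not part of GAHRA's specification.

```python
def choose_routing_mode(link_breaks_per_min: float, threshold: float = 3.0) -> str:
    # Stable neighbourhood -> keep proactive tables; high churn -> discover on demand.
    return "table-driven" if link_breaks_per_min < threshold else "on-demand"

print(choose_routing_mode(1.2))  # stable cluster  -> table-driven
print(choose_routing_mode(7.5))  # mobile scenario -> on-demand
```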
A deep fusion model is proposed for a facial expression-based human-computer interaction system. Initially, image preprocessing, i.e., the extraction of the facial region from the input image, is performed. Thereafter, more discriminative and distinctive deep learning features are extracted from the facial regions. To prevent overfitting, in-depth features of facial images are extracted and assigned to the proposed convolutional neural network (CNN) models. Various CNN models are then trained. Finally, the outputs of the CNN models are fused to obtain the final decision for the seven basic classes of facial expressions, i.e., fear, disgust, anger, surprise, sadness, happiness, and neutral. For experimental purposes, three benchmark datasets, i.e., SFEW, CK+, and KDEF, are utilized. The performance of the proposed system is compared with some state-of-the-art methods on each dataset. Extensive performance analysis reveals that the proposed system outperforms the competitive methods in terms of various performance metrics. Finally, the proposed deep fusion model is utilized to control a music player using the recognized emotions of the users.
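The abstract does not spell out the fusion rule, but a common late-fusion choice is to average the per-class probabilities of the individual CNNs and take the arg-max, as in this hedged sketch (the probability vectors are made up):

```python
import numpy as np

EMOTIONS = ["fear", "disgust", "anger", "surprise", "sadness", "happiness", "neutral"]

def fuse(prob_rows) -> str:
    # Late fusion: average each CNN's softmax output, then pick the best class.
    fused = np.mean(np.stack(prob_rows), axis=0)
    return EMOTIONS[int(np.argmax(fused))]

# Hypothetical outputs of three trained CNNs for one face image.
p1 = np.array([0.05, 0.05, 0.10, 0.10, 0.10, 0.50, 0.10])
p2 = np.array([0.10, 0.05, 0.05, 0.05, 0.05, 0.60, 0.10])
p3 = np.array([0.05, 0.10, 0.05, 0.10, 0.10, 0.45, 0.15])
print(fuse([p1, p2, p3]))  # -> happiness
```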
In recent years, subgraph mining from large collections of graph databases has been a crucial problem. In addition, scalability is another big problem due to insufficient storage. There are several security challenges associated with subgraph mining in today's on-demand systems. To address these downsides, our proposed work introduces a Blockchain-based Consensus algorithm for Authenticated query search in Large-Scale Dynamic Graphs (BCCA-LSDG). A two-fold process is handled in the proposed BCCA-LSDG: graph indexing and authenticated query search (query processing). A blockchain-based reputation system maintains trust between the blockchain and the cloud server of the proposed architecture. To resolve these issues and provide safe big-data transmission, the proposed technique also combines blockchain with a consensus-algorithm architecture. Security of the big data is ensured by dividing the BC network into distinct networks, each with a restricted number of allowed entities, with data kept in the cloud gate server and data analysis performed in the blockchain. The consensus algorithm is crucial for maintaining the speed, performance, and security of the blockchain. Dual-Similarity-based MapReduce then helps in mapping and reducing the relevant subgraphs with the use of optimal feature sets. Finally, a graph-index refinement process is undertaken to improve the query results. Concerning query error, fuzzy logic is used to refine the index of the graph dynamically. According to the findings, the proposed technique outperforms advanced methodologies in both blockchain and non-blockchain systems, and the combination of blockchain and subgraph mining provides a secure communication platform.
Deep learning has risen in popularity as a face recognition technology in recent years. Facenet, a deep convolutional neural network (DCNN) developed by Google, recognizes faces with 128 bytes per face. It is also claimed to have achieved 99.96% on the reputed Labelled Faces in the Wild (LFW) dataset. However, the accuracy and validation rate of Facenet drop as the resolution of the images decreases. This research paper aims at developing a new facial recognition system that can produce higher accuracy and validation rates on low-resolution face images. The proposed system, Extended Openface, performs facial recognition using three different features: i) facial landmarks, ii) head pose, and iii) eye gaze. It performs facial landmark detection using a Scattered Gated Expert Network Constrained Local Model (SGEN-CLM). It also detects head pose and eye gaze using an Enhanced Constrained Local Neural Field (ECLNF). Extended Openface employs a simple Support Vector Machine (SVM) for training and testing on the face images. The system's performance is assessed on low-resolution datasets such as LFW and the Indian Movie Face Database (IMFDB). The results demonstrate that Extended Openface has an accuracy rate 12% higher and a validation rate 22% higher than Facenet on low-resolution images.
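A minimal sketch of the classification stage: concatenate the three feature groups and train an SVM. The feature dimensions and the random data are placeholders; the real system would supply SGEN-CLM landmarks and ECLNF pose/gaze estimates.

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)
landmarks = rng.random((200, 136))    # 68 (x, y) facial landmarks (assumed layout)
head_pose = rng.random((200, 3))      # yaw, pitch, roll
eye_gaze = rng.random((200, 4))       # gaze direction for both eyes
identity = rng.integers(0, 10, 200)   # 10 hypothetical subjects

X = np.hstack([landmarks, head_pose, eye_gaze])  # simple feature concatenation
clf = SVC(kernel="linear").fit(X, identity)
print(clf.predict(X[:5]))
```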
A Software Product Line (SPL) is a group of software-intensive systems that share common and variable resources for developing a particular system. The feature model is a tree-type structure used to manage an SPL's common and variable features with their different relations, and it is subject to the problem of Cross-Tree Constraints (CTC). CTC problems exist in groups of common and variable features among the sub-trees of feature models and are more diverse in Internet of Things (IoT) devices because different Internet devices and protocols communicate. Therefore, managing the CTC problem to achieve valid product configurations in IoT-based SPL is complex, time-consuming, and hard. However, previously proposed approaches such as Commonality Variability Modeling of Features (COVAMOF) and the Genarch+ tool do not adequately address the CTC problem and therefore generate invalid products. This research proposes a novel approach, Binary Oriented Feature Selection Cross-Tree Constraints (BOFS-CTC), to find all possible valid products by selecting features according to cardinality constraints and cross-tree constraint problems in the feature model of an SPL. BOFS-CTC removes invalid products at the early stage of feature selection for product configuration. Furthermore, this research developed the BOFS-CTC algorithm and applied it to IoT-based feature models. The findings are that no relationship-constraint or CTC violations occur, and BOFS-CTC derives the valid feature product configurations for application development by removing the invalid product configurations. The accuracy of BOFS-CTC was measured by an integration sampling technique, in which different valid product configurations were compared with the product configurations derived by BOFS-CTC and found 100% correct. Using BOFS-CTC eliminates the testing cost and development effort of invalid SPL products.
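The core validity check can be sketched as follows: a candidate product is a set of selected features, and cross-tree constraints are encoded as requires/excludes pairs. The example feature model and its encoding are illustrative assumptions, not BOFS-CTC's internal representation.

```python
# Hypothetical IoT feature model: cross-tree constraints as (A, B) pairs.
REQUIRES = [("WiFi", "Radio"), ("Zigbee", "Radio")]  # selecting A requires B
EXCLUDES = [("Zigbee", "Bluetooth")]                 # A and B cannot co-exist

def is_valid_product(selected: set) -> bool:
    for a, b in REQUIRES:
        if a in selected and b not in selected:
            return False                             # requires-constraint violated
    for a, b in EXCLUDES:
        if a in selected and b in selected:
            return False                             # excludes-constraint violated
    return True

print(is_valid_product({"WiFi", "Radio"}))                 # True
print(is_valid_product({"Zigbee", "Bluetooth", "Radio"}))  # False
```

Filtering candidate selections through such a check before configuration is what allows invalid products to be discarded early.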
With the improvement of current online communication schemes, it is now possible to successfully distribute and transport secured digital content via the communication channel at a faster transmission rate. Traditional steganography and cryptography concepts are used to achieve the goal of concealing secret content in a medium and encrypting it before transmission. Both of the techniques mentioned above aid in the confidentiality of the content. The proposed approach concerns secret-content embedding in selected pixels of digital image layers such as Red, Green, and Blue. The private content originates from a medical client and is forwarded to a medical practitioner on the server end through the internet. The K-Means clustering principle uses a contouring approach to frame the pixel clusters on the image layers. The content-embedding procedure is performed on the selected pixel groups of all layers of the image using the Least Significant Bit (LSB) substitution technique to build the secret-content-embedded image, known as the stego image, which is subsequently transmitted across the internet to the server end. The experimental results are computed using inputs from "Open-Access Medical Image Repositories (aylward.org)" and demonstrate the scheme's robustness as the content-concealing procedure progresses.
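The LSB substitution step itself can be sketched as below. This omits the K-Means/contouring pixel-group selection and simply writes the payload bits into the leading pixel bytes of a flattened image; the cover data and payload are stand-ins.

```python
import numpy as np

def embed_lsb(pixels: np.ndarray, payload: bytes) -> np.ndarray:
    # Overwrite the least-significant bit of the leading pixel bytes with payload bits.
    bits = np.unpackbits(np.frombuffer(payload, dtype=np.uint8))
    flat = pixels.flatten().copy()
    flat[:bits.size] = (flat[:bits.size] & 0xFE) | bits
    return flat.reshape(pixels.shape)

def extract_lsb(pixels: np.ndarray, n_bytes: int) -> bytes:
    bits = pixels.flatten()[:n_bytes * 8] & 1
    return np.packbits(bits).tobytes()

cover = np.random.randint(0, 256, (64, 64, 3), dtype=np.uint8)  # stand-in RGB image
stego = embed_lsb(cover, b"patient record #42")
assert extract_lsb(stego, 18) == b"patient record #42"
```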
In this paper, the application of transportation systems in real-time traffic conditions is evaluated with data-handling representations. The proposed method is designed to detect the number of loads present in a vehicle, where functionality tasks are computed in the system. Compared to the existing approach, the design model in the proposed method divides the computing areas into several cluster regions, thereby reducing the complexity of the monitoring system and minimizing control errors. Furthermore, a route management technique is combined with an Artificial Intelligence (AI) algorithm to transmit the data to appropriate central servers. The combined objective case studies are examined under both minimization and maximization criteria, thus increasing the efficiency of the proposed method. Finally, four scenarios are chosen to investigate the projected design's effectiveness. In all simulated metrics, the proposed approach provides better operational outcomes, averaging 97%, thereby reducing the amount of traffic in real-time conditions.
This research is focused on a highly effective and untapped feature, gammatone frequency cepstral coefficients (GFCC), for the detection of COVID-19 using the nature-inspired meta-heuristic algorithm of deer hunting optimization and an artificial neural network (DHO-ANN). Noisy crowdsourced cough datasets were collected from the public domain. This work found that GFCC yielded better results for COVID-19 detection than the widely used Mel-frequency cepstral coefficients in noisy crowdsourced speech corpora. The proposed algorithm's performance for detecting COVID-19 disease is rigorously validated using statistical measures, the F1 score, the confusion matrix, and specificity and sensitivity parameters. Furthermore, the proposed algorithm using GFCC performs well in detecting COVID-19 from the noisy crowdsourced cough dataset COUGHVID. Moreover, the proposed algorithm and the adopted feature parameters improved the detection of COVID-19 by 5% compared with existing methods.
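A compressed sketch of GFCC-style features: filter the signal with a gammatone filterbank, log-compress the per-band energies, and decorrelate them with a DCT. The band count, band spacing, and coefficient count are assumptions rather than the paper's exact configuration.

```python
import numpy as np
from scipy.signal import gammatone, lfilter
from scipy.fft import dct

def gfcc(signal: np.ndarray, fs: int, n_bands: int = 32, n_coeffs: int = 13) -> np.ndarray:
    centers = np.geomspace(50, 0.45 * fs, n_bands)  # log-spaced centre frequencies
    log_energy = []
    for fc in centers:
        b, a = gammatone(fc, "iir", fs=fs)          # 4th-order gammatone band filter
        band = lfilter(b, a, signal)
        log_energy.append(np.log(np.mean(band ** 2) + 1e-12))
    return dct(np.array(log_energy), norm="ortho")[:n_coeffs]

fs = 16_000
t = np.arange(fs) / fs
clip = np.sin(2 * np.pi * 440 * t)  # stand-in for a one-second cough recording
print(gfcc(clip, fs).shape)         # (13,)
```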
The development of multi-core systems (MCS) has considerably improved existing technologies in the field of computer architecture. An MCS comprises several processors that are heterogeneous in resource capacities, working environments, topologies, and so on. Existing multi-core technology unlocks additional research opportunities for energy minimization through effective task scheduling. At the same time, the task scheduling process is yet to be fully explored in multi-core systems. This paper presents a new hybrid genetic algorithm (GA) with a krill herd (KH) based energy-efficient scheduling technique for multi-core systems (GAKH-SMCS). The goal of the GAKH-SMCS technique is to derive task schedules that achieve faster completion time and minimum energy dissipation. The GAKH-SMCS model involves a multi-objective fitness function using four parameters, namely makespan, processor utilization, speedup, and energy consumption, to schedule tasks proficiently. The performance of the GAKH-SMCS model has been validated against two datasets: a random dataset and a benchmark dataset. The experimental outcome confirmed the effectiveness of the GAKH-SMCS model in terms of makespan, processor utilization, speedup, and energy consumption. The overall simulation results show that the presented GAKH-SMCS model achieves energy efficiency through an optimal task scheduling process in MCS.
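The multi-objective fitness can be illustrated as a weighted scalarization of the four parameters. The weights and sign conventions below (minimize makespan and energy, maximize utilization and speedup) are assumptions for illustration:

```python
def fitness(makespan: float, utilization: float, speedup: float, energy: float,
            w=(0.3, 0.2, 0.2, 0.3)) -> float:
    # Higher is better: reward utilization/speedup, penalize makespan/energy.
    return -w[0] * makespan + w[1] * utilization + w[2] * speedup - w[3] * energy

# Score one candidate schedule (made-up measurements).
print(fitness(makespan=120.0, utilization=0.85, speedup=3.2, energy=75.0))
```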
Sign language recognition is vital for enhancing communication accessibility among the Deaf and hard-of-hearing communities. In Japan, approximately 360,000 individuals with hearing and speech disabilities rely on Japanese Sign Language (JSL) for communication. However, existing JSL recognition systems have faced significant performance limitations due to inherent complexities. In response to these challenges, we present a novel JSL recognition system that employs a strategic fusion approach, combining joint skeleton-based handcrafted features and pixel-based deep learning features. Our system incorporates two distinct streams: the first stream extracts crucial handcrafted features, emphasizing the capture of hand and body movements within JSL gestures. Simultaneously, a deep learning-based transfer learning stream captures hierarchical representations of JSL gestures in the second stream. We then concatenate the critical information of the first stream and the hierarchical features of the second stream to produce multi-level fusion features, aiming to create a comprehensive representation of the JSL gestures. After reducing the dimensionality of the features, a feature selection approach and a kernel-based support vector machine (SVM) were used for classification. To assess the effectiveness of our approach, we conducted extensive experiments on our Lab JSL dataset and a publicly available Arabic Sign Language (ArSL) dataset. Our results unequivocally demonstrate that our fusion approach significantly enhances JSL recognition accuracy and robustness compared to individual feature sets or traditional recognition methods.
Person identification is one of the most vital tasks for network security. People are more concerned about their security due to traditional passwords becoming weaker or leaking in various attacks. In recent decades, fingerprints and faces have been widely used for person identification, but these carry the risk of information leakage as a result of reproducing fingers or faces by taking a snapshot. Recently, people have focused on creating an identifiable pattern that cannot be falsely reproduced, by capturing the psychological and behavioral information of a person using vision- and sensor-based techniques. In existing studies, most researchers used very complex patterns in this direction, which need special training and attention to remember and fail to capture the psychological and behavioral information of a person properly. To overcome these problems, this research devised a novel dynamic hand gesture-based person identification system using a Leap Motion sensor. This study developed two hand gesture-based pattern datasets for the experiments, containing more than 500 samples collected from 25 subjects. Various static and dynamic features were extracted from the hand geometry. A random forest was used to measure feature importance using the Gini index. Finally, a support vector machine was implemented for person identification, and its performance was evaluated using identification accuracy. The experimental results showed that the proposed system produced an identification accuracy of 99.8% for arbitrary hand gesture-based patterns and 99.6% for the same dynamic hand gesture-based patterns. This result indicates that the proposed system can be used for person identification in the field of security.
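A minimal sketch of that pipeline: Gini-based importances from a random forest select the strongest hand-geometry features, and an SVM then identifies the person. The data, feature count, and cut-off are placeholders:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = rng.random((500, 40))      # 500 gesture samples, 40 geometry features (assumed)
y = rng.integers(0, 25, 500)   # 25 subjects

rf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)
top = np.argsort(rf.feature_importances_)[::-1][:20]  # keep the 20 best features

clf = SVC().fit(X[:, top], y)  # SVM on the reduced feature set
print(clf.score(X[:, top], y)) # training accuracy of the toy model
```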
In recent decades, fog computing has played a vital role in executing parallel computational tasks, specifically scientific workflow tasks. In cloud data centers, workflow applications take more time to run. Therefore, it is essential to develop effective models for Virtual Machine (VM) allocation and task scheduling in fog computing environments. Effective task scheduling, VM migration, and allocation together optimize the use of computational resources across different fog nodes. This process ensures that tasks are executed with minimal energy consumption, which reduces the chances of resource bottlenecks. In this manuscript, the proposed framework comprises two phases: (i) effective task scheduling using a fractional selectivity approach and (ii) VM allocation using a proposed algorithm named Fitness Sharing Chaotic Particle Swarm Optimization (FSCPSO). The proposed FSCPSO algorithm integrates the concepts of chaos theory and fitness sharing to effectively balance global exploration and local exploitation. This balance enables the use of a wide range of solutions, leading to minimal total cost and makespan in comparison with other traditional optimization algorithms. The FSCPSO algorithm's performance is analyzed using six evaluation measures, namely Load Balancing Level (LBL), Average Resource Utilization (ARU), total cost, makespan, energy consumption, and response time. Relative to the conventional optimization algorithms, the FSCPSO algorithm achieves a higher LBL of 39.12%, an ARU of 58.15%, a minimal total cost of 1175, and a makespan of 85.87 ms, particularly when evaluated for 50 tasks.
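Two FSCPSO ingredients can be sketched in isolation: a logistic map that replaces uniform random draws to keep the swarm exploring, and a fitness-sharing penalty that makes crowded particles look worse. The map parameters and sharing radius are illustrative assumptions:

```python
import numpy as np

def logistic_map(n: int, x0: float = 0.7, r: float = 4.0) -> np.ndarray:
    # Chaotic sequence in (0, 1) used in place of uniform random numbers.
    xs, x = np.empty(n), x0
    for i in range(n):
        x = r * x * (1 - x)
        xs[i] = x
    return xs

def shared_fitness(raw: np.ndarray, pos: np.ndarray, sigma: float = 0.5) -> np.ndarray:
    # Fitness sharing for a minimization problem: particles within distance
    # sigma of each other share (inflate) their fitness, discouraging crowding.
    d = np.linalg.norm(pos[:, None, :] - pos[None, :, :], axis=-1)
    niche = np.where(d < sigma, 1.0 - d / sigma, 0.0).sum(axis=1)
    return raw * niche

pos = np.array([[0.0, 0.0], [0.1, 0.0], [5.0, 5.0]])   # two crowded particles, one isolated
print(shared_fitness(np.array([1.0, 1.0, 1.0]), pos))  # crowded pair is penalized
```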
Brain tumors are a global issue from which many people suffer, and early diagnosis can make treatment more efficient. Identifying different types of brain tumors, including gliomas, meningiomas, and pituitary tumors, as well as confirming the absence of tumors, poses a significant challenge using MRI images. Current approaches predominantly rely on traditional machine learning and basic deep learning methods for image classification. These methods often rely on manual feature extraction and basic convolutional neural networks (CNNs). Their limitations include inadequate accuracy, poor generalization to new data, and limited ability to manage the high variability in MRI images. Utilizing the EfficientNetB3 architecture, this study presents a groundbreaking approach in the computational engineering domain, enhancing MRI-based brain tumor classification. Our approach highlights a major advancement in employing sophisticated machine learning techniques within Computer Science and Engineering, showcasing a highly accurate framework with significant potential for healthcare technologies. The model achieves an outstanding 99% accuracy, exhibiting balanced precision, recall, and F1-scores across all tumor types, as detailed in the classification report. This successful implementation demonstrates the model's potential as an essential tool for diagnosing and classifying brain tumors, marking a notable improvement over current methods. The integration of such advanced computational techniques in medical diagnostics can significantly enhance accuracy and efficiency, paving the way for wider application. This research highlights the revolutionary impact of deep learning technologies in improving diagnostic processes and patient outcomes in neuro-oncology.
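A minimal transfer-learning sketch of that idea in Keras: a frozen EfficientNetB3 backbone with a small classification head for the four MRI classes (glioma, meningioma, pituitary, no tumor). The input size, head layers, and optimizer are assumptions, not the paper's exact configuration:

```python
import tensorflow as tf
from tensorflow.keras import layers, models

base = tf.keras.applications.EfficientNetB3(
    include_top=False, weights="imagenet", input_shape=(300, 300, 3))
base.trainable = False                      # freeze the pre-trained backbone

model = models.Sequential([
    base,
    layers.GlobalAveragePooling2D(),
    layers.Dropout(0.3),
    layers.Dense(4, activation="softmax"),  # glioma / meningioma / pituitary / none
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.summary()
```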
The use of Explainable Artificial Intelligence (XAI) models is becoming increasingly important for decision-making in smart healthcare environments: it ensures that decisions are based on trustworthy algorithms and that healthcare workers understand the decisions made by these algorithms. These models can potentially enhance interpretability and explainability in decision-making processes that rely on artificial intelligence. Nevertheless, the intricate nature of the healthcare field necessitates sophisticated models to classify cancer images. This research presents an advanced investigation of XAI models for classifying cancer images. It describes the different levels of explainability and interpretability associated with XAI models and the challenges faced in deploying them in healthcare applications. In addition, this study proposes a novel framework for cancer image classification that incorporates XAI models with deep learning and advanced medical imaging techniques. The proposed model integrates several techniques, including end-to-end explainable evaluation, rule-based explanation, and user-adaptive explanation. The proposed XAI framework reaches 97.72% accuracy, 90.72% precision, 93.72% recall, 96.72% F1-score, 9.55% FDR, 9.66% FOR, and 91.18% DOR. The study also discusses potential applications of the proposed XAI models in the smart healthcare environment, helping to ensure trust and accountability in AI-based decisions, which is essential for achieving a safe and reliable smart healthcare environment.
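For readers less familiar with the reported rates, the standard confusion-matrix definitions are shown below with made-up counts; these are the textbook formulas, not values from the paper:

```python
# FDR = FP/(FP+TP), FOR = FN/(FN+TN), DOR = (TP/FN) / (FP/TN).
tp, fp, fn, tn = 860, 20, 25, 95  # hypothetical binary confusion-matrix counts

precision = tp / (tp + fp)
recall = tp / (tp + fn)
fdr = fp / (fp + tp)         # false discovery rate
false_or = fn / (fn + tn)    # false omission rate
dor = (tp / fn) / (fp / tn)  # diagnostic odds ratio

print(f"precision={precision:.3f} recall={recall:.3f} "
      f"FDR={fdr:.3f} FOR={false_or:.3f} DOR={dor:.1f}")
```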
Steganography algorithms are methods of hiding data transfers in media files. Several machine learning architectures have been presented recently to improve stego-image identification performance by using spatial information, and these methods have made it feasible to handle a wide range of problems associated with image analysis. Information-embedding methods use images with little information or a low payload, but the goal of all contemporary research is to employ high-payload images for classification. To address the need for both low- and high-payload images, this work provides a machine-learning approach to steganography image classification that uses the Curvelet transform to efficiently extract characteristics from both types of images. A Support Vector Machine (SVM), a commonplace classification technique, has been employed to determine whether an image is a stego or cover image. The Wavelet Obtained Weights (WOW), Spatial Universal Wavelet Relative Distortion (S-UNIWARD), Highly Undetectable Steganography (HUGO), and Minimizing the Power of Optimal Detector (MiPOD) steganography techniques are used in a variety of experimental scenarios to evaluate the performance of the proposed method. Using WOW at several payloads, the proposed approach achieves a classification accuracy of 98.60%, demonstrating its superiority over state-of-the-art (SOTA) methods.
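The classification stage can be sketched as follows. Since the curvelet transform needs a dedicated library, a hypothetical `curvelet_features` placeholder stands in for the per-subband statistics; the data, feature choice, and train/test split are assumptions:

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

def curvelet_features(image: np.ndarray) -> np.ndarray:
    # Placeholder for real per-subband curvelet statistics (mean/variance etc.).
    return np.array([image.mean(), image.var(),
                     np.abs(np.fft.fft2(image)).mean()])

rng = np.random.default_rng(0)
images = rng.random((200, 64, 64))  # stand-in cover/stego images
labels = rng.integers(0, 2, 200)    # 0 = cover, 1 = stego

X = np.array([curvelet_features(im) for im in images])
Xtr, Xte, ytr, yte = train_test_split(X, labels, test_size=0.3, random_state=0)
clf = SVC(kernel="rbf").fit(Xtr, ytr)
print(clf.score(Xte, yte))
```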
The visions of Industry 4.0 and 5.0 have reinforced the industrial environment and have made artificial intelligence a major facilitator. Diagnosing machine faults has become a solid foundation for automatically recognizing machine failure, so that timely maintenance can ensure safe operations. Transfer learning is a promising solution that can enhance a machine fault diagnosis model by borrowing pre-trained knowledge from a source model and applying it to a target model, which typically involves two datasets. In response to the availability of multiple datasets, this paper proposes selective and adaptive incremental transfer learning (SA-ITL), which fuses three algorithms, namely the hybrid selective algorithm, the transferability enhancement algorithm, and the incremental transfer learning algorithm. It is a selective algorithm that enables selecting and ordering appropriate datasets for transfer learning and selecting useful knowledge to avoid negative transfer. The algorithm also adaptively adjusts the portion of training data to balance the learning rate and training time. The proposed algorithm is evaluated and analyzed using ten benchmark datasets. Compared with other algorithms from existing works, SA-ITL improves accuracy on all datasets. Ablation studies present the accuracy enhancements of SA-ITL's components: the hybrid selective algorithm (1.22%-3.82%), the transferability enhancement algorithm (1.91%-4.15%), and the incremental transfer learning algorithm (0.605%-2.68%). The results also show the benefits of enhancing the target model with heterogeneous image datasets that widen the range of domain selection between source and target domains.
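The selective step can be illustrated as ranking candidate source datasets by an estimated transferability score and dropping those below a cut-off; the scores, dataset names, and cut-off here are made up:

```python
def select_and_order(scores: dict, cutoff: float = 0.5) -> list:
    # Keep only sources likely to help (avoiding negative transfer),
    # then fine-tune on them in descending order of transferability.
    kept = {name: s for name, s in scores.items() if s >= cutoff}
    return sorted(kept, key=kept.get, reverse=True)

ranking = select_and_order({"bearing_A": 0.9, "bearing_B": 0.7, "noisy_rig": 0.3})
print(ranking)  # -> ['bearing_A', 'bearing_B']; the weak source is skipped
```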
Leukemia is a form of cancer of the blood or bone marrow. A person with leukemia has an expansion of white blood cells (WBCs). It primarily affects children and rarely affects adults. Treatment depends on the type of leukemia and the extent to which the cancer has spread throughout the body. Identifying leukemia in the initial stage is vital to providing timely patient care. Medical image-analysis approaches grant safer, quicker, and less costly solutions while avoiding the difficulties of invasive processes. Computer vision (CV)-based and image-processing techniques are simple to generalize and eradicate human error. Many researchers have implemented computer-aided diagnostic methods and machine learning (ML) for laboratory image analysis, hoping to overcome the limitations of late leukemia detection and to determine its subgroups. This study establishes a Marine Predators Algorithm with Deep Learning for Leukemia Cancer Classification (MPADL-LCC) on medical images. The projected MPADL-LCC system uses a bilateral filtering (BF) technique to preprocess medical images. It uses Faster SqueezeNet with the Marine Predators Algorithm (MPA) as a hyperparameter optimizer for feature extraction. Lastly, a denoising autoencoder (DAE) methodology is executed to accurately detect and classify leukemia cancer. The hyperparameter tuning process using MPA helps enhance leukemia cancer classification performance. Simulation results are compared with other recent approaches across various measurements, and the MPADL-LCC algorithm exhibits the best results over the other recent approaches.
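The bilateral-filter preprocessing step, sketched with OpenCV; the filter parameters and file names are illustrative assumptions:

```python
import cv2

img = cv2.imread("blood_smear.png")             # hypothetical input micrograph
assert img is not None, "image not found"
denoised = cv2.bilateralFilter(img, 9, 75, 75)  # d=9, sigmaColor=75, sigmaSpace=75
cv2.imwrite("blood_smear_denoised.png", denoised)
```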
文摘For Future networks, many research projects have proposed different architectures around the globe;Software Defined Network(SDN) architectures, through separating Data and Control Layers, offer a crucial structure for it. With a worldwide view and centralized Control, the SDN network provides flexible and reliable network management that improves network throughput and increases link utilization. In addition, it supports an innovative flow scheduling system to help advance Traffic Engineering(TE). For Medium and large-scale networks migrating directly from a legacy network to an SDN Network seems more complicated & even impossible, as there are High potential challenges, including technical, financial, security, shortage of standards, and quality of service degradation challenges. These challenges cause the birth and pave the ground for Hybrid SDN networks, where SDN devices coexist with traditional network devices. This study explores a Hybrid SDN network’s Traffic Engineering and Quality of Services Issues. Quality of service is described by network characteristics such as latency, jitter, loss, bandwidth,and network link utilization, using industry standards and mechanisms in a Hybrid SDN Network. We have organized the related studies in a way that the Quality of Service may gain the most benefit from the concept of Hybrid SDN networks using different algorithms and mechanisms: Deep Reinforcement Learning(DRL), Heuristic algorithm, K path partition algorithm, Genetic algorithm, SOTE algorithm, ROAR method, and Routing Optimization with different optimization mechanisms that help to ensure high-quality performance in a Hybrid SDN Network.
基金This work was supported by King Saud University through Researchers Supporting Project Number(RSP2022R426),King Saud University,Riyadh,Saudi Arabia.
文摘A comprehensive understanding of human intelligence is still an ongoing process,i.e.,human and information security are not yet perfectly matched.By understanding cognitive processes,designers can design humanized cognitive information systems(CIS).The need for this research is justified because today’s business decision makers are faced with questions they cannot answer in a given amount of time without the use of cognitive information systems.The researchers aim to better strengthen cognitive information systems with more pronounced cognitive thresholds by demonstrating the resilience of cognitive resonant frequencies to reveal possible responses to improve the efficiency of human-computer interaction(HCI).Apractice-oriented research approach included research analysis and a review of existing articles to pursue a comparative research model;thereafter,amodel development paradigm was used to observe and monitor the progression of CIS during HCI.The scope of our research provides a broader perspective on how different disciplines affect HCI and how human cognitive models can be enhanced to enrich complements.We have identified a significant gap in the current literature on mental processing resulting from a wide range of theory and practice.
文摘Wheat rust diseases are one of the major types of fungal diseases that cause substantial yield quality losses of 15%–20%every year.The wheat rust diseases are identified either through experienced evaluators or computerassisted techniques.The experienced evaluators take time to identify the disease which is highly laborious and too costly.If wheat rust diseases are predicted at the development stages,then fungicides are sprayed earlier which helps to increase wheat yield quality.To solve the experienced evaluator issues,a combined region extraction and cross-entropy support vector machine(CE-SVM)model is proposed for wheat rust disease identification.In the proposed system,a total of 2300 secondary source images were augmented through flipping,cropping,and rotation techniques.The augmented images are preprocessed by histogram equalization.As a result,preprocessed images have been applied to region extraction convolutional neural networks(RCNN);Fast-RCNN,Faster-RCNN,and Mask-RCNN models for wheat plant patch extraction.Different layers of region extraction models construct a feature vector that is later passed to the CE-SVM model.As a result,the Gaussian kernel function in CE-SVM achieves high F1-score(88.43%)and accuracy(93.60%)for wheat stripe rust disease classification.
文摘In today's Internet routing infrastructure,designers have addressed scal-ing concerns in routing constrained multiobjective optimization problems examining latency and mobility concerns as a secondary constrain.In tactical Mobile Ad-hoc Network(MANET),hubs can function based on the work plan in various social affairs and the internally connected hubs are almost having the related moving standards where the topology between one and the other are tightly coupled in steady support by considering the touchstone of hubs such as a self-sorted out,self-mending and self-administration.Clustering in the routing process is one of the key aspects to increase MANET performance by coordinat-ing the pathways using multiple criteria and analytics.We present a Group Adaptive Hybrid Routing Algorithm(GAHRA)for gathering portability,which pursues table-driven directing methodology in stable accumulations and on-request steering strategy for versatile situations.Based on this aspect,the research demonstrates an adjustable framework for commuting between the table-driven approach and the on-request approach,with the objectives of enhancing the out-put of MANET routing computation in each hub.Simulation analysis and replication results reveal that the proposed method is promising than a single well-known existing routing approach and is well-suited for sensitive MANET applications.
基金supported by the Researchers Supporting Project (No.RSP-2021/395),King Saud University,Riyadh,Saudi Arabia.
文摘A deep fusion model is proposed for facial expression-based human-computer Interaction system.Initially,image preprocessing,i.e.,the extraction of the facial region from the input image is utilized.Thereafter,the extraction of more discriminative and distinctive deep learning features is achieved using extracted facial regions.To prevent overfitting,in-depth features of facial images are extracted and assigned to the proposed convolutional neural network(CNN)models.Various CNN models are then trained.Finally,the performance of each CNN model is fused to obtain the final decision for the seven basic classes of facial expressions,i.e.,fear,disgust,anger,surprise,sadness,happiness,neutral.For experimental purposes,three benchmark datasets,i.e.,SFEW,CK+,and KDEF are utilized.The performance of the proposed systemis compared with some state-of-the-artmethods concerning each dataset.Extensive performance analysis reveals that the proposed system outperforms the competitive methods in terms of various performance metrics.Finally,the proposed deep fusion model is being utilized to control a music player using the recognized emotions of the users.
文摘Over the past era,subgraph mining from a large collection of graph database is a crucial problem.In addition,scalability is another big problem due to insufficient storage.There are several security challenges associated with subgraph mining in today’s on-demand system.To address this downside,our proposed work introduces a Blockchain-based Consensus algorithm for Authenticated query search in the Large-Scale Dynamic Graphs(BCCA-LSDG).The two-fold process is handled in the proposed BCCA-LSDG:graph indexing and authenticated query search(query processing).A blockchain-based reputation system is meant to maintain the trust blockchain and cloud server of the proposed architecture.To resolve the issues and provide safe big data transmission,the proposed technique also combines blockchain with a consensus algorithm architecture.Security of the big data is ensured by dividing the BC network into distinct networks,each with a restricted number of allowed entities,data kept in the cloud gate server,and data analysis in the blockchain.The consensus algorithm is crucial for maintaining the speed,performance and security of the blockchain.Then Dual Similarity based MapReduce helps in mapping and reducing the relevant subgraphs with the use of optimal feature sets.Finally,the graph index refinement process is undertaken to improve the query results.Concerning query error,fuzzy logic is used to refine the index of the graph dynamically.The proposed technique outperforms advanced methodologies in both blockchain and non-blockchain systems,and the combination of blockchain and subgraph provides a secure communication platform,according to the findings.
文摘Deep learning has risen in popularity as a face recognition technology in recent years.Facenet,a deep convolutional neural network(DCNN)developed by Google,recognizes faces with 128 bytes per face.It also claims to have achieved 99.96%on the reputed Labelled Faces in the Wild(LFW)dataset.How-ever,the accuracy and validation rate of Facenet drops down eventually,there is a gradual decrease in the resolution of the images.This research paper aims at developing a new facial recognition system that can produce a higher accuracy rate and validation rate on low-resolution face images.The proposed system Extended Openface performs facial recognition by using three different features i)facial landmark ii)head pose iii)eye gaze.It extracts facial landmark detection using Scattered Gated Expert Network Constrained Local Model(SGEN-CLM).It also detects the head pose and eye gaze using Enhanced Constrained Local Neur-alfield(ECLNF).Extended openface employs a simple Support Vector Machine(SVM)for training and testing the face images.The system’s performance is assessed on low-resolution datasets like LFW,Indian Movie Face Database(IMFDB).The results demonstrated that Extended Openface has a better accuracy rate(12%)and validation rate(22%)than Facenet on low-resolution images.
文摘Software Product Line(SPL)is a group of software-intensive systems that share common and variable resources for developing a particular system.The feature model is a tree-type structure used to manage SPL’s common and variable features with their different relations and problem of Crosstree Constraints(CTC).CTC problems exist in groups of common and variable features among the sub-tree of feature models more diverse in Internet of Things(IoT)devices because different Internet devices and protocols are communicated.Therefore,managing the CTC problem to achieve valid product configuration in IoT-based SPL is more complex,time-consuming,and hard.However,the CTC problem needs to be considered in previously proposed approaches such as Commonality VariabilityModeling of Features(COVAMOF)andGenarch+tool;therefore,invalid products are generated.This research has proposed a novel approach Binary Oriented Feature Selection Crosstree Constraints(BOFS-CTC),to find all possible valid products by selecting the features according to cardinality constraints and cross-tree constraint problems in the featuremodel of SPL.BOFS-CTC removes the invalid products at the early stage of feature selection for the product configuration.Furthermore,this research developed the BOFS-CTC algorithm and applied it to,IoT-based feature models.The findings of this research are that no relationship constraints and CTC violations occur and drive the valid feature product configurations for the application development by removing the invalid product configurations.The accuracy of BOFS-CTC is measured by the integration sampling technique,where different valid product configurations are compared with the product configurations derived by BOFS-CTC and found 100%correct.Using BOFS-CTC eliminates the testing cost and development effort of invalid SPL products.
文摘With the improvement of current online communication schemes,it is now possible to successfully distribute and transport secured digital Content via the communication channel at a faster transmission rate.Traditional steganography and cryptography concepts are used to achieve the goal of concealing secret Content on a media and encrypting it before transmission.Both of the techniques mentioned above aid in the confidentiality of feature content.The proposed approach concerns secret content embodiment in selected pixels on digital image layers such as Red,Green,and Blue.The private Content originated from a medical client and was forwarded to a medical practitioner on the server end through the internet.The K-Means clustering principle uses the contouring approach to frame the pixel clusters on the image layers.The content embodiment procedure is performed on the selected pixel groups of all layers of the image using the Least Significant Bit(LSB)substitution technique to build the secret Content embedded image known as the stego image,which is subsequently transmitted across the internet medium to the server end.The experimental results are computed using the inputs from“Open-Access Medical Image Repositories(aylward.org)”and demonstrate the scheme’s impudence as the Content concealing procedure progresses.
基金funded by the Research Management Centre(RMC),Universiti Malaysia Sabah,through the Journal Article Fund UMS/PPI-DPJ1.
文摘In this paper,the application of transportation systems in realtime traffic conditions is evaluated with data handling representations.The proposed method is designed in such a way as to detect the number of loads that are present in a vehicle where functionality tasks are computed in the system.Compared to the existing approach,the design model in the proposed method is made by dividing the computing areas into several cluster regions,thereby reducing the complex monitoring system where control errors are minimized.Furthermore,a route management technique is combined with Artificial Intelligence(AI)algorithm to transmit the data to appropriate central servers.Therefore,the combined objective case studies are examined as minimization and maximization criteria,thus increasing the efficiency of the proposed method.Finally,four scenarios are chosen to investigate the projected design’s effectiveness.In all simulated metrics,the proposed approach provides better operational outcomes for an average percentage of 97,thereby reducing the amount of traffic in real-time conditions.
文摘This research is focused on a highly effective and untapped feature called gammatone frequency cepstral coefficients(GFCC)for the detection of COVID-19 by using the nature-inspired meta-heuristic algorithm of deer hunting optimization and artificial neural network(DHO-ANN).The noisy crowdsourced cough datasets were collected from the public domain.This research work claimed that the GFCC yielded better results in terms of COVID-19 detection as compared to the widely used Mel-frequency cepstral coefficient in noisy crowdsourced speech corpora.The proposed algorithm's performance for detecting COVID-19 disease is rigorously validated using statistical measures,F1 score,confusion matrix,specificity,and sensitivity parameters.Besides,it is found that the proposed algorithm using GFCC performs well in terms of detecting the COVID-19 disease from the noisy crowdsourced cough dataset,COUGHVID.Moreover,the proposed algorithm and undertaken feature parameters have improved the detection of COVID-19 by 5%compared to the existing methods.
基金supported by Taif University Researchers Supporting Program(Project Number:TURSP-2020/195)Taif University,Saudi Arabia.Princess Nourah bint Abdulrahman University Researchers Supporting Project number(PNURSP2022R203)Princess Nourah bint Abdulrahman University,Riyadh,Saudi Arabia.
文摘The developments of multi-core systems(MCS)have considerably improved the existing technologies in thefield of computer architecture.The MCS comprises several processors that are heterogeneous for resource capacities,working environments,topologies,and so on.The existing multi-core technology unlocks additional research opportunities for energy minimization by the use of effective task scheduling.At the same time,the task scheduling process is yet to be explored in the multi-core systems.This paper presents a new hybrid genetic algorithm(GA)with a krill herd(KH)based energy-efficient scheduling techni-que for multi-core systems(GAKH-SMCS).The goal of the GAKH-SMCS tech-nique is to derive scheduling tasks in such a way to achieve faster completion time and minimum energy dissipation.The GAKH-SMCS model involves a multi-objectivefitness function using four parameters such as makespan,processor utilization,speedup,and energy consumption to schedule tasks proficiently.The performance of the GAKH-SMCS model has been validated against two datasets namely random dataset and benchmark dataset.The experimental outcome ensured the effectiveness of the GAKH-SMCS model interms of makespan,pro-cessor utilization,speedup,and energy consumption.The overall simulation results depicted that the presented GAKH-SMCS model achieves energy effi-ciency by optimal task scheduling process in MCS.
基金supported by the Competitive Research Fund of the University of Aizu,Japan.
文摘Sign language recognition is vital for enhancing communication accessibility among the Deaf and hard-of-hearing communities.In Japan,approximately 360,000 individualswith hearing and speech disabilities rely on Japanese Sign Language(JSL)for communication.However,existing JSL recognition systems have faced significant performance limitations due to inherent complexities.In response to these challenges,we present a novel JSL recognition system that employs a strategic fusion approach,combining joint skeleton-based handcrafted features and pixel-based deep learning features.Our system incorporates two distinct streams:the first stream extracts crucial handcrafted features,emphasizing the capture of hand and body movements within JSL gestures.Simultaneously,a deep learning-based transfer learning stream captures hierarchical representations of JSL gestures in the second stream.Then,we concatenated the critical information of the first stream and the hierarchy of the second stream features to produce the multiple levels of the fusion features,aiming to create a comprehensive representation of the JSL gestures.After reducing the dimensionality of the feature,a feature selection approach and a kernel-based support vector machine(SVM)were used for the classification.To assess the effectiveness of our approach,we conducted extensive experiments on our Lab JSL dataset and a publicly available Arabic sign language(ArSL)dataset.Our results unequivocally demonstrate that our fusion approach significantly enhances JSL recognition accuracy and robustness compared to individual feature sets or traditional recognition methods.
基金the Competitive Research Fund of the University of Aizu,Japan.
文摘Person identification is one of the most vital tasks for network security. People are more concerned about theirsecurity due to traditional passwords becoming weaker or leaking in various attacks. In recent decades, fingerprintsand faces have been widely used for person identification, which has the risk of information leakage as a resultof reproducing fingers or faces by taking a snapshot. Recently, people have focused on creating an identifiablepattern, which will not be reproducible falsely by capturing psychological and behavioral information of a personusing vision and sensor-based techniques. In existing studies, most of the researchers used very complex patternsin this direction, which need special training and attention to remember the patterns and failed to capturethe psychological and behavioral information of a person properly. To overcome these problems, this researchdevised a novel dynamic hand gesture-based person identification system using a Leap Motion sensor. Thisstudy developed two hand gesture-based pattern datasets for performing the experiments, which contained morethan 500 samples, collected from 25 subjects. Various static and dynamic features were extracted from the handgeometry. Randomforest was used to measure feature importance using the Gini Index. Finally, the support vectormachinewas implemented for person identification and evaluate its performance using identification accuracy. Theexperimental results showed that the proposed system produced an identification accuracy of 99.8% for arbitraryhand gesture-based patterns and 99.6% for the same dynamic hand gesture-based patterns. This result indicatedthat the proposed system can be used for person identification in the field of security.
基金This work was supported in part by the National Science and Technology Council of Taiwan,under Contract NSTC 112-2410-H-324-001-MY2.
文摘In recent decades,fog computing has played a vital role in executing parallel computational tasks,specifically,scientific workflow tasks.In cloud data centers,fog computing takes more time to run workflow applications.Therefore,it is essential to develop effective models for Virtual Machine(VM)allocation and task scheduling in fog computing environments.Effective task scheduling,VM migration,and allocation,altogether optimize the use of computational resources across different fog nodes.This process ensures that the tasks are executed with minimal energy consumption,which reduces the chances of resource bottlenecks.In this manuscript,the proposed framework comprises two phases:(i)effective task scheduling using a fractional selectivity approach and(ii)VM allocation by proposing an algorithm by the name of Fitness Sharing Chaotic Particle Swarm Optimization(FSCPSO).The proposed FSCPSO algorithm integrates the concepts of chaos theory and fitness sharing that effectively balance both global exploration and local exploitation.This balance enables the use of a wide range of solutions that leads to minimal total cost and makespan,in comparison to other traditional optimization algorithms.The FSCPSO algorithm’s performance is analyzed using six evaluation measures namely,Load Balancing Level(LBL),Average Resource Utilization(ARU),total cost,makespan,energy consumption,and response time.In relation to the conventional optimization algorithms,the FSCPSO algorithm achieves a higher LBL of 39.12%,ARU of 58.15%,a minimal total cost of 1175,and a makespan of 85.87 ms,particularly when evaluated for 50 tasks.
基金supported by the Researchers Supporting Program at King Saud University.Researchers Supporting Project number(RSPD2024R867),King Saud University,Riyadh,Saudi Arabia.
文摘Brain tumor is a global issue due to which several people suffer,and its early diagnosis can help in the treatment in a more efficient manner.Identifying different types of brain tumors,including gliomas,meningiomas,pituitary tumors,as well as confirming the absence of tumors,poses a significant challenge using MRI images.Current approaches predominantly rely on traditional machine learning and basic deep learning methods for image classification.These methods often rely on manual feature extraction and basic convolutional neural networks(CNNs).The limitations include inadequate accuracy,poor generalization of new data,and limited ability to manage the high variability in MRI images.Utilizing the EfficientNetB3 architecture,this study presents a groundbreaking approach in the computational engineering domain,enhancing MRI-based brain tumor classification.Our approach highlights a major advancement in employing sophisticated machine learning techniques within Computer Science and Engineering,showcasing a highly accurate framework with significant potential for healthcare technologies.The model achieves an outstanding 99%accuracy,exhibiting balanced precision,recall,and F1-scores across all tumor types,as detailed in the classification report.This successful implementation demonstrates the model’s potential as an essential tool for diagnosing and classifying brain tumors,marking a notable improvement over current methods.The integration of such advanced computational techniques in medical diagnostics can significantly enhance accuracy and efficiency,paving the way for wider application.This research highlights the revolutionary impact of deep learning technologies in improving diagnostic processes and patient outcomes in neuro-oncology.
Funding: Supported by CONAHCYT (Consejo Nacional de Humanidades, Ciencias y Tecnologias).
Abstract: The use of Explainable Artificial Intelligence (XAI) models is becoming increasingly important for decision-making in smart healthcare environments, ensuring that decisions are based on trustworthy algorithms and that healthcare workers understand the decisions those algorithms make. These models can enhance interpretability and explainability in decision-making processes that rely on artificial intelligence. Nevertheless, the intricate nature of the healthcare field necessitates sophisticated models for classifying cancer images. This research presents an advanced investigation of XAI models for cancer image classification. It describes the different levels of explainability and interpretability associated with XAI models and the challenges of deploying them in healthcare applications. In addition, this study proposes a novel framework for cancer image classification that incorporates XAI models with deep learning and advanced medical imaging techniques. The proposed model integrates several techniques, including end-to-end explainable evaluation, rule-based explanation, and user-adaptive explanation. The proposed XAI framework reaches 97.72% accuracy, 90.72% precision, 93.72% recall, 96.72% F1-score, 9.55% FDR, 9.66% FOR, and 91.18% DOR. The study also discusses potential applications of the proposed XAI models in the smart healthcare environment, helping ensure trust and accountability in AI-based decisions, which is essential for a safe and reliable smart healthcare environment.
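Reading FDR, FOR, and DOR as the standard false discovery rate, false omission rate, and diagnostic odds ratio (an assumption; the abstract does not expand the acronyms), all seven reported metrics follow from a binary confusion matrix. The counts below are made up purely to exercise the formulas.

```python
def diagnostic_metrics(tp, fp, fn, tn):
    """Compute the metrics the abstract reports from confusion-matrix counts."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)                     # a.k.a. sensitivity
    f1 = 2 * precision * recall / (precision + recall)
    accuracy = (tp + tn) / (tp + fp + fn + tn)
    fdr = fp / (fp + tp)                        # false discovery rate = 1 - precision
    false_or = fn / (fn + tn)                   # false omission rate ("for" is reserved)
    dor = (tp * tn) / (fp * fn)                 # diagnostic odds ratio
    return dict(accuracy=accuracy, precision=precision, recall=recall,
                f1=f1, fdr=fdr, false_or=false_or, dor=dor)

# Hypothetical counts, chosen only to demonstrate the calculation.
print(diagnostic_metrics(tp=450, fp=46, fn=30, tn=474))
```

Note that DOR is an odds ratio rather than a percentage, so the reported 91.18% figure presumably reflects the paper's own scaling convention.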
Funding: Financially supported by the Deanship of Scientific Research at King Khalid University under Research Grant Number (R.G.P.2/549/44).
Abstract: Steganography algorithms are methods of hiding data transfers in media files. Several machine learning architectures have recently been presented to improve stego image identification performance by using spatial information, and these methods have made it feasible to handle a wide range of image-analysis problems. Information-embedding methods use images with little information or a low payload, but contemporary research aims to employ high-payload images for classification. To address the need for both low- and high-payload images, this work provides a machine learning approach to steganography image classification that uses the Curvelet transform to efficiently extract characteristics from both types of images. A Support Vector Machine (SVM), a commonplace classification technique, is employed to determine whether an image is stego or cover. The Wavelet Obtained Weights (WOW), Spatial Universal Wavelet Relative Distortion (S-UNIWARD), Highly Undetectable Steganography (HUGO), and Minimizing the Power of Optimal Detector (MiPOD) steganography techniques are used in a variety of experimental scenarios to evaluate the performance of the proposed method. Using WOW at several payloads, the proposed approach achieves a classification accuracy of 98.60%, exhibiting superiority over state-of-the-art (SOTA) methods.
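A sketch of the transform-domain-features-plus-SVM pipeline shape. A true curvelet transform needs a dedicated package (e.g., curvelops), so this stand-in deliberately substitutes a wavelet decomposition from PyWavelets; the subband statistics, SVM settings, and toy data are all assumptions, shown only to make the pipeline concrete.

```python
import numpy as np
import pywt
from scipy import stats
from sklearn.svm import SVC

def subband_features(img):
    """First-order statistics of each transform subband as a feature vector."""
    coeffs = pywt.wavedec2(img, "db4", level=3)
    # coeffs[0] is the approximation; the rest are (H, V, D) detail tuples.
    bands = [coeffs[0]] + [b for level in coeffs[1:] for b in level]
    feats = []
    for band in bands:
        flat = band.ravel()
        feats += [flat.mean(), flat.var(),
                  stats.skew(flat), stats.kurtosis(flat)]
    return np.array(feats)

# Toy data: random "cover" images vs. slightly perturbed "stego" images.
rng = np.random.default_rng(0)
covers = [rng.normal(size=(64, 64)) for _ in range(40)]
stegos = [c + rng.normal(scale=0.1, size=c.shape) for c in covers]
X = np.stack([subband_features(im) for im in covers + stegos])
y = np.array([0] * 40 + [1] * 40)  # 0 = cover, 1 = stego

clf = SVC(kernel="rbf").fit(X, y)
print("training accuracy:", clf.score(X, y))
```

Swapping the wavelet decomposition for an actual curvelet transform would follow the same pattern: decompose, summarize each subband statistically, concatenate into a feature vector, and classify with the SVM.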
Abstract: The visions of Industry 4.0 and 5.0 have reinforced the industrial environment and made artificial intelligence a major facilitator. Diagnosing machine faults has become a solid foundation for automatically recognizing machine failure, so timely maintenance can ensure safe operations. Transfer learning is a promising solution that can enhance a machine fault diagnosis model by borrowing pre-trained knowledge from a source model and applying it to a target model, a process that typically involves two datasets. In response to the availability of multiple datasets, this paper proposes selective and adaptive incremental transfer learning (SA-ITL), which fuses three algorithms: the hybrid selective algorithm, the transferability enhancement algorithm, and the incremental transfer learning algorithm. The selective algorithm chooses and orders appropriate datasets for transfer learning and selects useful knowledge to avoid negative transfer. The algorithm also adaptively adjusts the portion of training data to balance learning rate and training time. The proposed algorithm is evaluated and analyzed using ten benchmark datasets. Compared with algorithms from existing works, SA-ITL improves accuracy on all datasets. Ablation studies present the accuracy enhancements of SA-ITL's components: the hybrid selective algorithm (1.22%-3.82%), the transferability enhancement algorithm (1.91%-4.15%), and the incremental transfer learning algorithm (0.605%-2.68%). They also show the benefits of enhancing the target model with heterogeneous image datasets that widen the range of domain selection between source and target domains.
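A toy sketch of the selective, incremental idea: source datasets are ordered by a crude transferability proxy, folded into the model one at a time, and rolled back if target accuracy drops (rejecting negative transfer). The scoring rule, model choice, and synthetic data are assumptions standing in for SA-ITL's actual algorithms.

```python
import copy
import numpy as np
from sklearn.linear_model import SGDClassifier

rng = np.random.default_rng(1)

def make_set(shift):
    # Synthetic dataset whose domain shifts away from the target as `shift` grows.
    X = rng.normal(loc=shift, size=(200, 10))
    y = (X.sum(axis=1) > shift * 10).astype(int)
    return X, y

target_X, target_y = make_set(0.0)
sources = [make_set(s) for s in (0.1, 2.0, 0.3)]  # mixed relevance

model = SGDClassifier(loss="log_loss", random_state=0)
model.partial_fit(target_X, target_y, classes=[0, 1])

# Proxy transferability score: mean feature distance to the target domain
# (closest sources first, mimicking the selective ordering step).
order = sorted(sources, key=lambda s: abs(s[0].mean() - target_X.mean()))

for Xs, ys in order:
    snapshot = copy.deepcopy(model)
    before = model.score(target_X, target_y)
    model.partial_fit(Xs, ys)                 # incremental update
    if model.score(target_X, target_y) < before:
        model = snapshot                      # reject negative transfer

print("final target accuracy:", model.score(target_X, target_y))
```

The snapshot-and-rollback step is the toy analogue of selecting only useful knowledge; a real pipeline would use a principled transferability measure rather than a mean-distance heuristic.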
Funding: Funded by the Researchers Supporting Program at King Saud University (RSPD2024R809).
Abstract: Leukemia is a form of cancer of the blood or bone marrow marked by an expansion of white blood cells (WBCs). It primarily affects children and rarely affects adults. Treatment depends on the type of leukemia and the extent to which the cancer has spread throughout the body. Identifying leukemia at an initial stage is vital to providing timely patient care. Medical image-analysis approaches offer safer, quicker, and less costly solutions while avoiding the difficulties of invasive procedures. Computer vision (CV)-based and image-processing techniques can be simple to generalize and can eradicate human error. Many researchers have implemented computer-aided diagnostic methods and machine learning (ML) for laboratory image analysis, hoping to overcome the limitations of late leukemia detection and to determine its subgroups. This study establishes a Marine Predators Algorithm with Deep Learning Leukemia Cancer Classification (MPADL-LCC) algorithm on medical images. The proposed MPADL-LCC system uses a bilateral filtering (BF) technique to pre-process medical images and Faster SqueezeNet with the Marine Predators Algorithm (MPA) as a hyperparameter optimizer for feature extraction. Lastly, a denoising autoencoder (DAE) methodology is executed to accurately detect and classify leukemia cancer. The hyperparameter tuning process using MPA helps enhance leukemia cancer classification performance. Simulation results are compared with other recent approaches across various measures, and the MPADL-LCC algorithm exhibits the best results.
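A minimal sketch of the bilateral-filtering pre-processing stage, which smooths image noise while preserving cell edges. The OpenCV filter parameters are assumptions, and the rest of the MPADL-LCC pipeline (Faster SqueezeNet features, MPA tuning, DAE classification) is not reproduced here.

```python
import cv2
import numpy as np

# Stand-in for a blood-smear image; a real pipeline would load one from disk,
# e.g., img = cv2.imread("smear.png").
img = (np.random.rand(128, 128, 3) * 255).astype(np.uint8)

# d: pixel-neighbourhood diameter; sigmaColor / sigmaSpace control how much
# intensity difference and spatial distance, respectively, are smoothed over.
denoised = cv2.bilateralFilter(img, d=9, sigmaColor=75, sigmaSpace=75)
print(denoised.shape)
```

Edge preservation matters here because WBC boundaries carry the morphological cues the downstream feature extractor and classifier depend on.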