This research presents a novel nature-inspired metaheuristic algorithm called Frilled Lizard Optimization(FLO),which emulates the unique hunting behavior of frilled lizards in their natural habitat.FLO draws its inspi...This research presents a novel nature-inspired metaheuristic algorithm called Frilled Lizard Optimization(FLO),which emulates the unique hunting behavior of frilled lizards in their natural habitat.FLO draws its inspiration from the sit-and-wait hunting strategy of these lizards.The algorithm’s core principles are meticulously detailed and mathematically structured into two distinct phases:(i)an exploration phase,which mimics the lizard’s sudden attack on its prey,and(ii)an exploitation phase,which simulates the lizard’s retreat to the treetops after feeding.To assess FLO’s efficacy in addressing optimization problems,its performance is rigorously tested on fifty-two standard benchmark functions.These functions include unimodal,high-dimensional multimodal,and fixed-dimensional multimodal functions,as well as the challenging CEC 2017 test suite.FLO’s performance is benchmarked against twelve established metaheuristic algorithms,providing a comprehensive comparative analysis.The simulation results demonstrate that FLO excels in both exploration and exploitation,effectively balancing these two critical aspects throughout the search process.This balanced approach enables FLO to outperform several competing algorithms in numerous test cases.Additionally,FLO is applied to twenty-two constrained optimization problems from the CEC 2011 test suite and four complex engineering design problems,further validating its robustness and versatility in solving real-world optimization challenges.Overall,the study highlights FLO’s superior performance and its potential as a powerful tool for tackling a wide range of optimization problems.展开更多
Artificial rabbits optimization(ARO)is a recently proposed biology-based optimization algorithm inspired by the detour foraging and random hiding behavior of rabbits in nature.However,for solving optimization problems...Artificial rabbits optimization(ARO)is a recently proposed biology-based optimization algorithm inspired by the detour foraging and random hiding behavior of rabbits in nature.However,for solving optimization problems,the ARO algorithm shows slow convergence speed and can fall into local minima.To overcome these drawbacks,this paper proposes chaotic opposition-based learning ARO(COARO),an improved version of the ARO algorithm that incorporates opposition-based learning(OBL)and chaotic local search(CLS)techniques.By adding OBL to ARO,the convergence speed of the algorithm increases and it explores the search space better.Chaotic maps in CLS provide rapid convergence by scanning the search space efficiently,since their ergodicity and non-repetitive properties.The proposed COARO algorithm has been tested using thirty-three distinct benchmark functions.The outcomes have been compared with the most recent optimization algorithms.Additionally,the COARO algorithm’s problem-solving capabilities have been evaluated using six different engineering design problems and compared with various other algorithms.This study also introduces a binary variant of the continuous COARO algorithm,named BCOARO.The performance of BCOARO was evaluated on the breast cancer dataset.The effectiveness of BCOARO has been compared with different feature selection algorithms.The proposed BCOARO outperforms alternative algorithms,according to the findings obtained for real applications in terms of accuracy performance,and fitness value.Extensive experiments show that the COARO and BCOARO algorithms achieve promising results compared to other metaheuristic algorithms.展开更多
In this work,we aim to introduce some modifications to the Anam-Net deep neural network(DNN)model for segmenting optic cup(OC)and optic disc(OD)in retinal fundus images to estimate the cup-to-disc ratio(CDR).The CDR i...In this work,we aim to introduce some modifications to the Anam-Net deep neural network(DNN)model for segmenting optic cup(OC)and optic disc(OD)in retinal fundus images to estimate the cup-to-disc ratio(CDR).The CDR is a reliable measure for the early diagnosis of Glaucoma.In this study,we developed a lightweight DNN model for OC and OD segmentation in retinal fundus images.Our DNN model is based on modifications to Anam-Net,incorporating an anamorphic depth embedding block.To reduce computational complexity,we employ a fixed filter size for all convolution layers in the encoder and decoder stages as the network deepens.This modification significantly reduces the number of trainable parameters,making the model lightweight and suitable for resource-constrained applications.We evaluate the performance of the developed model using two publicly available retinal image databases,namely RIM-ONE and Drishti-GS.The results demonstrate promising OC segmentation performance across most standard evaluation metrics while achieving analogous results for OD segmentation.We used two retinal fundus image databases named RIM-ONE and Drishti-GS that contained 159 images and 101 retinal images,respectively.For OD segmentation using the RIM-ONE we obtain an f1-score(F1),Jaccard coefficient(JC),and overlapping error(OE)of 0.950,0.9219,and 0.0781,respectively.Similarly,for OC segmentation using the same databases,we achieve scores of 0.8481(F1),0.7428(JC),and 0.2572(OE).Based on these experimental results and the significantly lower number of trainable parameters,we conclude that the developed model is highly suitable for the early diagnosis of glaucoma by accurately estimating the CDR.展开更多
The healthcare sector holds valuable and sensitive data.The amount of this data and the need to handle,exchange,and protect it,has been increasing at a fast pace.Due to their nature,software-defined networks(SDNs)are ...The healthcare sector holds valuable and sensitive data.The amount of this data and the need to handle,exchange,and protect it,has been increasing at a fast pace.Due to their nature,software-defined networks(SDNs)are widely used in healthcare systems,as they ensure effective resource utilization,safety,great network management,and monitoring.In this sector,due to the value of thedata,SDNs faceamajor challengeposed byawide range of attacks,such as distributed denial of service(DDoS)and probe attacks.These attacks reduce network performance,causing the degradation of different key performance indicators(KPIs)or,in the worst cases,a network failure which can threaten human lives.This can be significant,especially with the current expansion of portable healthcare that supports mobile and wireless devices for what is called mobile health,or m-health.In this study,we examine the effectiveness of using SDNs for defense against DDoS,as well as their effects on different network KPIs under various scenarios.We propose a threshold-based DDoS classifier(TBDC)technique to classify DDoS attacks in healthcare SDNs,aiming to block traffic considered a hazard in the form of a DDoS attack.We then evaluate the accuracy and performance of the proposed TBDC approach.Our technique shows outstanding performance,increasing the mean throughput by 190.3%,reducing the mean delay by 95%,and reducing packet loss by 99.7%relative to normal,with DDoS attack traffic.展开更多
Maintaining a steady power supply requires accurate forecasting of solar irradiance,since clean energy resources do not provide steady power.The existing forecasting studies have examined the limited effects of weathe...Maintaining a steady power supply requires accurate forecasting of solar irradiance,since clean energy resources do not provide steady power.The existing forecasting studies have examined the limited effects of weather conditions on solar radiation such as temperature and precipitation utilizing convolutional neural network(CNN),but no comprehensive study has been conducted on concentrations of air pollutants along with weather conditions.This paper proposes a hybrid approach based on deep learning,expanding the feature set by adding new air pollution concentrations,and ranking these features to select and reduce their size to improve efficiency.In order to improve the accuracy of feature selection,a maximum-dependency and minimum-redundancy(mRMR)criterion is applied to the constructed feature space to identify and rank the features.The combination of air pollution data with weather conditions data has enabled the prediction of solar irradiance with a higher accuracy.An evaluation of the proposed approach is conducted in Istanbul over 12 months for 43791 discrete times,with the main purpose of analyzing air data,including particular matter(PM10 and PM25),carbon monoxide(CO),nitric oxide(NOX),nitrogen dioxide(NO_(2)),ozone(O₃),sulfur dioxide(SO_(2))using a CNN,a long short-term memory network(LSTM),and MRMR feature extraction.Compared with the benchmark models with root mean square error(RMSE)results of 76.2,60.3,41.3,32.4,there is a significant improvement with the RMSE result of 5.536.This hybrid model presented here offers high prediction accuracy,a wider feature set,and a novel approach based on air concentrations combined with weather conditions for solar irradiance prediction.展开更多
From a medical perspective,the 12 leads of the heart in an electrocardiogram(ECG)signal have functional dependencies with each other.Therefore,all these leads report different aspects of an arrhythmia.Their difference...From a medical perspective,the 12 leads of the heart in an electrocardiogram(ECG)signal have functional dependencies with each other.Therefore,all these leads report different aspects of an arrhythmia.Their differences lie in the level of highlighting and displaying information about that arrhythmia.For example,although all leads show traces of atrial excitation,this function is more evident in lead II than in any other lead.In this article,a new model was proposed using ECG functional and structural dependencies between heart leads.In the prescreening stage,the ECG signals are segmented from the QRS point so that further analyzes can be performed on these segments in a more detailed manner.The mutual information indices were used to assess the relationship between leads.In order to calculate mutual information,the correlation between the 12 ECG leads has been calculated.The output of this step is a matrix containing all mutual information.Furthermore,to calculate the structural information of ECG signals,a capsule neural network was implemented to aid physicians in the automatic classification of cardiac arrhythmias.The architecture of this capsule neural network has been modified to perform the classification task.In the experimental results section,the proposed model was used to classify arrhythmias in ECG signals from the Chapman dataset.Numerical evaluations showed that this model has a precision of 97.02%,recall of 96.13%,F1-score of 96.57%and accuracy of 97.38%,indicating acceptable performance compared to other state-of-the-art methods.The proposed method shows an average accuracy of 2%superiority over similar works.展开更多
In the era of advanced machine learning techniques,the development of accurate predictive models for complex medical conditions,such as thyroid cancer,has shown remarkable progress.Accurate predictivemodels for thyroi...In the era of advanced machine learning techniques,the development of accurate predictive models for complex medical conditions,such as thyroid cancer,has shown remarkable progress.Accurate predictivemodels for thyroid cancer enhance early detection,improve resource allocation,and reduce overtreatment.However,the widespread adoption of these models in clinical practice demands predictive performance along with interpretability and transparency.This paper proposes a novel association-rule based feature-integratedmachine learning model which shows better classification and prediction accuracy than present state-of-the-artmodels.Our study also focuses on the application of SHapley Additive exPlanations(SHAP)values as a powerful tool for explaining thyroid cancer prediction models.In the proposed method,the association-rule based feature integration framework identifies frequently occurring attribute combinations in the dataset.The original dataset is used in trainingmachine learning models,and further used in generating SHAP values fromthesemodels.In the next phase,the dataset is integrated with the dominant feature sets identified through association-rule based analysis.This new integrated dataset is used in re-training the machine learning models.The new SHAP values generated from these models help in validating the contributions of feature sets in predicting malignancy.The conventional machine learning models lack interpretability,which can hinder their integration into clinical decision-making systems.In this study,the SHAP values are introduced along with association-rule based feature integration as a comprehensive framework for understanding the contributions of feature sets inmodelling the predictions.The study discusses the importance of reliable predictive models for early diagnosis of thyroid cancer,and a validation framework of explainability.The proposed model shows an accuracy of 93.48%.Performance metrics such as precision,recall,F1-score,and the area under the receiver operating characteristic(AUROC)are also higher than the baseline models.The results of the proposed model help us identify the dominant feature sets that impact thyroid cancer classification and prediction.The features{calcification}and{shape}consistently emerged as the top-ranked features associated with thyroid malignancy,in both association-rule based interestingnessmetric values and SHAPmethods.The paper highlights the potential of the rule-based integrated models with SHAP in bridging the gap between the machine learning predictions and the interpretability of this prediction which is required for real-world medical applications.展开更多
In the context of high compression rates applied to Joint Photographic Experts Group(JPEG)images through lossy compression techniques,image-blocking artifacts may manifest.This necessitates the restoration of the imag...In the context of high compression rates applied to Joint Photographic Experts Group(JPEG)images through lossy compression techniques,image-blocking artifacts may manifest.This necessitates the restoration of the image to its original quality.The challenge lies in regenerating significantly compressed images into a state in which these become identifiable.Therefore,this study focuses on the restoration of JPEG images subjected to substantial degradation caused by maximum lossy compression using Generative Adversarial Networks(GAN).The generator in this network is based on theU-Net architecture.It features a newhourglass structure that preserves the characteristics of the deep layers.In addition,the network incorporates two loss functions to generate natural and high-quality images:Low Frequency(LF)loss and High Frequency(HF)loss.HF loss uses a pretrained VGG-16 network and is configured using a specific layer that best represents features.This can enhance the performance in the high-frequency region.In contrast,LF loss is used to handle the low-frequency region.The two loss functions facilitate the generation of images by the generator,which can mislead the discriminator while accurately generating high-and low-frequency regions.Consequently,by removing the blocking effects frommaximum lossy compressed images,images inwhich identities could be recognized are generated.This study represents a significant improvement over previous research in terms of the image resolution performance.展开更多
Wheat is a critical crop,extensively consumed worldwide,and its production enhancement is essential to meet escalating demand.The presence of diseases like stem rust,leaf rust,yellow rust,and tan spot significantly di...Wheat is a critical crop,extensively consumed worldwide,and its production enhancement is essential to meet escalating demand.The presence of diseases like stem rust,leaf rust,yellow rust,and tan spot significantly diminishes wheat yield,making the early and precise identification of these diseases vital for effective disease management.With advancements in deep learning algorithms,researchers have proposed many methods for the automated detection of disease pathogens;however,accurately detectingmultiple disease pathogens simultaneously remains a challenge.This challenge arises due to the scarcity of RGB images for multiple diseases,class imbalance in existing public datasets,and the difficulty in extracting features that discriminate between multiple classes of disease pathogens.In this research,a novel method is proposed based on Transfer Generative Adversarial Networks for augmenting existing data,thereby overcoming the problems of class imbalance and data scarcity.This study proposes a customized architecture of Vision Transformers(ViT),where the feature vector is obtained by concatenating features extracted from the custom ViT and Graph Neural Networks.This paper also proposes a Model AgnosticMeta Learning(MAML)based ensemble classifier for accurate classification.The proposedmodel,validated on public datasets for wheat disease pathogen classification,achieved a test accuracy of 99.20%and an F1-score of 97.95%.Compared with existing state-of-the-art methods,this proposed model outperforms in terms of accuracy,F1-score,and the number of disease pathogens detection.In future,more diseases can be included for detection along with some other modalities like pests and weed.展开更多
The research aims to improve the performance of image recognition methods based on a description in the form of a set of keypoint descriptors.The main focus is on increasing the speed of establishing the relevance of ...The research aims to improve the performance of image recognition methods based on a description in the form of a set of keypoint descriptors.The main focus is on increasing the speed of establishing the relevance of object and etalon descriptions while maintaining the required level of classification efficiency.The class to be recognized is represented by an infinite set of images obtained from the etalon by applying arbitrary geometric transformations.It is proposed to reduce the descriptions for the etalon database by selecting the most significant descriptor components according to the information content criterion.The informativeness of an etalon descriptor is estimated by the difference of the closest distances to its own and other descriptions.The developed method determines the relevance of the full description of the recognized object with the reduced description of the etalons.Several practical models of the classifier with different options for establishing the correspondence between object descriptors and etalons are considered.The results of the experimental modeling of the proposed methods for a database including images of museum jewelry are presented.The test sample is formed as a set of images from the etalon database and out of the database with the application of geometric transformations of scale and rotation in the field of view.The practical problems of determining the threshold for the number of votes,based on which a classification decision is made,have been researched.Modeling has revealed the practical possibility of tenfold reducing descriptions with full preservation of classification accuracy.Reducing the descriptions by twenty times in the experiment leads to slightly decreased accuracy.The speed of the analysis increases in proportion to the degree of reduction.The use of reduction by the informativeness criterion confirmed the possibility of obtaining the most significant subset of features for classification,which guarantees a decent level of accuracy.展开更多
Traffic in today’s cities is a serious problem that increases travel times,negatively affects the environment,and drains financial resources.This study presents an Artificial Intelligence(AI)augmentedMobile Ad Hoc Ne...Traffic in today’s cities is a serious problem that increases travel times,negatively affects the environment,and drains financial resources.This study presents an Artificial Intelligence(AI)augmentedMobile Ad Hoc Networks(MANETs)based real-time prediction paradigm for urban traffic challenges.MANETs are wireless networks that are based on mobile devices and may self-organize.The distributed nature of MANETs and the power of AI approaches are leveraged in this framework to provide reliable and timely traffic congestion forecasts.This study suggests a unique Chaotic Spatial Fuzzy Polynomial Neural Network(CSFPNN)technique to assess real-time data acquired from various sources within theMANETs.The framework uses the proposed approach to learn from the data and create predictionmodels to detect possible traffic problems and their severity in real time.Real-time traffic prediction allows for proactive actions like resource allocation,dynamic route advice,and traffic signal optimization to reduce congestion.The framework supports effective decision-making,decreases travel time,lowers fuel use,and enhances overall urban mobility by giving timely information to pedestrians,drivers,and urban planners.Extensive simulations and real-world datasets are used to test the proposed framework’s prediction accuracy,responsiveness,and scalability.Experimental results show that the suggested framework successfully anticipates urban traffic issues in real-time,enables proactive traffic management,and aids in creating smarter,more sustainable cities.展开更多
Ocean temperature is an important physical variable in marine ecosystems,and ocean temperature prediction is an important research objective in ocean-related fields.Currently,one of the commonly used methods for ocean...Ocean temperature is an important physical variable in marine ecosystems,and ocean temperature prediction is an important research objective in ocean-related fields.Currently,one of the commonly used methods for ocean temperature prediction is based on data-driven,but research on this method is mostly limited to the sea surface,with few studies on the prediction of internal ocean temperature.Existing graph neural network-based methods usually use predefined graphs or learned static graphs,which cannot capture the dynamic associations among data.In this study,we propose a novel dynamic spatiotemporal graph neural network(DSTGN)to predict threedimensional ocean temperature(3D-OT),which combines static graph learning and dynamic graph learning to automatically mine two unknown dependencies between sequences based on the original 3D-OT data without prior knowledge.Temporal and spatial dependencies in the time series were then captured using temporal and graph convolutions.We also integrated dynamic graph learning,static graph learning,graph convolution,and temporal convolution into an end-to-end framework for 3D-OT prediction using time-series grid data.In this study,we conducted prediction experiments using high-resolution 3D-OT from the Copernicus global ocean physical reanalysis,with data covering the vertical variation of temperature from the sea surface to 1000 m below the sea surface.We compared five mainstream models that are commonly used for ocean temperature prediction,and the results showed that the method achieved the best prediction results at all prediction scales.展开更多
This research proposes a highly effective soft computing paradigm for estimating the compressive strength(CS)of metakaolin-contained cemented materials.The proposed approach is a combination of an enhanced grey wolf o...This research proposes a highly effective soft computing paradigm for estimating the compressive strength(CS)of metakaolin-contained cemented materials.The proposed approach is a combination of an enhanced grey wolf optimizer(EGWO)and an extreme learning machine(ELM).EGWO is an augmented form of the classic grey wolf optimizer(GWO).Compared to standard GWO,EGWO has a better hunting mechanism and produces an optimal performance.The EGWO was used to optimize the ELM structure and a hybrid model,ELM-EGWO,was built.To train and validate the proposed ELM-EGWO model,a sum of 361 experimental results featuring five influencing factors was collected.Based on sensitivity analysis,three distinct cases of influencing parameters were considered to investigate the effect of influencing factors on predictive precision.Experimental consequences show that the constructed ELM-EGWO achieved the most accurate precision in both training(RMSE=0.0959)and testing(RMSE=0.0912)phases.The outcomes of the ELM-EGWO are significantly superior to those of deep neural networks(DNN),k-nearest neighbors(KNN),long short-term memory(LSTM),and other hybrid ELMs constructed with GWO,particle swarm optimization(PSO),harris hawks optimization(HHO),salp swarm algorithm(SSA),marine predators algorithm(MPA),and colony predation algorithm(CPA).The overall results demonstrate that the newly suggested ELM-EGWO has the potential to estimate the CS of metakaolin-contained cemented materials with a high degree of precision and robustness.展开更多
Class Title:Radiological imaging method a comprehensive overview purpose.This GPT paper provides an overview of the different forms of radiological imaging and the potential diagnosis capabilities they offer as well a...Class Title:Radiological imaging method a comprehensive overview purpose.This GPT paper provides an overview of the different forms of radiological imaging and the potential diagnosis capabilities they offer as well as recent advances in the field.Materials and Methods:This paper provides an overview of conventional radiography digital radiography panoramic radiography computed tomography and cone-beam computed tomography.Additionally recent advances in radiological imaging are discussed such as imaging diagnosis and modern computer-aided diagnosis systems.Results:This paper details the differences between the imaging techniques the benefits of each and the current advances in the field to aid in the diagnosis of medical conditions.Conclusion:Radiological imaging is an extremely important tool in modern medicine to assist in medical diagnosis.This work provides an overview of the types of imaging techniques used the recent advances made and their potential applications.展开更多
The high performance of IoT technology in transportation networks has led to the increasing adoption of Internet of Vehicles(IoV)technology.The functional advantages of IoV include online communication services,accide...The high performance of IoT technology in transportation networks has led to the increasing adoption of Internet of Vehicles(IoV)technology.The functional advantages of IoV include online communication services,accident prevention,cost reduction,and enhanced traffic regularity.Despite these benefits,IoV technology is susceptible to cyber-attacks,which can exploit vulnerabilities in the vehicle network,leading to perturbations,disturbances,non-recognition of traffic signs,accidents,and vehicle immobilization.This paper reviews the state-of-the-art achievements and developments in applying Deep Transfer Learning(DTL)models for Intrusion Detection Systems in the Internet of Vehicles(IDS-IoV)based on anomaly detection.IDS-IoV leverages anomaly detection through machine learning and DTL techniques to mitigate the risks posed by cyber-attacks.These systems can autonomously create specific models based on network data to differentiate between regular traffic and cyber-attacks.Among these techniques,transfer learning models are particularly promising due to their efficacy with tagged data,reduced training time,lower memory usage,and decreased computational complexity.We evaluate DTL models against criteria including the ability to transfer knowledge,detection rate,accurate analysis of complex data,and stability.This review highlights the significant progress made in the field,showcasing how DTL models enhance the performance and reliability of IDS-IoV systems.By examining recent advancements,we provide insights into how DTL can effectively address cyber-attack challenges in IoV environments,ensuring safer and more efficient transportation networks.展开更多
Path-based clustering algorithms typically generate clusters by optimizing a benchmark function.Most optimiza-tion methods in clustering algorithms often offer solutions close to the general optimal value.This study a...Path-based clustering algorithms typically generate clusters by optimizing a benchmark function.Most optimiza-tion methods in clustering algorithms often offer solutions close to the general optimal value.This study achieves the global optimum value for the criterion function in a shorter time using the minimax distance,Maximum Spanning Tree“MST”,and meta-heuristic algorithms,including Genetic Algorithm“GA”and Particle Swarm Optimization“PSO”.The Fast Path-based Clustering“FPC”algorithm proposed in this paper can find cluster centers correctly in most datasets and quickly perform clustering operations.The FPC does this operation using MST,the minimax distance,and a new hybrid meta-heuristic algorithm in a few rounds of algorithm iterations.This algorithm can achieve the global optimal value,and the main clustering process of the algorithm has a computational complexity of O�k2×n�.However,due to the complexity of the minimum distance algorithm,the total computational complexity is O�n2�.Experimental results of FPC on synthetic datasets with arbitrary shapes demonstrate that the algorithm is resistant to noise and outliers and can correctly identify clusters of varying sizes and numbers.In addition,the FPC requires the number of clusters as the only parameter to perform the clustering process.A comparative analysis of FPC and other clustering algorithms in this domain indicates that FPC exhibits superior speed,stability,and performance.展开更多
Software project outcomes heavily depend on natural language requirements,often causing diverse interpretations and issues like ambiguities and incomplete or faulty requirements.Researchers are exploring machine learn...Software project outcomes heavily depend on natural language requirements,often causing diverse interpretations and issues like ambiguities and incomplete or faulty requirements.Researchers are exploring machine learning to predict software bugs,but a more precise and general approach is needed.Accurate bug prediction is crucial for software evolution and user training,prompting an investigation into deep and ensemble learning methods.However,these studies are not generalized and efficient when extended to other datasets.Therefore,this paper proposed a hybrid approach combining multiple techniques to explore their effectiveness on bug identification problems.The methods involved feature selection,which is used to reduce the dimensionality and redundancy of features and select only the relevant ones;transfer learning is used to train and test the model on different datasets to analyze how much of the learning is passed to other datasets,and ensemble method is utilized to explore the increase in performance upon combining multiple classifiers in a model.Four National Aeronautics and Space Administration(NASA)and four Promise datasets are used in the study,showing an increase in the model’s performance by providing better Area Under the Receiver Operating Characteristic Curve(AUC-ROC)values when different classifiers were combined.It reveals that using an amalgam of techniques such as those used in this study,feature selection,transfer learning,and ensemble methods prove helpful in optimizing the software bug prediction models and providing high-performing,useful end mode.展开更多
Due to the fact that a memristor with memory properties is an ideal electronic component for implementation of the artificial neural synaptic function,a brand-new tristable locally active memristor model is first prop...Due to the fact that a memristor with memory properties is an ideal electronic component for implementation of the artificial neural synaptic function,a brand-new tristable locally active memristor model is first proposed in this paper.Here,a novel four-dimensional fractional-order memristive cellular neural network(FO-MCNN)model with hidden attractors is constructed to enhance the engineering feasibility of the original CNN model and its performance.Then,its hardware circuit implementation and complicated dynamic properties are investigated on multi-simulation platforms.Subsequently,it is used toward secure communication application scenarios.Taking it as the pseudo-random number generator(PRNG),a new privacy image security scheme is designed based on the adaptive sampling rate compressive sensing(ASR-CS)model.Eventually,the simulation analysis and comparative experiments manifest that the proposed data encryption scheme possesses strong immunity against various security attack models and satisfactory compression performance.展开更多
In this paper,we present a comprehensive system model for Industrial Internet of Things(IIoT)networks empowered by Non-Orthogonal Multiple Access(NOMA)and Mobile Edge Computing(MEC)technologies.The network comprises e...In this paper,we present a comprehensive system model for Industrial Internet of Things(IIoT)networks empowered by Non-Orthogonal Multiple Access(NOMA)and Mobile Edge Computing(MEC)technologies.The network comprises essential components such as base stations,edge servers,and numerous IIoT devices characterized by limited energy and computing capacities.The central challenge addressed is the optimization of resource allocation and task distribution while adhering to stringent queueing delay constraints and minimizing overall energy consumption.The system operates in discrete time slots and employs a quasi-static approach,with a specific focus on the complexities of task partitioning and the management of constrained resources within the IIoT context.This study makes valuable contributions to the field by enhancing the understanding of resourceefficient management and task allocation,particularly relevant in real-time industrial applications.Experimental results indicate that our proposed algorithmsignificantly outperforms existing approaches,reducing queue backlog by 45.32% and 17.25% compared to SMRA and ACRA while achieving a 27.31% and 74.12% improvement in Qn O.Moreover,the algorithmeffectively balances complexity and network performance,as demonstratedwhen reducing the number of devices in each group(Ng)from 200 to 50,resulting in a 97.21% reduction in complexity with only a 7.35% increase in energy consumption.This research offers a practical solution for optimizing IIoT networks in real-time industrial settings.展开更多
As autonomous vehicles and the other supporting infrastructures(e.g.,smart cities and intelligent transportation systems)become more commonplace,the Internet of Vehicles(IoV)is getting increasingly prevalent.There hav...As autonomous vehicles and the other supporting infrastructures(e.g.,smart cities and intelligent transportation systems)become more commonplace,the Internet of Vehicles(IoV)is getting increasingly prevalent.There have been attempts to utilize Digital Twins(DTs)to facilitate the design,evaluation,and deployment of IoV-based systems,for example by supporting high-fidelity modeling,real-time monitoring,and advanced predictive capabilities.However,the literature review undertaken in this paper suggests that integrating DTs into IoV-based system design and deployment remains an understudied topic.In addition,this paper explains how DTs can benefit IoV system designers and implementers,as well as describes several challenges and opportunities for future researchers.展开更多
文摘This research presents a novel nature-inspired metaheuristic algorithm called Frilled Lizard Optimization(FLO),which emulates the unique hunting behavior of frilled lizards in their natural habitat.FLO draws its inspiration from the sit-and-wait hunting strategy of these lizards.The algorithm’s core principles are meticulously detailed and mathematically structured into two distinct phases:(i)an exploration phase,which mimics the lizard’s sudden attack on its prey,and(ii)an exploitation phase,which simulates the lizard’s retreat to the treetops after feeding.To assess FLO’s efficacy in addressing optimization problems,its performance is rigorously tested on fifty-two standard benchmark functions.These functions include unimodal,high-dimensional multimodal,and fixed-dimensional multimodal functions,as well as the challenging CEC 2017 test suite.FLO’s performance is benchmarked against twelve established metaheuristic algorithms,providing a comprehensive comparative analysis.The simulation results demonstrate that FLO excels in both exploration and exploitation,effectively balancing these two critical aspects throughout the search process.This balanced approach enables FLO to outperform several competing algorithms in numerous test cases.Additionally,FLO is applied to twenty-two constrained optimization problems from the CEC 2011 test suite and four complex engineering design problems,further validating its robustness and versatility in solving real-world optimization challenges.Overall,the study highlights FLO’s superior performance and its potential as a powerful tool for tackling a wide range of optimization problems.
基金funded by Firat University Scientific Research Projects Management Unit for the scientific research project of Feyza AltunbeyÖzbay,numbered MF.23.49.
文摘Artificial rabbits optimization(ARO)is a recently proposed biology-based optimization algorithm inspired by the detour foraging and random hiding behavior of rabbits in nature.However,for solving optimization problems,the ARO algorithm shows slow convergence speed and can fall into local minima.To overcome these drawbacks,this paper proposes chaotic opposition-based learning ARO(COARO),an improved version of the ARO algorithm that incorporates opposition-based learning(OBL)and chaotic local search(CLS)techniques.By adding OBL to ARO,the convergence speed of the algorithm increases and it explores the search space better.Chaotic maps in CLS provide rapid convergence by scanning the search space efficiently,since their ergodicity and non-repetitive properties.The proposed COARO algorithm has been tested using thirty-three distinct benchmark functions.The outcomes have been compared with the most recent optimization algorithms.Additionally,the COARO algorithm’s problem-solving capabilities have been evaluated using six different engineering design problems and compared with various other algorithms.This study also introduces a binary variant of the continuous COARO algorithm,named BCOARO.The performance of BCOARO was evaluated on the breast cancer dataset.The effectiveness of BCOARO has been compared with different feature selection algorithms.The proposed BCOARO outperforms alternative algorithms,according to the findings obtained for real applications in terms of accuracy performance,and fitness value.Extensive experiments show that the COARO and BCOARO algorithms achieve promising results compared to other metaheuristic algorithms.
基金funded byResearchers Supporting Project Number(RSPD2024R 553),King Saud University,Riyadh,Saudi Arabia.
文摘In this work,we aim to introduce some modifications to the Anam-Net deep neural network(DNN)model for segmenting optic cup(OC)and optic disc(OD)in retinal fundus images to estimate the cup-to-disc ratio(CDR).The CDR is a reliable measure for the early diagnosis of Glaucoma.In this study,we developed a lightweight DNN model for OC and OD segmentation in retinal fundus images.Our DNN model is based on modifications to Anam-Net,incorporating an anamorphic depth embedding block.To reduce computational complexity,we employ a fixed filter size for all convolution layers in the encoder and decoder stages as the network deepens.This modification significantly reduces the number of trainable parameters,making the model lightweight and suitable for resource-constrained applications.We evaluate the performance of the developed model using two publicly available retinal image databases,namely RIM-ONE and Drishti-GS.The results demonstrate promising OC segmentation performance across most standard evaluation metrics while achieving analogous results for OD segmentation.We used two retinal fundus image databases named RIM-ONE and Drishti-GS that contained 159 images and 101 retinal images,respectively.For OD segmentation using the RIM-ONE we obtain an f1-score(F1),Jaccard coefficient(JC),and overlapping error(OE)of 0.950,0.9219,and 0.0781,respectively.Similarly,for OC segmentation using the same databases,we achieve scores of 0.8481(F1),0.7428(JC),and 0.2572(OE).Based on these experimental results and the significantly lower number of trainable parameters,we conclude that the developed model is highly suitable for the early diagnosis of glaucoma by accurately estimating the CDR.
基金extend their appreciation to Researcher Supporting Project Number(RSPD2023R582)King Saud University,Riyadh,Saudi Arabia.
文摘The healthcare sector holds valuable and sensitive data.The amount of this data and the need to handle,exchange,and protect it,has been increasing at a fast pace.Due to their nature,software-defined networks(SDNs)are widely used in healthcare systems,as they ensure effective resource utilization,safety,great network management,and monitoring.In this sector,due to the value of thedata,SDNs faceamajor challengeposed byawide range of attacks,such as distributed denial of service(DDoS)and probe attacks.These attacks reduce network performance,causing the degradation of different key performance indicators(KPIs)or,in the worst cases,a network failure which can threaten human lives.This can be significant,especially with the current expansion of portable healthcare that supports mobile and wireless devices for what is called mobile health,or m-health.In this study,we examine the effectiveness of using SDNs for defense against DDoS,as well as their effects on different network KPIs under various scenarios.We propose a threshold-based DDoS classifier(TBDC)technique to classify DDoS attacks in healthcare SDNs,aiming to block traffic considered a hazard in the form of a DDoS attack.We then evaluate the accuracy and performance of the proposed TBDC approach.Our technique shows outstanding performance,increasing the mean throughput by 190.3%,reducing the mean delay by 95%,and reducing packet loss by 99.7%relative to normal,with DDoS attack traffic.
文摘Maintaining a steady power supply requires accurate forecasting of solar irradiance,since clean energy resources do not provide steady power.The existing forecasting studies have examined the limited effects of weather conditions on solar radiation such as temperature and precipitation utilizing convolutional neural network(CNN),but no comprehensive study has been conducted on concentrations of air pollutants along with weather conditions.This paper proposes a hybrid approach based on deep learning,expanding the feature set by adding new air pollution concentrations,and ranking these features to select and reduce their size to improve efficiency.In order to improve the accuracy of feature selection,a maximum-dependency and minimum-redundancy(mRMR)criterion is applied to the constructed feature space to identify and rank the features.The combination of air pollution data with weather conditions data has enabled the prediction of solar irradiance with a higher accuracy.An evaluation of the proposed approach is conducted in Istanbul over 12 months for 43791 discrete times,with the main purpose of analyzing air data,including particular matter(PM10 and PM25),carbon monoxide(CO),nitric oxide(NOX),nitrogen dioxide(NO_(2)),ozone(O₃),sulfur dioxide(SO_(2))using a CNN,a long short-term memory network(LSTM),and MRMR feature extraction.Compared with the benchmark models with root mean square error(RMSE)results of 76.2,60.3,41.3,32.4,there is a significant improvement with the RMSE result of 5.536.This hybrid model presented here offers high prediction accuracy,a wider feature set,and a novel approach based on air concentrations combined with weather conditions for solar irradiance prediction.
文摘From a medical perspective,the 12 leads of the heart in an electrocardiogram(ECG)signal have functional dependencies with each other.Therefore,all these leads report different aspects of an arrhythmia.Their differences lie in the level of highlighting and displaying information about that arrhythmia.For example,although all leads show traces of atrial excitation,this function is more evident in lead II than in any other lead.In this article,a new model was proposed using ECG functional and structural dependencies between heart leads.In the prescreening stage,the ECG signals are segmented from the QRS point so that further analyzes can be performed on these segments in a more detailed manner.The mutual information indices were used to assess the relationship between leads.In order to calculate mutual information,the correlation between the 12 ECG leads has been calculated.The output of this step is a matrix containing all mutual information.Furthermore,to calculate the structural information of ECG signals,a capsule neural network was implemented to aid physicians in the automatic classification of cardiac arrhythmias.The architecture of this capsule neural network has been modified to perform the classification task.In the experimental results section,the proposed model was used to classify arrhythmias in ECG signals from the Chapman dataset.Numerical evaluations showed that this model has a precision of 97.02%,recall of 96.13%,F1-score of 96.57%and accuracy of 97.38%,indicating acceptable performance compared to other state-of-the-art methods.The proposed method shows an average accuracy of 2%superiority over similar works.
文摘In the era of advanced machine learning techniques,the development of accurate predictive models for complex medical conditions,such as thyroid cancer,has shown remarkable progress.Accurate predictivemodels for thyroid cancer enhance early detection,improve resource allocation,and reduce overtreatment.However,the widespread adoption of these models in clinical practice demands predictive performance along with interpretability and transparency.This paper proposes a novel association-rule based feature-integratedmachine learning model which shows better classification and prediction accuracy than present state-of-the-artmodels.Our study also focuses on the application of SHapley Additive exPlanations(SHAP)values as a powerful tool for explaining thyroid cancer prediction models.In the proposed method,the association-rule based feature integration framework identifies frequently occurring attribute combinations in the dataset.The original dataset is used in trainingmachine learning models,and further used in generating SHAP values fromthesemodels.In the next phase,the dataset is integrated with the dominant feature sets identified through association-rule based analysis.This new integrated dataset is used in re-training the machine learning models.The new SHAP values generated from these models help in validating the contributions of feature sets in predicting malignancy.The conventional machine learning models lack interpretability,which can hinder their integration into clinical decision-making systems.In this study,the SHAP values are introduced along with association-rule based feature integration as a comprehensive framework for understanding the contributions of feature sets inmodelling the predictions.The study discusses the importance of reliable predictive models for early diagnosis of thyroid cancer,and a validation framework of explainability.The proposed model shows an accuracy of 93.48%.Performance metrics such as precision,recall,F1-score,and the area under the receiver operating characteristic(AUROC)are also higher than the baseline models.The results of the proposed model help us identify the dominant feature sets that impact thyroid cancer classification and prediction.The features{calcification}and{shape}consistently emerged as the top-ranked features associated with thyroid malignancy,in both association-rule based interestingnessmetric values and SHAPmethods.The paper highlights the potential of the rule-based integrated models with SHAP in bridging the gap between the machine learning predictions and the interpretability of this prediction which is required for real-world medical applications.
基金supported by the Technology Development Program(S3344882)funded by the Ministry of SMEs and Startups(MSS,Korea).
文摘In the context of high compression rates applied to Joint Photographic Experts Group(JPEG)images through lossy compression techniques,image-blocking artifacts may manifest.This necessitates the restoration of the image to its original quality.The challenge lies in regenerating significantly compressed images into a state in which these become identifiable.Therefore,this study focuses on the restoration of JPEG images subjected to substantial degradation caused by maximum lossy compression using Generative Adversarial Networks(GAN).The generator in this network is based on theU-Net architecture.It features a newhourglass structure that preserves the characteristics of the deep layers.In addition,the network incorporates two loss functions to generate natural and high-quality images:Low Frequency(LF)loss and High Frequency(HF)loss.HF loss uses a pretrained VGG-16 network and is configured using a specific layer that best represents features.This can enhance the performance in the high-frequency region.In contrast,LF loss is used to handle the low-frequency region.The two loss functions facilitate the generation of images by the generator,which can mislead the discriminator while accurately generating high-and low-frequency regions.Consequently,by removing the blocking effects frommaximum lossy compressed images,images inwhich identities could be recognized are generated.This study represents a significant improvement over previous research in terms of the image resolution performance.
基金Researchers Supporting Project Number(RSPD2024R 553),King Saud University,Riyadh,Saudi Arabia.
文摘Wheat is a critical crop,extensively consumed worldwide,and its production enhancement is essential to meet escalating demand.The presence of diseases like stem rust,leaf rust,yellow rust,and tan spot significantly diminishes wheat yield,making the early and precise identification of these diseases vital for effective disease management.With advancements in deep learning algorithms,researchers have proposed many methods for the automated detection of disease pathogens;however,accurately detectingmultiple disease pathogens simultaneously remains a challenge.This challenge arises due to the scarcity of RGB images for multiple diseases,class imbalance in existing public datasets,and the difficulty in extracting features that discriminate between multiple classes of disease pathogens.In this research,a novel method is proposed based on Transfer Generative Adversarial Networks for augmenting existing data,thereby overcoming the problems of class imbalance and data scarcity.This study proposes a customized architecture of Vision Transformers(ViT),where the feature vector is obtained by concatenating features extracted from the custom ViT and Graph Neural Networks.This paper also proposes a Model AgnosticMeta Learning(MAML)based ensemble classifier for accurate classification.The proposedmodel,validated on public datasets for wheat disease pathogen classification,achieved a test accuracy of 99.20%and an F1-score of 97.95%.Compared with existing state-of-the-art methods,this proposed model outperforms in terms of accuracy,F1-score,and the number of disease pathogens detection.In future,more diseases can be included for detection along with some other modalities like pests and weed.
基金This research was funded by Prince Sattam bin Abdulaziz University(Project Number PSAU/2023/01/25387).
文摘The research aims to improve the performance of image recognition methods based on a description in the form of a set of keypoint descriptors.The main focus is on increasing the speed of establishing the relevance of object and etalon descriptions while maintaining the required level of classification efficiency.The class to be recognized is represented by an infinite set of images obtained from the etalon by applying arbitrary geometric transformations.It is proposed to reduce the descriptions for the etalon database by selecting the most significant descriptor components according to the information content criterion.The informativeness of an etalon descriptor is estimated by the difference of the closest distances to its own and other descriptions.The developed method determines the relevance of the full description of the recognized object with the reduced description of the etalons.Several practical models of the classifier with different options for establishing the correspondence between object descriptors and etalons are considered.The results of the experimental modeling of the proposed methods for a database including images of museum jewelry are presented.The test sample is formed as a set of images from the etalon database and out of the database with the application of geometric transformations of scale and rotation in the field of view.The practical problems of determining the threshold for the number of votes,based on which a classification decision is made,have been researched.Modeling has revealed the practical possibility of tenfold reducing descriptions with full preservation of classification accuracy.Reducing the descriptions by twenty times in the experiment leads to slightly decreased accuracy.The speed of the analysis increases in proportion to the degree of reduction.The use of reduction by the informativeness criterion confirmed the possibility of obtaining the most significant subset of features for classification,which guarantees a decent level of accuracy.
基金the Deanship of Scientific Research at Majmaah University for supporting this work under Project No.R-2024-1008.
文摘Traffic in today’s cities is a serious problem that increases travel times,negatively affects the environment,and drains financial resources.This study presents an Artificial Intelligence(AI)augmentedMobile Ad Hoc Networks(MANETs)based real-time prediction paradigm for urban traffic challenges.MANETs are wireless networks that are based on mobile devices and may self-organize.The distributed nature of MANETs and the power of AI approaches are leveraged in this framework to provide reliable and timely traffic congestion forecasts.This study suggests a unique Chaotic Spatial Fuzzy Polynomial Neural Network(CSFPNN)technique to assess real-time data acquired from various sources within theMANETs.The framework uses the proposed approach to learn from the data and create predictionmodels to detect possible traffic problems and their severity in real time.Real-time traffic prediction allows for proactive actions like resource allocation,dynamic route advice,and traffic signal optimization to reduce congestion.The framework supports effective decision-making,decreases travel time,lowers fuel use,and enhances overall urban mobility by giving timely information to pedestrians,drivers,and urban planners.Extensive simulations and real-world datasets are used to test the proposed framework’s prediction accuracy,responsiveness,and scalability.Experimental results show that the suggested framework successfully anticipates urban traffic issues in real-time,enables proactive traffic management,and aids in creating smarter,more sustainable cities.
基金The National Key R&D Program of China under contract No.2021YFC3101603.
文摘Ocean temperature is an important physical variable in marine ecosystems,and ocean temperature prediction is an important research objective in ocean-related fields.Currently,one of the commonly used methods for ocean temperature prediction is based on data-driven,but research on this method is mostly limited to the sea surface,with few studies on the prediction of internal ocean temperature.Existing graph neural network-based methods usually use predefined graphs or learned static graphs,which cannot capture the dynamic associations among data.In this study,we propose a novel dynamic spatiotemporal graph neural network(DSTGN)to predict threedimensional ocean temperature(3D-OT),which combines static graph learning and dynamic graph learning to automatically mine two unknown dependencies between sequences based on the original 3D-OT data without prior knowledge.Temporal and spatial dependencies in the time series were then captured using temporal and graph convolutions.We also integrated dynamic graph learning,static graph learning,graph convolution,and temporal convolution into an end-to-end framework for 3D-OT prediction using time-series grid data.In this study,we conducted prediction experiments using high-resolution 3D-OT from the Copernicus global ocean physical reanalysis,with data covering the vertical variation of temperature from the sea surface to 1000 m below the sea surface.We compared five mainstream models that are commonly used for ocean temperature prediction,and the results showed that the method achieved the best prediction results at all prediction scales.
基金supported via funding from Prince Sattam Bin Abdulaziz University Project Number(PSAU/2023/R/1445).
文摘This research proposes a highly effective soft computing paradigm for estimating the compressive strength(CS)of metakaolin-contained cemented materials.The proposed approach is a combination of an enhanced grey wolf optimizer(EGWO)and an extreme learning machine(ELM).EGWO is an augmented form of the classic grey wolf optimizer(GWO).Compared to standard GWO,EGWO has a better hunting mechanism and produces an optimal performance.The EGWO was used to optimize the ELM structure and a hybrid model,ELM-EGWO,was built.To train and validate the proposed ELM-EGWO model,a sum of 361 experimental results featuring five influencing factors was collected.Based on sensitivity analysis,three distinct cases of influencing parameters were considered to investigate the effect of influencing factors on predictive precision.Experimental consequences show that the constructed ELM-EGWO achieved the most accurate precision in both training(RMSE=0.0959)and testing(RMSE=0.0912)phases.The outcomes of the ELM-EGWO are significantly superior to those of deep neural networks(DNN),k-nearest neighbors(KNN),long short-term memory(LSTM),and other hybrid ELMs constructed with GWO,particle swarm optimization(PSO),harris hawks optimization(HHO),salp swarm algorithm(SSA),marine predators algorithm(MPA),and colony predation algorithm(CPA).The overall results demonstrate that the newly suggested ELM-EGWO has the potential to estimate the CS of metakaolin-contained cemented materials with a high degree of precision and robustness.
文摘Class Title:Radiological imaging method a comprehensive overview purpose.This GPT paper provides an overview of the different forms of radiological imaging and the potential diagnosis capabilities they offer as well as recent advances in the field.Materials and Methods:This paper provides an overview of conventional radiography digital radiography panoramic radiography computed tomography and cone-beam computed tomography.Additionally recent advances in radiological imaging are discussed such as imaging diagnosis and modern computer-aided diagnosis systems.Results:This paper details the differences between the imaging techniques the benefits of each and the current advances in the field to aid in the diagnosis of medical conditions.Conclusion:Radiological imaging is an extremely important tool in modern medicine to assist in medical diagnosis.This work provides an overview of the types of imaging techniques used the recent advances made and their potential applications.
基金This paper is financed by the European Union-NextGenerationEU,through the National Recovery and Resilience Plan of the Republic of Bulgaria,Project No.BG-RRP-2.004-0001-C01.
文摘The high performance of IoT technology in transportation networks has led to the increasing adoption of Internet of Vehicles(IoV)technology.The functional advantages of IoV include online communication services,accident prevention,cost reduction,and enhanced traffic regularity.Despite these benefits,IoV technology is susceptible to cyber-attacks,which can exploit vulnerabilities in the vehicle network,leading to perturbations,disturbances,non-recognition of traffic signs,accidents,and vehicle immobilization.This paper reviews the state-of-the-art achievements and developments in applying Deep Transfer Learning(DTL)models for Intrusion Detection Systems in the Internet of Vehicles(IDS-IoV)based on anomaly detection.IDS-IoV leverages anomaly detection through machine learning and DTL techniques to mitigate the risks posed by cyber-attacks.These systems can autonomously create specific models based on network data to differentiate between regular traffic and cyber-attacks.Among these techniques,transfer learning models are particularly promising due to their efficacy with tagged data,reduced training time,lower memory usage,and decreased computational complexity.We evaluate DTL models against criteria including the ability to transfer knowledge,detection rate,accurate analysis of complex data,and stability.This review highlights the significant progress made in the field,showcasing how DTL models enhance the performance and reliability of IDS-IoV systems.By examining recent advancements,we provide insights into how DTL can effectively address cyber-attack challenges in IoV environments,ensuring safer and more efficient transportation networks.
文摘Path-based clustering algorithms typically generate clusters by optimizing a benchmark function.Most optimiza-tion methods in clustering algorithms often offer solutions close to the general optimal value.This study achieves the global optimum value for the criterion function in a shorter time using the minimax distance,Maximum Spanning Tree“MST”,and meta-heuristic algorithms,including Genetic Algorithm“GA”and Particle Swarm Optimization“PSO”.The Fast Path-based Clustering“FPC”algorithm proposed in this paper can find cluster centers correctly in most datasets and quickly perform clustering operations.The FPC does this operation using MST,the minimax distance,and a new hybrid meta-heuristic algorithm in a few rounds of algorithm iterations.This algorithm can achieve the global optimal value,and the main clustering process of the algorithm has a computational complexity of O�k2×n�.However,due to the complexity of the minimum distance algorithm,the total computational complexity is O�n2�.Experimental results of FPC on synthetic datasets with arbitrary shapes demonstrate that the algorithm is resistant to noise and outliers and can correctly identify clusters of varying sizes and numbers.In addition,the FPC requires the number of clusters as the only parameter to perform the clustering process.A comparative analysis of FPC and other clustering algorithms in this domain indicates that FPC exhibits superior speed,stability,and performance.
基金This Research is funded by Researchers Supporting Project Number(RSPD2024R947),King Saud University,Riyadh,Saudi Arabia.
文摘Software project outcomes heavily depend on natural language requirements,often causing diverse interpretations and issues like ambiguities and incomplete or faulty requirements.Researchers are exploring machine learning to predict software bugs,but a more precise and general approach is needed.Accurate bug prediction is crucial for software evolution and user training,prompting an investigation into deep and ensemble learning methods.However,these studies are not generalized and efficient when extended to other datasets.Therefore,this paper proposed a hybrid approach combining multiple techniques to explore their effectiveness on bug identification problems.The methods involved feature selection,which is used to reduce the dimensionality and redundancy of features and select only the relevant ones;transfer learning is used to train and test the model on different datasets to analyze how much of the learning is passed to other datasets,and ensemble method is utilized to explore the increase in performance upon combining multiple classifiers in a model.Four National Aeronautics and Space Administration(NASA)and four Promise datasets are used in the study,showing an increase in the model’s performance by providing better Area Under the Receiver Operating Characteristic Curve(AUC-ROC)values when different classifiers were combined.It reveals that using an amalgam of techniques such as those used in this study,feature selection,transfer learning,and ensemble methods prove helpful in optimizing the software bug prediction models and providing high-performing,useful end mode.
文摘Due to the fact that a memristor with memory properties is an ideal electronic component for implementation of the artificial neural synaptic function,a brand-new tristable locally active memristor model is first proposed in this paper.Here,a novel four-dimensional fractional-order memristive cellular neural network(FO-MCNN)model with hidden attractors is constructed to enhance the engineering feasibility of the original CNN model and its performance.Then,its hardware circuit implementation and complicated dynamic properties are investigated on multi-simulation platforms.Subsequently,it is used toward secure communication application scenarios.Taking it as the pseudo-random number generator(PRNG),a new privacy image security scheme is designed based on the adaptive sampling rate compressive sensing(ASR-CS)model.Eventually,the simulation analysis and comparative experiments manifest that the proposed data encryption scheme possesses strong immunity against various security attack models and satisfactory compression performance.
基金the Deanship of Scientific Research at King Khalid University for funding this work through large group research project under Grant Number RGP2/474/44.
文摘In this paper,we present a comprehensive system model for Industrial Internet of Things(IIoT)networks empowered by Non-Orthogonal Multiple Access(NOMA)and Mobile Edge Computing(MEC)technologies.The network comprises essential components such as base stations,edge servers,and numerous IIoT devices characterized by limited energy and computing capacities.The central challenge addressed is the optimization of resource allocation and task distribution while adhering to stringent queueing delay constraints and minimizing overall energy consumption.The system operates in discrete time slots and employs a quasi-static approach,with a specific focus on the complexities of task partitioning and the management of constrained resources within the IIoT context.This study makes valuable contributions to the field by enhancing the understanding of resourceefficient management and task allocation,particularly relevant in real-time industrial applications.Experimental results indicate that our proposed algorithmsignificantly outperforms existing approaches,reducing queue backlog by 45.32% and 17.25% compared to SMRA and ACRA while achieving a 27.31% and 74.12% improvement in Qn O.Moreover,the algorithmeffectively balances complexity and network performance,as demonstratedwhen reducing the number of devices in each group(Ng)from 200 to 50,resulting in a 97.21% reduction in complexity with only a 7.35% increase in energy consumption.This research offers a practical solution for optimizing IIoT networks in real-time industrial settings.
基金supported by the Natural Science Foundation of Jiangsu Province of China under grant no.BK20211284the Financial and Science Technology Plan Project of Xinjiang Production and Construction Corps under grant no.2020DB005.
文摘As autonomous vehicles and the other supporting infrastructures(e.g.,smart cities and intelligent transportation systems)become more commonplace,the Internet of Vehicles(IoV)is getting increasingly prevalent.There have been attempts to utilize Digital Twins(DTs)to facilitate the design,evaluation,and deployment of IoV-based systems,for example by supporting high-fidelity modeling,real-time monitoring,and advanced predictive capabilities.However,the literature review undertaken in this paper suggests that integrating DTs into IoV-based system design and deployment remains an understudied topic.In addition,this paper explains how DTs can benefit IoV system designers and implementers,as well as describes several challenges and opportunities for future researchers.