The widespread adoption of QR codes has revolutionized various industries, streamlined transactions, and improved inventory management. However, this increased reliance on QR code technology also exposes it to potential security risks that malicious actors can exploit. QR code phishing, or "Quishing", is a type of phishing attack that leverages QR codes to deceive individuals into visiting malicious websites or downloading harmful software. These attacks can be particularly effective due to the growing popularity of, and trust in, QR codes. This paper examines the importance of enhancing the security of QR codes through the use of artificial intelligence (AI). It investigates the integration of AI methods for identifying and mitigating security threats associated with QR code usage. By assessing the current state of QR code security and evaluating the effectiveness of AI-driven solutions, this research aims to propose comprehensive strategies for strengthening the resilience of QR code technology. The study contributes to discussions on secure data encoding and retrieval, providing valuable insights into the evolving synergy between QR codes and AI for the advancement of secure digital communication.
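As a minimal illustration of the kind of AI-assisted detection the paper discusses (not its actual method), the sketch below classifies a URL decoded from a QR code using a handful of lexical features and a scikit-learn random forest. The feature set, training URLs, and labels are all illustrative assumptions.

```python
from sklearn.ensemble import RandomForestClassifier

def url_features(url):
    # Simple lexical signals commonly used in phishing-URL detection.
    return [
        len(url),                        # very long URLs are a common phishing signal
        url.count("."),                  # many subdomains
        url.count("-"),
        int("@" in url),                 # '@' can hide the real host
        int(url.startswith("https")),    # missing TLS is suspicious
        sum(ch.isdigit() for ch in url),
    ]

# Toy labels: 1 = phishing, 0 = benign (illustrative only; a real system
# needs a large labeled corpus).
urls = [
    "https://example.com/login",
    "https://university.edu/portal",
    "http://paypa1-secure-verify.xyz/acc0unt",
    "http://192.168.4.7/free-gift?id=77",
]
labels = [0, 0, 1, 1]

clf = RandomForestClassifier(random_state=0).fit([url_features(u) for u in urls], labels)
decoded = "http://login-update-now.top/qr"  # URL decoded from a scanned QR code
print("phishing" if clf.predict([url_features(decoded)])[0] else "benign")
```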
This study introduces the Orbit Weighting Scheme (OWS), a novel approach aimed at enhancing the precision and efficiency of Vector Space information retrieval (IR) models, which have traditionally relied on weighting schemes like tf-idf and BM25. These conventional methods often struggle with accurately capturing document relevance, leading to inefficiencies in both retrieval performance and index size management. OWS proposes a dynamic weighting mechanism that evaluates the significance of terms based on their orbital position within the vector space, emphasizing term relationships and distribution patterns overlooked by existing models. Our research focuses on evaluating OWS's impact on model accuracy using Information Retrieval metrics such as Recall, Precision, Interpolated Average Precision (IAP), and Mean Average Precision (MAP). Additionally, we assess OWS's effectiveness in reducing the inverted index size, which is crucial for model efficiency. We compare OWS-based retrieval models against others using different schemes, including tf-idf variations and BM25Delta. Results reveal OWS's superiority, achieving 54% Recall and 81% MAP, along with a notable 38% reduction in the inverted index size. This highlights OWS's potential for optimizing retrieval processes and underscores the need for further research in this underrepresented area to fully leverage OWS's capabilities in information retrieval methodologies.
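The abstract does not give OWS's formulas, so the sketch below reproduces only the classic tf-idf baseline that OWS is benchmarked against; the toy corpus is an assumption.

```python
import math
from collections import Counter

docs = [
    "orbit weighting improves vector space retrieval",
    "tf idf weighting for information retrieval",
    "bm25 ranking in vector space models",
]

def tf_idf(docs):
    # Classic tf-idf: tf(t, d) * log(N / df(t)). OWS's orbital weighting is
    # not specified in the abstract, so only the baseline scheme is shown.
    N = len(docs)
    tokenized = [d.split() for d in docs]
    df = Counter(t for doc in tokenized for t in set(doc))
    return [
        {t: count / len(doc) * math.log(N / df[t]) for t, count in Counter(doc).items()}
        for doc in tokenized
    ]

for vec in tf_idf(docs):
    print({t: round(w, 3) for t, w in vec.items()})
```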
People's lives have become easier and simpler as technology has proliferated. This is especially true of the Internet of Things (IoT). One of the biggest challenges for blind people is finding their way to where they want to go, which often requires assistance from sighted people. Smart shoes are an assistive technology that helps blind people navigate while they walk. To that end, a special shoe has been developed to help blind people walk safely without worrying about colliding with other people or solid objects. In this research, we develop a new safety system and a smart shoe for blind people. The system is based on Internet of Things (IoT) technology and uses three ultrasonic sensors so that users can hear and react to barriers. It combines ultrasonic sensors with a microprocessor to measure how far away objects are and to detect obstacles. Water and flame sensors were also included, and an audible alert notifies the wearer when an obstacle is near. The sensors detect obstacles from almost every side, and Global Positioning System (GPS) technology helps monitor the user and ensure their safety. To test our proposal, we gave a questionnaire to 100 people. The questionnaire contains eleven questions, and 99.1% of respondents said that the product meets their needs.
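A minimal sketch of the obstacle-alert loop such a shoe might run, assuming a hypothetical read_distance_cm() stand-in for the ultrasonic driver and an assumed 100 cm alert threshold (the abstract specifies neither):

```python
import random
import time

def read_distance_cm(sensor_id):
    # Hypothetical stand-in for an HC-SR04-style ultrasonic read; on real
    # hardware this would time an echo pulse via GPIO instead.
    return random.uniform(10, 300)

ALERT_THRESHOLD_CM = 100  # assumed alert distance

def patrol_step():
    # Three sensors (left, centre, right), as described in the abstract.
    for side, sensor in zip(("left", "centre", "right"), (0, 1, 2)):
        if read_distance_cm(sensor) < ALERT_THRESHOLD_CM:
            print(f"BEEP: obstacle on the {side}")  # a real device drives a buzzer

for _ in range(3):
    patrol_step()
    time.sleep(0.2)
```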
The growth of the internet and technology has had a significant effect on social interactions. False information has become an important research topic due to the massive amount of misinformed content on social networks. It is very easy for any user to spread misinformation through the media, which makes misinformation a problem for professionals, organizations, and societies. Hence, it is essential to assess the credibility and validity of news articles shared on social media. The core challenge is to distinguish between accurate and false information. Recent studies focus on news article content, such as titles and descriptions, which has limited their achievements. However, there are two commonly agreed-upon features of misinformation: first, the title and text of an article, and second, user engagement. For the news context, we extracted different user engagements with articles, for example, tweets (read-only), retweets, likes, and shares. We calculate user credibility and combine it with the article content and the user's context. After combining both feature sets, we used three Natural Language Processing (NLP) feature extraction techniques: Term Frequency-Inverse Document Frequency (TF-IDF), Count-Vectorizer (CV), and Hashing-Vectorizer (HV). We then applied different machine learning classifiers to label misinformation as real or fake: Support Vector Machine (SVM), Naive Bayes (NB), Random Forest (RF), Decision Tree (DT), Gradient Boosting (GB), and K-Nearest Neighbors (KNN). The proposed method was tested on a real-world dataset, FakeNewsNet, whose repository we refined according to our required features. The dataset contains more than 23,000 articles with millions of user engagements. The highest accuracy score is 93.4%, which the proposed model achieves using count-vector features and a random forest classifier. Our findings confirm that the proposed classifier can effectively classify misinformation in social networks.
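A condensed sketch of the best-performing combination reported (count-vector features with a random forest). The toy articles and labels are placeholders, and the user-credibility and engagement features described above are omitted.

```python
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.pipeline import make_pipeline

# Toy corpus standing in for FakeNewsNet articles; labels: 1 = fake, 0 = real.
texts = [
    "government confirms new infrastructure budget details",
    "scientists publish peer reviewed climate study",
    "miracle cure doctors dont want you to know",
    "celebrity secretly replaced by clone sources say",
]
labels = [0, 0, 1, 1]

# CountVectorizer builds the count-vector features; the forest classifies them.
model = make_pipeline(CountVectorizer(), RandomForestClassifier(random_state=0))
model.fit(texts, labels)
print(model.predict(["shocking miracle cure revealed by anonymous sources"]))
```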
Speech recognition systems have become a distinctive family of human-computer interaction (HCI) technologies. Speech is one of the most naturally developed human abilities, and speech signal processing opens up a transparent, hands-free computing experience. This paper presents a retrospective yet modern overview of the world of speech recognition systems. The development of Automatic Speech Recognition (ASR) has seen quite a few milestones and breakthrough technologies, which are highlighted in this paper. A step-by-step rundown of the fundamental stages in developing speech recognition systems is presented, along with a brief discussion of various modern-day developments and applications in this domain. This review aims to summarize the field and provide a starting point for those entering the vast area of speech signal processing. Since speech recognition has vast potential in industries such as telecommunications, emotion recognition, and healthcare, this review should be helpful to researchers who aim to explore further applications that society can readily adopt in the coming years.
This research presents a novel nature-inspired metaheuristic algorithm called Frilled Lizard Optimization (FLO), which emulates the unique hunting behavior of frilled lizards in their natural habitat. FLO draws its inspiration from the sit-and-wait hunting strategy of these lizards. The algorithm's core principles are meticulously detailed and mathematically structured into two distinct phases: (i) an exploration phase, which mimics the lizard's sudden attack on its prey, and (ii) an exploitation phase, which simulates the lizard's retreat to the treetops after feeding. To assess FLO's efficacy in addressing optimization problems, its performance is rigorously tested on fifty-two standard benchmark functions. These functions include unimodal, high-dimensional multimodal, and fixed-dimensional multimodal functions, as well as the challenging CEC 2017 test suite. FLO's performance is benchmarked against twelve established metaheuristic algorithms, providing a comprehensive comparative analysis. The simulation results demonstrate that FLO excels in both exploration and exploitation, effectively balancing these two critical aspects throughout the search process. This balanced approach enables FLO to outperform several competing algorithms in numerous test cases. Additionally, FLO is applied to twenty-two constrained optimization problems from the CEC 2011 test suite and four complex engineering design problems, further validating its robustness and versatility in solving real-world optimization challenges. Overall, the study highlights FLO's superior performance and its potential as a powerful tool for tackling a wide range of optimization problems.
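The abstract does not reproduce FLO's update equations, so the sketch below shows only the generic two-phase (exploration/exploitation) skeleton that such population-based metaheuristics share, applied to the standard sphere benchmark. The move rules are illustrative assumptions, not FLO's.

```python
import numpy as np

rng = np.random.default_rng(0)

def sphere(x):
    # Standard unimodal benchmark: global minimum 0 at the origin.
    return float(np.sum(x ** 2))

def two_phase_optimizer(f, dim=5, pop=20, iters=200, lo=-5.0, hi=5.0):
    X = rng.uniform(lo, hi, (pop, dim))
    fit = np.array([f(x) for x in X])
    for t in range(iters):
        best = X[fit.argmin()]
        for i in range(pop):
            if rng.random() < 0.5:
                # Exploration: a large move toward the current best ("sudden attack").
                cand = X[i] + rng.uniform(0, 2) * (best - X[i]) + rng.normal(0, 1, dim)
            else:
                # Exploitation: a small local refinement that shrinks over time ("retreat").
                cand = X[i] + rng.normal(0, 1, dim) * 0.1 * (1 - t / iters)
            cand = np.clip(cand, lo, hi)
            fc = f(cand)
            if fc < fit[i]:  # greedy acceptance
                X[i], fit[i] = cand, fc
    return X[fit.argmin()], fit.min()

x_best, f_best = two_phase_optimizer(sphere)
print(f_best)
```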
Osteoarthritis (OA) is a debilitating degenerative disease affecting multiple joint tissues, including cartilage, bone, synovium, and adipose tissues. OA presents diverse clinical phenotypes and distinct molecular endotypes, including inflammatory, metabolic, mechanical, genetic, and synovial variants. Consequently, innovative technologies are needed to support the development of effective diagnostic and precision therapeutic approaches. Traditional analysis of bulk OA tissue extracts has limitations due to technical constraints, causing challenges in the differentiation between various physiological and pathological phenotypes in joint tissues. This issue has led to standardization difficulties and hindered the success of clinical trials. Gaining insights into the spatial variations of the cellular and molecular structures in OA tissues, encompassing DNA, RNA, metabolites, and proteins, as well as their chemical properties, elemental composition, and mechanical attributes, can contribute to a more comprehensive understanding of the disease subtypes. Spatially resolved biology enables biologists to investigate cells within the context of their tissue microenvironment, providing a more holistic view of cellular function. Recent advances in innovative spatial biology techniques now allow intact tissue sections to be examined through various omics lenses, such as genomics, transcriptomics, proteomics, and metabolomics, with spatial data. This fusion of approaches provides researchers with critical insights into the molecular composition and functions of cells and tissues at precise spatial coordinates. Furthermore, advanced imaging techniques, including high-resolution microscopy, hyperspectral imaging, and mass spectrometry imaging, enable the visualization and analysis of the spatial distribution of biomolecules, cells, and tissues. Linking these molecular imaging outputs to conventional tissue histology can facilitate a more comprehensive characterization of disease phenotypes. This review summarizes the recent advancements in molecular imaging modalities and methodologies for in-depth spatial analysis. It explores their applications, challenges, and potential opportunities in the field of OA. Additionally, this review provides a perspective on potential research directions for these contemporary approaches that can meet the requirements of clinical diagnoses and the establishment of therapeutic targets for OA.
Organizations are adopting the Bring Your Own Device (BYOD) concept to enhance productivity and reduce expenses. However, this trend introduces security challenges, such as unauthorized access. Traditional access control systems, such as Attribute-Based Access Control (ABAC) and Role-Based Access Control (RBAC), are limited in their ability to enforce access decisions due to the variability and dynamism of attributes related to users and resources. This paper proposes an adaptable and dynamic method for enforcing access decisions, based on multilayer hybrid deep learning techniques, particularly the Tabular Deep Neural Network (TabularDNN) method. This technique transforms all input attributes in an access request into a binary classification (allow or deny) using multiple layers, ensuring accurate and efficient access decision-making. The proposed solution was evaluated using the Kaggle Amazon access control policy dataset and demonstrated its effectiveness by achieving a 94% accuracy rate. Additionally, the proposed solution enhances the implementation of access decisions based on a variety of resource and user attributes while ensuring privacy through indirect communication with the Policy Administration Point (PAP). This solution significantly improves the flexibility of access control systems, making them more dynamic and adaptable to the evolving needs of modern organizations. Furthermore, it offers a scalable approach to managing the complexities associated with the BYOD environment, providing a robust framework for secure and efficient access management.
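A minimal sketch of the idea, assuming one-hot-encoded request attributes fed to a small multilayer network for an allow/deny decision. The attributes, layer sizes, and toy policy are assumptions, not the paper's TabularDNN configuration.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import OneHotEncoder

# Toy access requests: (role, resource, department); label 1 = allow, 0 = deny.
X = np.array([
    ["engineer", "repo",    "dev"],
    ["engineer", "payroll", "dev"],
    ["hr",       "payroll", "hr"],
    ["hr",       "repo",    "hr"],
])
y = [1, 0, 1, 0]

# Multilayer network over one-hot attribute encodings; layer sizes are arbitrary.
model = make_pipeline(
    OneHotEncoder(handle_unknown="ignore"),
    MLPClassifier(hidden_layer_sizes=(32, 16), max_iter=2000, random_state=0),
)
model.fit(X, y)
print(model.predict([["engineer", "repo", "dev"]]))  # expected: allow (1)
```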
This research introduces an innovative ensemble approach, combining Deep Residual Networks (ResNets) and Bidirectional Gated Recurrent Units (BiGRU), augmented with an Attention Mechanism, for the classification of heart arrhythmias. The escalating prevalence of cardiovascular diseases necessitates advanced diagnostic tools to enhance accuracy and efficiency. The model leverages the deep hierarchical feature extraction capabilities of ResNets, which are adept at identifying intricate patterns within electrocardiogram (ECG) data, while BiGRU layers capture the temporal dynamics essential for understanding the sequential nature of ECG signals. The integration of an Attention Mechanism refines the model's focus on critical segments of ECG data, ensuring a nuanced analysis that highlights the most informative features for arrhythmia classification. Evaluated on a comprehensive dataset of 12-lead ECG recordings, our ensemble model demonstrates superior performance in distinguishing between various types of arrhythmias, with an accuracy of 98.4%, a precision of 98.1%, a recall of 98%, and an F-score of 98%. This novel combination of convolutional and recurrent neural networks, supplemented by attention-driven mechanisms, advances automated ECG analysis, contributing significantly to machine learning applications in healthcare and presenting a step forward in developing non-invasive, efficient, and reliable tools for the early diagnosis and management of heart diseases.
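A structural sketch of the described architecture in Keras: 1-D residual blocks, a bidirectional GRU, and self-attention over time steps. The input length, filter counts, and five-class output are assumptions; the paper's exact configuration is not given in the abstract.

```python
from tensorflow import keras
from tensorflow.keras import layers

def residual_block(x, filters):
    # 1-D convolutional residual block for ECG feature extraction.
    y = layers.Conv1D(filters, 7, padding="same", activation="relu")(x)
    y = layers.Conv1D(filters, 7, padding="same")(y)
    if x.shape[-1] != filters:  # project the skip path to match channels
        x = layers.Conv1D(filters, 1, padding="same")(x)
    return layers.Activation("relu")(layers.Add()([x, y]))

inp = layers.Input(shape=(5000, 12))  # 12-lead ECG; length 5000 is assumed
x = residual_block(inp, 64)
x = layers.MaxPooling1D(4)(x)
x = residual_block(x, 128)
x = layers.Bidirectional(layers.GRU(64, return_sequences=True))(x)  # temporal dynamics
x = layers.Attention()([x, x])           # self-attention over time steps
x = layers.GlobalAveragePooling1D()(x)
out = layers.Dense(5, activation="softmax")(x)  # assumed 5 arrhythmia classes

model = keras.Model(inp, out)
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")
model.summary()
```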
This research paper presents a comprehensive investigation into the effectiveness of DeepSurNet-NSGA II (Deep Surrogate Model-Assisted Non-dominated Sorting Genetic Algorithm II) for solving complex multi-objective optimization problems, with a particular focus on robotic leg-linkage design. The study introduces an innovative approach that integrates deep learning-based surrogate models with the robust Non-dominated Sorting Genetic Algorithm II, aiming to enhance the efficiency and precision of the optimization process. Through a series of empirical experiments and algorithmic analyses, the paper demonstrates a high degree of correlation between solutions generated by DeepSurNet-NSGA II and those obtained from direct experimental methods, underscoring the algorithm's capability to accurately approximate the Pareto-optimal frontier while significantly reducing computational demands. The methodology encompasses a detailed exploration of the algorithm's configuration, the experimental setup, and the criteria for performance evaluation, ensuring the reproducibility of results and facilitating future advancements in the field. The findings of this study not only confirm the practical applicability and theoretical soundness of DeepSurNet-NSGA II in navigating the intricacies of multi-objective optimization but also highlight its potential as a transformative tool in engineering and design optimization. By bridging the gap between complex optimization challenges and achievable solutions, this research contributes valuable insights into the optimization domain, offering a promising direction for future inquiries and technological innovations.
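The surrogate-assisted idea can be illustrated independently of NSGA-II: fit a cheap model of an expensive objective, screen many candidates on the surrogate, and reserve true evaluations for the most promising few. The sketch below uses a single-objective stand-in and a gradient-boosting surrogate rather than the paper's deep surrogate and multi-objective search.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(0)

def expensive_objective(x):
    # Stand-in for a costly simulation (e.g., evaluating a leg-linkage design).
    return float(np.sum((x - 0.3) ** 2))

# 1. Evaluate a small design-of-experiments set with the true objective.
X_train = rng.uniform(0, 1, (30, 4))
y_train = np.array([expensive_objective(x) for x in X_train])

# 2. Fit a cheap surrogate of the objective.
surrogate = GradientBoostingRegressor(random_state=0).fit(X_train, y_train)

# 3. Screen many candidates on the surrogate; only the most promising few
#    go back to the expensive evaluator (the step a genetic search would drive).
candidates = rng.uniform(0, 1, (2000, 4))
best_idx = np.argsort(surrogate.predict(candidates))[:5]
for x in candidates[best_idx]:
    print(x, expensive_objective(x))
```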
Integrating machine learning and data mining is crucial for processing big data and extracting valuable insights to enhance decision-making. However, imbalanced target variables within big data present technical challenges that hinder the performance of supervised learning classifiers on key evaluation metrics, limiting their overall effectiveness. This study presents a comprehensive review of both common and recently developed Supervised Learning Classifiers (SLCs) and evaluates their performance in data-driven decision-making. The evaluation uses various metrics, with a particular focus on the harmonic mean score (F1 score), on an imbalanced real-world bank target-marketing dataset. The findings indicate that grid-search random forest and random-search random forest excel in Precision and area under the curve, while Extreme Gradient Boosting (XGBoost) outperforms other traditional classifiers in terms of F1 score. Employing oversampling methods to address the imbalanced data shows significant performance improvement in XGBoost, delivering superior results across all metrics, particularly when using the SMOTE variant known as the BorderlineSMOTE2 technique. The study identifies several key factors for effectively addressing the challenges of supervised learning with imbalanced datasets: selecting appropriate datasets for training and testing, choosing the right classifiers, employing effective techniques for processing and handling imbalanced datasets, and identifying suitable metrics for performance evaluation. These factors also entail the utilisation of effective exploratory data analysis in conjunction with visualisation techniques to yield insights conducive to data-driven decision-making.
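A compact sketch of the winning combination reported, BorderlineSMOTE2 oversampling followed by XGBoost, on synthetic imbalanced data standing in for the bank target-marketing dataset.

```python
from imblearn.over_sampling import BorderlineSMOTE
from sklearn.datasets import make_classification
from sklearn.metrics import f1_score
from sklearn.model_selection import train_test_split
from xgboost import XGBClassifier

# Synthetic imbalanced binary data (~92% majority class).
X, y = make_classification(n_samples=4000, weights=[0.92], random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

# BorderlineSMOTE2 oversamples minority points that lie near the class boundary.
X_res, y_res = BorderlineSMOTE(kind="borderline-2", random_state=0).fit_resample(X_tr, y_tr)

clf = XGBClassifier(eval_metric="logloss", random_state=0).fit(X_res, y_res)
print("F1:", f1_score(y_te, clf.predict(X_te)))
```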
The detection of software vulnerabilities in code written in C and C++ attracts considerable attention and interest today. This paper proposes a new framework, called DrCSE, to improve software vulnerability detection. It uses an intelligent computation technique that combines two methods, data rebalancing and representation learning, to analyze and evaluate the code property graph (CPG) of source code and detect the abnormal behavior of software vulnerabilities. To do so, DrCSE combines three main processing techniques: (i) building source code feature profiles, (ii) rebalancing data, and (iii) contrastive learning. Method (i) extracts the source code's features based on the vertices and edges of the CPG. The data-rebalancing method supports the training process by balancing the experimental dataset. Finally, the contrastive learning technique learns the important features of the source code by finding and pulling similar ones together while pushing outliers away. The experimental part of this paper demonstrates the superiority of the DrCSE framework for detecting source code security vulnerabilities using the Verum dataset. The proposed method achieves solid performance across the evaluated metrics, with Precision and Recall scores of 39.35% and 69.07%, respectively, demonstrating the efficiency of the DrCSE framework. It performs better than other approaches, with a 5% boost in Precision and a 5% boost in Recall. Overall, according to our survey to date, this is the best research result for the software vulnerability detection problem on the Verum dataset.
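The contrastive step can be illustrated with a standard NT-Xent loss, which pulls each embedding toward its positive pair and pushes it away from the rest of the batch. DrCSE's CPG feature extraction and rebalancing are not reproduced here, and the embeddings below are random stand-ins.

```python
import numpy as np

def nt_xent(z1, z2, tau=0.5):
    # Normalized temperature-scaled cross-entropy over a batch of positive
    # pairs (z1[i], z2[i]); all other samples act as negatives to push away.
    z = np.concatenate([z1, z2])                      # (2N, d)
    z = z / np.linalg.norm(z, axis=1, keepdims=True)  # cosine-similarity space
    sim = z @ z.T / tau
    np.fill_diagonal(sim, -np.inf)                    # exclude self-similarity
    n = len(z1)
    pos = np.concatenate([np.arange(n, 2 * n), np.arange(n)])  # each row's positive
    log_prob = sim[np.arange(2 * n), pos] - np.log(np.exp(sim).sum(axis=1))
    return float(-log_prob.mean())

rng = np.random.default_rng(0)
emb = rng.normal(size=(8, 16))  # e.g., CPG embeddings of 8 code samples
print(nt_xent(emb, emb + 0.05 * rng.normal(size=emb.shape)))
```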
This study introduces a new classifier tailored to address the limitations inherent in conventional classifiers such as K-nearest neighbor (KNN), random forest (RF), decision tree (DT), and support vector machine (SVM) for arrhythmia detection. The proposed classifier leverages the Chi-square distance as a primary metric, providing a specialized and original approach for precise arrhythmia detection. To optimize feature selection and refine the classifier's performance, particle swarm optimization (PSO) is integrated with the Chi-square distance as a fitness function. This synergistic integration enhances the classifier's capabilities, resulting in a substantial improvement in accuracy for arrhythmia detection. Experimental results demonstrate the efficacy of the proposed method, achieving a noteworthy accuracy rate of 98% with PSO, higher than the 89% achieved without any prior optimization. The classifier outperforms machine learning (ML) and deep learning (DL) techniques, underscoring its reliability and superiority in the realm of arrhythmia classification. The promising results render it an effective method for supporting both academic and medical communities, offering an advanced and precise solution for arrhythmia detection in electrocardiogram (ECG) data.
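A minimal sketch of the core idea: a nearest-neighbour decision under the Chi-square distance. The PSO-driven feature selection is omitted, and the toy features and labels are placeholders.

```python
import numpy as np

def chi_square_distance(x, y, eps=1e-10):
    # Chi-square distance between two non-negative feature vectors.
    return 0.5 * np.sum((x - y) ** 2 / (x + y + eps))

def predict(train_X, train_y, sample):
    # Nearest-neighbour decision under the Chi-square metric; the paper
    # additionally tunes the feature subset with PSO, which is omitted here.
    d = [chi_square_distance(sample, x) for x in train_X]
    return train_y[int(np.argmin(d))]

rng = np.random.default_rng(0)
train_X = np.abs(rng.normal(size=(20, 8)))  # non-negative toy ECG features
train_y = rng.integers(0, 2, 20)            # 0 = normal, 1 = arrhythmia (toy)
print(predict(train_X, train_y, np.abs(rng.normal(size=8))))
```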
The efficiency of businesses is often hindered by the challenges encountered in traditional Supply Chain Management (SCM), which is characterized by elevated risks due to inadequate accountability and transparency. To address these challenges and improve operations in green manufacturing, optimization algorithms play a crucial role in supporting decision-making processes. In this study, we propose a solution to the green lot-size optimization issue by leveraging bio-inspired algorithms, notably the Stork Optimization Algorithm (SOA). The SOA draws inspiration from the hunting and winter migration strategies employed by storks in nature. The theoretical framework of SOA is elaborated and mathematically modeled through two distinct phases: exploration, based on migration simulation, and exploitation, based on hunting strategy simulation. To tackle the green lot-size optimization issue, our methodology involved gathering real-world data, which was then transformed into a simplified function with multiple constraints aimed at optimizing total costs and minimizing CO₂ emissions. This function served as input for the SOA model. Subsequently, the SOA model was applied to identify the optimal lot size that strikes a balance between cost-effectiveness and sustainability. Through extensive experimentation, we compared the performance of SOA with twelve established metaheuristic algorithms, consistently demonstrating that SOA outperformed the others. This study's contribution lies in providing an effective solution to the sustainable lot-size optimization dilemma, thereby reducing environmental impact and enhancing supply chain efficiency. The simulation findings underscore that SOA consistently achieves superior outcomes compared to existing optimization methodologies, making it a promising approach for green manufacturing and sustainable supply chain management.
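The lot-size objective can be illustrated with a classic setup-plus-holding cost function and a CO₂ penalty term. The cost parameters below are invented for illustration, and SciPy's differential evolution stands in for the paper's SOA.

```python
import numpy as np
from scipy.optimize import differential_evolution

# Illustrative lot-size cost model (not the paper's calibrated function):
# annual demand D, setup cost K per order, holding cost h per unit-year,
# and e kg of CO2 per order, capped by a yearly emission budget.
D, K, h, e, co2_cap = 12000, 150.0, 2.5, 40.0, 1200.0

def total_cost(q):
    q = q[0]
    orders = D / q
    cost = orders * K + 0.5 * q * h                    # setup + holding trade-off
    penalty = max(0.0, orders * e - co2_cap) * 100.0   # penalize emission overrun
    return cost + penalty

res = differential_evolution(total_cost, bounds=[(1, 5000)], seed=0)
print(f"optimal lot size ~ {res.x[0]:.0f}, cost = {res.fun:.2f}")
```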
This paper introduces a groundbreaking metaheuristic algorithm named Magnificent Frigatebird Optimization (MFO), inspired by the unique behaviors observed in magnificent frigatebirds in their natural habitats. The foundation of MFO is based on the kleptoparasitic behavior of these birds, where they steal prey from other seabirds. In this process, a magnificent frigatebird targets a food-carrying seabird, aggressively pecking at it until the seabird drops its prey. The frigatebird then swiftly dives to capture the abandoned prey before it falls into the water. The theoretical framework of MFO is thoroughly detailed and mathematically represented, mimicking the frigatebird's kleptoparasitic behavior in two distinct phases: exploration and exploitation. During the exploration phase, the algorithm searches for new potential solutions across a broad area, akin to the frigatebird scouting for vulnerable seabirds. In the exploitation phase, the algorithm fine-tunes the solutions, similar to the frigatebird focusing on a single target to secure its meal. To evaluate MFO's performance, the algorithm is tested on twenty-three standard benchmark functions, including unimodal, high-dimensional multimodal, and fixed-dimensional multimodal types. The results from these evaluations highlight MFO's proficiency in balancing exploration and exploitation throughout the optimization process. Comparative studies with twelve well-known metaheuristic algorithms demonstrate that MFO consistently achieves superior optimization results, outperforming its competitors across various metrics. In addition, the implementation of MFO on four engineering design problems shows the effectiveness of the proposed approach in handling real-world applications, thereby validating its practical utility and robustness.
The number of blogs and other forms of opinionated online content has increased dramatically in recent years. Many fields, including academia and national security, place an emphasis on the automated detection of political article orientation. Political articles (especially in the Arab world) differ from other articles in their subjectivity: the author's beliefs and political affiliation can significantly influence a political article. With categories representing the main political ideologies, this problem can be thought of as a subset of text categorization (classification). In general, the performance of machine learning models for text classification is sensitive to hyperparameter settings. Furthermore, the feature vector used to represent a document must capture, to some extent, the complex semantics of natural language. To this end, this paper presents an intelligent system for detecting the orientation of political Arabic articles that adapts the categorical boosting (CatBoost) method combined with a multi-level feature concept. Extracting features at multiple levels can enhance the model's ability to discriminate between different classes or patterns, as each level may capture different aspects of the input data, contributing to a more comprehensive representation. CatBoost, a robust and efficient gradient-boosting algorithm, is utilized to effectively learn and predict the complex relationships between these features and the political orientation labels associated with the articles. A dataset of political Arabic texts collected from diverse sources, including postings and articles, is used to assess the suggested technique. These opinions fall into three subcategories: conservative, reform, and revolutionary. The results of this study demonstrate that, compared to other frequently used machine learning models for text classification, the CatBoost method using multi-level features performs better, with an accuracy of 98.14%.
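A minimal sketch of multi-level text features with CatBoost, approximating the "levels" as word- and character-level TF-IDF views. The paper's exact feature levels are not specified in the abstract, and the toy English stand-ins for Arabic articles are assumptions.

```python
from catboost import CatBoostClassifier
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.pipeline import FeatureUnion

texts = [
    "the reform movement calls for gradual institutional change",
    "the revolution demands the immediate fall of the regime",
    "preserving tradition and stability is the nation's duty",
]
labels = ["reform", "revolutionary", "conservative"]

# Two feature "levels": word-level and character-level TF-IDF views.
features = FeatureUnion([
    ("word", TfidfVectorizer(analyzer="word")),
    ("char", TfidfVectorizer(analyzer="char", ngram_range=(2, 4))),
])
X = features.fit_transform(texts)

clf = CatBoostClassifier(iterations=50, verbose=False, random_seed=0)
clf.fit(X.toarray(), labels)
print(clf.predict(features.transform(["a call for steady reform"]).toarray()))
```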
With the prevalence of Internet of Things (IoT) systems, smart cities comprise complex networks, including sensors, actuators, appliances, and cyber services. The complexity and heterogeneity of smart cities have made them vulnerable to sophisticated cyber-attacks, especially privacy-related attacks such as inference and data poisoning. Federated Learning (FL) has been regarded as a promising method to enable distributed learning with privacy-preserved intelligence in IoT applications. Even though developing privacy-preserving FL has drawn great research interest, current research concentrates mainly on FL with independent identically distributed (i.i.d.) data, and few studies have addressed the non-i.i.d. setting. FL is known to be vulnerable to Generative Adversarial Network (GAN) attacks, where an adversary can pose as a contributor participating in the training process to acquire the private data of other contributors. This paper proposes an innovative Privacy Protection-based Federated Deep Learning (PP-FDL) framework, which accomplishes data protection against privacy-related GAN attacks, along with high classification rates on non-i.i.d. data. PP-FDL is designed to enable fog nodes to cooperate in training the FDL model in a way that ensures contributors have no access to each other's data, where class probabilities are protected using a private identifier generated for each class. The PP-FDL framework is evaluated for image classification using simple convolutional networks trained on the MNIST and CIFAR-10 datasets. The empirical results reveal that PP-FDL can achieve data protection while outperforming three state-of-the-art models with accuracy improvements of 3%–8%.
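The federated core can be illustrated with a plain FedAvg loop, in which clients train locally and share only model weights, never raw data. PP-FDL's GAN defences, private class identifiers, and fog-node roles are not reproduced here; the client data and model are toy stand-ins.

```python
import numpy as np

rng = np.random.default_rng(0)

def local_update(weights, X, y, lr=0.1, epochs=5):
    # One client's local training: logistic-regression gradient steps on
    # its private data, which never leaves the client.
    w = weights.copy()
    for _ in range(epochs):
        p = 1 / (1 + np.exp(-X @ w))
        w -= lr * X.T @ (p - y) / len(y)
    return w

# Dissimilar client datasets, a crude stand-in for non-i.i.d. partitions.
clients = []
for shift in (-1.0, 0.0, 1.5):
    X = rng.normal(shift, 1.0, (100, 3))
    y = (X.sum(axis=1) > shift).astype(float)
    clients.append((X, y))

global_w = np.zeros(3)
for rnd in range(10):
    # FedAvg: average the locally trained weights; raw data is never shared.
    local = [local_update(global_w, X, y) for X, y in clients]
    global_w = np.mean(local, axis=0)
print(global_w)
```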
Network embedding aspires to learn a low-dimensional vector for each node in a network, which can be applied to diverse data mining tasks. In real life, many networks include rich attributes and temporal information. However, most existing embedding approaches ignore either the temporal information or the network attributes. This article presents a self-attention-based architecture that uses higher-order weights and node attributes for both static and temporal attributed network embedding. A random walk sampling algorithm based on higher-order weights and node attributes is designed to capture network topological features. For static attributed networks, the algorithm incorporates first-order to k-order weights and node attribute similarities into one weighted graph to preserve the topological features of the network. For temporal attributed networks, the algorithm incorporates previous snapshots of the network, containing first-order to k-order weights and node attribute similarities, into one weighted graph. In addition, the algorithm utilises a damping factor to ensure that more recent snapshots are allocated greater weight. Attribute features are then incorporated into topological features. Next, we adopt a state-of-the-art architecture, Self-Attention Networks, to learn node representations. Experimental results on node classification of static attributed networks and link prediction of temporal attributed networks reveal that our proposed approach is competitive against diverse state-of-the-art baseline approaches.
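A minimal sketch of an attribute-aware weighted random walk, in which each step blends edge weight with node-attribute similarity so that walks stay among nodes that are both topologically close and attribute-similar. The blending rule and toy graph are assumptions, and the first-order to k-order weight construction is omitted.

```python
import random

random.seed(0)

# Toy attributed graph: weighted adjacency plus binary node attribute vectors.
adj = {"a": {"b": 1.0, "c": 2.0}, "b": {"a": 1.0, "c": 1.0}, "c": {"a": 2.0, "b": 1.0}}
attrs = {"a": [1, 0, 1], "b": [1, 0, 0], "c": [0, 1, 1]}

def attr_sim(u, v):
    # Simple overlap similarity between binary attribute vectors.
    return sum(x * y for x, y in zip(attrs[u], attrs[v]))

def walk(start, length=6, alpha=0.5):
    # Each step is biased by edge weight blended with attribute similarity.
    path = [start]
    for _ in range(length):
        u = path[-1]
        nbrs = list(adj[u])
        w = [alpha * adj[u][v] + (1 - alpha) * attr_sim(u, v) for v in nbrs]
        path.append(random.choices(nbrs, weights=w)[0])
    return path

print(walk("a"))  # walks like this would feed the self-attention encoder
```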
Increasing Internet of Things (IoT) device connectivity makes botnet attacks more dangerous, carrying catastrophic hazards. As IoT botnets evolve, their dynamic and multifaceted nature hampers conventional detection methods. This paper proposes a risk assessment framework based on fuzzy logic and Particle Swarm Optimization (PSO) to address the risks associated with IoT botnets. Fuzzy logic handles the uncertainties and ambiguities of IoT threats methodically, and PSO optimizes the fuzzy component settings to improve accuracy. The methodology allows for more nuanced reasoning by transitioning from binary to continuous assessment. Instead of relying on expert inputs, PSO tunes the rules and membership functions in a data-driven manner. This study presents a complete IoT botnet risk assessment system: the methodology helps security teams allocate resources by categorizing threats as high, medium, or low severity, and it is demonstrated on the CICIoT2023 dataset. Our research has implications beyond detection, as it provides a proactive approach to risk management and promotes the development of more secure IoT environments.
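A minimal sketch of the fuzzy risk scoring, assuming two illustrative botnet indicators with hand-set shoulder membership functions; these breakpoints and rule weights are exactly the quantities PSO would tune against data such as CICIoT2023.

```python
import numpy as np

def low(x):
    # Left-shoulder membership: fully "low" below 0.2, fading out by 0.6.
    return float(np.clip((0.6 - x) / 0.4, 0.0, 1.0))

def high(x):
    # Right-shoulder membership: fading in from 0.4, fully "high" above 0.8.
    return float(np.clip((x - 0.4) / 0.4, 0.0, 1.0))

def assess(anomaly_score, traffic_rate):
    # Mamdani-style rules with min as AND, then a weighted-centroid
    # defuzzification into a continuous risk score in [0, 1].
    r_low = min(low(anomaly_score), low(traffic_rate))
    r_med = min(high(anomaly_score), low(traffic_rate))
    r_high = min(high(anomaly_score), high(traffic_rate))
    return (0.2 * r_low + 0.5 * r_med + 0.9 * r_high) / (r_low + r_med + r_high + 1e-9)

print(assess(anomaly_score=0.8, traffic_rate=0.7))  # close to "high" risk
```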
Accurate software cost estimation in Global Software Development (GSD) remains challenging due to reliance on historical data and expert judgments. Traditional models, such as the Constructive Cost Model (COCOMO II), rely heavily on historical and accurate data. In addition, expert judgment is required to set many input parameters, which can introduce subjectivity and variability into the estimation process. Consequently, there is a need to improve current GSD models to mitigate the reliance on historical data, the subjectivity of expert judgment, the inadequate consideration of GSD-based cost drivers, and the limited integration of modern technologies, all of which contribute to cost overruns. This study introduces a novel hybrid model that synergizes COCOMO II with Artificial Neural Networks (ANN) to address these challenges. The proposed hybrid model integrates additional GSD-based cost drivers identified through a systematic literature review and further vetted by industry experts. This article compares the effectiveness of the proposed model with state-of-the-art machine learning-based models for software cost estimation. Evaluation on the NASA 93 dataset, extended with twenty-six GSD-based cost drivers, reveals that our hybrid model achieves superior accuracy, outperforming existing state-of-the-art models. The findings indicate the potential of combining COCOMO II, ANN, and additional GSD-based cost drivers to transform cost estimation in GSD.
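A sketch of the hybrid idea under stated assumptions: the parametric COCOMO II formula (2000 calibration) produces a baseline estimate, and an ANN learns a multiplicative correction from GSD cost-driver vectors. The driver set and data below are synthetic and invented for illustration; this is not the paper's trained model.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

def cocomo_ii(kloc, effort_multipliers, scale_factors):
    # COCOMO II post-architecture effort (person-months), 2000 calibration:
    # PM = A * Size^E * prod(EM), with E = B + 0.01 * sum(SF), A = 2.94, B = 0.91.
    E = 0.91 + 0.01 * sum(scale_factors)
    return 2.94 * kloc ** E * np.prod(effort_multipliers)

rng = np.random.default_rng(0)
X = rng.uniform(0, 1, (60, 5))  # toy GSD cost-driver vectors (e.g., time-zone distance)
base = np.array([cocomo_ii(30, [1.0], [3.0] * 5)] * 60)  # parametric baseline
# Synthetic "actual" efforts that the drivers inflate multiplicatively.
actual = base * (1 + X @ np.array([0.3, 0.2, 0.1, 0.25, 0.15]))

# The ANN learns the correction ratio actual/base from the driver vectors.
ann = MLPRegressor(hidden_layer_sizes=(16,), max_iter=5000, random_state=0)
ann.fit(X, actual / base)
print(base[0] * ann.predict(X[:1])[0], "vs actual", actual[0])
```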
文摘The widespread adoption of QR codes has revolutionized various industries, streamlined transactions and improved inventory management. However, this increased reliance on QR code technology also exposes it to potential security risks that malicious actors can exploit. QR code Phishing, or “Quishing”, is a type of phishing attack that leverages QR codes to deceive individuals into visiting malicious websites or downloading harmful software. These attacks can be particularly effective due to the growing popularity and trust in QR codes. This paper examines the importance of enhancing the security of QR codes through the utilization of artificial intelligence (AI). The abstract investigates the integration of AI methods for identifying and mitigating security threats associated with QR code usage. By assessing the current state of QR code security and evaluating the effectiveness of AI-driven solutions, this research aims to propose comprehensive strategies for strengthening QR code technology’s resilience. The study contributes to discussions on secure data encoding and retrieval, providing valuable insights into the evolving synergy between QR codes and AI for the advancement of secure digital communication.
文摘This study introduces the Orbit Weighting Scheme(OWS),a novel approach aimed at enhancing the precision and efficiency of Vector Space information retrieval(IR)models,which have traditionally relied on weighting schemes like tf-idf and BM25.These conventional methods often struggle with accurately capturing document relevance,leading to inefficiencies in both retrieval performance and index size management.OWS proposes a dynamic weighting mechanism that evaluates the significance of terms based on their orbital position within the vector space,emphasizing term relationships and distribution patterns overlooked by existing models.Our research focuses on evaluating OWS’s impact on model accuracy using Information Retrieval metrics like Recall,Precision,InterpolatedAverage Precision(IAP),andMeanAverage Precision(MAP).Additionally,we assessOWS’s effectiveness in reducing the inverted index size,crucial for model efficiency.We compare OWS-based retrieval models against others using different schemes,including tf-idf variations and BM25Delta.Results reveal OWS’s superiority,achieving a 54%Recall and 81%MAP,and a notable 38%reduction in the inverted index size.This highlights OWS’s potential in optimizing retrieval processes and underscores the need for further research in this underrepresented area to fully leverage OWS’s capabilities in information retrieval methodologies.
文摘People’s lives have become easier and simpler as technology has proliferated.This is especially true with the Internet of Things(IoT).The biggest problem for blind people is figuring out how to get where they want to go.People with good eyesight need to help these people.Smart shoes are a technique that helps blind people find their way when they walk.So,a special shoe has been made to help blind people walk safely without worrying about running into other people or solid objects.In this research,we are making a new safety system and a smart shoe for blind people.The system is based on Internet of Things(IoT)technology and uses three ultrasonic sensors to allow users to hear and react to barriers.It has ultrasonic sensors and a microprocessor that can tell how far away something is and if there are any obstacles.Water and flame sensors were used,and a sound was used to let the person know if an obstacle was near him.The sensors use Global Positioning System(GPS)technology to detect motion from almost every side to keep an eye on them and ensure they are safe.To test our proposal,we gave a questionnaire to 100 people.The questionnaire has eleven questions,and 99.1%of the people who filled it out said that the product meets their needs.
文摘The growth of the internet and technology has had a significant effect on social interactions.False information has become an important research topic due to the massive amount of misinformed content on social networks.It is very easy for any user to spread misinformation through the media.Therefore,misinformation is a problem for professionals,organizers,and societies.Hence,it is essential to observe the credibility and validity of the News articles being shared on social media.The core challenge is to distinguish the difference between accurate and false information.Recent studies focus on News article content,such as News titles and descriptions,which has limited their achievements.However,there are two ordinarily agreed-upon features of misinformation:first,the title and text of an article,and second,the user engagement.In the case of the News context,we extracted different user engagements with articles,for example,tweets,i.e.,read-only,user retweets,likes,and shares.We calculate user credibility and combine it with article content with the user’s context.After combining both features,we used three Natural language processing(NLP)feature extraction techniques,i.e.,Term Frequency-Inverse Document Frequency(TF-IDF),Count-Vectorizer(CV),and Hashing-Vectorizer(HV).Then,we applied different machine learning classifiers to classify misinformation as real or fake.Therefore,we used a Support Vector Machine(SVM),Naive Byes(NB),Random Forest(RF),Decision Tree(DT),Gradient Boosting(GB),and K-Nearest Neighbors(KNN).The proposed method has been tested on a real-world dataset,i.e.,“fakenewsnet”.We refine the fakenewsnet dataset repository according to our required features.The dataset contains 23000+articles with millions of user engagements.The highest accuracy score is 93.4%.The proposed model achieves its highest accuracy using count vector features and a random forest classifier.Our discoveries confirmed that the proposed classifier would effectively classify misinformation in social networks.
文摘Speech recognition systems have become a unique human-computer interaction(HCI)family.Speech is one of the most naturally developed human abilities;speech signal processing opens up a transparent and hand-free computation experience.This paper aims to present a retrospective yet modern approach to the world of speech recognition systems.The development journey of ASR(Automatic Speech Recognition)has seen quite a few milestones and breakthrough technologies that have been highlighted in this paper.A step-by-step rundown of the fundamental stages in developing speech recognition systems has been presented,along with a brief discussion of various modern-day developments and applications in this domain.This review paper aims to summarize and provide a beginning point for those starting in the vast field of speech signal processing.Since speech recognition has a vast potential in various industries like telecommunication,emotion recognition,healthcare,etc.,this review would be helpful to researchers who aim at exploring more applications that society can quickly adopt in future years of evolution.
文摘This research presents a novel nature-inspired metaheuristic algorithm called Frilled Lizard Optimization(FLO),which emulates the unique hunting behavior of frilled lizards in their natural habitat.FLO draws its inspiration from the sit-and-wait hunting strategy of these lizards.The algorithm’s core principles are meticulously detailed and mathematically structured into two distinct phases:(i)an exploration phase,which mimics the lizard’s sudden attack on its prey,and(ii)an exploitation phase,which simulates the lizard’s retreat to the treetops after feeding.To assess FLO’s efficacy in addressing optimization problems,its performance is rigorously tested on fifty-two standard benchmark functions.These functions include unimodal,high-dimensional multimodal,and fixed-dimensional multimodal functions,as well as the challenging CEC 2017 test suite.FLO’s performance is benchmarked against twelve established metaheuristic algorithms,providing a comprehensive comparative analysis.The simulation results demonstrate that FLO excels in both exploration and exploitation,effectively balancing these two critical aspects throughout the search process.This balanced approach enables FLO to outperform several competing algorithms in numerous test cases.Additionally,FLO is applied to twenty-two constrained optimization problems from the CEC 2011 test suite and four complex engineering design problems,further validating its robustness and versatility in solving real-world optimization challenges.Overall,the study highlights FLO’s superior performance and its potential as a powerful tool for tackling a wide range of optimization problems.
基金the NHMRC Investigator grant fellowship (APP1176298)the EMCR grant from the Centre for Biomedical Technologies (QUT)+4 种基金the QUT Postgraduate Research Award (QUTPRA)QUT HDR TOP-UP scholarshipQUT HDR Tuition Fee Sponsorshipfunding support from the Academy of Finland (315820)the Jane and Aatos Erkko Foundation (190001).
文摘Osteoarthritis(OA)is a debilitating degenerative disease affecting multiple joint tissues,including cartilage,bone,synovium,and adipose tissues.OA presents diverse clinical phenotypes and distinct molecular endotypes,including inflammatory,metabolic,mechanical,genetic,and synovial variants.Consequently,innovative technologies are needed to support the development of effective diagnostic and precision therapeutic approaches.Traditional analysis of bulk OA tissue extracts has limitations due to technical constraints,causing challenges in the differentiation between various physiological and pathological phenotypes in joint tissues.This issue has led to standardization difficulties and hindered the success of clinical trials.Gaining insights into the spatial variations of the cellular and molecular structures in OA tissues,encompassing DNA,RNA,metabolites,and proteins,as well as their chemical properties,elemental composition,and mechanical attributes,can contribute to a more comprehensive understanding of the disease subtypes.Spatially resolved biology enables biologists to investigate cells within the context of their tissue microenvironment,providing a more holistic view of cellular function.Recent advances in innovative spatial biology techniques now allow intact tissue sections to be examined using various-omics lenses,such as genomics,transcriptomics,proteomics,and metabolomics,with spatial data.This fusion of approaches provides researchers with critical insights into the molecular composition and functions of the cells and tissues at precise spatial coordinates.Furthermore,advanced imaging techniques,including high-resolution microscopy,hyperspectral imaging,and mass spectrometry imaging,enable the visualization and analysis of the spatial distribution of biomolecules,cells,and tissues.Linking these molecular imaging outputs to conventional tissue histology can facilitate a more comprehensive characterization of disease phenotypes.This review summarizes the recent advancements in the molecular imaging modalities and methodologies for in-depth spatial analysis.It explores their applications,challenges,and potential opportunities in the field of OA.Additionally,this review provides a perspective on the potential research directions for these contemporary approaches that can meet the requirements of clinical diagnoses and the establishment of therapeutic targets for OA.
基金partly supported by the University of Malaya Impact Oriented Interdisci-plinary Research Grant under Grant IIRG008(A,B,C)-19IISS.
文摘Organizations are adopting the Bring Your Own Device(BYOD)concept to enhance productivity and reduce expenses.However,this trend introduces security challenges,such as unauthorized access.Traditional access control systems,such as Attribute-Based Access Control(ABAC)and Role-Based Access Control(RBAC),are limited in their ability to enforce access decisions due to the variability and dynamism of attributes related to users and resources.This paper proposes a method for enforcing access decisions that is adaptable and dynamic,based on multilayer hybrid deep learning techniques,particularly the Tabular Deep Neural Network Tabular DNN method.This technique transforms all input attributes in an access request into a binary classification(allow or deny)using multiple layers,ensuring accurate and efficient access decision-making.The proposed solution was evaluated using the Kaggle Amazon access control policy dataset and demonstrated its effectiveness by achieving a 94%accuracy rate.Additionally,the proposed solution enhances the implementation of access decisions based on a variety of resource and user attributes while ensuring privacy through indirect communication with the Policy Administration Point(PAP).This solution significantly improves the flexibility of access control systems,making themmore dynamic and adaptable to the evolving needs ofmodern organizations.Furthermore,it offers a scalable approach to manage the complexities associated with the BYOD environment,providing a robust framework for secure and efficient access management.
基金supported by the research project—Application of Machine Learning Methods for Early Diagnosis of Pathologies of the Cardiovascular System funded by the Ministry of Science and Higher Education of the Republic of Kazakhstan.Grant No.IRN AP13068289.
文摘This research introduces an innovative ensemble approach,combining Deep Residual Networks(ResNets)and Bidirectional Gated Recurrent Units(BiGRU),augmented with an Attention Mechanism,for the classification of heart arrhythmias.The escalating prevalence of cardiovascular diseases necessitates advanced diagnostic tools to enhance accuracy and efficiency.The model leverages the deep hierarchical feature extraction capabilities of ResNets,which are adept at identifying intricate patterns within electrocardiogram(ECG)data,while BiGRU layers capture the temporal dynamics essential for understanding the sequential nature of ECG signals.The integration of an Attention Mechanism refines the model’s focus on critical segments of ECG data,ensuring a nuanced analysis that highlights the most informative features for arrhythmia classification.Evaluated on a comprehensive dataset of 12-lead ECG recordings,our ensemble model demonstrates superior performance in distinguishing between various types of arrhythmias,with an accuracy of 98.4%,a precision of 98.1%,a recall of 98%,and an F-score of 98%.This novel combination of convolutional and recurrent neural networks,supplemented by attention-driven mechanisms,advances automated ECG analysis,contributing significantly to healthcare’s machine learning applications and presenting a step forward in developing non-invasive,efficient,and reliable tools for early diagnosis and management of heart diseases.
文摘This research paper presents a comprehensive investigation into the effectiveness of the DeepSurNet-NSGA II(Deep Surrogate Model-Assisted Non-dominated Sorting Genetic Algorithm II)for solving complex multiobjective optimization problems,with a particular focus on robotic leg-linkage design.The study introduces an innovative approach that integrates deep learning-based surrogate models with the robust Non-dominated Sorting Genetic Algorithm II,aiming to enhance the efficiency and precision of the optimization process.Through a series of empirical experiments and algorithmic analyses,the paper demonstrates a high degree of correlation between solutions generated by the DeepSurNet-NSGA II and those obtained from direct experimental methods,underscoring the algorithm’s capability to accurately approximate the Pareto-optimal frontier while significantly reducing computational demands.The methodology encompasses a detailed exploration of the algorithm’s configuration,the experimental setup,and the criteria for performance evaluation,ensuring the reproducibility of results and facilitating future advancements in the field.The findings of this study not only confirm the practical applicability and theoretical soundness of the DeepSurNet-NSGA II in navigating the intricacies of multi-objective optimization but also highlight its potential as a transformative tool in engineering and design optimization.By bridging the gap between complex optimization challenges and achievable solutions,this research contributes valuable insights into the optimization domain,offering a promising direction for future inquiries and technological innovations.
基金support from the Cyber Technology Institute(CTI)at the School of Computer Science and Informatics,De Montfort University,United Kingdom,along with financial assistance from Universiti Tun Hussein Onn Malaysia and the UTHM Publisher’s office through publication fund E15216.
文摘Integrating machine learning and data mining is crucial for processing big data and extracting valuable insights to enhance decision-making.However,imbalanced target variables within big data present technical challenges that hinder the performance of supervised learning classifiers on key evaluation metrics,limiting their overall effectiveness.This study presents a comprehensive review of both common and recently developed Supervised Learning Classifiers(SLCs)and evaluates their performance in data-driven decision-making.The evaluation uses various metrics,with a particular focus on the Harmonic Mean Score(F-1 score)on an imbalanced real-world bank target marketing dataset.The findings indicate that grid-search random forest and random-search random forest excel in Precision and area under the curve,while Extreme Gradient Boosting(XGBoost)outperforms other traditional classifiers in terms of F-1 score.Employing oversampling methods to address the imbalanced data shows significant performance improvement in XGBoost,delivering superior results across all metrics,particularly when using the SMOTE variant known as the BorderlineSMOTE2 technique.The study concludes several key factors for effectively addressing the challenges of supervised learning with imbalanced datasets.These factors include the importance of selecting appropriate datasets for training and testing,choosing the right classifiers,employing effective techniques for processing and handling imbalanced datasets,and identifying suitable metrics for performance evaluation.Additionally,factors also entail the utilisation of effective exploratory data analysis in conjunction with visualisation techniques to yield insights conducive to data-driven decision-making.
文摘The detection of software vulnerabilities written in C and C++languages takes a lot of attention and interest today.This paper proposes a new framework called DrCSE to improve software vulnerability detection.It uses an intelligent computation technique based on the combination of two methods:Rebalancing data and representation learning to analyze and evaluate the code property graph(CPG)of the source code for detecting abnormal behavior of software vulnerabilities.To do that,DrCSE performs a combination of 3 main processing techniques:(i)building the source code feature profiles,(ii)rebalancing data,and(iii)contrastive learning.In which,the method(i)extracts the source code’s features based on the vertices and edges of the CPG.The method of rebalancing data has the function of supporting the training process by balancing the experimental dataset.Finally,contrastive learning techniques learn the important features of the source code by finding and pulling similar ones together while pushing the outliers away.The experiment part of this paper demonstrates the superiority of the DrCSE Framework for detecting source code security vulnerabilities using the Verum dataset.As a result,the method proposed in the article has brought a pretty good performance in all metrics,especially the Precision and Recall scores of 39.35%and 69.07%,respectively,proving the efficiency of the DrCSE Framework.It performs better than other approaches,with a 5%boost in Precision and a 5%boost in Recall.Overall,this is considered the best research result for the software vulnerability detection problem using the Verum dataset according to our survey to date.
文摘This study introduces a new classifier tailored to address the limitations inherent in conventional classifiers such as K-nearest neighbor(KNN),random forest(RF),decision tree(DT),and support vector machine(SVM)for arrhythmia detection.The proposed classifier leverages the Chi-square distance as a primary metric,providing a specialized and original approach for precise arrhythmia detection.To optimize feature selection and refine the classifier’s performance,particle swarm optimization(PSO)is integrated with the Chi-square distance as a fitness function.This synergistic integration enhances the classifier’s capabilities,resulting in a substantial improvement in accuracy for arrhythmia detection.Experimental results demonstrate the efficacy of the proposed method,achieving a noteworthy accuracy rate of 98% with PSO,higher than 89% achieved without any previous optimization.The classifier outperforms machine learning(ML)and deep learning(DL)techniques,underscoring its reliability and superiority in the realm of arrhythmia classification.The promising results render it an effective method to support both academic and medical communities,offering an advanced and precise solution for arrhythmia detection in electrocardiogram(ECG)data.
Funding: This research is funded by the Science Committee of the Ministry of Science and Higher Education of the Republic of Kazakhstan (Grant No. AP19674517).
Abstract: Business efficiency is often hindered by the challenges of traditional Supply Chain Management (SCM), which carries elevated risks due to inadequate accountability and transparency. To address these challenges and improve operations in green manufacturing, optimization algorithms play a crucial role in supporting decision-making processes. In this study, we propose a solution to the green lot-size optimization problem by leveraging bio-inspired algorithms, notably the Stork Optimization Algorithm (SOA). The SOA draws inspiration from the hunting and winter-migration strategies employed by storks in nature. The theoretical framework of SOA is elaborated and mathematically modeled in two distinct phases: exploration, based on simulating migration, and exploitation, based on simulating the hunting strategy. To tackle the green lot-size optimization problem, our methodology gathered real-world data, which was then transformed into a simplified multi-constraint function aimed at optimizing total costs and minimizing CO₂ emissions; this function served as input to the SOA model (a hedged sketch of such a fitness function follows below). The SOA model was then applied to identify the optimal lot size that balances cost-effectiveness and sustainability. Through extensive experimentation, we compared the performance of SOA with twelve established metaheuristic algorithms, consistently finding that SOA outperformed the others. This study's contribution lies in providing an effective solution to the sustainable lot-size optimization dilemma, thereby reducing environmental impact and enhancing supply chain efficiency. The simulation findings underscore that SOA consistently achieves superior outcomes compared to existing optimization methodologies, making it a promising approach for green manufacturing and sustainable supply chain management.
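A hedged sketch of the kind of constrained fitness function described above; all coefficients (setup cost, holding cost, demand, emission factor, emission cap) are illustrative assumptions rather than the paper's data, and brute-force search stands in for SOA:

```python
def green_lot_fitness(Q, D=10_000, K=500.0, h=2.0, e=0.05, co2_cap=400.0):
    """Total cost plus a CO2 soft-constraint penalty for lot size Q."""
    if Q <= 0:
        return float("inf")
    ordering = K * D / Q                       # classic EOQ ordering-cost term
    holding = h * Q / 2                        # average inventory holding cost
    co2 = e * Q                                # emissions modeled as growing with Q
    penalty = 1e3 * max(0.0, co2 - co2_cap)    # soft constraint on emissions
    return ordering + holding + co2 + penalty

# Brute force over candidate lot sizes; an optimizer like SOA would search here.
best = min(range(1, 20_001), key=green_lot_fitness)
print(best, green_lot_fitness(best))
```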
Funding: This research is funded by the Science Committee of the Ministry of Science and Higher Education of the Republic of Kazakhstan (Grant No. AP19674517).
Abstract: This paper introduces a groundbreaking metaheuristic algorithm named Magnificent Frigatebird Optimization (MFO), inspired by behaviors observed in magnificent frigatebirds in their natural habitats. MFO is founded on the kleptoparasitic behavior of these birds, which steal prey from other seabirds: a magnificent frigatebird targets a food-carrying seabird and pecks at it aggressively until it drops its prey, then swiftly dives to capture the abandoned prey before it falls into the water. The theoretical framework of MFO is thoroughly detailed and mathematically represented, mimicking this kleptoparasitic behavior in two distinct phases: exploration and exploitation. During the exploration phase, the algorithm searches for new potential solutions across a broad area, akin to the frigatebird scouting for vulnerable seabirds. In the exploitation phase, the algorithm fine-tunes solutions, similar to the frigatebird focusing on a single target to secure its meal (a generic skeleton of this two-phase scheme is sketched below). To evaluate MFO's performance, the algorithm is tested on twenty-three standard benchmark functions, including unimodal, high-dimensional multimodal, and fixed-dimensional multimodal types. The results highlight MFO's proficiency in balancing exploration and exploitation throughout the optimization process. Comparative studies with twelve well-known metaheuristic algorithms demonstrate that MFO consistently achieves superior optimization results, outperforming its competitors across various metrics. In addition, applying MFO to four engineering design problems shows the effectiveness of the proposed approach on real-world applications, validating its practical utility and robustness.
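A generic two-phase metaheuristic skeleton in Python, illustrating the exploration/exploitation split on the sphere benchmark; the update rules are standard illustrative choices, not MFO's published equations:

```python
import numpy as np

def minimize(f, dim=2, pop=30, iters=200, lo=-5.0, hi=5.0, seed=0):
    rng = np.random.default_rng(seed)
    X = rng.uniform(lo, hi, (pop, dim))
    fit = np.apply_along_axis(f, 1, X)
    for t in range(iters):
        best = X[fit.argmin()].copy()
        for i in range(pop):
            j = rng.integers(pop)
            # exploration: jump toward/away from a random "target" individual
            cand = X[i] + rng.uniform(-1.0, 1.0, dim) * (X[j] - X[i])
            # exploitation: move toward the best with a shrinking random step
            cand = 0.5 * (cand + best) + (1.0 - t / iters) * rng.normal(0.0, 0.1, dim)
            cand = np.clip(cand, lo, hi)
            fc = f(cand)
            if fc < fit[i]:           # greedy acceptance of improvements
                X[i], fit[i] = cand, fc
    return X[fit.argmin()]

print(minimize(lambda x: np.sum(x ** 2)))  # sphere function from the benchmark suite
```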
Abstract: The number of blogs and other forms of opinionated online content has increased dramatically in recent years. Many fields, including academia and national security, place an emphasis on automated detection of political article orientation. Political articles (especially in the Arab world) differ from other articles in their subjectivity: the author's beliefs and political affiliation can significantly influence a political article. With categories representing the main political ideologies, this problem can be treated as a subset of text classification. In general, the performance of machine learning models for text classification is sensitive to hyperparameter settings, and the feature vector used to represent a document must capture, to some extent, the complex semantics of natural language. To this end, this paper presents an intelligent system for detecting the orientation of political Arabic articles that adapts the categorical boosting (CatBoost) method combined with a multi-level feature concept. Extracting features at multiple levels enhances the model's ability to discriminate between classes or patterns, as each level may capture different aspects of the input data, contributing to a more comprehensive representation (a sketch of this idea follows below). CatBoost, a robust and efficient gradient-boosting algorithm, is utilized to learn and predict the complex relationships between these features and the political orientation labels of the articles. A dataset of political Arabic texts collected from diverse sources, including postings and articles, is used to assess the proposed technique; the opinions fall into three subcategories: conservative, reform, and revolutionary. The results demonstrate that, compared with other frequently used machine learning models for text classification, the CatBoost method with multi-level features performs better, achieving an accuracy of 98.14%.
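One common reading of multi-level text features is to stack word-level and character-level TF-IDF views into a single matrix; the sketch below, with made-up toy documents, shows that idea feeding CatBoost and is an assumption rather than the paper's exact feature design:

```python
from catboost import CatBoostClassifier
from scipy.sparse import hstack
from sklearn.feature_extraction.text import TfidfVectorizer

# Toy English placeholders for the Arabic corpus, one per ideology label.
docs = [
    "the party defends long-standing traditions",
    "gradual policy change through institutions",
    "the people demand a complete overhaul",
    "heritage and stability guide our platform",
    "step-by-step reform of existing laws",
    "sweeping change to the whole system",
]
labels = ["conservative", "reform", "revolutionary",
          "conservative", "reform", "revolutionary"]

# Two feature "levels": word n-grams and character n-grams, stacked side by side.
word_level = TfidfVectorizer(analyzer="word", ngram_range=(1, 2))
char_level = TfidfVectorizer(analyzer="char_wb", ngram_range=(2, 4))
X = hstack([word_level.fit_transform(docs), char_level.fit_transform(docs)], format="csr")

clf = CatBoostClassifier(iterations=50, verbose=False)
clf.fit(X, labels)
print(clf.predict(X))
```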
Abstract: With the prevalence of Internet of Things (IoT) systems, smart cities comprise complex networks of sensors, actuators, appliances, and cyber services. Their complexity and heterogeneity have made smart cities vulnerable to sophisticated cyber-attacks, especially privacy-related attacks such as inference and data poisoning. Federated Learning (FL) is regarded as a promising method to enable distributed learning with privacy-preserving intelligence in IoT applications. Although developing privacy-preserving FL has drawn great research interest, current research concentrates mostly on FL with independent and identically distributed (i.i.d.) data, and few studies have addressed the non-i.i.d. setting. FL is known to be vulnerable to Generative Adversarial Network (GAN) attacks, in which an adversary can pose as a contributor participating in the training process to acquire the private data of other contributors. This paper proposes an innovative Privacy Protection-based Federated Deep Learning (PP-FDL) framework, which achieves data protection against privacy-related GAN attacks along with high classification rates on non-i.i.d. data. PP-FDL enables fog nodes to cooperate in training the FDL model so that contributors have no access to each other's data, with class probabilities protected using a private identifier generated for each class. The PP-FDL framework is evaluated for image classification using simple convolutional networks trained on the MNIST and CIFAR-10 datasets. The empirical results reveal that PP-FDL achieves data protection and outperforms the other three state-of-the-art models with accuracy improvements of 3%–8%. A minimal federated-averaging skeleton illustrating the underlying FL setting follows below.
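For orientation, the following is a minimal federated-averaging skeleton on synthetic data; it shows only the baseline FL setting (local updates, averaged weights, no raw data shared) and none of PP-FDL's specific protections such as fog-node coordination or per-class private identifiers:

```python
import numpy as np

def local_update(weights, X, y, lr=0.1):
    # one step of logistic-regression gradient descent on a contributor's own data
    preds = 1.0 / (1.0 + np.exp(-X @ weights))
    return weights - lr * X.T @ (preds - y) / len(y)

def fed_avg(global_w, client_data):
    # each contributor trains locally; only weight vectors leave the device
    updates = [local_update(global_w.copy(), X, y) for X, y in client_data]
    return np.mean(updates, axis=0)

rng = np.random.default_rng(0)
clients = [(rng.normal(size=(50, 3)), rng.integers(0, 2, 50).astype(float))
           for _ in range(4)]  # four contributors with private local datasets
w = np.zeros(3)
for _ in range(20):            # communication rounds
    w = fed_avg(w, clients)
print(w)
```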
Funding: Key Research and Development Projects of Ningxia, Grant/Award Number: 2022BDE03007; Natural Science Foundation of Ningxia Province, Grant/Award Numbers: 2023A0367, 2021A0966, 2022AAC05010, 2022AAC03004, 2021AAC03068.
Abstract: Network embedding aims to learn a low-dimensional vector for each node in a network, which can be applied to diverse data mining tasks. In real life, many networks include rich attributes and temporal information, yet most existing embedding approaches ignore either the temporal information or the network attributes. This article presents a self-attention based architecture that uses higher-order weights and node attributes for embedding both static and temporal attributed networks. A random walk sampling algorithm based on higher-order weights and node attributes is presented to capture network topological features. For static attributed networks, the algorithm incorporates first-order to k-order weights and node attribute similarities into one weighted graph to preserve the topological features of the network. For temporal attributed networks, the algorithm incorporates previous snapshots of the network, containing first-order to k-order weights and node attribute similarities, into one weighted graph; in addition, it uses a damping factor to ensure that more recent snapshots receive greater weight. Attribute features are then incorporated into topological features (a toy sketch of attribute-aware walk sampling follows below). Next, the authors adopt Self-Attention Networks to learn node representations. Experimental results on node classification for static attributed networks and link prediction for temporal attributed networks reveal that the proposed approach is competitive against diverse state-of-the-art baseline approaches.
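A toy sketch of attribute-aware walk sampling: transition probabilities blend edge weights with node-attribute cosine similarity. The graph, attributes, and mixing factor `alpha` are illustrative assumptions, and the higher-order weights and snapshot damping described above are omitted:

```python
import numpy as np

def walk(adj, attrs, start, length=5, alpha=0.5, seed=0):
    rng = np.random.default_rng(seed)
    path = [start]
    for _ in range(length):
        cur = path[-1]
        nbrs = np.flatnonzero(adj[cur])
        if nbrs.size == 0:
            break
        # cosine similarity between the current node's attributes and each neighbor's
        sim = np.array([attrs[cur] @ attrs[n] /
                        (np.linalg.norm(attrs[cur]) * np.linalg.norm(attrs[n]) + 1e-12)
                        for n in nbrs])
        # blend topological edge weight with attribute similarity
        p = alpha * adj[cur, nbrs] + (1 - alpha) * sim
        path.append(int(rng.choice(nbrs, p=p / p.sum())))
    return path

adj = np.array([[0, 1, 1, 0], [1, 0, 1, 1],
                [1, 1, 0, 0], [0, 1, 0, 0]], dtype=float)  # toy weighted graph
attrs = np.eye(4) + 0.1                                     # toy node attributes
print(walk(adj, attrs, start=0))
```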
Abstract: Increasing Internet of Things (IoT) device connectivity makes botnet attacks more dangerous, carrying potentially catastrophic consequences. As IoT botnets evolve, their dynamic and multifaceted nature hampers conventional detection methods. This paper proposes a risk assessment framework based on fuzzy logic and Particle Swarm Optimization (PSO) to address the risks associated with IoT botnets. Fuzzy logic handles the uncertainties and ambiguities of IoT threats methodically, while PSO optimizes the fuzzy component settings to improve accuracy. The methodology allows for more nuanced assessment by transitioning from binary to continuous scoring, and PSO tunes the rules and membership functions in a data-driven manner instead of relying on expert inputs (a sketch of the fuzzy grading step follows below). This study presents a complete IoT botnet risk assessment system: it helps security teams allocate resources by categorizing threats as high, medium, or low severity, and the approach is demonstrated on the CICIoT2023 dataset. Our research has implications beyond detection, as it provides a proactive approach to risk management and promotes the development of more secure IoT environments.
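A small sketch of the fuzzy grading step, assuming triangular membership functions over a normalized threat score in [0, 1]; the breakpoints are placeholders for the parameters PSO would tune:

```python
def tri(x, a, b, c):
    # triangular membership function peaking at b over support (a, c)
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def severity(score):
    # membership grades for each severity class; breakpoints are illustrative
    grades = {
        "low":    tri(score, -0.01, 0.0, 0.4),
        "medium": tri(score, 0.2, 0.5, 0.8),
        "high":   tri(score, 0.6, 1.0, 1.01),
    }
    return max(grades, key=grades.get), grades

print(severity(0.72))  # e.g., a botnet indicator scored at 0.72 -> "high"
```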
Abstract: Accurate software cost estimation in Global Software Development (GSD) remains challenging due to reliance on historical data and expert judgment. Traditional models, such as the Constructive Cost Model (COCOMO II), rely heavily on accurate historical data, and expert judgment is required to set many input parameters, which can introduce subjectivity and variability into the estimation process. Consequently, current GSD models need improvement to mitigate the reliance on historical data, the subjectivity of expert judgment, the inadequate consideration of GSD-based cost drivers, and the limited integration of modern technologies, all of which contribute to cost overruns. This study introduces a novel hybrid model that synergizes COCOMO II with Artificial Neural Networks (ANN) to address these challenges. The proposed hybrid model integrates additional GSD-based cost drivers identified through a systematic literature review and further vetted by industry experts (a sketch of the hybrid idea follows below). This article compares the effectiveness of the proposed model with state-of-the-art machine-learning-based models for software cost estimation. Evaluation on the NASA 93 dataset, adopting twenty-six GSD-based cost drivers, reveals that our hybrid model achieves superior accuracy, outperforming existing state-of-the-art models. The findings indicate the potential of combining COCOMO II, ANN, and additional GSD-based cost drivers to transform cost estimation in GSD.
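A sketch of the hybrid idea under stated assumptions: a COCOMO II nominal effort (standard post-architecture constants A = 2.94, B = 0.91) plus synthetic GSD driver columns feed a small neural network that learns a correction. The data here is fabricated for illustration and is not the NASA 93 dataset:

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

def cocomo_nominal(kloc, scale_factors, A=2.94, B=0.91):
    # COCOMO II post-architecture form: Effort = A * Size^E, E = B + 0.01 * sum(SF)
    return A * kloc ** (B + 0.01 * sum(scale_factors))

rng = np.random.default_rng(0)
kloc = rng.uniform(5, 400, 200)                      # project sizes in KLOC
gsd_drivers = rng.uniform(0.8, 1.4, (200, 5))        # e.g., time-zone gap, culture
nominal = np.array([cocomo_nominal(k, [3.0] * 5) for k in kloc])
# synthetic "actual" effort: nominal inflated by GSD drivers plus noise
actual = nominal * gsd_drivers.prod(axis=1) * rng.normal(1.0, 0.05, 200)

# the ANN learns the mapping from (nominal estimate, GSD drivers) to actual effort
X = np.column_stack([nominal, gsd_drivers])
model = make_pipeline(StandardScaler(),
                      MLPRegressor(hidden_layer_sizes=(16, 8), max_iter=2000,
                                   random_state=0))
model.fit(X, actual)
print(model.predict(X[:3]), actual[:3])
```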