The widespread adoption of QR codes has revolutionized various industries, streamlining transactions and improving inventory management. However, this increased reliance on QR code technology also exposes it to potential security risks that malicious actors can exploit. QR code phishing, or "Quishing", is a type of phishing attack that leverages QR codes to deceive individuals into visiting malicious websites or downloading harmful software. These attacks can be particularly effective because of the growing popularity of, and trust in, QR codes. This paper examines the importance of enhancing the security of QR codes through the use of artificial intelligence (AI). It investigates the integration of AI methods for identifying and mitigating security threats associated with QR code usage. By assessing the current state of QR code security and evaluating the effectiveness of AI-driven solutions, this research aims to propose comprehensive strategies for strengthening the resilience of QR code technology. The study contributes to discussions on secure data encoding and retrieval, providing valuable insights into the evolving synergy between QR codes and AI for the advancement of secure digital communication.
People's lives have become easier and simpler as technology has proliferated, especially with the Internet of Things (IoT). The biggest problem for blind people is figuring out how to get where they want to go, and they often depend on sighted people for help. Smart shoes are a technique that helps blind people find their way when they walk. In this research, we build a new safety system and a smart shoe that lets blind people walk safely without worrying about colliding with other people or solid objects. The system is based on IoT technology and uses three ultrasonic sensors that allow users to hear and react to barriers. A microprocessor paired with the ultrasonic sensors determines how far away an obstacle is, while water and flame sensors, together with an audible alert, warn the wearer when a hazard is near. The system also uses Global Positioning System (GPS) technology to monitor the wearer's movement and help ensure their safety. To test our proposal, we gave a questionnaire of eleven questions to 100 people; 99.1% of the respondents said that the product meets their needs.
The growth of the internet and technology has had a significant effect on social interactions. False information has become an important research topic due to the massive amount of misinformed content on social networks. It is very easy for any user to spread misinformation through the media, so misinformation is a problem for professionals, organizations, and societies. Hence, it is essential to assess the credibility and validity of the news articles being shared on social media. The core challenge is to distinguish between accurate and false information. Recent studies focus on news article content, such as titles and descriptions, which has limited their achievements. However, there are two ordinarily agreed-upon features of misinformation: first, the title and text of an article, and second, user engagement. For the news context, we extracted different user engagements with articles, for example, tweets (read-only), retweets, likes, and shares. We calculate user credibility and combine it with the article content and the user's context. After combining both features, we used three natural language processing (NLP) feature-extraction techniques: Term Frequency-Inverse Document Frequency (TF-IDF), Count-Vectorizer (CV), and Hashing-Vectorizer (HV). We then applied different machine learning classifiers to label articles as real or fake: Support Vector Machine (SVM), Naive Bayes (NB), Random Forest (RF), Decision Tree (DT), Gradient Boosting (GB), and K-Nearest Neighbors (KNN). The proposed method has been tested on a real-world dataset, "FakeNewsNet", which we refined according to our required features; it contains more than 23,000 articles with millions of user engagements. The highest accuracy score is 93.4%, achieved using count-vector features and a random forest classifier. Our findings confirm that the proposed classifier can effectively classify misinformation in social networks.
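The count/TF-IDF feature-extraction step described above can be sketched in a few lines of plain Python. This is an illustrative toy only: the documents and the similarity comparison are invented for the example, and the paper's user-credibility features and trained classifiers are not reproduced here.

```python
import math

def count_vectorize(docs):
    """Toy Count-Vectorizer: build a shared vocabulary and per-document term counts."""
    vocab = sorted({w for d in docs for w in d.lower().split()})
    index = {w: i for i, w in enumerate(vocab)}
    vectors = []
    for d in docs:
        v = [0] * len(vocab)
        for w in d.lower().split():
            v[index[w]] += 1
        vectors.append(v)
    return vocab, vectors

def tf_idf(vectors):
    """Reweight count vectors by smoothed inverse document frequency."""
    n_docs, n_terms = len(vectors), len(vectors[0])
    df = [sum(1 for v in vectors if v[j] > 0) for j in range(n_terms)]
    idf = [math.log((1 + n_docs) / (1 + df[j])) + 1 for j in range(n_terms)]
    return [[v[j] * idf[j] for j in range(n_terms)] for v in vectors]

def cosine(a, b):
    """Cosine similarity between two feature vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb) if na and nb else 0.0

docs = [
    "shocking miracle cure doctors hate this cure",   # hypothetical fake-style title
    "miracle cure shocking secret doctors hate",      # hypothetical fake-style title
    "parliament passes annual budget bill today",     # hypothetical real-style title
]
vocab, counts = count_vectorize(docs)
X = tf_idf(counts)
sim_same_style = cosine(X[0], X[1])   # heavy word overlap -> high similarity
sim_cross_style = cosine(X[0], X[2])  # no shared words -> zero similarity
```

In the actual pipeline these vectors, concatenated with the user-credibility features, would be fed to the classifiers listed above.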
Speech recognition systems have become an important family of human-computer interaction (HCI) technologies. Speech is one of the most naturally developed human abilities, and speech signal processing opens up a transparent, hands-free computation experience. This paper presents a retrospective yet modern overview of speech recognition systems. The development of automatic speech recognition (ASR) has seen quite a few milestones and breakthrough technologies, which are highlighted in this paper. A step-by-step rundown of the fundamental stages in developing speech recognition systems is presented, along with a brief discussion of various modern developments and applications in this domain. This review aims to summarize the field and provide a starting point for those new to speech signal processing. Since speech recognition has vast potential in industries such as telecommunication, emotion recognition, and healthcare, this review should help researchers explore further applications that society can readily adopt in the coming years.
The detection of software vulnerabilities in C and C++ code has attracted considerable attention and interest. This paper proposes a new framework, DrCSE, to improve software vulnerability detection. It uses an intelligent computation technique that combines two methods, data rebalancing and representation learning, to analyze and evaluate the code property graph (CPG) of the source code and detect the abnormal behavior of software vulnerabilities. To do so, DrCSE combines three main processing techniques: (i) building source code feature profiles, (ii) rebalancing the data, and (iii) contrastive learning. Step (i) extracts the source code's features based on the vertices and edges of the CPG. The data-rebalancing step supports the training process by balancing the experimental dataset. Finally, contrastive learning learns the important features of the source code by finding and pulling similar samples together while pushing outliers away. The experimental part of this paper demonstrates the effectiveness of the DrCSE framework for detecting source code security vulnerabilities using the Verum dataset. The proposed method performs well on all metrics, with Precision and Recall scores of 39.35% and 69.07%, respectively, outperforming other approaches by about 5% in both Precision and Recall. According to our survey to date, this is the best reported result for the software vulnerability detection problem on the Verum dataset.
This study introduces a new classifier tailored to address the limitations inherent in conventional classifiers such as K-nearest neighbor (KNN), random forest (RF), decision tree (DT), and support vector machine (SVM) for arrhythmia detection. The proposed classifier leverages the Chi-square distance as its primary metric, providing a specialized and original approach for precise arrhythmia detection. To optimize feature selection and refine the classifier's performance, particle swarm optimization (PSO) is integrated with the Chi-square distance as its fitness function. This synergistic integration enhances the classifier's capabilities, resulting in a substantial improvement in accuracy for arrhythmia detection. Experimental results demonstrate the efficacy of the proposed method, achieving a noteworthy accuracy rate of 98% with PSO, compared with 89% without any prior optimization. The classifier outperforms machine learning (ML) and deep learning (DL) techniques, underscoring its reliability and superiority for arrhythmia classification. The promising results render it an effective method to support both the academic and medical communities, offering an advanced and precise solution for arrhythmia detection in electrocardiogram (ECG) data.
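As a rough illustration of the two ingredients named above, the sketch below implements the Chi-square distance and a minimal global-best particle swarm that minimizes it against a fixed target vector. This is not the paper's classifier: the target profile, the swarm parameters, and the way the fitness is coupled to the distance are assumptions made purely for the example.

```python
import random

def chi_square_distance(x, y):
    """Chi-square distance between two non-negative feature vectors."""
    return sum((a - b) ** 2 / (a + b) for a, b in zip(x, y) if a + b > 0)

def pso(fitness, dim, bounds, n_particles=20, iters=60, w=0.7, c1=1.5, c2=1.5, seed=0):
    """Minimal particle swarm optimizer (global-best topology), seeded for reproducibility."""
    rng = random.Random(seed)
    lo, hi = bounds
    pos = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]
    pbest_val = [fitness(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[g][:], pbest_val[g]
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = rng.random(), rng.random()
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                pos[i][d] = min(hi, max(lo, pos[i][d] + vel[i][d]))
            val = fitness(pos[i])
            if val < pbest_val[i]:
                pbest[i], pbest_val[i] = pos[i][:], val
                if val < gbest_val:
                    gbest, gbest_val = pos[i][:], val
    return gbest, gbest_val

# Hypothetical "target" feature profile; PSO searches for the vector closest to it
# under the Chi-square distance used as the fitness function.
target = [0.2, 0.5, 0.3]
best, best_val = pso(lambda p: chi_square_distance(p, target), dim=3, bounds=(0.0, 1.0))
```

In the study, the fitness would instead score classification quality over selected ECG features; the optimizer structure stays the same.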
The efficiency of businesses is often hindered by the challenges encountered in traditional Supply Chain Management (SCM), which is characterized by elevated risks due to inadequate accountability and transparency. To address these challenges and improve operations in green manufacturing, optimization algorithms play a crucial role in supporting decision-making processes. In this study, we propose a solution to the green lot-size optimization problem by leveraging bio-inspired algorithms, notably the Stork Optimization Algorithm (SOA). The SOA draws inspiration from the hunting and winter-migration strategies employed by storks in nature. The theoretical framework of SOA is elaborated and mathematically modeled in two distinct phases: exploration, based on simulating migration, and exploitation, based on simulating the hunting strategy. To tackle the green lot-size optimization problem, our methodology involved gathering real-world data, which was then transformed into a simplified function with multiple constraints aimed at optimizing total costs and minimizing CO2 emissions. This function served as input for the SOA model, which was then applied to identify the optimal lot size that strikes a balance between cost-effectiveness and sustainability. Through extensive experimentation, we compared the performance of SOA with twelve established metaheuristic algorithms, consistently demonstrating that SOA outperformed the others. This study's contribution lies in providing an effective solution to the sustainable lot-size optimization dilemma, thereby reducing environmental impact and enhancing supply chain efficiency. The simulation findings underscore that SOA consistently achieves superior outcomes compared to existing optimization methodologies, making it a promising approach for green manufacturing and sustainable supply chain management.
This paper introduces a new metaheuristic algorithm named Magnificent Frigatebird Optimization (MFO), inspired by the unique behaviors observed in magnificent frigatebirds in their natural habitats. MFO is based on the kleptoparasitic behavior of these birds, in which they steal prey from other seabirds: a magnificent frigatebird targets a food-carrying seabird and aggressively pecks at it until the seabird drops its prey, then swiftly dives to capture the abandoned prey before it falls into the water. The theoretical framework of MFO is thoroughly detailed and mathematically represented, mimicking the frigatebird's kleptoparasitic behavior in two distinct phases: exploration and exploitation. During the exploration phase, the algorithm searches for new potential solutions across a broad area, akin to the frigatebird scouting for vulnerable seabirds. In the exploitation phase, the algorithm fine-tunes the solutions, similar to the frigatebird focusing on a single target to secure its meal. To evaluate MFO's performance, the algorithm is tested on twenty-three standard benchmark functions, including unimodal, high-dimensional multimodal, and fixed-dimensional multimodal types. The results highlight MFO's proficiency in balancing exploration and exploitation throughout the optimization process. Comparative studies with twelve well-known metaheuristic algorithms demonstrate that MFO consistently achieves superior optimization results, outperforming its competitors across various metrics. In addition, applying MFO to four engineering design problems shows the effectiveness of the proposed approach in handling real-world applications, thereby validating its practical utility and robustness.
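Population metaheuristics of this family share a common two-phase loop: broad random moves toward promising members (exploration), then small local refinements (exploitation). The skeleton below illustrates only that generic structure on a simple sphere function; it is not the published MFO update rule, and the move probability, step sizes, and test function are assumptions for the example.

```python
import random

def sphere(x):
    """Simple convex test function with minimum 0 at the origin."""
    return sum(v * v for v in x)

def two_phase_search(fitness, dim=2, lo=-5.0, hi=5.0, pop=20, iters=100, seed=1):
    """Generic exploration/exploitation population search (illustrative skeleton)."""
    rng = random.Random(seed)
    X = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(pop)]
    best = min(X, key=fitness)[:]
    for t in range(iters):
        for i in range(pop):
            if rng.random() < 0.5:
                # Exploration: jump toward a randomly chosen member ("scouting a target").
                j = rng.randrange(pop)
                cand = [X[i][d] + rng.uniform(0, 1) * (X[j][d] - X[i][d]) for d in range(dim)]
            else:
                # Exploitation: shrinking local step around the current position.
                r = (1 - t / iters) * (hi - lo) * 0.1
                cand = [X[i][d] + rng.uniform(-r, r) for d in range(dim)]
            cand = [min(hi, max(lo, v)) for v in cand]
            if fitness(cand) < fitness(X[i]):  # greedy acceptance
                X[i] = cand
                if fitness(cand) < fitness(best):
                    best = cand[:]
    return best, fitness(best)

best, best_val = two_phase_search(sphere)
```

The published algorithm replaces both candidate moves with equations derived from the kleptoparasitic behavior described above.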
Network embedding aspires to learn a low-dimensional vector for each node in a network, which can be applied to diverse data mining tasks. In real life, many networks include rich attributes and temporal information, yet most existing embedding approaches ignore either the temporal information or the network attributes. This article presents a self-attention-based architecture that uses higher-order weights and node attributes for both static and temporal attributed network embedding. A random-walk sampling algorithm based on higher-order weights and node attributes is used to capture network topological features. For static attributed networks, the algorithm incorporates first-order to k-order weights and node-attribute similarities into one weighted graph to preserve the topological features of the network. For temporal attributed networks, the algorithm incorporates previous snapshots of the network, containing first-order to k-order weights and node-attribute similarities, into one weighted graph, and uses a damping factor to ensure that more recent snapshots receive greater weight. Attribute features are then incorporated into the topological features. Next, we adopt Self-Attention Networks to learn node representations. Experimental results on node classification for static attributed networks and link prediction for temporal attributed networks reveal that our proposed approach is competitive against diverse state-of-the-art baseline approaches.
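The weight-and-attribute-biased random-walk sampling can be sketched as follows. The Jaccard attribute similarity, the mixing coefficient alpha, and the toy graph are assumptions for illustration, not the article's exact sampling rule:

```python
import random

def attr_similarity(a, b):
    """Jaccard similarity between two node-attribute sets."""
    return len(a & b) / len(a | b) if a | b else 0.0

def weighted_walk(adj, attrs, start, length, alpha=0.5, seed=0):
    """Random walk whose step probabilities mix edge weight and attribute similarity.

    adj:   {node: {neighbor: edge_weight}}
    attrs: {node: set_of_attributes}
    alpha: trade-off between topology (edge weight) and attribute similarity.
    """
    rng = random.Random(seed)
    walk = [start]
    cur = start
    for _ in range(length - 1):
        nbrs = list(adj[cur])
        if not nbrs:
            break
        scores = [alpha * adj[cur][n] + (1 - alpha) * attr_similarity(attrs[cur], attrs[n])
                  for n in nbrs]
        # Roulette-wheel selection proportional to the mixed score.
        total = sum(scores)
        r = rng.uniform(0, total)
        acc = 0.0
        for n, s in zip(nbrs, scores):
            acc += s
            if r <= acc:
                cur = n
                break
        walk.append(cur)
    return walk

# Tiny example graph with edge weights and node-attribute sets (invented).
adj = {"a": {"b": 1.0, "c": 0.5}, "b": {"a": 1.0, "c": 2.0}, "c": {"a": 0.5, "b": 2.0}}
attrs = {"a": {"x", "y"}, "b": {"y"}, "c": {"z"}}
walk = weighted_walk(adj, attrs, "a", length=10)
```

In the full method, such walks (over the combined k-order weighted graph, with snapshot damping for temporal networks) supply the node sequences fed to the self-attention model.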
Increasing Internet of Things (IoT) device connectivity makes botnet attacks more dangerous, carrying catastrophic hazards. As IoT botnets evolve, their dynamic and multifaceted nature hampers conventional detection methods. This paper proposes a risk assessment framework based on fuzzy logic and Particle Swarm Optimization (PSO) to address the risks associated with IoT botnets. Fuzzy logic handles the uncertainties and ambiguities of IoT threats methodically, and the fuzzy components are tuned with PSO to improve accuracy. The methodology allows for more nuanced reasoning by moving from binary to continuous assessment; instead of relying on expert inputs, PSO tunes the rules and membership functions in a data-driven manner. The framework helps security teams allocate resources by categorizing threats as high, medium, or low severity, and is demonstrated on the CICIoT2023 dataset. Our research has implications beyond detection, as it provides a proactive approach to risk management and promotes the development of more secure IoT environments.
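A minimal flavor of the fuzzy severity grading can be given with triangular membership functions and a tiny rule base. The membership breakpoints, rules, and thresholds below are illustrative stand-ins; in the paper these parameters are tuned by PSO rather than fixed by hand.

```python
def tri(x, a, b, c):
    """Triangular membership function rising from a, peaking at b, falling to c."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def memberships(x):
    """Fuzzify a [0, 1] input into low/medium/high grades (illustrative breakpoints)."""
    return {"low": tri(x, -0.4, 0.0, 0.5),
            "med": tri(x, 0.1, 0.5, 0.9),
            "high": tri(x, 0.5, 1.0, 1.4)}

def risk_severity(likelihood, impact):
    """Toy fuzzy rules -> weighted-average defuzzification -> severity label."""
    l, i = memberships(likelihood), memberships(impact)
    w_high = max(l["high"], i["high"])   # IF either input is high THEN risk is high
    w_med = max(l["med"], i["med"])      # IF either input is medium THEN risk is medium
    w_low = min(l["low"], i["low"])      # IF both inputs are low THEN risk is low
    total = w_low + w_med + w_high
    score = (0.2 * w_low + 0.5 * w_med + 0.8 * w_high) / total if total else 0.0
    label = "high" if score > 0.6 else "medium" if score > 0.35 else "low"
    return label, score
```

Replacing the hand-picked breakpoints and rule weights with PSO-optimized values is exactly the data-driven tuning the framework describes.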
Accurate software cost estimation in Global Software Development (GSD) remains challenging due to reliance on historical data and expert judgment. Traditional models, such as the Constructive Cost Model (COCOMO II), rely heavily on historical and accurate data, and expert judgment is required to set many input parameters, which can introduce subjectivity and variability into the estimation process. Consequently, current GSD models need improvement to mitigate the reliance on historical data, the subjectivity of expert judgment, the inadequate consideration of GSD-based cost drivers, and the limited integration of modern technologies, all of which contribute to cost overruns. This study introduces a novel hybrid model that combines COCOMO II with Artificial Neural Networks (ANN) to address these challenges. The proposed hybrid model integrates additional GSD-based cost drivers identified through a systematic literature review and further vetted by industry experts. This article compares the effectiveness of the proposed model with state-of-the-art machine-learning-based models for software cost estimation. An evaluation on the NASA 93 dataset, adopting twenty-six GSD-based cost drivers, reveals that our hybrid model achieves superior accuracy, outperforming existing state-of-the-art models. The findings indicate the potential of combining COCOMO II, ANN, and additional GSD-based cost drivers to transform cost estimation in GSD.
As massive underground projects have become common in dense urban cities, a question has arisen: which model best predicts Tunnel Boring Machine (TBM) performance in these tunneling projects? Estimating the performance of TBMs in complex geological conditions remains a great challenge for practitioners and researchers, yet a reliable and accurate prediction of TBM performance is essential to planning an applicable tunnel construction schedule. TBM performance is very difficult to estimate due to various geotechnical and geological factors and machine specifications, and previously proposed intelligent techniques in this field are mostly based on a single or base model with a low level of accuracy. Hence, this study introduces a hybrid random forest (RF) technique optimized by global harmony search with generalized opposition-based learning (GOGHS) for forecasting the TBM advance rate (AR). The main objective of the GOGHS-RF model is to optimize the RF hyper-parameters, e.g., the number of trees and the maximum tree depth. In the modelling of this study, a comprehensive database with the most influential parameters on TBM performance, together with TBM AR, was used as the input and output variables, respectively. To examine the capability and power of the GOGHS-RF model, three more hybrid models, particle swarm optimization-RF, genetic algorithm-RF, and artificial bee colony-RF, were also constructed to forecast TBM AR. The developed models were evaluated by calculating several performance indices, including the determination coefficient (R2), root-mean-square error (RMSE), and mean absolute percentage error (MAPE). The results showed that GOGHS-RF is a more accurate technique for estimating TBM AR than the other applied models. The newly developed GOGHS-RF model achieved R2 = 0.9937 and 0.9844 for the training and test stages, respectively, higher than a pre-developed RF. The importance of the input parameters was also interpreted through the SHapley Additive exPlanations (SHAP) method, which found that thrust force per cutter is the most important variable for TBM AR. The GOGHS-RF model can be used in mechanized tunnel projects for predicting and checking performance.
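The three performance indices used for model evaluation (R2, RMSE, MAPE) follow directly from their standard definitions; the sketch below uses invented numbers purely to exercise the formulas, not the study's TBM data.

```python
import math

def r2(y, yhat):
    """Coefficient of determination: 1 - SS_res / SS_tot."""
    mean = sum(y) / len(y)
    ss_res = sum((a - b) ** 2 for a, b in zip(y, yhat))
    ss_tot = sum((a - mean) ** 2 for a in y)
    return 1 - ss_res / ss_tot

def rmse(y, yhat):
    """Root-mean-square error."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(y, yhat)) / len(y))

def mape(y, yhat):
    """Mean absolute percentage error, in percent (y must be nonzero)."""
    return 100 * sum(abs((a - b) / a) for a, b in zip(y, yhat)) / len(y)

# Invented measured vs. predicted advance-rate values, for illustration only.
y = [1.0, 2.0, 3.0, 4.0]
yhat = [1.1, 1.9, 3.2, 3.8]
```

A perfect predictor gives R2 = 1 and RMSE = MAPE = 0, which is why values such as R2 = 0.9937 indicate a very close fit.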
This research presents a novel nature-inspired metaheuristic algorithm called Frilled Lizard Optimization (FLO), which emulates the unique hunting behavior of frilled lizards in their natural habitat. FLO draws its inspiration from the sit-and-wait hunting strategy of these lizards. The algorithm's core principles are meticulously detailed and mathematically structured into two distinct phases: (i) an exploration phase, which mimics the lizard's sudden attack on its prey, and (ii) an exploitation phase, which simulates the lizard's retreat to the treetops after feeding. To assess FLO's efficacy in addressing optimization problems, its performance is rigorously tested on fifty-two standard benchmark functions. These include unimodal, high-dimensional multimodal, and fixed-dimensional multimodal functions, as well as the challenging CEC 2017 test suite. FLO's performance is benchmarked against twelve established metaheuristic algorithms, providing a comprehensive comparative analysis. The simulation results demonstrate that FLO excels in both exploration and exploitation, effectively balancing these two critical aspects throughout the search process. This balanced approach enables FLO to outperform several competing algorithms in numerous test cases. Additionally, FLO is applied to twenty-two constrained optimization problems from the CEC 2011 test suite and four complex engineering design problems, further validating its robustness and versatility in solving real-world optimization challenges. Overall, the study highlights FLO's strong performance and its potential as a powerful tool for tackling a wide range of optimization problems.
Forecasting river flow is crucial for the optimal planning, management, and sustainable use of freshwater resources. Many machine learning (ML) approaches have been enhanced to improve streamflow prediction, and hybrid techniques have been viewed as a viable way to improve the accuracy of univariate streamflow estimation compared to standalone approaches. Accordingly, this paper conducts an updated literature review of applications of hybrid models to streamflow estimation over the last five years, summarizing data preprocessing, univariate machine learning modelling strategies, the advantages and disadvantages of standalone ML techniques, hybrid models, and performance metrics. This study focuses on two types of hybrid models: parameter-optimisation-based hybrid models (OBH) and the hybridisation of parameter-optimisation-based and preprocessing-based hybrid models (HOPH). Overall, this research supports the idea that meta-heuristic approaches measurably improve ML techniques, and it is one of the first efforts to comprehensively examine the efficiency of various meta-heuristic approaches (classified into four primary classes) hybridised with ML techniques. The review revealed that previous research applied swarm, evolutionary, physics-based, and hybrid metaheuristics in 77%, 61%, 12%, and 12% of studies, respectively. Finally, there is still room to improve OBH and HOPH models by examining different data-preprocessing techniques and metaheuristic algorithms.
Biometric recognition is a widely used technology for user authentication. In applying this technology, biometric security and recognition accuracy are two important issues to consider. In terms of biometric security, cancellable biometrics is an effective technique for protecting biometric data. Regarding recognition accuracy, feature representation plays a significant role in the performance and reliability of cancellable biometric systems. How to design good feature representations for cancellable biometrics is a challenging topic that has attracted a great deal of attention from the computer vision community, especially from researchers of cancellable biometrics. Feature extraction and learning in cancellable biometrics aims to find suitable feature representations that achieve satisfactory recognition performance while the privacy of the biometric data is protected. This survey reviews the progress, trends, and challenges of feature extraction and learning for cancellable biometrics, shedding light on the latest developments and future research in this area.
The increased adoption of Internet of Medical Things (IoMT) technologies has resulted in the widespread use of Body Area Networks (BANs) in medical and non-medical domains. However, the performance of IEEE 802.15.4-based BANs is impacted by challenges related to heterogeneous data traffic requirements among nodes, including contention during finite backoff periods, association delays, and traffic channel access through clear channel assessment (CCA) algorithms. These challenges lead to increased packet collisions, queuing delays, retransmissions, and the neglect of critical traffic, thereby hindering performance indicators such as throughput, packet delivery ratio, packet drop rate, and packet delay. Therefore, we propose the Dynamic Next Backoff Period and Clear Channel Assessment (DNBP-CCA) schemes to address these issues. The DNBP-CCA schemes combine the Dynamic Next Backoff Period (DNBP) scheme and the Dynamic Next Clear Channel Assessment (DNCCA) scheme. The DNBP scheme employs a Takagi-Sugeno-Kang (TSK) fuzzy inference system to quantitatively analyze the backoff exponent, channel clearance, collision ratio, and data rate as input parameters. The DNCCA scheme, on the other hand, dynamically adapts the CCA process based on the data transmissions requested from the coordinator, considering input parameters such as the buffer status ratio and the acknowledgement ratio. Simulations demonstrate that our proposed schemes outperform some existing representative approaches: they enhance data transmission, reduce node collisions, improve average throughput and packet delivery ratio, and decrease average packet drop rate and packet delay.
Malware attacks on Windows machines pose significant cybersecurity threats, necessitating effective detection and prevention mechanisms. Supervised machine learning classifiers have emerged as promising tools for malware detection, but there remains a need for comprehensive studies that compare the performance of different classifiers specifically for Windows malware detection; addressing this gap can provide valuable insights for enhancing cybersecurity strategies. While numerous studies have explored malware detection using machine learning techniques, there is a lack of systematic comparisons of supervised classifiers for Windows malware detection. Understanding the relative effectiveness of these classifiers can inform the selection of optimal detection methods and improve overall security measures. This study aims to bridge this research gap by conducting a comparative analysis of supervised machine learning classifiers for detecting malware on Windows systems. The objectives are: investigating the performance of classifiers such as Gaussian Naïve Bayes, K-Nearest Neighbors (KNN), the Stochastic Gradient Descent Classifier (SGDC), and Decision Tree in detecting Windows malware; evaluating the accuracy, efficiency, and suitability of each classifier for real-world malware detection scenarios; identifying the strengths and limitations of the different classifiers to provide insights for cybersecurity practitioners and researchers; and offering recommendations, based on empirical evidence, for selecting the most effective classifier for Windows malware detection. The study employs a structured methodology consisting of several phases: exploratory data analysis, data preprocessing, model training, and evaluation. Exploratory data analysis involves understanding the dataset's characteristics and identifying preprocessing requirements. Data preprocessing includes cleaning, feature encoding, dimensionality reduction, and optimization to prepare the data for training. Model training uses the supervised classifiers above, and their performance is evaluated with metrics such as accuracy, precision, recall, and F1 score. The study's outcome is a comparative analysis of supervised machine learning classifiers for Windows malware detection. The results reveal the effectiveness and efficiency of each classifier in detecting different types of malware, and insights into their strengths and limitations provide practical guidance for enhancing cybersecurity defenses. Overall, this research contributes to advancing malware detection techniques and bolstering the security posture of Windows systems against evolving cyber threats.
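The evaluation metrics named above follow directly from confusion-matrix counts. Here is a compact reference implementation on an invented toy label set (no real malware data involved):

```python
def confusion(y_true, y_pred, positive=1):
    """Return (TP, FP, FN, TN) counts for a binary labeling."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == positive and p == positive)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t != positive and p == positive)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == positive and p != positive)
    tn = len(y_true) - tp - fp - fn
    return tp, fp, fn, tn

def scores(y_true, y_pred):
    """Accuracy, precision, recall, and F1 from the confusion counts."""
    tp, fp, fn, tn = confusion(y_true, y_pred)
    accuracy = (tp + tn) / len(y_true)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return accuracy, precision, recall, f1

# Toy labels: 1 = malware, 0 = benign (invented for illustration).
y_true = [1, 1, 1, 0, 0, 0]
y_pred = [1, 1, 0, 0, 0, 1]
accuracy, precision, recall, f1 = scores(y_true, y_pred)
```

These are the same four metrics each classifier in the comparison would be scored on.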
Osteoarthritis (OA) is a debilitating degenerative disease affecting multiple joint tissues, including cartilage, bone, synovium, and adipose tissue. OA presents diverse clinical phenotypes and distinct molecular endotypes, including inflammatory, metabolic, mechanical, genetic, and synovial variants. Consequently, innovative technologies are needed to support the development of effective diagnostic and precision therapeutic approaches. Traditional analysis of bulk OA tissue extracts has limitations due to technical constraints, making it difficult to differentiate between various physiological and pathological phenotypes in joint tissues; this has led to standardization difficulties and hindered the success of clinical trials. Gaining insights into the spatial variations of the cellular and molecular structures in OA tissues, encompassing DNA, RNA, metabolites, and proteins, as well as their chemical properties, elemental composition, and mechanical attributes, can contribute to a more comprehensive understanding of the disease subtypes. Spatially resolved biology enables biologists to investigate cells within the context of their tissue microenvironment, providing a more holistic view of cellular function. Recent advances in spatial biology techniques now allow intact tissue sections to be examined through various omics lenses, such as genomics, transcriptomics, proteomics, and metabolomics, with spatial data. This fusion of approaches provides researchers with critical insights into the molecular composition and functions of cells and tissues at precise spatial coordinates. Furthermore, advanced imaging techniques, including high-resolution microscopy, hyperspectral imaging, and mass spectrometry imaging, enable the visualization and analysis of the spatial distribution of biomolecules, cells, and tissues. Linking these molecular imaging outputs to conventional tissue histology can facilitate a more comprehensive characterization of disease phenotypes. This review summarizes the recent advancements in molecular imaging modalities and methodologies for in-depth spatial analysis, and explores their applications, challenges, and potential opportunities in the field of OA. Additionally, it provides a perspective on potential research directions for these contemporary approaches that can meet the requirements of clinical diagnosis and the establishment of therapeutic targets for OA.
Abstract: The fundamental frequency plays a significant part in understanding and perceiving the pitch of a sound. Pitch is a fundamental attribute employed in numerous speech-related tasks. Several algorithms have been developed for fundamental frequency extraction; which one to use depends on the signal's characteristics and the surrounding noise. Thus, an algorithm's noise resistance has become more critical than ever for precise fundamental frequency estimation. Nonetheless, numerous state-of-the-art algorithms struggle to achieve satisfactory results on speech recordings with low signal-to-noise ratio (SNR) values. Moreover, most recent techniques utilize different frame lengths for pitch extraction. From this point of view, this research considers different frame lengths on male and female speech signals for fundamental frequency extraction, and analyzes the frame length dependency analytically to determine which frame length is more suitable and effective for male and female speech signals specifically. To validate our idea, we utilized the conventional autocorrelation function (ACF) and the state-of-the-art method BaNa. This study puts forward an idea that can serve speech processing applications operating on noisy speech. Experimental results indicate which frame length is more appropriate for male and female speech signals in noisy environments.
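As an illustration of the ACF approach mentioned above, the sketch below estimates F0 by locating the autocorrelation peak within a plausible pitch range. This is a minimal stdlib-Python sketch, not the authors' implementation; the 8 kHz sample rate, 40 ms frame, and 50-500 Hz search range are assumed values.

```python
import math

def estimate_f0_acf(signal, sample_rate, f_min=50.0, f_max=500.0):
    """Estimate fundamental frequency via the autocorrelation function (ACF).

    Searches the lag range corresponding to [f_min, f_max] and returns the
    frequency whose lag maximizes the autocorrelation.
    """
    n = len(signal)
    lag_min = int(sample_rate / f_max)
    lag_max = min(int(sample_rate / f_min), n - 1)
    best_lag, best_corr = lag_min, float("-inf")
    for lag in range(lag_min, lag_max + 1):
        corr = sum(signal[i] * signal[i + lag] for i in range(n - lag))
        if corr > best_corr:
            best_corr, best_lag = corr, lag
    return sample_rate / best_lag

# Synthetic voiced frame: a 200 Hz sine, 40 ms at 8 kHz (an assumed frame length).
sr = 8000
frame = [math.sin(2 * math.pi * 200 * t / sr) for t in range(int(0.04 * sr))]
f0 = estimate_f0_acf(frame, sr)
```

Varying the frame length in a loop around such a call is one way to probe the frame-length dependency the abstract discusses.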
Abstract: The high performance of IoT technology in transportation networks has led to the increasing adoption of Internet of Vehicles (IoV) technology. The functional advantages of IoV include online communication services, accident prevention, cost reduction, and enhanced traffic regularity. Despite these benefits, IoV technology is susceptible to cyber-attacks, which can exploit vulnerabilities in the vehicle network, leading to perturbations, disturbances, non-recognition of traffic signs, accidents, and vehicle immobilization. This paper reviews the state-of-the-art achievements and developments in applying Deep Transfer Learning (DTL) models to Intrusion Detection Systems in the Internet of Vehicles (IDS-IoV) based on anomaly detection. IDS-IoV leverages anomaly detection through machine learning and DTL techniques to mitigate the risks posed by cyber-attacks. These systems can autonomously create specific models based on network data to differentiate between regular traffic and cyber-attacks. Among these techniques, transfer learning models are particularly promising due to their efficacy with tagged data, reduced training time, lower memory usage, and decreased computational complexity. We evaluate DTL models against criteria including the ability to transfer knowledge, detection rate, accurate analysis of complex data, and stability. This review highlights the significant progress made in the field, showcasing how DTL models enhance the performance and reliability of IDS-IoV systems. By examining recent advancements, we provide insights into how DTL can effectively address cyber-attack challenges in IoV environments, ensuring safer and more efficient transportation networks.
Abstract: The widespread adoption of QR codes has revolutionized various industries, streamlining transactions and improving inventory management. However, this increased reliance on QR code technology also exposes it to potential security risks that malicious actors can exploit. QR code phishing, or "quishing", is a type of phishing attack that leverages QR codes to deceive individuals into visiting malicious websites or downloading harmful software. These attacks can be particularly effective due to the growing popularity of, and trust in, QR codes. This paper examines the importance of enhancing the security of QR codes through the utilization of artificial intelligence (AI). It investigates the integration of AI methods for identifying and mitigating security threats associated with QR code usage. By assessing the current state of QR code security and evaluating the effectiveness of AI-driven solutions, this research aims to propose comprehensive strategies for strengthening QR code technology's resilience. The study contributes to discussions on secure data encoding and retrieval, providing valuable insights into the evolving synergy between QR codes and AI for the advancement of secure digital communication.
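To make the AI-detection idea concrete, the sketch below scores a URL decoded from a QR code using a few lexical features and a toy logistic model. Everything here is illustrative: the feature set, the suspicious-TLD list, the example URLs, and the weights are assumptions, not the paper's trained model.

```python
import math
from urllib.parse import urlparse

SUSPICIOUS_TLDS = {"zip", "xyz", "top"}  # illustrative list, not from the paper

def url_features(url):
    """Lexical features of the kind often used in phishing-URL classifiers."""
    parsed = urlparse(url)
    host = parsed.netloc
    return [
        len(url),                                   # overall URL length
        url.count("-") + url.count("@"),            # suspicious punctuation
        host.count("."),                            # subdomain depth
        1.0 if parsed.scheme != "https" else 0.0,   # no TLS
        1.0 if host.rsplit(".", 1)[-1] in SUSPICIOUS_TLDS else 0.0,
    ]

def phishing_score(url, weights=(0.02, 0.8, 0.3, 1.0, 1.5), bias=-3.0):
    """Toy logistic scorer over the features above; the weights are made up."""
    z = bias + sum(w * f for w, f in zip(weights, url_features(url)))
    return 1.0 / (1.0 + math.exp(-z))

benign = phishing_score("https://example.com/menu")
shady = phishing_score("http://login-update@secure.example.verify.xyz/acct")
```

A real quishing detector would learn such weights from labeled URLs rather than hand-setting them.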
Abstract: People's lives have become easier and simpler as technology has proliferated, especially with the Internet of Things (IoT). One of the greatest challenges for blind people is finding their way to where they want to go, a task for which they usually depend on sighted helpers. Smart shoes are a technique that helps blind people navigate while walking. In this research, we build a new safety system and a smart shoe that lets blind people walk safely without worrying about colliding with other people or solid objects. The system is based on IoT technology and uses three ultrasonic sensors, allowing users to hear and react to barriers. A microprocessor reads the ultrasonic sensors to determine how far away obstacles are; water and flame sensors are included as well, and a sound alerts the wearer when an obstacle is near. The system also uses Global Positioning System (GPS) technology, and the sensors detect motion from almost every side, so users can be monitored and kept safe. To test our proposal, we gave a questionnaire of eleven questions to 100 people; 99.1% of respondents said that the product meets their needs.
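The distance measurement behind such an obstacle warning can be sketched as follows. This assumes an HC-SR04-style ultrasonic sensor that reports an echo pulse width; the temperature compensation and the 100 cm alert threshold are illustrative choices, not taken from the paper.

```python
def echo_to_distance_cm(echo_pulse_us, temperature_c=20.0):
    """Convert an ultrasonic echo pulse width (microseconds) to distance in cm.

    The speed of sound rises with air temperature: v ≈ 331.3 + 0.606 * T (m/s).
    The pulse covers the round trip to the obstacle and back, so divide by two.
    """
    speed_m_s = 331.3 + 0.606 * temperature_c
    speed_cm_us = speed_m_s * 100 / 1_000_000      # cm per microsecond
    return echo_pulse_us * speed_cm_us / 2

def obstacle_alert(distance_cm, threshold_cm=100.0):
    """Trigger the buzzer when an obstacle is within the warning threshold."""
    return distance_cm < threshold_cm

d = echo_to_distance_cm(5800)   # a ~5.8 ms echo, i.e. an obstacle about 1 m away
alert = obstacle_alert(d)
```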
Abstract: The growth of the internet and technology has had a significant effect on social interactions. False information has become an important research topic due to the massive amount of misinformed content on social networks. It is very easy for any user to spread misinformation through the media, so misinformation is a problem for professionals, organizers, and societies. Hence, it is essential to assess the credibility and validity of news articles being shared on social media. The core challenge is to distinguish between accurate and false information. Recent studies focus on news article content, such as titles and descriptions, which has limited their achievements. However, there are two ordinarily agreed-upon indicators of misinformation: first, the title and text of an article, and second, user engagement. For the news context, we extracted different user engagements with articles, for example, tweets (read-only), retweets, likes, and shares. We calculate user credibility and combine it with the article content and the user's context. After combining both feature sets, we used three natural language processing (NLP) feature extraction techniques: Term Frequency-Inverse Document Frequency (TF-IDF), Count-Vectorizer (CV), and Hashing-Vectorizer (HV). Then, we applied different machine learning classifiers to label articles as real or fake: a Support Vector Machine (SVM), Naive Bayes (NB), Random Forest (RF), Decision Tree (DT), Gradient Boosting (GB), and K-Nearest Neighbors (KNN). The proposed method has been tested on a real-world dataset, "fakenewsnet", whose repository we refined according to our required features. The dataset contains 23,000+ articles with millions of user engagements. The highest accuracy score is 93.4%, achieved using count-vector features and a random forest classifier. Our findings confirm that the proposed classifier effectively classifies misinformation in social networks.
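The TF-IDF stage of such a pipeline can be sketched from first principles. The study presumably used library vectorizers; this minimal version and the toy corpus are illustrative only.

```python
import math
from collections import Counter

def tf_idf(corpus):
    """Minimal TF-IDF: term frequency scaled by inverse document frequency."""
    docs = [doc.lower().split() for doc in corpus]
    df = Counter(term for doc in docs for term in set(doc))   # document frequency
    n = len(docs)
    vectors = []
    for doc in docs:
        tf = Counter(doc)
        # Weight each term by its in-document frequency times log(N / df).
        vectors.append({t: (c / len(doc)) * math.log(n / df[t]) for t, c in tf.items()})
    return vectors

corpus = [
    "shocking miracle cure doctors hate",   # fake-leaning toy sample
    "city council approves new budget",     # real-leaning toy sample
    "miracle cure scandal shocking claim",
]
vecs = tf_idf(corpus)
```

Terms that appear in every document get an IDF of zero, while distinctive terms such as "budget" keep positive weight; the resulting sparse vectors are what the downstream classifiers consume.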
Abstract: Speech recognition systems have become a unique branch of human-computer interaction (HCI). Speech is one of the most naturally developed human abilities, and speech signal processing opens up a transparent, hands-free computation experience. This paper presents a retrospective yet modern look at the world of speech recognition systems. The development of Automatic Speech Recognition (ASR) has seen quite a few milestones and breakthrough technologies, which are highlighted in this paper. A step-by-step rundown of the fundamental stages in developing speech recognition systems is presented, along with a brief discussion of various modern-day developments and applications in this domain. This review aims to summarize and provide a starting point for those entering the vast field of speech signal processing. Since speech recognition has vast potential in industries like telecommunication, emotion recognition, healthcare, etc., this review should help researchers explore further applications that society can readily adopt in the coming years.
Abstract: The detection of software vulnerabilities in code written in C and C++ has attracted considerable attention and interest. This paper proposes a new framework called DrCSE to improve software vulnerability detection. It uses an intelligent computation technique that combines two methods, data rebalancing and representation learning, to analyze and evaluate the code property graph (CPG) of source code for detecting the abnormal behavior of software vulnerabilities. To do this, DrCSE combines three main processing techniques: (i) building source code feature profiles, (ii) rebalancing data, and (iii) contrastive learning. Method (i) extracts the source code's features based on the vertices and edges of the CPG. The data rebalancing method supports the training process by balancing the experimental dataset. Finally, the contrastive learning technique learns the important features of the source code by finding similar samples and pulling them together while pushing outliers away. The experimental part of this paper demonstrates the advantages of the DrCSE framework for detecting source code security vulnerabilities on the Verum dataset. The proposed method performs well on all metrics, in particular achieving Precision and Recall scores of 39.35% and 69.07%, respectively, demonstrating the efficiency of the DrCSE framework. It performs better than other approaches, with a 5% boost in Precision and a 5% boost in Recall. According to our survey to date, this is the best reported result for the software vulnerability detection problem on the Verum dataset.
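The pull-together/push-apart objective described above can be illustrated with the classic pairwise contrastive loss. This is a generic formulation with a hypothetical margin of 1.0 and toy 2D embeddings, not DrCSE's exact loss.

```python
import math

def euclidean(a, b):
    """Euclidean distance between two embedding vectors."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def contrastive_loss(a, b, same_class, margin=1.0):
    """Classic pairwise contrastive loss: pull similar pairs together,
    push dissimilar pairs at least `margin` apart."""
    d = euclidean(a, b)
    if same_class:
        return d ** 2                    # penalize any separation
    return max(0.0, margin - d) ** 2     # penalize only if closer than margin

near_pull = contrastive_loss((0.0, 0.0), (0.1, 0.0), same_class=True)
far_push = contrastive_loss((0.0, 0.0), (2.0, 0.0), same_class=False)
```

A similar pair close together incurs a small pull loss, while a dissimilar pair already beyond the margin incurs none, which is exactly the "pull similar, push outliers" behavior the abstract describes.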
Abstract: This study introduces a new classifier tailored to address the limitations inherent in conventional classifiers such as K-nearest neighbor (KNN), random forest (RF), decision tree (DT), and support vector machine (SVM) for arrhythmia detection. The proposed classifier leverages the Chi-square distance as its primary metric, providing a specialized and original approach for precise arrhythmia detection. To optimize feature selection and refine the classifier's performance, particle swarm optimization (PSO) is integrated with the Chi-square distance as a fitness function. This synergistic integration enhances the classifier's capabilities, resulting in a substantial improvement in accuracy for arrhythmia detection. Experimental results demonstrate the efficacy of the proposed method, achieving a noteworthy accuracy rate of 98% with PSO, up from the 89% achieved without any prior optimization. The classifier outperforms machine learning (ML) and deep learning (DL) techniques, underscoring its reliability and superiority in the realm of arrhythmia classification. The promising results render it an effective method for supporting both the academic and medical communities, offering an advanced and precise solution for arrhythmia detection in electrocardiogram (ECG) data.
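A minimal sketch of classification under the Chi-square distance is shown below. It uses a plain nearest-neighbor rule and made-up feature vectors, and it omits the PSO feature-selection stage the paper adds on top.

```python
def chi_square_distance(x, y, eps=1e-12):
    """Chi-square distance for non-negative feature vectors:
    0.5 * sum((xi - yi)^2 / (xi + yi)); eps guards against division by zero."""
    return 0.5 * sum((a - b) ** 2 / (a + b + eps) for a, b in zip(x, y))

def classify(sample, training_set):
    """Nearest-neighbor rule under the Chi-square distance (a sketch of the
    paper's idea; the real classifier also tunes features with PSO)."""
    return min(training_set, key=lambda item: chi_square_distance(sample, item[0]))[1]

# Hypothetical normalized ECG-derived feature vectors with class labels.
train = [((0.9, 0.1, 0.0), "normal"), ((0.1, 0.2, 0.7), "arrhythmia")]
label = classify((0.8, 0.2, 0.0), train)
```

The denominator makes the Chi-square distance emphasize relative differences in small feature values, which is the property that distinguishes it from the plain Euclidean metric used by standard KNN.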
Funding: This research is funded by the Science Committee of the Ministry of Science and Higher Education of the Republic of Kazakhstan, Grant No. AP19674517.
Abstract: The efficiency of businesses is often hindered by the challenges encountered in traditional Supply Chain Management (SCM), which is characterized by elevated risks due to inadequate accountability and transparency. To address these challenges and improve operations in green manufacturing, optimization algorithms play a crucial role in supporting decision-making processes. In this study, we propose a solution to the green lot-size optimization issue by leveraging bio-inspired algorithms, notably the Stork Optimization Algorithm (SOA). The SOA draws inspiration from the hunting and winter migration strategies employed by storks in nature. The theoretical framework of SOA is elaborated and mathematically modeled in two distinct phases: exploration, based on simulating migration, and exploitation, based on simulating the hunting strategy. To tackle the green lot-size optimization issue, our methodology involved gathering real-world data, which was then transformed into a simplified function with multiple constraints aimed at optimizing total costs and minimizing CO_(2) emissions. This function served as input for the SOA model, which was then applied to identify the optimal lot size that strikes a balance between cost-effectiveness and sustainability. Through extensive experimentation, we compared the performance of SOA with twelve established metaheuristic algorithms, consistently finding that SOA outperformed the others. This study's contribution lies in providing an effective solution to the sustainable lot-size optimization dilemma, thereby reducing environmental impact and enhancing supply chain efficiency. The simulation findings underscore that SOA consistently achieves superior outcomes compared to existing optimization methodologies, making it a promising approach for green manufacturing and sustainable supply chain management.
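The exploration/exploitation split common to such metaheuristics can be sketched generically on a toy lot-size objective. The cost function, bounds, and update rules below are illustrative stand-ins, not the SOA's actual equations or the paper's real-world data.

```python
import random

def total_cost(q, demand=1000.0, order_cost=50.0, hold_cost=0.2, co2_per_order=5.0):
    """Toy lot-size objective: ordering + holding cost plus a CO2 penalty
    per order (an illustrative stand-in for the paper's constrained function)."""
    orders = demand / q
    return order_cost * orders + hold_cost * q / 2 + co2_per_order * orders

def two_phase_search(lo=1.0, hi=2000.0, iters=300, seed=7):
    """Generic exploration (global random moves) then exploitation
    (local refinement around the incumbent best)."""
    rng = random.Random(seed)
    best = rng.uniform(lo, hi)
    for i in range(iters):
        if i < iters // 2:                          # exploration phase
            cand = rng.uniform(lo, hi)
        else:                                       # exploitation phase
            cand = min(hi, max(lo, best + rng.gauss(0, hi * 0.02)))
        if total_cost(cand) < total_cost(best):
            best = cand
    return best

q_star = two_phase_search()
```

For this toy objective the minimum lies near the classic economic order quantity, and the search settles into that neighborhood without needing the gradient.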
Funding: This research is funded by the Science Committee of the Ministry of Science and Higher Education of the Republic of Kazakhstan (Grant No. AP19674517).
Abstract: This paper introduces a new metaheuristic algorithm named Magnificent Frigatebird Optimization (MFO), inspired by the unique behaviors observed in magnificent frigatebirds in their natural habitats. The foundation of MFO is the kleptoparasitic behavior of these birds, where they steal prey from other seabirds: a magnificent frigatebird targets a food-carrying seabird and aggressively pecks at it until the seabird drops its prey, then swiftly dives to capture the abandoned prey before it falls into the water. The theoretical framework of MFO is thoroughly detailed and mathematically represented, mimicking the frigatebird's kleptoparasitic behavior in two distinct phases: exploration and exploitation. During the exploration phase, the algorithm searches for new potential solutions across a broad area, akin to the frigatebird scouting for vulnerable seabirds. In the exploitation phase, the algorithm fine-tunes the solutions, similar to the frigatebird focusing on a single target to secure its meal. To evaluate MFO's performance, the algorithm is tested on twenty-three standard benchmark functions, including unimodal, high-dimensional multimodal, and fixed-dimensional multimodal types. The results from these evaluations highlight MFO's proficiency in balancing exploration and exploitation throughout the optimization process. Comparative studies with twelve well-known metaheuristic algorithms demonstrate that MFO consistently achieves superior optimization results, outperforming its competitors across various metrics. In addition, the application of MFO to four engineering design problems shows the effectiveness of the proposed approach in handling real-world applications, thereby validating its practical utility and robustness.
Funding: Key Research and Development Projects of Ningxia, Grant/Award Number: 2022BDE03007; Natural Science Foundation of Ningxia Province, Grant/Award Numbers: 2023A0367, 2021A0966, 2022AAC05010, 2022AAC03004, 2021AAC03068.
Abstract: Network embedding aspires to learn a low-dimensional vector for each node in a network, which can be applied to diverse data mining tasks. In real life, many networks include rich attributes and temporal information. However, most existing embedding approaches ignore either the temporal information or the network attributes. This article presents a self-attention based architecture that uses higher-order weights and node attributes for both static and temporal attributed network embedding. A random walk sampling algorithm based on higher-order weights and node attributes is introduced to capture network topological features. For static attributed networks, the algorithm incorporates first-order to k-order weights and node attribute similarities into one weighted graph to preserve the topological features of the network. For temporal attributed networks, the algorithm incorporates previous snapshots of the network, containing first-order to k-order weights and node attribute similarities, into one weighted graph. In addition, the algorithm utilises a damping factor to ensure that more recent snapshots are allocated greater weight. Attribute features are then incorporated into the topological features. Next, the authors adopt Self-Attention Networks, a state-of-the-art architecture, to learn node representations. Experimental results on node classification of static attributed networks and link prediction of temporal attributed networks reveal that our proposed approach is competitive against diverse state-of-the-art baseline approaches.
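The weighted random-walk sampling step can be sketched as follows. The toy graph's edge weights stand in for the fused k-order/attribute similarities, which are assumed here to be precomputed.

```python
import random

def weighted_random_walk(graph, start, length, rng):
    """Sample a walk where each next hop is drawn with probability
    proportional to edge weight (in the paper, weights fuse k-order
    proximity and node attribute similarity)."""
    walk = [start]
    for _ in range(length):
        nbrs = graph.get(walk[-1])
        if not nbrs:
            break                      # dead end: stop the walk early
        nodes, weights = zip(*nbrs.items())
        walk.append(rng.choices(nodes, weights=weights, k=1)[0])
    return walk

# Toy weighted graph; weights are assumed to already combine topology
# and attribute similarity.
graph = {
    "a": {"b": 0.9, "c": 0.1},
    "b": {"a": 0.5, "c": 0.5},
    "c": {"a": 1.0},
}
walk = weighted_random_walk(graph, "a", 10, random.Random(0))
```

Corpora of such walks play the role that sentences play in word embedding: the downstream self-attention model learns node representations from node co-occurrence along walks.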
Abstract: Increasing Internet of Things (IoT) device connectivity makes botnet attacks more dangerous, carrying catastrophic hazards. As IoT botnets evolve, their dynamic and multifaceted nature hampers conventional detection methods. This paper proposes a risk assessment framework based on fuzzy logic and Particle Swarm Optimization (PSO) to address the risks associated with IoT botnets. Fuzzy logic handles the uncertainties and ambiguities of IoT threats methodically, while PSO optimizes the fuzzy component settings to improve accuracy. The methodology allows for more nuanced reasoning by transitioning from binary to continuous assessment, with PSO tuning the rules and membership functions in a data-driven manner instead of relying on expert input. This study presents a complete IoT botnet risk assessment system that helps security teams allocate resources by categorizing threats as high, medium, or low severity, and demonstrates the approach on the CICIoT2023 dataset. Our research has implications beyond detection, as it provides a proactive approach to risk management and promotes the development of more secure IoT environments.
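The fuzzification step can be illustrated with hand-set triangular membership functions and centroid defuzzification. In the paper these parameters are tuned by PSO rather than fixed, so every number below is purely illustrative.

```python
def tri(x, a, b, c):
    """Triangular membership function peaking at b over the support [a, c]."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def risk_level(anomaly_rate):
    """Fuzzify an anomaly rate in [0, 1] into low/medium/high memberships,
    then defuzzify by a weighted centroid. The membership parameters are
    hand-set here; the paper tunes them with PSO instead."""
    memberships = {
        "low": tri(anomaly_rate, -0.5, 0.0, 0.5),
        "medium": tri(anomaly_rate, 0.2, 0.5, 0.8),
        "high": tri(anomaly_rate, 0.5, 1.0, 1.5),
    }
    centers = {"low": 0.0, "medium": 0.5, "high": 1.0}
    total = sum(memberships.values())
    score = sum(memberships[k] * centers[k] for k in memberships) / total
    label = max(memberships, key=memberships.get)
    return score, label

score, label = risk_level(0.6)
```

At an anomaly rate of 0.6 both the medium and high sets fire partially, and the continuous score blends them, which is exactly the binary-to-continuous shift the abstract describes.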
Abstract: Accurate software cost estimation in Global Software Development (GSD) remains challenging due to reliance on historical data and expert judgments. Traditional models, such as the Constructive Cost Model (COCOMO II), rely heavily on accurate historical data. In addition, expert judgment is required to set many input parameters, which can introduce subjectivity and variability into the estimation process. Consequently, there is a need to improve current GSD models to mitigate reliance on historical data, subjectivity in expert judgment, inadequate consideration of GSD-based cost drivers, and limited integration of modern technologies, all of which contribute to cost overruns. This study introduces a novel hybrid model that synergizes COCOMO II with Artificial Neural Networks (ANN) to address these challenges. The proposed hybrid model integrates additional GSD-based cost drivers identified through a systematic literature review and further vetted by industry experts. This article compares the effectiveness of the proposed model with state-of-the-art machine learning-based models for software cost estimation. Evaluation on the NASA 93 dataset, adopting twenty-six GSD-based cost drivers, reveals that our hybrid model achieves superior accuracy, outperforming existing state-of-the-art models. The findings indicate the potential of combining COCOMO II, ANN, and additional GSD-based cost drivers to transform cost estimation in GSD.
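The COCOMO II side of such a hybrid can be written down directly. The post-architecture effort equation below uses the published COCOMO II.2000 calibration constants (A = 2.94, B = 0.91); the scale-factor values in the example are illustrative, and the paper's ANN correction and GSD drivers are not modeled here.

```python
def cocomo_ii_effort(ksloc, scale_factors, effort_multipliers, a=2.94, b=0.91):
    """COCOMO II post-architecture effort in person-months:
    PM = A * Size^E * product(EM), with E = B + 0.01 * sum(SF).
    A and B are the COCOMO II.2000 calibration constants."""
    e = b + 0.01 * sum(scale_factors)
    pm = a * ksloc ** e
    for em in effort_multipliers:
        pm *= em
    return pm

# Example: a 32 KSLOC project with all 17 effort multipliers nominal (1.0)
# and five moderate scale-factor ratings (illustrative values).
effort = cocomo_ii_effort(32.0,
                          scale_factors=[3.72, 3.04, 4.24, 3.29, 4.68],
                          effort_multipliers=[1.0] * 17)
```

In the hybrid scheme the abstract describes, an estimate like this (together with the extra GSD cost drivers) would be fed into an ANN that learns to correct its bias against observed project outcomes.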
Funding: The National Natural Science Foundation of China (Grant 42177164) and the Distinguished Youth Science Foundation of Hunan Province of China (2022JJ10073).
Abstract: As massive underground projects have become popular in dense urban cities, a question has arisen: which model best predicts Tunnel Boring Machine (TBM) performance in these tunneling projects? The performance level of TBMs in complex geological conditions is still a great challenge for practitioners and researchers, yet a reliable and accurate prediction of TBM performance is essential to planning an applicable tunnel construction schedule. TBM performance is very difficult to estimate due to various geotechnical and geological factors and machine specifications, and the previously proposed intelligent techniques in this field are mostly based on a single or base model with a low level of accuracy. Hence, this study introduces a hybrid random forest (RF) technique optimized by global harmony search with generalized opposition-based learning (GOGHS) for forecasting TBM advance rate (AR). Optimizing the RF hyper-parameters, e.g., tree number and maximum tree depth, is the main objective of using the GOGHS-RF model. In the modelling of this study, a comprehensive database with the most influential parameters on TBM, together with TBM AR, was used as the input and output variables, respectively. To examine the capability and power of the GOGHS-RF model, three more hybrid models, particle swarm optimization-RF, genetic algorithm-RF, and artificial bee colony-RF, were also constructed to forecast TBM AR. Evaluation of the developed models was performed by calculating several performance indices, including the determination coefficient (R2), root-mean-square error (RMSE), and mean absolute percentage error (MAPE). The results showed that GOGHS-RF is a more accurate technique for estimating TBM AR than the other applied models. The newly developed GOGHS-RF model achieved R2 = 0.9937 and 0.9844 for the train and test stages, respectively, which are higher than for a pre-developed RF. Also, the importance of the input parameters was interpreted through the SHapley Additive exPlanations (SHAP) method, and it was found that thrust force per cutter is the most important variable for TBM AR. The GOGHS-RF model can be used in mechanized tunnel projects for predicting and checking performance.
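The opposition-based hyper-parameter search can be sketched in simplified form: random candidates plus their "opposites" (reflections across the search interval) are evaluated against an error function. The stub error surface and bounds below are made up; a real run would score each candidate RF by cross-validated RMSE on the TBM database.

```python
import random

def opposition(x, lo, hi):
    """Opposition-based learning: reflect a candidate across the centre
    of its search interval."""
    return lo + hi - x

def tune_hyperparams(evaluate, bounds, iters=50, seed=1):
    """Random search with opposition-based candidates, a simplified
    stand-in for the paper's GOGHS optimizer over RF hyper-parameters."""
    rng = random.Random(seed)
    best, best_err = None, float("inf")
    for _ in range(iters):
        cand = {k: rng.randint(lo, hi) for k, (lo, hi) in bounds.items()}
        opp = {k: opposition(v, *bounds[k]) for k, v in cand.items()}
        for params in (cand, opp):    # evaluate the pair
            err = evaluate(params)
            if err < best_err:
                best, best_err = params, err
    return best, best_err

# Stub error surface: pretend RMSE is minimized at 400 trees, depth 12.
def fake_rmse(p):
    return abs(p["n_trees"] - 400) / 100 + abs(p["max_depth"] - 12)

bounds = {"n_trees": (50, 800), "max_depth": (2, 30)}
best, err = tune_hyperparams(fake_rmse, bounds)
```

Evaluating each candidate together with its opposite doubles the chance of landing near a good region early, which is the motivation for the opposition-based component in GOGHS.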
Abstract: This research presents a novel nature-inspired metaheuristic algorithm called Frilled Lizard Optimization (FLO), which emulates the unique hunting behavior of frilled lizards in their natural habitat. FLO draws its inspiration from the sit-and-wait hunting strategy of these lizards. The algorithm's core principles are meticulously detailed and mathematically structured into two distinct phases: (i) an exploration phase, which mimics the lizard's sudden attack on its prey, and (ii) an exploitation phase, which simulates the lizard's retreat to the treetops after feeding. To assess FLO's efficacy in addressing optimization problems, its performance is rigorously tested on fifty-two standard benchmark functions. These functions include unimodal, high-dimensional multimodal, and fixed-dimensional multimodal functions, as well as the challenging CEC 2017 test suite. FLO's performance is benchmarked against twelve established metaheuristic algorithms, providing a comprehensive comparative analysis. The simulation results demonstrate that FLO excels in both exploration and exploitation, effectively balancing these two critical aspects throughout the search process. This balanced approach enables FLO to outperform several competing algorithms in numerous test cases. Additionally, FLO is applied to twenty-two constrained optimization problems from the CEC 2011 test suite and four complex engineering design problems, further validating its robustness and versatility in solving real-world optimization challenges. Overall, the study highlights FLO's superior performance and its potential as a powerful tool for tackling a wide range of optimization problems.
Acknowledgements: The authors thank the anonymous reviewers and journal editors for their assistance in enhancing the paper's logical organisation and content quality.
Abstract: Forecasting river flow is crucial for optimal planning, management, and sustainable use of freshwater resources. Many machine learning (ML) approaches have been enhanced to improve streamflow prediction, and hybrid techniques have been viewed as a viable method for improving the accuracy of univariate streamflow estimation compared to standalone approaches. Recent researchers have also emphasised using hybrid models to improve forecast accuracy. Accordingly, this paper conducts an updated literature review of applications of hybrid models to streamflow estimation over the last five years, summarising data preprocessing, univariate machine learning modelling strategies, the advantages and disadvantages of standalone ML techniques, hybrid models, and performance metrics. This study focuses on two types of hybrid models: parameter optimisation-based hybrid models (OBH) and hybrids of parameter optimisation-based and preprocessing-based hybrid models (HOPH). Overall, this research supports the idea that meta-heuristic approaches measurably improve ML techniques. It is also one of the first efforts to comprehensively examine the efficiency of various meta-heuristic approaches (classified into four primary classes) hybridised with ML techniques. This study revealed that previous research applied swarm, evolutionary, physics, and hybrid metaheuristics in 77%, 61%, 12%, and 12% of cases, respectively. Finally, there is still room to improve OBH and HOPH models by examining different data preprocessing techniques and metaheuristic algorithms.
Funding: Australian Research Council, Grant/Award Numbers: DP190103660, DP200103207, LP180100663; UniSQ Capacity Building Grants, Grant/Award Number: 1008313.
Abstract: Biometric recognition is a widely used technology for user authentication. In the application of this technology, biometric security and recognition accuracy are two important issues to consider. In terms of biometric security, cancellable biometrics is an effective technique for protecting biometric data. Regarding recognition accuracy, feature representation plays a significant role in the performance and reliability of cancellable biometric systems. How to design good feature representations for cancellable biometrics is a challenging topic that has attracted a great deal of attention from the computer vision community, especially from researchers of cancellable biometrics. Feature extraction and learning in cancellable biometrics aims to find suitable feature representations that achieve satisfactory recognition performance while the privacy of biometric data is protected. This survey reports the progress, trends and challenges of feature extraction and learning for cancellable biometrics, thus shedding light on the latest developments and future research in this area.
Funding: Research Supporting Project Number (RSP2024R421), King Saud University, Riyadh, Saudi Arabia.
Abstract: The increased adoption of Internet of Medical Things (IoMT) technologies has resulted in the widespread use of Body Area Networks (BANs) in medical and non-medical domains. However, the performance of IEEE 802.15.4-based BANs is impacted by challenges related to heterogeneous data traffic requirements among nodes, including contention during finite backoff periods, association delays, and traffic channel access through clear channel assessment (CCA) algorithms. These challenges lead to increased packet collisions, queuing delays, retransmissions, and the neglect of critical traffic, thereby hindering performance indicators such as throughput, packet delivery ratio, packet drop rate, and packet delay. Therefore, we propose Dynamic Next Backoff Period and Clear Channel Assessment (DNBP-CCA) schemes to address these issues. The DNBP-CCA schemes leverage a combination of the Dynamic Next Backoff Period (DNBP) scheme and the Dynamic Next Clear Channel Assessment (DNCCA) scheme. The DNBP scheme employs a fuzzy Takagi, Sugeno, and Kang (TSK) model's inference system to quantitatively analyze the backoff exponent, channel clearance, collision ratio, and data rate as input parameters. On the other hand, the DNCCA scheme dynamically adapts the CCA process based on requested data transmission to the coordinator, considering input parameters such as the buffer status ratio and the acknowledgement ratio. As a result, simulations demonstrate that our proposed schemes outperform some existing representative approaches: they enhance data transmission, reduce node collisions, improve average throughput and packet delivery ratio, and decrease average packet drop rate and packet delay.
Funding: This research work is supported by Princess Nourah bint Abdulrahman University Researchers Supporting Project Number (PNURSP2024R411), Princess Nourah bint Abdulrahman University, Riyadh, Saudi Arabia.
Abstract: Malware attacks on Windows machines pose significant cybersecurity threats, necessitating effective detection and prevention mechanisms. Supervised machine learning classifiers have emerged as promising tools for malware detection. However, there remains a need for comprehensive studies that compare the performance of different classifiers specifically for Windows malware detection; addressing this gap can provide valuable insights for enhancing cybersecurity strategies. While numerous studies have explored malware detection using machine learning techniques, there is a lack of systematic comparison of supervised classifiers for Windows malware detection. Understanding the relative effectiveness of these classifiers can inform the selection of optimal detection methods and improve overall security measures. This study aims to bridge the research gap by conducting a comparative analysis of supervised machine learning classifiers for detecting malware on Windows systems. The objectives include: investigating the performance of various classifiers, such as Gaussian Naïve Bayes, K-Nearest Neighbors (KNN), the Stochastic Gradient Descent Classifier (SGDC), and Decision Tree, in detecting Windows malware; evaluating the accuracy, efficiency, and suitability of each classifier for real-world malware detection scenarios; identifying the strengths and limitations of different classifiers to provide insights for cybersecurity practitioners and researchers; and offering recommendations for selecting the most effective classifier for Windows malware detection based on empirical evidence. The study employs a structured methodology consisting of several phases: exploratory data analysis, data preprocessing, model training, and evaluation. Exploratory data analysis involves understanding the dataset's characteristics and identifying preprocessing requirements. Data preprocessing includes cleaning, feature encoding, dimensionality reduction, and optimization to prepare the data for training. Model training utilizes various supervised classifiers, and their performance is evaluated using metrics such as accuracy, precision, recall, and F1 score. The study's outcomes comprise a comparative analysis of supervised machine learning classifiers for Windows malware detection. Results reveal the effectiveness and efficiency of each classifier in detecting different types of malware. Additionally, insights into their strengths and limitations provide practical guidance for enhancing cybersecurity defenses. Overall, this research contributes to advancing malware detection techniques and bolstering the security posture of Windows systems against evolving cyber threats.
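The four evaluation metrics used in such comparisons follow directly from confusion-matrix counts, as the sketch below shows; the counts are hypothetical, not results from the study.

```python
def prf(tp, fp, fn, tn):
    """Accuracy, precision, recall, and F1 from confusion-matrix counts,
    the four metrics used to compare the classifiers."""
    accuracy = (tp + tn) / (tp + fp + fn + tn)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = (2 * precision * recall / (precision + recall)) if precision + recall else 0.0
    return accuracy, precision, recall, f1

# Hypothetical confusion counts for one classifier on a malware test set:
# 80 malware caught, 10 benign misflagged, 20 malware missed, 90 benign passed.
acc, prec, rec, f1 = prf(tp=80, fp=10, fn=20, tn=90)
```

F1 is the harmonic mean of precision and recall, which is why it is the headline metric when the malware/benign classes are imbalanced and accuracy alone can mislead.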
Funding: the NHMRC Investigator grant fellowship (APP1176298); the EMCR grant from the Centre for Biomedical Technologies (QUT); the QUT Postgraduate Research Award (QUTPRA); the QUT HDR TOP-UP scholarship; the QUT HDR Tuition Fee Sponsorship; funding support from the Academy of Finland (315820); and the Jane and Aatos Erkko Foundation (190001).
Abstract: Osteoarthritis (OA) is a debilitating degenerative disease affecting multiple joint tissues, including cartilage, bone, synovium, and adipose tissues. OA presents diverse clinical phenotypes and distinct molecular endotypes, including inflammatory, metabolic, mechanical, genetic, and synovial variants. Consequently, innovative technologies are needed to support the development of effective diagnostic and precision therapeutic approaches. Traditional analysis of bulk OA tissue extracts has limitations due to technical constraints, causing challenges in differentiating between various physiological and pathological phenotypes in joint tissues. This issue has led to standardization difficulties and hindered the success of clinical trials. Gaining insights into the spatial variations of the cellular and molecular structures in OA tissues, encompassing DNA, RNA, metabolites, and proteins, as well as their chemical properties, elemental composition, and mechanical attributes, can contribute to a more comprehensive understanding of the disease subtypes. Spatially resolved biology enables biologists to investigate cells within the context of their tissue microenvironment, providing a more holistic view of cellular function. Recent advances in innovative spatial biology techniques now allow intact tissue sections to be examined through various omics lenses, such as genomics, transcriptomics, proteomics, and metabolomics, with spatial data. This fusion of approaches provides researchers with critical insights into the molecular composition and functions of cells and tissues at precise spatial coordinates. Furthermore, advanced imaging techniques, including high-resolution microscopy, hyperspectral imaging, and mass spectrometry imaging, enable the visualization and analysis of the spatial distribution of biomolecules, cells, and tissues. Linking these molecular imaging outputs to conventional tissue histology can facilitate a more comprehensive characterization of disease phenotypes. This review summarizes the recent advancements in molecular imaging modalities and methodologies for in-depth spatial analysis. It explores their applications, challenges, and potential opportunities in the field of OA. Additionally, this review provides a perspective on potential research directions for these contemporary approaches that can meet the requirements of clinical diagnosis and the establishment of therapeutic targets for OA.
Abstract: The fundamental frequency plays a significant part in understanding and perceiving the pitch of a sound, and pitch is a fundamental attribute employed in numerous speech-related tasks. Several algorithms have been developed for fundamental frequency extraction; which one to use depends on the signal's characteristics and the surrounding noise. Thus, an algorithm's noise resistance becomes more critical than ever for precise fundamental frequency estimation. Nonetheless, numerous state-of-the-art algorithms struggle to achieve satisfying outcomes when confronted with noisy speech recordings with low signal-to-noise ratio (SNR) values. Moreover, most recent techniques utilize different frame lengths for pitch extraction. From this point of view, this research considers different frame lengths on male and female speech signals for fundamental frequency extraction, and analyzes the frame-length dependency of the speech signal analytically to understand which frame length is more suitable and effective for male and female speech signals specifically. To validate the idea, the conventional autocorrelation function (ACF) and the state-of-the-art method BaNa are utilized. This study puts forward an idea that can benefit speech processing applications operating on noisy speech. Experimental results indicate which frame length is more appropriate for male and female speech signals in noisy environments.
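The conventional ACF method mentioned in this abstract can be sketched in a few lines: compute the autocorrelation of one frame and pick the lag of the peak inside a plausible pitch range, then map that lag back to frequency as f0 = fs / lag. This is a minimal illustration on a synthetic tone, not the paper's implementation; the function name, frame length, and search range (80–400 Hz) are assumptions:

```python
import math

def estimate_f0_acf(frame, fs, fmin=80.0, fmax=400.0):
    """Estimate the fundamental frequency of one frame via the
    autocorrelation function (ACF): the lag of the ACF peak inside
    the plausible pitch range corresponds to the pitch period."""
    n = len(frame)
    lag_min = int(fs / fmax)                 # shortest period considered
    lag_max = min(int(fs / fmin), n - 1)     # longest period considered
    best_lag, best_r = lag_min, -math.inf
    for lag in range(lag_min, lag_max + 1):
        # Short-time autocorrelation at this lag
        r = sum(frame[i] * frame[i + lag] for i in range(n - lag))
        if r > best_r:
            best_r, best_lag = r, lag
    return fs / best_lag

# Synthetic 200 Hz tone, 50 ms frame at 8 kHz sampling
fs = 8000
frame = [math.sin(2 * math.pi * 200 * t / fs) for t in range(400)]
print(estimate_f0_acf(frame, fs))  # 200.0
```

The frame-length trade-off the paper studies is visible here: the frame must span at least two pitch periods for the ACF peak to exist, so low-pitched (typically male) voices need longer frames than high-pitched (typically female) voices, while overly long frames blur rapid pitch changes.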
Funding: This paper is financed by the European Union-NextGenerationEU, through the National Recovery and Resilience Plan of the Republic of Bulgaria, Project No. BG-RRP-2.004-0001-C01.
Abstract: The high performance of IoT technology in transportation networks has led to the increasing adoption of Internet of Vehicles (IoV) technology. The functional advantages of IoV include online communication services, accident prevention, cost reduction, and enhanced traffic regularity. Despite these benefits, IoV technology is susceptible to cyber-attacks, which can exploit vulnerabilities in the vehicle network, leading to perturbations, disturbances, non-recognition of traffic signs, accidents, and vehicle immobilization. This paper reviews the state-of-the-art achievements and developments in applying Deep Transfer Learning (DTL) models to Intrusion Detection Systems for the Internet of Vehicles (IDS-IoV) based on anomaly detection. IDS-IoV leverages anomaly detection through machine learning and DTL techniques to mitigate the risks posed by cyber-attacks. These systems can autonomously create specific models based on network data to differentiate between regular traffic and cyber-attacks. Among these techniques, transfer learning models are particularly promising due to their efficacy with tagged data, reduced training time, lower memory usage, and decreased computational complexity. We evaluate DTL models against criteria including the ability to transfer knowledge, detection rate, accurate analysis of complex data, and stability. This review highlights the significant progress made in the field, showcasing how DTL models enhance the performance and reliability of IDS-IoV systems. By examining recent advancements, we provide insights into how DTL can effectively address cyber-attack challenges in IoV environments, ensuring safer and more efficient transportation networks.
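The review surveys DTL models, but the underlying anomaly-detection principle it builds on (learn a profile of benign traffic, then flag flows that deviate from it) can be sketched without any deep learning machinery. The following is a deliberately simple z-score baseline, not any model from the reviewed literature; all names, features, and thresholds are hypothetical:

```python
import statistics

def fit_baseline(normal_traffic):
    """Learn a per-feature (mean, stdev) profile from benign flows only."""
    columns = list(zip(*normal_traffic))
    return [(statistics.mean(c), statistics.pstdev(c)) for c in columns]

def is_anomalous(sample, baseline, z_thresh=3.0):
    """Flag a flow if any feature lies more than z_thresh standard
    deviations from the benign mean (guarding against zero stdev)."""
    return any(abs(x - mu) > z_thresh * (sd or 1e-9)
               for x, (mu, sd) in zip(sample, baseline))

# Toy flow features: [packets per second, mean payload bytes]
normal = [[10, 500], [12, 480], [11, 520], [9, 510], [10, 495]]
baseline = fit_baseline(normal)
print(is_anomalous([11, 505], baseline))   # False: resembles benign traffic
print(is_anomalous([300, 40], baseline))   # True: flood-like pattern
```

DTL models replace the hand-picked features and fixed threshold here with learned representations, and transfer learning additionally lets a detector trained on one vehicular network be adapted to another with little labeled data, which is where the reduced training time and memory usage cited above come from.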