BACKGROUND: Diagnosing bacterial infections (BI) in patients with cirrhosis can be challenging because of unclear symptoms, low diagnostic accuracy, and lengthy culture testing times. Various biomarkers have been studied, including serum procalcitonin (PCT) and presepsin. However, the diagnostic performance of these markers remains unclear, requiring further informative studies to ascertain their diagnostic value. AIM: To evaluate the pooled diagnostic performance of PCT and presepsin in detecting BI among patients with cirrhosis. INTRODUCTION: Bacterial infections (BI) commonly occur in patients with cirrhosis, resulting in poor outcomes, including the development of cirrhotic complications, septic shock, acute-on-chronic liver failure (ACLF), multiple organ failure, and mortality [1,2]. BI is observed in 20%-30% of hospitalized patients, with and without ACLF [3]. Patients with cirrhosis are susceptible to BI because of internal and external factors. The major internal factors are changes in gut microbial composition and function, bacterial translocation, and cirrhosis-associated immune dysfunction syndrome [4,5]. External factors include alcohol use, proton-pump inhibitor use, frailty, readmission, and invasive procedures. Spontaneous bacterial peritonitis (SBP), urinary tract infection, pneumonia, and primary bacteremia are the common BIs in hospitalized patients with cirrhosis [6]. Early diagnosis and adequate empirical antibiotic therapy are two critical factors that improve the prognosis of BI in patients with cirrhosis. However, early detection of BI in cirrhosis is challenging due to subtle clinical signs and symptoms, the low sensitivity and specificity of systemic inflammatory response syndrome criteria, and the low sensitivity of bacterial cultures. Thus, effective biomarkers need to be identified for the early detection of BI. Several biomarkers have been evaluated, but their efficacy in detecting BI is unclear. Procalcitonin (PCT) is a precursor of the hormone calcitonin, which is secreted by parafollicular cells of the thyroid gland [7]. In the presence of BI, PCT gene expression increases in extrathyroidal tissues, causing a subsequent increase in serum PCT level [8]. Changes in serum PCT are detectable as early as 4 hours after infection onset and peak between 8 and 24 hours, making it a valuable diagnostic biomarker for BI. Several studies have demonstrated the favorable diagnostic accuracy of PCT in the diagnosis of BI in individuals with cirrhosis [9-13] and without cirrhosis [14-16]. Since 2014, two meta-analyses have been published on the diagnostic value of PCT for SBP and BI in patients with cirrhosis [17,18]. Other related studies have been conducted since then [10-12,19-33]. Serum presepsin has recently emerged as a promising biomarker for diagnosing BI. This biomarker is the N-terminal fraction of the soluble CD14–Gram-negative bacterial lipopolysaccharide–lipopolysaccharide-binding protein (sCD14-LPS-LBP) complex, which is cleaved by inflammatory serum protease in response to BI [34]. Presepsin levels increase within 2 hours and peak in 3 hours [35]. This is useful for detecting BI, since presepsin levels increase earlier than serum PCT. Our systematic review and meta-analysis was performed in adherence to PRISMA guidelines [37].
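To make the AIM concrete, the minimal sketch below computes the per-study quantities (sensitivity, specificity, and the diagnostic odds ratio) that such a diagnostic meta-analysis pools from each study's 2x2 table; the counts are hypothetical and are not drawn from any study cited above.

```python
# A minimal sketch of the per-study quantities a diagnostic meta-analysis pools.
# The 2x2 counts below are hypothetical, not taken from any study in this review.
def diagnostic_metrics(tp, fp, fn, tn):
    """Return sensitivity, specificity, and diagnostic odds ratio from a 2x2 table."""
    sensitivity = tp / (tp + fn)   # proportion of infected patients testing positive
    specificity = tn / (tn + fp)   # proportion of uninfected patients testing negative
    dor = (tp * tn) / (fp * fn)    # diagnostic odds ratio (undefined if fp or fn is 0)
    return sensitivity, specificity, dor

# Hypothetical counts for one PCT cut-off in one study
sens, spec, dor = diagnostic_metrics(tp=45, fp=12, fn=9, tn=110)
print(f"sensitivity={sens:.2f}, specificity={spec:.2f}, DOR={dor:.1f}")
```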
A network intrusion detection system is critical for cyber security against illegitimate attacks. In terms of feature perspectives, network traffic may include a variety of elements such as attack reference, attack type, a subcategory of attack, host information, malicious scripts, etc. In terms of network perspectives, network traffic may contain an imbalanced number of harmful attacks when compared to normal traffic. It is challenging to identify a specific attack due to complex features and data imbalance issues. To address these issues, this paper proposes an Intrusion Detection System using transformer-based transfer learning for Imbalanced Network Traffic (IDS-INT). IDS-INT uses transformer-based transfer learning to learn feature interactions in both network feature representation and imbalanced data. First, detailed information about each type of attack is gathered from network interaction descriptions, which include network nodes, attack type, reference, host information, etc. Second, the transformer-based transfer learning approach is developed to learn detailed feature representations using their semantic anchors. Third, the Synthetic Minority Oversampling Technique (SMOTE) is implemented to balance abnormal traffic and detect minority attacks. Fourth, a Convolutional Neural Network (CNN) model is designed to extract deep features from the balanced network traffic. Finally, a hybrid CNN-Long Short-Term Memory (CNN-LSTM) model is developed to detect different types of attacks from the deep features. Detailed experiments are conducted to test the proposed approach using three standard datasets, i.e., UNSW-NB15, CIC-IDS2017, and NSL-KDD. An explainable AI approach is implemented to interpret the proposed method and develop a trustable model.
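As an illustration of the balancing and deep-feature steps described above, the following minimal sketch chains SMOTE oversampling with a small Conv1D-LSTM classifier; the placeholder data, layer sizes, and training settings are illustrative assumptions, not the IDS-INT configuration.

```python
# A minimal sketch, assuming tabular flow features X (n_samples, n_features) and binary labels y.
import numpy as np
from imblearn.over_sampling import SMOTE
from tensorflow.keras import layers, models

X = np.random.rand(1000, 40).astype("float32")   # placeholder for extracted traffic features
y = np.random.randint(0, 2, 1000)                # imbalanced attack/normal labels in practice

X_bal, y_bal = SMOTE(random_state=0).fit_resample(X, y)  # balance minority attacks
X_bal = X_bal.reshape(-1, X_bal.shape[1], 1)             # add a channel axis for Conv1D

model = models.Sequential([
    layers.Conv1D(32, 3, activation="relu", input_shape=(X_bal.shape[1], 1)),  # deep feature extraction
    layers.MaxPooling1D(2),
    layers.LSTM(64),                              # sequential modelling of the extracted features
    layers.Dense(1, activation="sigmoid"),        # attack vs. normal
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.fit(X_bal, y_bal, epochs=3, batch_size=64, verbose=0)
```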
The advent of pandemics such as COVID-19 significantly impacts human behaviour and lives every day. Therefore, it is essential to make medical services connected to the internet available in every remote location during these situations. Also, the security issues in the Internet of Medical Things (IoMT) used in these services make the situation even more critical, because cyberattacks on the medical devices might cause treatment delays or clinical failures. Hence, services in the healthcare ecosystem need rapid, uninterrupted, and secure facilities. The solution provided in this research addresses security concerns and service availability for patients with critical health conditions in remote areas. This research aims to develop an intelligent Software Defined Network (SDN)-enabled secure framework for the IoT healthcare ecosystem. We propose a hybrid of machine learning and deep learning techniques (DNN + SVM) to identify network intrusions in sensor-based healthcare data. In addition, this system can efficiently monitor connected devices and suspicious behaviours. Finally, we evaluate the performance of our proposed framework using various performance metrics based on healthcare application scenarios. The experimental results show that the proposed approach effectively detects and mitigates attacks in SDN-enabled IoT networks and performs better than other state-of-the-art approaches.
While emerging technologies such as the Internet of Things (IoT) have many benefits, they also pose considerable security challenges that require innovative solutions, including those based on artificial intelligence (AI), given that these techniques are increasingly being used by malicious actors to compromise IoT systems. Although an ample body of research focusing on conventional AI methods exists, there is a paucity of studies related to advanced statistical and optimization approaches aimed at enhancing security measures. To contribute to this nascent research stream, a novel AI-driven security system denoted as "AI2AI" is presented in this work. AI2AI employs AI techniques to enhance the performance and optimize security mechanisms within the IoT framework. We also introduce the Genetic Algorithm Anomaly Detection and Prevention Deep Neural Networks (GAADPSDNN) system that can be implemented to effectively identify, detect, and prevent cyberattacks targeting IoT devices. Notably, this system demonstrates adaptability to both federated and centralized learning environments, accommodating a wide array of IoT devices. Our evaluation of the GAADPSDNN system using the recently compiled WUSTL-IIoT and Edge-IIoT datasets underscores its efficacy. Achieving an impressive overall accuracy of 98.18% on the Edge-IIoT dataset, the GAADPSDNN outperforms the standard deep neural network (DNN) classifier, which attains 94.11% accuracy. Furthermore, with the proposed enhancements, the accuracy of the unoptimized random forest classifier (80.89%) is improved to 93.51%, while the overall accuracy (98.18%) surpasses the results (93.91%, 94.67%, 94.94%, and 94.96%) achieved when alternative systems based on diverse optimization techniques and the same dataset are employed. The proposed optimization techniques increase the effectiveness of the anomaly detection system by efficiently achieving high accuracy and reducing the computational load on IoT devices through the adaptive selection of active features.
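The sketch below illustrates the general genetic-algorithm feature-selection idea behind the adaptive selection of active features; the fitness model (a random forest on synthetic data), population size, and mutation rate are illustrative assumptions rather than the GAADPSDNN design.

```python
# A minimal sketch of GA-driven feature selection; everything here is illustrative.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X, y = make_classification(n_samples=500, n_features=30, n_informative=8, random_state=0)

def fitness(mask):
    """Cross-validated accuracy of a stand-in classifier on the selected feature columns."""
    if mask.sum() == 0:
        return 0.0
    clf = RandomForestClassifier(n_estimators=40, random_state=0)
    return cross_val_score(clf, X[:, mask.astype(bool)], y, cv=3).mean()

pop = rng.integers(0, 2, size=(12, X.shape[1]))        # population of binary feature masks
for generation in range(5):
    scores = np.array([fitness(ind) for ind in pop])
    parents = pop[np.argsort(scores)[-6:]]             # keep the fittest half
    children = []
    while len(children) < len(pop) - len(parents):
        a, b = parents[rng.integers(len(parents), size=2)]
        cut = rng.integers(1, X.shape[1])
        child = np.concatenate([a[:cut], b[cut:]])      # one-point crossover
        flip = rng.random(X.shape[1]) < 0.05            # mutation
        children.append(np.where(flip, 1 - child, child))
    pop = np.vstack([parents, children])

best = pop[np.argmax([fitness(ind) for ind in pop])]
print("selected feature indices:", np.flatnonzero(best))
```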
Unmanned aerial vehicles (UAVs) have been widely used in military, medical, wireless communications, and aerial surveillance applications, among others. One key topic involving UAVs is pose estimation in autonomous navigation. A standard procedure for this process is to combine inertial navigation system sensor information with the global navigation satellite system (GNSS) signal. However, some factors can interfere with the GNSS signal, such as ionospheric scintillation, jamming, or spoofing. One alternative method that avoids using the GNSS signal is an image processing approach that matches UAV images with georeferenced images, but a high effort is required for image edge extraction. Here a support vector regression (SVR) model is proposed to reduce this computational load and processing time. Dynamic partial reconfiguration (DPR) of part of the SVR datapath is implemented to accelerate the process, reduce the area, and analyze its granularity by increasing the grain size of the reconfigurable region. Results show that the implementation in hardware is 68 times faster than that in software. This architecture with DPR also achieves a low power consumption of 4 mW, a 57% reduction compared with the design without DPR; this is also the lowest power consumption among current machine learning hardware implementations. Besides, the circuitry area is 41 times smaller. The SVR with a Gaussian kernel shows a success rate of 99.18% and a minimum square error of 0.0146 for testing with the planned trajectory. This system is useful for adaptive applications where the user/designer can modify/reconfigure the hardware layout during its application, thus contributing to lower power consumption, smaller hardware area, and shorter execution time.
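For the model itself (as opposed to the FPGA datapath with DPR, which is the paper's main contribution), a minimal software sketch of an RBF ("Gaussian") kernel SVR fitted on image-derived features might look as follows; the synthetic data and hyperparameters are assumptions.

```python
# A minimal sketch of an RBF-kernel SVR regressing a pose coordinate; data are synthetic.
import numpy as np
from sklearn.svm import SVR
from sklearn.metrics import mean_squared_error

rng = np.random.default_rng(0)
features = rng.random((200, 16))                    # placeholder for matched-image descriptors
position = features @ rng.random(16) + 0.1 * rng.standard_normal(200)  # synthetic coordinate

model = SVR(kernel="rbf", C=10.0, gamma="scale")    # Gaussian kernel, as named in the abstract
model.fit(features[:150], position[:150])
pred = model.predict(features[150:])
print("MSE on held-out samples:", mean_squared_error(position[150:], pred))
```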
With the rise of remote work and the digital industry, advanced cyberattacks have become more diverse and complex in terms of attack types and characteristics, rendering them difficult to detect with conventional intrusion detection methods. Signature-based intrusion detection methods can be used to detect attacks; however, they cannot detect new malware. Endpoint detection and response (EDR) tools are attracting attention as a means of detecting attacks on endpoints in real time to overcome the limitations of signature-based intrusion detection techniques. However, EDR tools are restricted by the continuous generation of unnecessary logs, resulting in poor detection performance and memory efficiency. Machine learning-based intrusion detection techniques for responding to advanced cyberattacks are memory intensive, using numerous features, and they lack optimal feature selection for each attack type. To overcome these limitations, this study proposes a memory-efficient intrusion detection approach incorporating multi-binary classifiers using optimal feature selection. The proposed model detects multiple types of malicious attacks using parallel binary classifiers with optimal features for each attack type. The experimental results showed a 2.95% accuracy improvement and an 88.05% memory reduction using only six features compared to a model with 18 features. Furthermore, compared to a conventional multi-classification model with simple feature selection based on permutation importance, the accuracy improved by 11.67% and the memory usage decreased by 44.87%. The proposed scheme demonstrates that effective intrusion detection is achievable with minimal features, making it suitable for memory-limited mobile and Internet of Things devices.
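A minimal sketch of the multi-binary-classifier idea follows: one lightweight binary detector per attack type, each restricted to its own small feature subset. The base learner, the hard-coded feature indices, and the 0.5 decision threshold are illustrative assumptions; the paper derives the per-attack feature sets by optimal feature selection.

```python
# A minimal sketch of per-attack binary detectors with per-attack feature subsets.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

X, y = make_classification(n_samples=1000, n_features=18, n_informative=10,
                           n_classes=4, n_clusters_per_class=1, random_state=0)
attack_types = [1, 2, 3]                                        # class 0 stands for benign traffic
feature_subsets = {1: [0, 2, 5], 2: [1, 3, 7, 9], 3: [4, 6]}    # hypothetical per-attack features

detectors = {}
for attack in attack_types:
    cols = feature_subsets[attack]
    detectors[attack] = LogisticRegression(max_iter=500).fit(X[:, cols], (y == attack).astype(int))

def classify(sample):
    """Run all binary detectors in parallel and report the attack whose detector fires strongest."""
    scores = {a: clf.predict_proba(sample[feature_subsets[a]].reshape(1, -1))[0, 1]
              for a, clf in detectors.items()}
    best, score = max(scores.items(), key=lambda kv: kv[1])
    return best if score > 0.5 else 0                           # 0 means benign

print("predicted:", classify(X[0]), "true:", y[0])
```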
The Internet of Things (IoT) links various devices to digital services and significantly improves the quality of our lives. However, as IoT connectivity grows rapidly, so do the risks of network vulnerabilities and threats. Many interesting Intrusion Detection Systems (IDSs) based on machine learning (ML) techniques have been presented to overcome this problem. Given the resource limitations of fog computing environments, a lightweight IDS is essential. This paper introduces a hybrid deep learning (DL) method that combines convolutional neural networks (CNN) and long short-term memory (LSTM) to build an energy-aware, anomaly-based IDS. We test this system on a recent dataset, focusing on reducing overhead while maintaining high accuracy and a low false alarm rate. We compare the CICIoT2023, KDD-99, and NSL-KDD datasets to evaluate the performance of the proposed IDS model based on key metrics, including latency, energy consumption, false alarm rate, and detection rate. Our findings show an accuracy rate over 92% and a false alarm rate below 0.38%. These results demonstrate that our system provides strong security without excessive resource use. The practicality of deploying an IDS with limited resources is demonstrated by the successful implementation of IDS functionality on a Raspberry Pi acting as a fog node. The proposed lightweight model, with a maximum power consumption of 6.12 W, demonstrates its potential to operate effectively on energy-limited devices such as low-power fog nodes or edge devices. We prioritize energy efficiency while maintaining high accuracy, distinguishing our scheme from existing approaches. Extensive experiments demonstrate a significant reduction in false positives, ensuring accurate identification of genuine security threats while minimizing unnecessary alerts.
This study describes improving network security by implementing and assessing an intrusion detection system (IDS) based on deep neural networks (DNNs). The paper investigates contemporary technical ways of enhancing intrusion detection performance, given the vital relevance of safeguarding computer networks against harmful activity. The DNN-based IDS is trained and validated using the NSL-KDD dataset, a popular benchmark for IDS research. The model performs well in both the training and validation stages, with 91.30% training accuracy and 94.38% validation accuracy. Thus, the model shows good learning and generalization capabilities, with minor losses of 0.22 in training and 0.1553 in validation. Furthermore, for both macro and micro averages across class 0 (normal) and class 1 (anomalous) data, the study evaluates the model using a variety of assessment measures, such as accuracy, precision, recall, and F1 scores. The macro-average recall is 0.9422, the macro-average precision is 0.9482, and the accuracy score is 0.942. Furthermore, per-class F1 scores of 0.9245 (class 1) and 0.9434 (class 0) demonstrate the model's ability to precisely identify anomalies. The research also highlights how real-time threat monitoring and enhanced resistance against new online attacks may be achieved by DNN-based intrusion detection systems, which can significantly improve network security. The study underscores the critical function of DNN-based IDS in contemporary cybersecurity procedures by setting the foundation for further developments in this field. Upcoming research aims to enhance intrusion detection systems by examining cooperative learning techniques and integrating up-to-date threat knowledge.
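The macro-averaged and per-class scores quoted above can be computed from prediction vectors with standard tooling; the short sketch below shows the computation on illustrative labels, not the actual NSL-KDD outputs.

```python
# A minimal sketch of the reported evaluation measures on illustrative label vectors.
from sklearn.metrics import accuracy_score, precision_score, recall_score, f1_score

y_true = [0, 0, 1, 1, 1, 0, 1, 0, 1, 1]   # 0 = normal, 1 = anomalous
y_pred = [0, 0, 1, 0, 1, 0, 1, 0, 1, 1]

print("accuracy       :", accuracy_score(y_true, y_pred))
print("macro precision:", precision_score(y_true, y_pred, average="macro"))
print("macro recall   :", recall_score(y_true, y_pred, average="macro"))
print("per-class F1   :", f1_score(y_true, y_pred, average=None))  # one score per class
```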
In the face of the increasingly severe botnet problem on the Internet, how to effectively detect botnet traffic in real time has become a critical problem. Although the existing deep Q-network (DQN) algorithm in deep reinforcement learning can solve the problem of real-time updating, its prediction results are always higher than the actual results. In botnet traffic detection it performs well on the training set, where the accuracy rate of predicting traffic is as high as %; however, on the test set its accuracy declines, and it cannot adjust its prediction strategy in time based on new data samples. On new datasets, its accuracy declines significantly. Therefore, this paper proposes a botnet traffic detection system based on a double-layer DQN (DDQN). Two Q-values are designed to adjust the model in policy and action, respectively, to achieve real-time model updates and improve the universality and robustness of the model under different data sets. Experiments show that, compared with the DQN model, when using DDQN the Q-value is not too high, and the detection model has improved accuracy and precision on botnet traffic. Moreover, when using botnet data sets other than the test set, the accuracy and precision of the DDQN model are still higher than those of DQN.
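For readers unfamiliar with how two Q-estimates temper overestimation, the sketch below computes the standard double-DQN target, in which one network selects the greedy action and the other evaluates it; this is an analogy for the two-Q-value idea, not the paper's exact double-layer update, and all numbers are random placeholders.

```python
# A minimal sketch of the standard double-DQN target on placeholder Q-values.
import numpy as np

rng = np.random.default_rng(0)
q_online = rng.random((5, 4))     # online-network Q-values for 5 next states, 4 actions
q_target = rng.random((5, 4))     # target-network Q-values for the same states
rewards = rng.random(5)
done = np.array([0, 0, 1, 0, 0])  # episode-termination flags
gamma = 0.99

greedy_actions = q_online.argmax(axis=1)               # the online net selects the action
evaluated = q_target[np.arange(5), greedy_actions]     # the target net evaluates that action
td_target = rewards + gamma * (1 - done) * evaluated   # target with reduced overestimation
print(td_target)
```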
Enhancing road safety globally is imperative, especially given the significant portion of traffic-related fatalities attributed to motorcycle accidents resulting from non-compliance with helmet regulations. Acknowledging the critical role of helmets in rider protection, this paper presents an innovative approach to helmet violation detection using deep learning methodologies. The primary innovation involves the adaptation of the PerspectiveNet architecture, transitioning from the original Res2Net to the more efficient EfficientNet v2 backbone, aimed at bolstering detection capabilities. Through rigorous optimization techniques and extensive experimentation utilizing the India Driving Dataset (IDD) for training and validation, the system demonstrates exceptional performance, achieving an impressive detection accuracy of 95.2%, surpassing existing benchmarks. Furthermore, the optimized PerspectiveNet model showcases reduced computational complexity, marking a significant stride in real-time helmet violation detection for enhanced traffic management and road safety measures.
In the rapidly evolving urban landscape, outdoor parking lots have become an indispensable part of the city's transportation system. The growth of parking lots has raised the likelihood of spontaneous vehicle combustion, a significant safety hazard, making smoke detection an essential preventative step. However, the complex environment of outdoor parking lots presents additional challenges for smoke detection, which necessitates the development of more advanced and reliable smoke detection technologies. This paper addresses this concern and presents a novel smoke detection technique designed for the demanding environment of outdoor parking lots. First, we develop a novel dataset to fill the gap, as there is a lack of publicly available data. This dataset encompasses a wide range of smoke and fire scenarios, enhanced with data augmentation to ensure robustness against diverse outdoor conditions. Second, we utilize an optimized YOLOv5s model, integrated with the Squeeze-and-Excitation Network (SENet) attention mechanism, to significantly improve detection accuracy while maintaining real-time processing capabilities. Third, this paper implements an outdoor smoke detection system that is capable of accurately localizing and alerting in real time, enhancing the effectiveness and reliability of emergency response. Experiments show that the system has high accuracy in detecting smoke incidents in outdoor scenarios.
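A minimal sketch of a Squeeze-and-Excitation block of the kind integrated into the YOLOv5s backbone is shown below; the channel count, reduction ratio, and placement are illustrative assumptions rather than the paper's exact configuration.

```python
# A minimal sketch of an SE (Squeeze-and-Excitation) attention block in PyTorch.
import torch
import torch.nn as nn

class SEBlock(nn.Module):
    def __init__(self, channels: int, reduction: int = 16):
        super().__init__()
        self.squeeze = nn.AdaptiveAvgPool2d(1)          # global spatial average per channel
        self.excite = nn.Sequential(
            nn.Linear(channels, channels // reduction), nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels), nn.Sigmoid(),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, c, _, _ = x.shape
        weights = self.excite(self.squeeze(x).view(b, c)).view(b, c, 1, 1)
        return x * weights                               # channel-wise recalibration

feat = torch.randn(2, 64, 40, 40)                        # a feature map from the detector backbone
print(SEBlock(64)(feat).shape)                           # torch.Size([2, 64, 40, 40])
```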
Prior studies have demonstrated that deep learning-based approaches can enhance the performance of source code vulnerability detection by training neural networks to learn vulnerability patterns in code representations. However, due to limitations in code representation and neural network design, the validity and practicality of such models still need to be improved. Additionally, due to differences in programming languages, most methods lack cross-language detection generality. To address these issues, in this paper, we analyze the shortcomings of previous code representations and neural networks. We propose a novel hierarchical code representation that combines Concrete Syntax Trees (CST) with Program Dependence Graphs (PDG). Furthermore, we introduce a Tree-Graph-Gated-Attention (TGGA) network based on gated recurrent units and attention mechanisms to build a Hierarchical Code Representation learning-based Vulnerability Detection (HCRVD) system. This system enables cross-language vulnerability detection at the function level. The experiments show that HCRVD surpasses many competitors in vulnerability detection capabilities. It benefits from the hierarchical code representation learning method and outperforms the baseline in cross-language vulnerability detection by 9.772% and 11.819% on the C/C++ and Java datasets, respectively. Moreover, HCRVD has a certain ability to detect vulnerabilities in unknown programming languages and is useful in real open-source projects. HCRVD shows good validity, generality, and practicality.
The ever-growing network traffic threat landscape necessitates adopting accurate and robust intrusion detection systems (IDSs). IDSs have become a research hotspot and have seen remarkable performance improvements. Generative adversarial networks (GANs) have also garnered increasing research interest recently due to their remarkable ability to generate data. This paper investigates the application of GANs in IDSs and explores their current use within this research field. We delve into the adoption of GANs within signature-based, anomaly-based, and hybrid IDSs, focusing on their objectives, methodologies, and advantages. Overall, GANs have been widely employed, mainly focused on solving the class imbalance issue by generating realistic attack samples. While GANs have shown significant potential in addressing the class imbalance issue, there are still open opportunities and challenges to be addressed. Little attention has been paid to their applicability in distributed and decentralized domains, such as IoT networks. Efficiency and scalability have been mostly overlooked, and thus future works must aim at addressing these gaps.
With the rise of blockchain technology, the security issues of smart contracts have become increasingly critical. Despite the availability of numerous smart contract vulnerability detection tools, many face challenges such as slow updates, usability issues, and limited installation methods. These challenges hinder the adoption and practicality of these tools. This paper examines smart contract vulnerability detection tools from 2016 to 2023, sourced from the Web of Science (WOS) and Google Scholar. By systematically collecting, screening, and synthesizing relevant research, 38 open-source tools that provide installation methods were selected for further investigation. From a developer's perspective, this paper offers a comprehensive survey of these 38 open-source tools, discussing their operating principles, installation methods, environmental dependencies, update frequencies, and installation challenges. Based on this, we propose an Ethereum smart contract vulnerability detection framework. This framework enables developers to easily utilize various detection tools and accurately analyze contract security issues. To validate the framework's stability, over 1700 hours of testing were conducted. Additionally, a comprehensive performance test was performed on the mainstream detection tools integrated within the framework, assessing their hardware requirements and vulnerability detection coverage. Experimental results indicate that the Slither tool demonstrates satisfactory performance in terms of system resource consumption and vulnerability detection coverage. This study represents the first performance evaluation of testing tools in this domain, providing significant reference value.
Healthcare data requires accurate disease detection analysis, real-time monitoring, and advancements to ensure proper treatment for patients. Consequently, machine learning methods are widely utilized in Smart Healthcare Systems (SHS) to extract valuable features from heterogeneous and high-dimensional healthcare data for predicting various diseases and monitoring patient activities. These methods are employed across different domains that are susceptible to adversarial attacks, necessitating careful consideration. Hence, this paper proposes a crossover-based Multilayer Perceptron (CMLP) model. The collected samples are pre-processed and fed into the crossover-based multilayer perceptron neural network to detect adversarial attacks on the medical records of patients. Once an attack is detected, healthcare professionals are promptly alerted to prevent data leakage. The paper utilizes two datasets, namely a synthetic dataset and the University of Queensland Vital Signs (UQVS) dataset, from which numerous samples are collected. Experiments are conducted to evaluate the performance of the proposed CMLP model, utilizing various performance measures such as recall, precision, accuracy, and F1-score to predict patient activities. Comparing the proposed method with existing approaches, it achieves the highest accuracy, precision, recall, and F1-score. Specifically, the proposed method achieves a precision of 93%, an accuracy of 97%, an F1-score of 92%, and a recall of 92%.
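A minimal sketch of the crossover operation that gives the CMLP its name appears below: two candidate weight vectors are recombined at a random cut point. The vector length and the single-point scheme are illustrative assumptions; the paper embeds this operation inside full multilayer-perceptron training.

```python
# A minimal sketch of one-point crossover on two candidate weight vectors.
import numpy as np

rng = np.random.default_rng(0)
parent_a = rng.standard_normal(20)   # flattened weights of candidate network A
parent_b = rng.standard_normal(20)   # flattened weights of candidate network B

cut = rng.integers(1, parent_a.size)                        # random crossover point
child_1 = np.concatenate([parent_a[:cut], parent_b[cut:]])  # offspring mixing both parents
child_2 = np.concatenate([parent_b[:cut], parent_a[cut:]])
print("cut point:", cut, "| offspring shapes:", child_1.shape, child_2.shape)
```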
The rapid growth and pervasive presence of the Internet of Things (IoT) have led to an unparalleled increase in IoT devices, thereby intensifying worries over IoT security. Deep learning (DL)-based intrusion detection (ID) has emerged as a vital method for protecting IoT environments. To rectify the deficiencies of current detection methodologies, we proposed and developed an IoT cyberattack detection system (IoT-CDS) based on DL models for detecting bot attacks in IoT networks. Three DL models, namely long short-term memory (LSTM), gated recurrent units (GRUs), and a convolutional neural network-LSTM (CNN-LSTM), were suggested to detect and classify IoT attacks. The BoT-IoT dataset, which includes six attack types along with normal packets, was used to examine the proposed IoT-CDS system. The experiments conducted on the BoT-IoT network dataset reveal that the LSTM model attained an impressive accuracy rate of 99.99%. Compared with other internal and external methods using the same dataset, the LSTM model achieved higher accuracy rates. LSTMs are more efficient than GRUs and CNN-LSTMs in real-time performance and resource efficiency for cyberattack detection. This method, without feature selection, demonstrates advantages in training time and detection accuracy. Consequently, the proposed approach can be extended to improve the security of various IoT applications, representing a significant contribution to IoT security.
In this paper, we propose a novel anomaly detection method for data centers based on a combination of graph structure and an abnormal attention mechanism. The method leverages sensor monitoring data from target power substations to construct multidimensional time series. These time series are subsequently transformed into graph structures, and corresponding adjacency matrices are obtained. By incorporating the adjacency matrices and additional weights associated with the graph structure, an aggregation matrix is derived. The aggregation matrix is then fed into a pre-trained graph convolutional neural network (GCN) to extract graph structure features. Moreover, both the multidimensional time series segments and the graph structure features are input into a pre-trained anomaly detection model, resulting in corresponding anomaly detection results that help identify abnormal data. The anomaly detection model consists of a multi-level encoder-decoder module, wherein each level includes a transformer encoder and decoder based on correlation differences. The attention module in the encoding layer adopts an abnormal attention module with a dual-branch structure. Experimental results demonstrate that our proposed method significantly improves the accuracy and stability of anomaly detection.
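A minimal sketch of the graph-construction and aggregation steps follows: sensor channels become nodes, an adjacency matrix is thresholded from pairwise correlations, and one normalized GCN-style aggregation is applied. The correlation threshold, window length, and layer sizes are illustrative assumptions, not the pre-trained GCN used in the paper.

```python
# A minimal sketch: multivariate time series -> adjacency matrix -> one GCN-style aggregation.
import numpy as np

rng = np.random.default_rng(0)
series = rng.standard_normal((8, 200))           # 8 sensor channels, 200 time steps

corr = np.corrcoef(series)                       # pairwise channel correlation
adj = (np.abs(corr) > 0.1).astype(float)         # threshold into an adjacency matrix
np.fill_diagonal(adj, 1.0)                       # add self-loops

deg = adj.sum(axis=1)
a_hat = adj / np.sqrt(np.outer(deg, deg))        # symmetric normalization D^-1/2 A D^-1/2

node_feats = series[:, -32:]                     # last window as node features (8 x 32)
weights = rng.standard_normal((32, 16))          # one GCN layer's weight matrix
embedding = np.tanh(a_hat @ node_feats @ weights)  # aggregated graph-structure features
print(embedding.shape)                           # (8, 16)
```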
The high performance of IoT technology in transportation networks has led to the increasing adoption of Internet of Vehicles (IoV) technology. The functional advantages of IoV include online communication services, accident prevention, cost reduction, and enhanced traffic regularity. Despite these benefits, IoV technology is susceptible to cyber-attacks, which can exploit vulnerabilities in the vehicle network, leading to perturbations, disturbances, non-recognition of traffic signs, accidents, and vehicle immobilization. This paper reviews the state-of-the-art achievements and developments in applying Deep Transfer Learning (DTL) models to Intrusion Detection Systems in the Internet of Vehicles (IDS-IoV) based on anomaly detection. IDS-IoV leverages anomaly detection through machine learning and DTL techniques to mitigate the risks posed by cyber-attacks. These systems can autonomously create specific models based on network data to differentiate between regular traffic and cyber-attacks. Among these techniques, transfer learning models are particularly promising due to their efficacy with tagged data, reduced training time, lower memory usage, and decreased computational complexity. We evaluate DTL models against criteria including the ability to transfer knowledge, detection rate, accurate analysis of complex data, and stability. This review highlights the significant progress made in the field, showcasing how DTL models enhance the performance and reliability of IDS-IoV systems. By examining recent advancements, we provide insights into how DTL can effectively address cyber-attack challenges in IoV environments, ensuring safer and more efficient transportation networks.
The Internet of Things (IoT) is vulnerable to data-tampering (DT) attacks. Due to resource limitations, many anomaly detection systems (ADSs) for IoT have high false positive rates when detecting DT attacks. This leads to the misreporting of normal data, which will impact the normal operation of IoT. To mitigate the impact caused by the high false positive rate of ADSs, this paper proposes an ADS management scheme for clustered IoT. First, we model the data transmission and anomaly detection in clustered IoT. Then, the operation strategy of the clustered IoT is formulated as the running probabilities of all ADSs deployed on every IoT device. In the presence of a high false positive rate in ADSs, to deal with the trade-off between the security and availability of data, we develop a linear programming model referred to as the security trade-off (ST) model. Next, we develop an analysis framework for the ST model and solve the ST model on an IoT simulation platform. Last, we reveal the effect of some factors on the maximum combined detection rate through theoretical analysis. Simulations show that the ADS management scheme can mitigate the data unavailability loss caused by high false positive rates in ADSs.
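A minimal sketch of a security trade-off linear program in the spirit of the ST model follows: it chooses each ADS's running probability to maximize expected detections subject to a cap on the expected false-positive burden. The rates, the budget, and the exact objective are illustrative assumptions, not the paper's formulation.

```python
# A minimal sketch of a detection-vs-availability LP; all coefficients are illustrative.
import numpy as np
from scipy.optimize import linprog

detect_rate = np.array([0.90, 0.85, 0.80, 0.75])  # per-ADS true-positive rates
false_pos = np.array([0.20, 0.12, 0.08, 0.05])    # per-ADS false-positive rates
fp_budget = 0.25                                  # max tolerated expected false-positive mass

# linprog minimizes, so negate the detection objective; variables are running probabilities.
result = linprog(c=-detect_rate,
                 A_ub=[false_pos], b_ub=[fp_budget],  # data-availability constraint
                 bounds=[(0, 1)] * 4)
print("running probabilities  :", result.x.round(3))
print("combined detection rate:", float(-result.fun))
```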
Damage to parcels reduces customer satisfaction with delivery services and increases return-logistics costs. This can be prevented by detecting and addressing the damage before the parcels reach the customer. Consequently, various studies have been conducted on deep learning techniques related to the detection of parcel damage. This study proposes a deep learning-based damage detection method for various types of parcels. The method is intended to be part of a parcel information-recognition system that identifies the volume and shipping information of parcels and determines whether they are damaged; this method is intended for use in the actual parcel-transportation process. For this purpose, 1) the study acquired image data in an environment simulating the actual parcel-transportation process, and 2) the training dataset was expanded based on StyleGAN3 with adaptive discriminator augmentation. Additionally, 3) a preliminary distinction was made between the appearance of parcels and their damage status to enhance the performance of the parcel damage detection model and analyze the causes of parcel damage. Finally, using the dataset constructed based on the proposed method, a damage type detection model was trained and its mean average precision was confirmed. This model can improve customer satisfaction and reduce return costs for parcel delivery companies.
基金financially supported by the National Council for Scientific and Technological Development(CNPq,Brazil),Swedish-Brazilian Research and Innovation Centre(CISB),and Saab AB under Grant No.CNPq:200053/2022-1the National Council for Scientific and Technological Development(CNPq,Brazil)under Grants No.CNPq:312924/2017-8 and No.CNPq:314660/2020-8.
文摘Unmanned aerial vehicles(UAVs)have been widely used in military,medical,wireless communications,aerial surveillance,etc.One key topic involving UAVs is pose estimation in autonomous navigation.A standard procedure for this process is to combine inertial navigation system sensor information with the global navigation satellite system(GNSS)signal.However,some factors can interfere with the GNSS signal,such as ionospheric scintillation,jamming,or spoofing.One alternative method to avoid using the GNSS signal is to apply an image processing approach by matching UAV images with georeferenced images.But a high effort is required for image edge extraction.Here a support vector regression(SVR)model is proposed to reduce this computational load and processing time.The dynamic partial reconfiguration(DPR)of part of the SVR datapath is implemented to accelerate the process,reduce the area,and analyze its granularity by increasing the grain size of the reconfigurable region.Results show that the implementation in hardware is 68 times faster than that in software.This architecture with DPR also facilitates the low power consumption of 4 mW,leading to a reduction of 57%than that without DPR.This is also the lowest power consumption in current machine learning hardware implementations.Besides,the circuitry area is 41 times smaller.SVR with Gaussian kernel shows a success rate of 99.18%and minimum square error of 0.0146 for testing with the planning trajectory.This system is useful for adaptive applications where the user/designer can modify/reconfigure the hardware layout during its application,thus contributing to lower power consumption,smaller hardware area,and shorter execution time.
基金supported by MOTIE under Training Industrial Security Specialist for High-Tech Industry(RS-2024-00415520)supervised by the Korea Institute for Advancement of Technology(KIAT),and by MSIT under the ICT Challenge and Advanced Network of HRD(ICAN)Program(No.IITP-2022-RS-2022-00156310)supervised by the Institute of Information&Communication Technology Planning&Evaluation(IITP)。
文摘With the rise of remote work and the digital industry,advanced cyberattacks have become more diverse and complex in terms of attack types and characteristics,rendering them difficult to detect with conventional intrusion detection methods.Signature-based intrusion detection methods can be used to detect attacks;however,they cannot detect new malware.Endpoint detection and response(EDR)tools are attracting attention as a means of detecting attacks on endpoints in real-time to overcome the limitations of signature-based intrusion detection techniques.However,EDR tools are restricted by the continuous generation of unnecessary logs,resulting in poor detection performance and memory efficiency.Machine learning-based intrusion detection techniques for responding to advanced cyberattacks are memory intensive,using numerous features;they lack optimal feature selection for each attack type.To overcome these limitations,this study proposes a memory-efficient intrusion detection approach incorporating multi-binary classifiers using optimal feature selection.The proposed model detects multiple types of malicious attacks using parallel binary classifiers with optimal features for each attack type.The experimental results showed a 2.95%accuracy improvement and an 88.05%memory reduction using only six features compared to a model with 18 features.Furthermore,compared to a conventional multi-classification model with simple feature selection based on permutation importance,the accuracy improved by 11.67%and the memory usage decreased by 44.87%.The proposed scheme demonstrates that effective intrusion detection is achievable with minimal features,making it suitable for memory-limited mobile and Internet of Things devices.
基金supported by the interdisciplinary center of smart mobility and logistics at King Fahd University of Petroleum and Minerals(Grant number INML2400).
文摘The Internet of Things(IoT)links various devices to digital services and significantly improves the quality of our lives.However,as IoT connectivity is growing rapidly,so do the risks of network vulnerabilities and threats.Many interesting Intrusion Detection Systems(IDSs)are presented based on machine learning(ML)techniques to overcome this problem.Given the resource limitations of fog computing environments,a lightweight IDS is essential.This paper introduces a hybrid deep learning(DL)method that combines convolutional neural networks(CNN)and long short-term memory(LSTM)to build an energy-aware,anomaly-based IDS.We test this system on a recent dataset,focusing on reducing overhead while maintaining high accuracy and a low false alarm rate.We compare CICIoT2023,KDD-99 and NSL-KDD datasets to evaluate the performance of the proposed IDS model based on key metrics,including latency,energy consumption,false alarm rate and detection rate metrics.Our findings show an accuracy rate over 92%and a false alarm rate below 0.38%.These results demonstrate that our system provides strong security without excessive resource use.The practicality of deploying IDS with limited resources is demonstrated by the successful implementation of IDS functionality on a Raspberry Pi acting as a Fog node.The proposed lightweight model,with a maximum power consumption of 6.12 W,demonstrates its potential to operate effectively on energy-limited devices such as low-power fog nodes or edge devices.We prioritize energy efficiency whilemaintaining high accuracy,distinguishing our scheme fromexisting approaches.Extensive experiments demonstrate a significant reduction in false positives,ensuring accurate identification of genuine security threats while minimizing unnecessary alerts.
基金Princess Nourah bint Abdulrahman University for funding this project through the Researchers Supporting Project(PNURSP2024R319)funded by the Prince Sultan University,Riyadh,Saudi Arabia.
文摘This study describes improving network security by implementing and assessing an intrusion detection system(IDS)based on deep neural networks(DNNs).The paper investigates contemporary technical ways for enhancing intrusion detection performance,given the vital relevance of safeguarding computer networks against harmful activity.The DNN-based IDS is trained and validated by the model using the NSL-KDD dataset,a popular benchmark for IDS research.The model performs well in both the training and validation stages,with 91.30%training accuracy and 94.38%validation accuracy.Thus,the model shows good learning and generalization capabilities with minor losses of 0.22 in training and 0.1553 in validation.Furthermore,for both macro and micro averages across class 0(normal)and class 1(anomalous)data,the study evaluates the model using a variety of assessment measures,such as accuracy scores,precision,recall,and F1 scores.The macro-average recall is 0.9422,the macro-average precision is 0.9482,and the accuracy scores are 0.942.Furthermore,macro-averaged F1 scores of 0.9245 for class 1 and 0.9434 for class 0 demonstrate the model’s ability to precisely identify anomalies precisely.The research also highlights how real-time threat monitoring and enhanced resistance against new online attacks may be achieved byDNN-based intrusion detection systems,which can significantly improve network security.The study underscores the critical function ofDNN-based IDS in contemporary cybersecurity procedures by setting the foundation for further developments in this field.Upcoming research aims to enhance intrusion detection systems by examining cooperative learning techniques and integrating up-to-date threat knowledge.
基金the Liaoning Province Applied Basic Research Program,2023JH2/101600038.
文摘In the face of the increasingly severe Botnet problem on the Internet,how to effectively detect Botnet traffic in realtime has become a critical problem.Although the existing deepQnetwork(DQN)algorithminDeep reinforcement learning can solve the problem of real-time updating,its prediction results are always higher than the actual results.In Botnet traffic detection,although it performs well in the training set,the accuracy rate of predicting traffic is as high as%;however,in the test set,its accuracy has declined,and it is impossible to adjust its prediction strategy on time based on new data samples.However,in the new dataset,its accuracy has declined significantly.Therefore,this paper proposes a Botnet traffic detection system based on double-layer DQN(DDQN).Two Q-values are designed to adjust the model in policy and action,respectively,to achieve real-time model updates and improve the universality and robustness of the model under different data sets.Experiments show that compared with the DQN model,when using DDQN,the Q-value is not too high,and the detectionmodel has improved the accuracy and precision of Botnet traffic.Moreover,when using Botnet data sets other than the test set,the accuracy and precision of theDDQNmodel are still higher than DQN.
基金funded by the Deanship of Scientific Research at Northern Border University,Arar,Kingdom of Saudi Arabia through Research Group No.(RG-NBU-2022-1234).
文摘Enhancing road safety globally is imperative,especially given the significant portion of traffic-related fatalities attributed to motorcycle accidents resulting from non-compliance with helmet regulations.Acknowledging the critical role of helmets in rider protection,this paper presents an innovative approach to helmet violation detection using deep learning methodologies.The primary innovation involves the adaptation of the PerspectiveNet architecture,transitioning from the original Res2Net to the more efficient EfficientNet v2 backbone,aimed at bolstering detection capabilities.Through rigorous optimization techniques and extensive experimentation utilizing the India driving dataset(IDD)for training and validation,the system demonstrates exceptional performance,achieving an impressive detection accuracy of 95.2%,surpassing existing benchmarks.Furthermore,the optimized PerspectiveNet model showcases reduced computational complexity,marking a significant stride in real-time helmet violation detection for enhanced traffic management and road safety measures.
基金This work was supported byNatural Science Foundation of China(No.62362008,author Z.Z,https://www.nsfc.gov.cn/)Guizhou Provincial Science and Technology Projects(No.ZK[2022]149,author Z.Z,https://kjt.guizhou.gov.cn/)+2 种基金Guizhou Provincial Research Project(Youth)for Universities(No.[2022]104,author Z.Z,https://jyt.guizhou.gov.cn/)Natural Science Special Foundation of Guizhou University(No.[2021]47,author Z.Z,https://www.gzu.edu.cn/)GZU Cultivation Project of NSFC(No.[2020]80,author Z.Z,https://www.gzu.edu.cn/).
文摘In the rapidly evolving urban landscape,outdoor parking lots have become an indispensable part of the city’s transportation system.The growth of parking lots has raised the likelihood of spontaneous vehicle combus-tion,a significant safety hazard,making smoke detection an essential preventative step.However,the complex environment of outdoor parking lots presents additional challenges for smoke detection,which necessitates the development of more advanced and reliable smoke detection technologies.This paper addresses this concern and presents a novel smoke detection technique designed for the demanding environment of outdoor parking lots.First,we develop a novel dataset to fill the gap,as there is a lack of publicly available data.This dataset encompasses a wide range of smoke and fire scenarios,enhanced with data augmentation to ensure robustness against diverse outdoor conditions.Second,we utilize an optimized YOLOv5s model,integrated with the Squeeze-and-Excitation Network(SENet)attention mechanism,to significantly improve detection accuracy while maintaining real-time processing capabilities.Third,this paper implements an outdoor smoke detection system that is capable of accurately localizing and alerting in real time,enhancing the effectiveness and reliability of emergency response.Experiments show that the system has a high accuracy in terms of detecting smoke incidents in outdoor scenarios.
Funding: Funded by the Major Science and Technology Projects in Henan Province, China, Grant No. 221100210600.
Abstract: Prior studies have demonstrated that deep learning-based approaches can enhance source code vulnerability detection by training neural networks to learn vulnerability patterns in code representations. However, due to limitations in code representation and neural network design, the validity and practicality of such models still need improvement. In addition, because of differences between programming languages, most methods lack cross-language detection generality. To address these issues, we analyze the shortcomings of previous code representations and neural networks. We propose a novel hierarchical code representation that combines Concrete Syntax Trees (CST) with Program Dependence Graphs (PDG). Furthermore, we introduce a Tree-Graph-Gated-Attention (TGGA) network based on gated recurrent units and attention mechanisms to build a Hierarchical Code Representation learning-based Vulnerability Detection (HCRVD) system. The system enables cross-language vulnerability detection at the function level. Experiments show that HCRVD surpasses many competitors in vulnerability detection capability. It benefits from the hierarchical code representation learning method and outperforms the baselines in cross-language vulnerability detection by 9.772% and 11.819% on the C/C++ and Java datasets, respectively. Moreover, HCRVD has a certain ability to detect vulnerabilities in unknown programming languages and is useful in real open-source projects. HCRVD shows good validity, generality, and practicality.
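The TGGA details are not given in the abstract, but its named ingredients (gated recurrent units plus attention) can be sketched generically. The snippet below is only a loose illustration of a GRU encoder with attention pooling over a tokenized code representation; the vocabulary, dimensions, and classifier head are assumptions and do not reproduce HCRVD.

```python
# Loose illustration of GRU + attention pooling for a tokenized code snippet;
# this is not the paper's Tree-Graph-Gated-Attention (TGGA) network.
import torch
import torch.nn as nn

class GRUAttentionEncoder(nn.Module):
    def __init__(self, vocab_size=5000, embed_dim=128, hidden_dim=256, num_classes=2):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.gru = nn.GRU(embed_dim, hidden_dim, batch_first=True, bidirectional=True)
        self.attn = nn.Linear(2 * hidden_dim, 1)           # one attention score per token
        self.classifier = nn.Linear(2 * hidden_dim, num_classes)

    def forward(self, token_ids):
        h, _ = self.gru(self.embed(token_ids))             # (batch, seq_len, 2*hidden)
        scores = torch.softmax(self.attn(h), dim=1)        # attention over token positions
        pooled = (scores * h).sum(dim=1)                   # weighted function-level readout
        return self.classifier(pooled)                     # vulnerable vs. non-vulnerable

tokens = torch.randint(0, 5000, (2, 64))                   # two dummy tokenized functions
print(GRUAttentionEncoder()(tokens).shape)                 # torch.Size([2, 2])
```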
Abstract: The ever-growing network traffic threat landscape necessitates accurate and robust intrusion detection systems (IDSs). IDSs have become a research hotspot and have seen remarkable performance improvements. Generative adversarial networks (GANs) have also garnered increasing research interest recently due to their remarkable ability to generate data. This paper investigates the application of GANs in IDSs and explores their current use within this research field. We examine the adoption of GANs within signature-based, anomaly-based, and hybrid IDSs, focusing on their objectives, methodologies, and advantages. Overall, GANs have been widely employed, mainly to solve the class imbalance issue by generating realistic attack samples. While GANs have shown significant potential in addressing class imbalance, open opportunities and challenges remain. Little attention has been paid to their applicability in distributed and decentralized domains, such as IoT networks, and efficiency and scalability have been mostly overlooked; future work should aim at addressing these gaps.
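To make the class-imbalance use case concrete, the following sketch shows a tiny GAN that learns to synthesize minority-class (attack) feature vectors which can then be appended to the training set. All dimensions, the training loop, and the data are illustrative assumptions; the surveyed works differ widely in architecture.

```python
# Toy GAN for oversampling minority attack records in an IDS dataset (illustrative only).
import torch
import torch.nn as nn

feature_dim, noise_dim, batch = 40, 16, 256

G = nn.Sequential(nn.Linear(noise_dim, 64), nn.ReLU(), nn.Linear(64, feature_dim))
D = nn.Sequential(nn.Linear(feature_dim, 64), nn.ReLU(), nn.Linear(64, 1))
opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)
bce = nn.BCEWithLogitsLoss()

real_attacks = torch.randn(batch, feature_dim)     # stand-in for real minority samples

for _ in range(200):
    # Discriminator step: real attack records vs. generated ones.
    fake = G(torch.randn(batch, noise_dim)).detach()
    d_loss = bce(D(real_attacks), torch.ones(batch, 1)) + bce(D(fake), torch.zeros(batch, 1))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()
    # Generator step: produce samples the discriminator accepts as real.
    fake = G(torch.randn(batch, noise_dim))
    g_loss = bce(D(fake), torch.ones(batch, 1))
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()

synthetic_attacks = G(torch.randn(1000, noise_dim)).detach()   # augment the minority class
print(synthetic_attacks.shape)
```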
Funding: Supported by the Major Public Welfare Special Fund of Henan Province (No. 201300210200) and the Major Science and Technology Research Special Fund of Henan Province (No. 221100210400).
Abstract: With the rise of blockchain technology, the security of smart contracts has become increasingly critical. Despite the availability of numerous smart contract vulnerability detection tools, many face challenges such as slow updates, usability issues, and limited installation methods, which hinder their adoption and practicality. This paper examines smart contract vulnerability detection tools published from 2016 to 2023, sourced from the Web of Science (WOS) and Google Scholar. By systematically collecting, screening, and synthesizing the relevant research, 38 open-source tools that provide installation methods were selected for further investigation. From a developer's perspective, this paper offers a comprehensive survey of these 38 open-source tools, discussing their operating principles, installation methods, environmental dependencies, update frequencies, and installation challenges. On this basis, we propose an Ethereum smart contract vulnerability detection framework that enables developers to easily use various detection tools and accurately analyze contract security issues. To validate the framework's stability, over 1,700 hours of testing were conducted. Additionally, a comprehensive performance test was performed on the mainstream detection tools integrated within the framework, assessing their hardware requirements and vulnerability detection coverage. Experimental results indicate that the Slither tool performs well in terms of system resource consumption and vulnerability detection coverage. This study represents the first performance evaluation of testing tools in this domain and provides significant reference value.
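As an example of how such a framework might drive one of the integrated tools, the sketch below wraps the Slither command-line interface from Python and reads its JSON report; the file paths and the surrounding framework logic are assumptions, not the paper's implementation.

```python
# Hedged sketch: invoking Slither on a Solidity contract and collecting its findings.
import json
import subprocess

def run_slither(contract_path: str, report_path: str = "slither_report.json") -> list:
    """Run Slither and return the list of detector findings from its JSON report."""
    # Slither exits with a non-zero status when it finds issues, so do not use check=True.
    subprocess.run(
        ["slither", contract_path, "--json", report_path],
        capture_output=True, text=True,
    )
    with open(report_path) as f:
        report = json.load(f)
    return report.get("results", {}).get("detectors", [])

# Example usage with a hypothetical contract path.
for finding in run_slither("contracts/Token.sol"):
    print(finding.get("check"), "-", finding.get("impact"))
```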
Funding: Funded by King Saud University through Researchers Supporting Program Number (RSP2024R499).
Abstract: Healthcare data require accurate disease detection, real-time monitoring, and continual advancement to ensure proper treatment for patients. Consequently, machine learning methods are widely used in Smart Healthcare Systems (SHS) to extract valuable features from heterogeneous, high-dimensional healthcare data for predicting diseases and monitoring patient activities. These methods are deployed in domains that are susceptible to adversarial attacks, which necessitates careful consideration. Hence, this paper proposes a crossover-based Multilayer Perceptron (CMLP) model. The collected samples are pre-processed and fed into the crossover-based multilayer perceptron neural network to detect adversarial attacks on patients' medical records. Once an attack is detected, healthcare professionals are promptly alerted to prevent data leakage. The paper uses two datasets, a synthetic dataset and the University of Queensland Vital Signs (UQVS) dataset, from which numerous samples are collected. Experiments were conducted to evaluate the performance of the proposed CMLP model using performance measures such as recall, precision, accuracy, and F1-score for predicting patient activities. Compared with existing approaches, the proposed method achieves the highest accuracy, precision, recall, and F1-score: specifically, a precision of 93%, an accuracy of 97%, an F1-score of 92%, and a recall of 92%.
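The abstract does not spell out the crossover operation, but in the genetic-algorithm sense it usually means recombining two candidate parameter vectors. The snippet below is a hedged sketch of single-point crossover on flattened MLP weight vectors, offered only as an illustration of that generic operator, not as the CMLP procedure.

```python
# Generic single-point crossover on two flattened MLP weight vectors (illustrative only).
import numpy as np

def crossover(weights_a: np.ndarray, weights_b: np.ndarray, rng=None):
    """Swap the tails of two parent weight vectors at a random cut point."""
    rng = rng or np.random.default_rng()
    point = int(rng.integers(1, weights_a.size))        # cut somewhere inside the vector
    child_a = np.concatenate([weights_a[:point], weights_b[point:]])
    child_b = np.concatenate([weights_b[:point], weights_a[point:]])
    return child_a, child_b

parent_a = np.random.randn(1024)    # flattened weights of one candidate MLP (assumed size)
parent_b = np.random.randn(1024)
child_a, child_b = crossover(parent_a, parent_b)
print(child_a.shape, child_b.shape)
```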
Abstract: The rapid growth and pervasive presence of the Internet of Things (IoT) have led to an unparalleled increase in IoT devices, thereby intensifying concerns over IoT security. Deep learning (DL)-based intrusion detection (ID) has emerged as a vital method for protecting IoT environments. To rectify the deficiencies of current detection methodologies, we propose and develop an IoT cyberattack detection system (IoT-CDS) based on DL models for detecting bot attacks in IoT networks. The DL models, namely long short-term memory (LSTM), gated recurrent units (GRUs), and convolutional neural network-LSTM (CNN-LSTM), were applied to detect and classify IoT attacks. The BoT-IoT dataset, which includes six attack types along with normal packets, was used to evaluate the proposed IoT-CDS system. The experiments conducted on the BoT-IoT network dataset reveal that the LSTM model attained an accuracy of 99.99%. Compared with other internal and external methods using the same dataset, the LSTM model achieved higher accuracy rates. LSTMs are also more efficient than GRUs and CNN-LSTMs in real-time performance and resource efficiency for cyberattack detection. This method, which requires no feature selection, offers advantages in training time and detection accuracy. Consequently, the proposed approach can be extended to improve the security of various IoT applications, representing a significant contribution to IoT security.
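For orientation, a minimal Keras LSTM classifier of the kind evaluated above could look like the sketch below; the feature count, class count, and hyperparameters are assumptions rather than the paper's settings, and the data shown are dummies standing in for preprocessed BoT-IoT records.

```python
# Minimal LSTM attack classifier sketch (illustrative hyperparameters and dummy data).
import numpy as np
from tensorflow import keras
from tensorflow.keras import layers

n_features, n_classes = 30, 7            # e.g., six BoT-IoT attack types plus normal traffic

model = keras.Sequential([
    layers.Input(shape=(1, n_features)),  # one flow record treated as a single time step
    layers.LSTM(64),
    layers.Dense(32, activation="relu"),
    layers.Dense(n_classes, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

X = np.random.rand(1000, 1, n_features).astype("float32")   # dummy flow features
y = np.random.randint(0, n_classes, size=1000)               # dummy labels
model.fit(X, y, epochs=2, batch_size=64, verbose=0)
print(model.evaluate(X, y, verbose=0))
```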
Funding: Supported by the Science and Technology Project of China Southern Power Grid Company, Ltd. (031200KK52200003) and the National Natural Science Foundation of China (Nos. 62371253, 52278119).
Abstract: In this paper, we propose a novel anomaly detection method for data centers based on a combination of graph structure and an abnormal attention mechanism. The method leverages sensor monitoring data from target power substations to construct multidimensional time series. These time series are then transformed into graph structures, and the corresponding adjacency matrices are obtained. By incorporating the adjacency matrices and additional weights associated with the graph structure, an aggregation matrix is derived. The aggregation matrix is fed into a pre-trained graph convolutional neural network (GCN) to extract graph structure features. Both the multidimensional time series segments and the graph structure features are then input into a pre-trained anomaly detection model, producing anomaly detection results that help identify abnormal data. The anomaly detection model consists of a multi-level encoder-decoder module, in which each level includes a transformer encoder and decoder based on correlation differences; the attention module in the encoding layer adopts an abnormal attention module with a dual-branch structure. Experimental results demonstrate that the proposed method significantly improves the accuracy and stability of anomaly detection.
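A rough sketch of the first stage, turning multivariate sensor series into a graph and applying one graph-convolution step, is given below. The correlation-threshold adjacency and the single normalized propagation step are assumptions used for illustration; they are not the authors' aggregation matrix or pre-trained GCN.

```python
# Illustrative sketch: correlation-based adjacency from sensor series plus one GCN step.
import numpy as np

def correlation_adjacency(series: np.ndarray, threshold: float = 0.5) -> np.ndarray:
    """series: (time_steps, sensors). Add an edge where |correlation| exceeds the threshold."""
    corr = np.abs(np.corrcoef(series.T))
    adj = (corr > threshold).astype(float)
    np.fill_diagonal(adj, 1.0)                            # keep self-loops
    return adj

def gcn_layer(adj: np.ndarray, features: np.ndarray, weight: np.ndarray) -> np.ndarray:
    """One propagation step: ReLU(D^-1/2 A D^-1/2 X W)."""
    deg_inv_sqrt = np.diag(1.0 / np.sqrt(adj.sum(axis=1)))
    norm_adj = deg_inv_sqrt @ adj @ deg_inv_sqrt
    return np.maximum(norm_adj @ features @ weight, 0.0)

series = np.random.randn(500, 8)                          # 500 steps from 8 sensors (dummy)
adj = correlation_adjacency(series)
node_features = series[-64:].T                            # last window as per-sensor features
out = gcn_layer(adj, node_features, np.random.randn(64, 16))
print(out.shape)                                          # (8, 16) graph structure features
```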
Funding: This paper is financed by the European Union-NextGenerationEU, through the National Recovery and Resilience Plan of the Republic of Bulgaria, Project No. BG-RRP-2.004-0001-C01.
Abstract: The high performance of IoT technology in transportation networks has led to the increasing adoption of Internet of Vehicles (IoV) technology. The functional advantages of IoV include online communication services, accident prevention, cost reduction, and enhanced traffic regularity. Despite these benefits, IoV technology is susceptible to cyber-attacks, which can exploit vulnerabilities in the vehicle network, leading to perturbations, disturbances, non-recognition of traffic signs, accidents, and vehicle immobilization. This paper reviews the state-of-the-art achievements and developments in applying Deep Transfer Learning (DTL) models to Intrusion Detection Systems in the Internet of Vehicles (IDS-IoV) based on anomaly detection. IDS-IoV leverages anomaly detection through machine learning and DTL techniques to mitigate the risks posed by cyber-attacks. These systems can autonomously build specific models from network data to differentiate between regular traffic and cyber-attacks. Among these techniques, transfer learning models are particularly promising due to their efficacy with tagged data, reduced training time, lower memory usage, and decreased computational complexity. We evaluate DTL models against criteria including the ability to transfer knowledge, detection rate, accurate analysis of complex data, and stability. This review highlights the significant progress made in the field and shows how DTL models enhance the performance and reliability of IDS-IoV systems. By examining recent advancements, we provide insights into how DTL can effectively address cyber-attack challenges in IoV environments, ensuring safer and more efficient transportation networks.
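The core transfer-learning pattern the review surveys, reusing a source-domain feature extractor and training only a small IDS head on target IoV data, can be sketched as follows; the backbone, dimensions, and data are placeholder assumptions rather than any specific surveyed model.

```python
# Hedged sketch of the freeze-and-fine-tune transfer-learning pattern for an IDS head.
import torch
import torch.nn as nn

pretrained_extractor = nn.Sequential(          # stands in for a model trained on a source domain
    nn.Linear(40, 128), nn.ReLU(),
    nn.Linear(128, 64), nn.ReLU(),
)
for p in pretrained_extractor.parameters():
    p.requires_grad = False                    # keep transferred knowledge fixed

ids_head = nn.Linear(64, 2)                    # new head: attack vs. normal on IoV traffic
model = nn.Sequential(pretrained_extractor, ids_head)
optimizer = torch.optim.Adam(ids_head.parameters(), lr=1e-3)   # only the head is updated

x = torch.randn(32, 40)                        # dummy IoV flow features
y = torch.randint(0, 2, (32,))                 # dummy labels
loss = nn.CrossEntropyLoss()(model(x), y)
loss.backward()
optimizer.step()
print(float(loss))
```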
Funding: This study was funded by the Chongqing Normal University Startup Foundation for PhD (22XLB021) and was also supported by the Open Research Project of the State Key Laboratory of Industrial Control Technology, Zhejiang University, China (No. ICT2023B40).
Abstract: The Internet of Things (IoT) is vulnerable to data-tampering (DT) attacks. Due to resource limitations, many anomaly detection systems (ADSs) for IoT have high false positive rates when detecting DT attacks. This leads to the misreporting of normal data, which impacts the normal operation of the IoT. To mitigate the impact caused by the high false positive rate of ADSs, this paper proposes an ADS management scheme for clustered IoT. First, we model data transmission and anomaly detection in clustered IoT. Then, the operation strategy of the clustered IoT is formulated as the running probabilities of all ADSs deployed on every IoT device. Given the high false positive rate of ADSs, to handle the trade-off between the security and availability of data, we develop a linear programming model referred to as the security trade-off (ST) model. Next, we develop an analysis framework for the ST model and solve it on an IoT simulation platform. Last, we reveal through theoretical analysis how several factors affect the maximum combined detection rate. Simulations show that the ADS management scheme can mitigate the data unavailability loss caused by high false positive rates in ADSs.
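To make the security/availability trade-off concrete, the toy linear program below chooses running probabilities for three hypothetical ADSs so that expected detection is maximized while the expected false-positive level stays within a budget. All coefficients are invented for illustration, and the objective and constraint are simplified stand-ins for the ST model.

```python
# Toy linear program in the spirit of the security trade-off (ST) model (illustrative only).
import numpy as np
from scipy.optimize import linprog

detect_rate = np.array([0.90, 0.80, 0.70])   # per-ADS detection rate for DT attacks (assumed)
false_pos = np.array([0.20, 0.10, 0.05])     # per-ADS false positive rate (assumed)
fp_budget = 0.15                              # tolerated expected false-positive level (assumed)

result = linprog(
    c=-detect_rate,                           # maximize detection == minimize its negative
    A_ub=[false_pos], b_ub=[fp_budget],       # keep expected false positives within budget
    bounds=[(0.0, 1.0)] * 3,                  # each variable is an ADS running probability
)
print("running probabilities:", np.round(result.x, 3))
print("combined detection rate:", float(detect_rate @ result.x))
```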
Funding: Supported by a Korea Agency for Infrastructure Technology Advancement (KAIA) grant funded by the Ministry of Land, Infrastructure, and Transport (Grant 1615013176) (https://www.kaia.re.kr/eng/main.do, accessed on 01/06/2024) and by a Korea Evaluation Institute of Industrial Technology (KEIT) grant funded by the Korean Government (MOTIE) (141518499) (https://www.keit.re.kr/index.es?sid=a2, accessed on 01/06/2024).
Abstract: Damage to parcels reduces customer satisfaction with delivery services and increases return-logistics costs. This can be prevented by detecting and addressing damage before parcels reach the customer. Consequently, various studies have applied deep learning techniques to parcel damage detection. This study proposes a deep learning-based damage detection method for various types of parcels. The method is intended to be part of a parcel information-recognition system that identifies the volume and shipping information of parcels and determines whether they are damaged, for use in the actual parcel-transportation process. To this end, 1) image data were acquired in an environment simulating the actual parcel-transportation process, 2) the training dataset was expanded using StyleGAN3 with adaptive discriminator augmentation, and 3) a preliminary distinction was made between the appearance of parcels and their damage status to improve the performance of the damage detection model and to analyze the causes of parcel damage. Finally, using the dataset constructed with the proposed method, a damage type detection model was trained and its mean average precision was confirmed. This model can improve customer satisfaction and reduce return costs for parcel delivery companies.
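Since the abstract reports mean average precision for the damage type detector, the short sketch below shows one common way such a score can be computed with the torchmetrics library; the boxes, scores, and labels are dummy values, and this is not the study's evaluation code.

```python
# Hedged sketch: computing detection mAP with torchmetrics on dummy predictions.
import torch
from torchmetrics.detection import MeanAveragePrecision

metric = MeanAveragePrecision()
preds = [{
    "boxes": torch.tensor([[50.0, 50.0, 120.0, 140.0]]),   # predicted damage region (dummy)
    "scores": torch.tensor([0.87]),
    "labels": torch.tensor([1]),                             # e.g., class 1 = one damage type
}]
targets = [{
    "boxes": torch.tensor([[48.0, 52.0, 118.0, 142.0]]),    # ground-truth region (dummy)
    "labels": torch.tensor([1]),
}]
metric.update(preds, targets)
print(metric.compute()["map"])
```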