Abstract: Bayesian networks are a powerful class of graphical decision models used to represent causal relationships among variables. However, the reliability and integrity of learned Bayesian network models depend heavily on the quality of incoming data streams. One of the primary challenges with Bayesian networks is their vulnerability to adversarial data poisoning attacks, in which malicious data is injected into the training dataset to corrupt the learned models and impair their performance. In this paper, we propose an efficient framework for detecting data poisoning attacks against Bayesian network structure learning algorithms. Our framework uses latent variables to quantify the belief between every pair of nodes in each causal model over time. Considering four different forms of data poisoning attack, we aim to strengthen the security and dependability of Bayesian network structure learning techniques, such as the PC algorithm, and we offer practical methods for identifying and mitigating these stealthy threats. We also investigate a particular use case, the "Visit to Asia" network, to examine the practical consequences of using uncertainty to detect instances of data poisoning. Our results demonstrate the promising efficacy of latent variables in detecting and mitigating data poisoning attacks, and the proposed latent-based framework proves sensitive in detecting malicious poisoning attacks on streaming data.
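The abstract does not spell out how the latent belief between node pairs is computed, so the following is only a minimal sketch under an assumption: pairwise belief is approximated by empirical mutual information over consecutive data batches, and an abrupt shift on any pair flags possible poisoning. The function names and the drift threshold are illustrative, not the paper's.

```python
# Illustrative sketch (not the paper's exact method): track a belief score
# between every pair of nodes over streaming batches and flag abrupt shifts,
# which may indicate poisoned records. The score here is empirical mutual
# information on discrete data; the 0.1 threshold is an assumed tuning knob.
from itertools import combinations
import numpy as np

def mutual_information(x, y):
    """Empirical mutual information (in nats) between two discrete columns."""
    joint = {}
    for a, b in zip(x, y):
        joint[(a, b)] = joint.get((a, b), 0) + 1
    n = len(x)
    px = {a: np.mean(x == a) for a in set(x)}
    py = {b: np.mean(y == b) for b in set(y)}
    mi = 0.0
    for (a, b), c in joint.items():
        pxy = c / n
        mi += pxy * np.log(pxy / (px[a] * py[b]))
    return mi

def belief_drift(prev_batch, new_batch, threshold=0.1):
    """Return node pairs whose pairwise belief changed by more than `threshold`."""
    n_vars = prev_batch.shape[1]
    suspicious = []
    for i, j in combinations(range(n_vars), 2):
        before = mutual_information(prev_batch[:, i], prev_batch[:, j])
        after = mutual_information(new_batch[:, i], new_batch[:, j])
        if abs(after - before) > threshold:
            suspicious.append((i, j, round(before, 3), round(after, 3)))
    return suspicious

# Example: the second batch has variable 0 copied into variable 1,
# simulating an injected dependence between two previously independent nodes.
rng = np.random.default_rng(0)
clean = rng.integers(0, 2, size=(500, 3))
tainted = rng.integers(0, 2, size=(500, 3))
tainted[:, 1] = tainted[:, 0]
print(belief_drift(clean, tainted))
```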
Funding: This work was supported by a National Research Foundation of Korea (NRF) grant funded by the Korea Government (MSIT) under Grant 2020R1A2B5B01002145.
Abstract: Machine Learning (ML) systems often involve a re-training process to improve their predictions and classifications. This re-training process creates a loophole and poses a security threat to ML systems: adversaries exploit it to mount data poisoning attacks, in which the training dataset is manipulated to degrade the ML system's performance. Data poisoning attacks are challenging to detect, and even more difficult to respond to, particularly in the Internet of Things (IoT) environment. To address this problem, we propose DISTINIT, the first proactive data poisoning attack detection framework based on distance measures. We found that the Jaccard Distance (JD) can be used in DISTINIT (among other distance measures), and we improved the JD to obtain an Optimized JD (OJD) with lower time and space complexity. Our security analysis shows that DISTINIT is secure against data poisoning attacks when key features of adversarial attacks are considered. We conclude that the proposed OJD-based DISTINIT is effective and efficient against data poisoning attacks where in-time detection is critical, as in IoT applications with large volumes of streaming data.
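DISTINIT's internals and the OJD optimization are not given in the abstract, but the Jaccard Distance itself is standard. The sketch below shows one plausible shape a JD-based screen could take: encode a trusted baseline and an incoming batch as sets of feature tokens, and flag the batch when the distance exceeds a threshold. The token encoding and the threshold value are assumptions.

```python
# Minimal sketch of Jaccard-distance-based batch screening in the spirit of
# DISTINIT. The paper's OJD optimization and exact feature encoding are not
# reproduced here; `threshold` and the set encoding are assumptions.
def jaccard_distance(a: set, b: set) -> float:
    """JD(A, B) = 1 - |A ∩ B| / |A ∪ B|; 0 means identical sets."""
    union = a | b
    if not union:
        return 0.0
    return 1.0 - len(a & b) / len(union)

def screen_batch(trusted_items: set, batch_records, threshold=0.6):
    """Flag an incoming batch whose feature set drifts too far from a
    trusted baseline; returns (is_suspicious, distance)."""
    batch_items = set()
    for record in batch_records:
        batch_items.update(record)  # each record is an iterable of feature tokens
    d = jaccard_distance(trusted_items, batch_items)
    return d > threshold, d

# Example: a baseline of benign sensor tokens vs. a batch with injected tokens.
baseline = {"temp:low", "temp:mid", "hum:ok", "door:closed"}
batch = [("temp:low", "hum:ok"), ("door:open!", "cmd:rm-rf", "temp:999")]
print(screen_batch(baseline, batch))  # -> (True, ~0.714)
```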
Funding: Supported in part by the "Pioneer" and "Leading Goose" R&D Program of Zhejiang (Grant No. 2022C03174), the National Natural Science Foundation of China (No. 92067103), the Key Research and Development Program of Shaanxi, China (No. 2021ZDLGY06-02), the Natural Science Foundation of Shaanxi Province (No. 2019ZDLGY12-02), the Shaanxi Innovation Team Project (No. 2018TD-007), the Xi'an Science and Technology Innovation Plan (No. 201809168CX9JC10), the Fundamental Research Funds for the Central Universities (No. YJS2212), and the National 111 Program of China (B16037).
Abstract: The security of Federated Learning (FL) and Distributed Machine Learning (DML) is gravely threatened by data poisoning attacks, which destroy the usability of the model by contaminating training samples; such attacks are therefore called causative availability indiscriminate attacks. Because existing data sanitization methods are hard to apply to real-time applications due to their tedious processes and heavy computation, we propose a new supervised batch detection method for poison that can rapidly sanitize the training dataset before local model training. We design a training dataset generation method that helps to enhance accuracy and uses data complexity features to train a detection model, which is then used in an efficient batch hierarchical detection process. Our model accumulates knowledge about poison and can be expanded by retraining to adapt to new attacks. Being neither attack-specific nor scenario-specific, our method is applicable to FL/DML as well as other online or offline scenarios.
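As a rough illustration of the batch detection idea, the sketch below summarizes each training batch with a data-complexity feature and trains a supervised detector on batches with known clean/poisoned labels, screening new batches before local training. The abstract does not list the exact complexity features or the hierarchical stages, so Fisher's discriminant ratio is used here as a stand-in, and the synthetic data and flip rate are assumptions.

```python
# Sketch of supervised batch-level poison screening: summarize each training
# batch with a data-complexity feature, then classify the whole batch as
# clean or poisoned before it reaches local model training.
import numpy as np
from sklearn.linear_model import LogisticRegression

def fisher_ratio(X, y):
    """Mean per-feature Fisher discriminant ratio of a binary-labeled batch.
    Low values suggest heavily overlapping classes, e.g. after label flipping."""
    X0, X1 = X[y == 0], X[y == 1]
    num = (X0.mean(axis=0) - X1.mean(axis=0)) ** 2
    den = X0.var(axis=0) + X1.var(axis=0) + 1e-12
    return float(np.mean(num / den))

def batch_features(X, y):
    """Complexity summary of one batch (extend with more measures as needed)."""
    return [fisher_ratio(X, y), float(np.mean(y))]

rng = np.random.default_rng(0)

def make_batch(poisoned, n=200, d=5):
    X = rng.normal(size=(n, d))
    y = (X[:, 0] > 0).astype(int)
    if poisoned:
        flip = rng.random(n) < 0.4  # label-flipping poison (assumed 40% rate)
        y[flip] = 1 - y[flip]
    return X, y

# Train the detector on batches with known clean (0) / poisoned (1) labels ...
batches = [make_batch(p) for p in [0] * 20 + [1] * 20]
labels = [0] * 20 + [1] * 20
detector = LogisticRegression().fit([batch_features(X, y) for X, y in batches], labels)

# ... then screen each incoming batch before local training.
def is_poisoned(X, y):
    return bool(detector.predict([batch_features(X, y)])[0])
```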
Abstract: In recent years, we have witnessed a surge in mobile devices such as smartphones, tablets, and smart watches, most of which are based on the Android operating system. As these Android-based devices have become increasingly popular, they are now the primary target of mobile malware, which can lead to both privacy leakage and property loss. To address the rapidly deteriorating security issues caused by mobile malware, various research efforts have been made to develop novel and effective detection mechanisms. Nevertheless, to avoid being caught by these detection mechanisms, malware authors launch adversarial example attacks by tampering with mobile applications. In this paper, several types of adversarial example attacks are investigated and a feasible approach is proposed to fight against them. First, we survey adversarial example attacks on the Android system and prior solutions proposed to address them. We then focus on the data poisoning attack and evasion attack models, which may mutate various application features, such as API calls, permissions, and the class label, to produce adversarial examples. Next, we propose and design a malware detection approach that is resistant to adversarial examples. To observe how the malware detection system is influenced by adversarial example attacks, we conduct experiments on real Android application datasets composed of both malware and benign applications. Experimental results clearly indicate that the performance of Android malware detection is severely degraded when facing adversarial example attacks.
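To make the reported degradation concrete, the sketch below runs a toy version of such a poisoning experiment on synthetic binary app features (stand-ins for API calls and permissions): a detector trained on clean labels is compared against one trained after label-flip poisoning. The feature model, flip rate, and classifier are all assumptions rather than the paper's setup.

```python
# Illustrative experiment (synthetic data, not the paper's datasets): train a
# detector on binary app features, then degrade it with label-flip poisoning
# of the training set, mirroring the data poisoning attack model above.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(1)
D = 50  # stand-in size for a permission/API-call feature vocabulary

def sample_apps(n, malicious):
    # Malware tends to request a distinct subset of high-risk features.
    p = np.full(D, 0.1)
    if malicious:
        p[:10] = 0.7  # e.g. SMS/accessibility-like features (assumed)
    return (rng.random((n, D)) < p).astype(int)

X = np.vstack([sample_apps(500, False), sample_apps(500, True)])
y = np.array([0] * 500 + [1] * 500)
clean = RandomForestClassifier(random_state=0).fit(X, y)

# Poisoning: flip the class label of 30% of the training malware to benign.
y_poison = y.copy()
idx = rng.choice(np.where(y == 1)[0], size=150, replace=False)
y_poison[idx] = 0
poisoned = RandomForestClassifier(random_state=0).fit(X, y_poison)

X_test = np.vstack([sample_apps(200, False), sample_apps(200, True)])
y_test = np.array([0] * 200 + [1] * 200)
print("clean   :", accuracy_score(y_test, clean.predict(X_test)))
print("poisoned:", accuracy_score(y_test, poisoned.predict(X_test)))
```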