Journal Articles
4 articles found
1. Evaluating the Efficacy of Latent Variables in Mitigating Data Poisoning Attacks in the Context of Bayesian Networks: An Empirical Study
Authors: Shahad Alzahrani, Hatim Alsuwat, Emad Alsuwat. Computer Modeling in Engineering & Sciences (SCIE, EI), 2024, No. 5, pp. 1635-1654 (20 pages).
Bayesian networks are a powerful class of graphical decision models used to represent causal relationships among variables. However, the reliability and integrity of learned Bayesian network models are highly dependent on the quality of incoming data streams. One of the primary challenges with Bayesian networks is their vulnerability to adversarial data poisoning attacks, wherein malicious data is injected into the training dataset to negatively influence the Bayesian network models and impair their performance. In this research paper, we propose an efficient framework for detecting data poisoning attacks against Bayesian network structure learning algorithms. Our framework utilizes latent variables to quantify the amount of belief between every two nodes in each causal model over time. We use this methodology to tackle an important issue with data poisoning attacks in the context of Bayesian networks. With regard to four different forms of data poisoning attacks, we specifically aim to strengthen the security and dependability of Bayesian network structure learning techniques, such as the PC algorithm. In doing so, we explore the complexity of this area and offer workable methods for identifying and mitigating these covert threats. Additionally, our research investigates one particular use case, the "Visit to Asia" network. The practical consequences of using uncertainty as a way to spot cases of data poisoning are explored in this inquiry, which is of utmost relevance. Our results demonstrate the promising efficacy of latent variables in detecting and mitigating the threat of data poisoning attacks. Additionally, our proposed latent-based framework proves to be sensitive in detecting malicious data poisoning attacks in the context of stream data.
Keywords: Bayesian networks; data poisoning attacks; latent variables; structure learning algorithms; adversarial attacks
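To make the idea of tracking pairwise belief over a data stream concrete, here is a minimal, hypothetical sketch that is not the authors' framework: it uses pairwise mutual information as a crude stand-in for the latent belief measure and flags node pairs whose value shifts sharply between batches. The function names, toy data, and threshold are illustrative assumptions.

    # Illustrative only: mutual information as a rough proxy for the pairwise
    # "belief" tracked over time; not the paper's latent-variable construction.
    from itertools import combinations
    import numpy as np
    from sklearn.metrics import mutual_info_score

    def pairwise_beliefs(batch):
        """Mutual information for every pair of columns in a discrete data batch."""
        n_vars = batch.shape[1]
        return {(i, j): mutual_info_score(batch[:, i], batch[:, j])
                for i, j in combinations(range(n_vars), 2)}

    def flag_shifted_pairs(baseline_batch, new_batch, threshold=0.1):
        """Return node pairs whose belief changed sharply between two batches."""
        before = pairwise_beliefs(baseline_batch)
        after = pairwise_beliefs(new_batch)
        return [pair for pair in before if abs(after[pair] - before[pair]) > threshold]

    # Toy stream: two variables that are dependent in the clean batch and
    # decorrelated in the (simulated) poisoned batch.
    rng = np.random.default_rng(0)
    a = rng.integers(0, 2, size=500)
    clean = np.column_stack([a, a])
    poisoned = np.column_stack([a, rng.integers(0, 2, size=500)])
    print(flag_shifted_pairs(clean, poisoned))  # e.g. [(0, 1)]

A large, persistent shift across many pairs at once is the kind of signal a stream-level detector could act on before passing the batch to a structure learner such as the PC algorithm.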
2. DISTINIT: Data poISoning atTacks dectectIon usiNg optIized jaCcard disTance
Authors: Maria Sameen, Seong Oun Hwang. Computers, Materials & Continua (SCIE, EI), 2022, No. 12, pp. 4559-4576 (18 pages).
Machine Learning (ML) systems often involve a re-training process to make better predictions and classifications. This re-training process creates a loophole and poses a security threat for ML systems. Adversaries leverage this loophole and design data poisoning attacks against ML systems. Data poisoning attacks are a type of attack in which an adversary manipulates the training dataset to degrade the ML system's performance. Data poisoning attacks are challenging to detect, and even more difficult to respond to, particularly in the Internet of Things (IoT) environment. To address this problem, we propose DISTINIT, the first proactive data poisoning attack detection framework using distance measures. We found that Jaccard Distance (JD) can be used in DISTINIT (among other distance measures), and we further improved JD to obtain an Optimized JD (OJD) with lower time and space complexity. Our security analysis shows that DISTINIT is secure against data poisoning attacks by considering key features of adversarial attacks. We conclude that the proposed OJD-based DISTINIT is effective and efficient against data poisoning attacks, where timely detection is critical for IoT applications with large volumes of streaming data.
Keywords: data poisoning attacks; detection framework; Jaccard distance (JD); optimized Jaccard distance (OJD); security analysis
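The Jaccard distance at the core of this abstract has a simple closed form, JD(A, B) = 1 - |A ∩ B| / |A ∪ B|. The sketch below is not the paper's DISTINIT or OJD implementation; it only illustrates distance-based batch screening, with a hypothetical threshold and toy feature sets.

    # Illustrative distance-based screening of an incoming training batch.
    def jaccard_distance(a: set, b: set) -> float:
        """JD(A, B) = 1 - |A intersect B| / |A union B|; 0.0 for two empty sets."""
        union = a | b
        return 0.0 if not union else 1.0 - len(a & b) / len(union)

    def screen_batch(trusted_features: set, batch_features: set, threshold: float = 0.5) -> bool:
        """Return True if the batch drifts too far from the trusted reference."""
        return jaccard_distance(trusted_features, batch_features) > threshold

    # Toy feature-value sets: a trusted reference versus a suspicious batch.
    trusted = {("temp", "low"), ("temp", "high"), ("valve", "open"), ("valve", "closed")}
    incoming = {("temp", "low"), ("valve", "stuck"), ("temp", "overflow")}
    print(screen_batch(trusted, incoming))  # True -> quarantine the batch for review

Screening happens before re-training, which is what makes this style of check proactive for streaming IoT data.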
3. Data complexity-based batch sanitization method against poison in distributed learning
Authors: Silv Wang, Kai Fan, Kuan Zhang, Hui Li, Yintang Yang. Digital Communications and Networks (SCIE, CSCD), 2024, No. 2, pp. 416-428 (13 pages).
The security of Federated Learning (FL) / Distributed Machine Learning (DML) is gravely threatened by data poisoning attacks, which destroy the usability of the model by contaminating training samples; such attacks are therefore called causative availability indiscriminate attacks. Because existing data sanitization methods are hard to apply to real-time applications due to their tedious processes and heavy computation, we propose a new supervised batch detection method for poison, which can rapidly sanitize the training dataset before local model training. We design a training dataset generation method that helps to enhance accuracy and uses data complexity features to train a detection model, which is then used in an efficient batch hierarchical detection process. Our model accumulates knowledge about poison, which can be expanded by retraining to adapt to new attacks. Being neither attack-specific nor scenario-specific, our method is applicable to FL/DML as well as other online or offline scenarios.
Keywords: distributed machine learning security; federated learning; data poisoning attacks; data sanitization; batch detection; data complexity
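As a rough illustration of complexity-based screening (the paper's actual features, dataset generation method, and hierarchical detector are not reproduced here), the sketch below computes one classical data complexity measure, Fisher's discriminant ratio, per labelled batch and flags batches whose class separability deviates from a clean reference. The names, tolerance, and toy data are assumptions.

    # Illustrative only: a single data-complexity feature used as a batch-level check.
    import numpy as np

    def fisher_ratio(X: np.ndarray, y: np.ndarray) -> float:
        """Max per-feature Fisher discriminant ratio for a two-class batch."""
        X0, X1 = X[y == 0], X[y == 1]
        num = (X0.mean(axis=0) - X1.mean(axis=0)) ** 2
        den = X0.var(axis=0) + X1.var(axis=0) + 1e-12
        return float(np.max(num / den))

    def batch_is_suspect(X, y, reference_f1: float, tolerance: float = 0.5) -> bool:
        """Flag a batch whose separability differs sharply from the clean reference."""
        return abs(fisher_ratio(X, y) - reference_f1) > tolerance

    # Clean reference batch: two well-separated classes.
    rng = np.random.default_rng(1)
    X_clean = np.vstack([rng.normal(0, 1, (100, 2)), rng.normal(3, 1, (100, 2))])
    y_clean = np.array([0] * 100 + [1] * 100)
    ref = fisher_ratio(X_clean, y_clean)

    # Label-flipping poison collapses the separability of the batch.
    y_poisoned = y_clean.copy()
    y_poisoned[rng.choice(200, 80, replace=False)] ^= 1
    print(batch_is_suspect(X_clean, y_poisoned, ref))  # True

Because the check runs per batch before local training, it fits the real-time constraint the abstract emphasizes.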
4. DroidEnemy: Battling adversarial example attacks for Android malware detection
Authors: Neha Bala, Aemun Ahmar, Wenjia Li, Fernanda Tovar, Arpit Battu, Prachi Bambarkar. Digital Communications and Networks (SCIE, CSCD), 2022, No. 6, pp. 1040-1047 (8 pages).
In recent years, we have witnessed a surge in mobile devices such as smartphones, tablets, and smart watches, most of which are based on the Android operating system. However, because these Android-based mobile devices are becoming increasingly popular, they are now the primary target of mobile malware, which can lead to both privacy leakage and property loss. To address the rapidly deteriorating security issues caused by mobile malware, various research efforts have been made to develop novel and effective detection mechanisms to identify and combat it. Nevertheless, to avoid being caught by these malware detection mechanisms, malware authors are inclined to launch adversarial example attacks by tampering with mobile applications. In this paper, several types of adversarial example attacks are investigated and a feasible approach is proposed to fight against them. First, we look at adversarial example attacks on the Android system and prior solutions that have been proposed to address these attacks. Then, we specifically focus on the data poisoning attack and evasion attack models, which may mutate various application features, such as API calls, permissions, and the class label, to produce adversarial examples. Next, we propose and design a malware detection approach that is resistant to adversarial examples. To observe and investigate how the malware detection system is influenced by adversarial example attacks, we conduct experiments on real Android application datasets composed of both malware and benign applications. Experimental results clearly indicate that the performance of Android malware detection is severely degraded when facing adversarial example attacks.
Keywords: security; malware detection; adversarial example attack; data poisoning attack; evasion attack; machine learning; Android
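To see why mutated class labels degrade a learned detector, here is a self-contained toy experiment in the same spirit as the evaluation described above; it is not the DroidEnemy system. A linear classifier is trained on synthetic binary permission/API features, a fraction of training labels is flipped to simulate data poisoning, and test accuracy is compared. The feature layout, poison rates, and classifier choice are illustrative assumptions.

    # Illustrative only: effect of label-flipping poison on a simple malware classifier.
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(42)
    n, d = 1000, 20                                  # apps x binary features (permissions/APIs)
    y = rng.integers(0, 2, size=n)                   # 1 = malware, 0 = benign
    X = (rng.random((n, d)) < (0.2 + 0.5 * y[:, None])).astype(int)  # malware requests more features

    X_train, X_test = X[:800], X[800:]
    y_train, y_test = y[:800], y[800:]

    def accuracy_with_poison(flip_rate: float) -> float:
        """Train on a partially label-flipped set and report clean test accuracy."""
        y_poisoned = y_train.copy()
        flip = rng.choice(len(y_poisoned), int(flip_rate * len(y_poisoned)), replace=False)
        y_poisoned[flip] ^= 1                        # label-flipping data poisoning
        clf = LogisticRegression(max_iter=1000).fit(X_train, y_poisoned)
        return clf.score(X_test, y_test)

    for rate in (0.0, 0.2, 0.4):
        print(f"poison rate {rate:.0%}: test accuracy {accuracy_with_poison(rate):.2f}")

A defense in the spirit of the paper would sit between the incoming applications and the training step, filtering or down-weighting suspicious samples before the detector is retrained.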