Journal Articles
6 articles found
1. Evaluating the Efficacy of Latent Variables in Mitigating Data Poisoning Attacks in the Context of Bayesian Networks: An Empirical Study
Authors: Shahad Alzahrani, Hatim Alsuwat, Emad Alsuwat. Computer Modeling in Engineering & Sciences (SCIE, EI), 2024, No. 5, pp. 1635-1654 (20 pages).
Bayesian networks are a powerful class of graphical decision models used to represent causal relationships among variables. However, the reliability and integrity of learned Bayesian network models are highly dependent on the quality of incoming data streams. One of the primary challenges with Bayesian networks is their vulnerability to adversarial data poisoning attacks, wherein malicious data is injected into the training dataset to negatively influence the Bayesian network models and impair their performance. In this research paper, we propose an efficient framework for detecting data poisoning attacks against Bayesian network structure learning algorithms. Our framework utilizes latent variables to quantify the amount of belief between every two nodes in each causal model over time, and we apply this methodology to the pressing problem of data poisoning attacks in the context of Bayesian networks. With regard to four different forms of data poisoning attacks, we specifically aim to strengthen the security and dependability of Bayesian network structure learning techniques, such as the PC algorithm. In doing so, we explore the complexity of this area and offer workable methods for identifying and mitigating these covert threats. Additionally, our research investigates one particular use case, the "Visit to Asia" network, and examines the practical consequences of using uncertainty as a way to spot cases of data poisoning. Our results demonstrate the promising efficacy of latent variables in detecting and mitigating the threat of data poisoning attacks, and the proposed latent-based framework proves sensitive in detecting malicious data poisoning attacks in the context of streaming data.
Keywords: Bayesian networks; data poisoning attacks; latent variables; structure learning algorithms; adversarial attacks
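The abstract above describes tracking the "amount of belief" between every pair of nodes over time to flag poisoned data, but the paper's latent-variable construction is not reproduced in this listing. The sketch below is therefore only a loose illustration of that general idea, assuming pairwise mutual information over sliding windows as a stand-in belief measure; the function names, window scheme, and threshold are hypothetical.

```python
# Illustrative sketch only: monitor how strongly each pair of (discretized)
# variables is associated across windows of a data stream and flag sudden
# shifts as possible poisoning before running structure learning (e.g., PC).
import itertools
import pandas as pd
from sklearn.metrics import mutual_info_score  # discrete MI between two columns

def pairwise_belief(window: pd.DataFrame) -> dict:
    """Mutual information for every pair of discrete variables in one window."""
    return {
        (a, b): mutual_info_score(window[a], window[b])
        for a, b in itertools.combinations(window.columns, 2)
    }

def drifted_pairs(baseline: dict, current: dict, threshold: float = 0.15) -> list:
    """Return variable pairs whose association shifted by more than `threshold`."""
    return [pair for pair in baseline
            if abs(baseline[pair] - current[pair]) > threshold]

# Hypothetical usage with a discretized stream split into fixed-size windows:
# baseline = pairwise_belief(clean_reference_window)
# for window in incoming_windows:
#     suspects = drifted_pairs(baseline, pairwise_belief(window))
#     if suspects:
#         print("Possible data poisoning affecting pairs:", suspects)
```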
2. Data complexity-based batch sanitization method against poison in distributed learning
Authors: Silv Wang, Kai Fan, Kuan Zhang, Hui Li, Yintang Yang. Digital Communications and Networks (SCIE, CSCD), 2024, No. 2, pp. 416-428 (13 pages).
The security of Federated Learning (FL) / Distributed Machine Learning (DML) is gravely threatened by data poisoning attacks, which destroy the usability of the model by contaminating training samples; such attacks are therefore called causative availability indiscriminate attacks. Because existing data sanitization methods are hard to apply in real-time applications owing to their tedious processes and heavy computation, we propose a new supervised batch detection method for poison that can quickly sanitize the training dataset before local model training. We design a training dataset generation method that helps to enhance accuracy and uses data complexity features to train a detection model, which is then used in an efficient batch hierarchical detection process. Our model stockpiles knowledge about poison and can be expanded by retraining to adapt to new attacks. Being neither attack-specific nor scenario-specific, our method is applicable to FL/DML as well as other online or offline scenarios.
Keywords: distributed machine learning security; federated learning; data poisoning attacks; data sanitization; batch detection; data complexity
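The sanitization pipeline summarized above screens batches using "data complexity features", but the exact feature set and trained detector are not given in this listing. The following is a minimal sketch under stated assumptions: it uses Fisher's discriminant ratio (a standard data-complexity measure) as one such feature and a simple threshold rule in place of the learned detection model; names and the tolerance value are hypothetical.

```python
# Minimal sketch: one classical data-complexity feature for a labeled batch.
# Heavily poisoned batches tend to show much lower class separability, so a
# detector built on such features can screen batches before local training.
import numpy as np

def fisher_discriminant_ratio(X: np.ndarray, y: np.ndarray) -> float:
    """Max over features of between-class variance / within-class variance (binary labels)."""
    ratios = []
    for j in range(X.shape[1]):
        f0, f1 = X[y == 0, j], X[y == 1, j]
        within = f0.var() + f1.var()
        if within == 0:
            continue
        ratios.append((f0.mean() - f1.mean()) ** 2 / within)
    return max(ratios) if ratios else 0.0

def batch_looks_poisoned(X, y, clean_reference_ratio, tolerance=0.5) -> bool:
    """Hypothetical screening rule: flag a batch whose separability collapses."""
    return fisher_discriminant_ratio(X, y) < tolerance * clean_reference_ratio
```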
3. DISTINIT: Data poISoning atTacks detectIon usiNg optImized jaCcard disTance
Authors: Maria Sameen, Seong Oun Hwang. Computers, Materials & Continua (SCIE, EI), 2022, No. 12, pp. 4559-4576 (18 pages).
Machine Learning (ML) systems often involve a re-training process to make better predictions and classifications. This re-training process creates a loophole and poses a security threat for ML systems: adversaries leverage it to design data poisoning attacks, in which an adversary manipulates the training dataset to degrade the ML system's performance. Data poisoning attacks are challenging to detect, and even more difficult to respond to, particularly in the Internet of Things (IoT) environment. To address this problem, we propose DISTINIT, the first proactive data poisoning attack detection framework using distance measures. Among several distance measures, we find that the Jaccard Distance (JD) is well suited to DISTINIT, and we further improve it into an Optimized JD (OJD) with lower time and space complexity. Our security analysis shows that DISTINIT is secure against data poisoning attacks when the key features of adversarial attacks are considered. We conclude that the proposed OJD-based DISTINIT is effective and efficient against data poisoning attacks in IoT applications with large volumes of streaming data, where in-time detection is critical.
Keywords: data poisoning attacks; detection framework; Jaccard distance (JD); optimized Jaccard distance (OJD); security analysis
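The paper's Optimized Jaccard Distance is not defined in this abstract, so the sketch below only illustrates the underlying distance-based idea with the plain Jaccard distance: compare the feature set of each incoming sample against a trusted reference profile and divert samples that sit too far away. The threshold, function names, and usage scenario are assumptions, not the paper's method.

```python
# Sketch of distance-based poisoning screening with the plain Jaccard distance
# (the paper's Optimized JD is a lower-complexity variant not reproduced here).
def jaccard_distance(a: set, b: set) -> float:
    """1 - |A ∩ B| / |A ∪ B|: 0 means identical feature sets, 1 means disjoint."""
    union = a | b
    if not union:
        return 0.0
    return 1.0 - len(a & b) / len(union)

def screen_samples(samples, trusted_profile: set, threshold: float = 0.8):
    """Split incoming feature sets into (accepted, suspected_poison) by distance."""
    accepted, suspected = [], []
    for features in samples:
        (suspected if jaccard_distance(features, trusted_profile) > threshold
         else accepted).append(features)
    return accepted, suspected

# Hypothetical IoT usage: each sample is the set of discretized feature tokens
# in one reading, compared against a profile built from vetted data:
# clean, flagged = screen_samples(incoming_feature_sets, trusted_profile)
```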
4. Security Vulnerability Analyses of Large Language Models (LLMs) through Extension of the Common Vulnerability Scoring System (CVSS) Framework
Authors: Alicia Biju, Vishnupriya Ramesh, Vijay K. Madisetti. Journal of Software Engineering and Applications, 2024, No. 5, pp. 340-358 (19 pages).
Large Language Models (LLMs) have revolutionized Generative Artificial Intelligence (GenAI) tasks, becoming an integral part of various applications in society, including text generation, translation, summarization, and more. However, their widespread usage emphasizes the critical need to enhance their security posture to ensure the integrity and reliability of their outputs and minimize harmful effects. Prompt injections and training data poisoning attacks are two of the most prominent vulnerabilities in LLMs, which could potentially lead to unpredictable and undesirable behaviors, such as biased outputs, misinformation propagation, and even malicious content generation. The Common Vulnerability Scoring System (CVSS) framework provides a standardized approach to capturing the principal characteristics of vulnerabilities, facilitating a deeper understanding of their severity within the security and AI communities. By extending the current CVSS framework, we generate scores for these vulnerabilities such that organizations can prioritize mitigation efforts, allocate resources effectively, and implement targeted security measures to defend against potential risks.
Keywords: Common Vulnerability Scoring System (CVSS); Large Language Models (LLMs); DALL-E; prompt injections; training data poisoning; CVSS metrics
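The paper's extended metrics for LLM vulnerabilities are not reproduced in this abstract. As a point of reference for how CVSS scoring works, the sketch below computes a standard CVSS v3.1 base score (scope unchanged) for a hypothetical prompt-injection vector; the metric constants are the official v3.1 values, while the chosen vector is purely illustrative and not taken from the paper.

```python
# Standard CVSS v3.1 base-score calculation (scope unchanged), applied to a
# hypothetical prompt-injection finding: AV:N/AC:L/PR:N/UI:R/S:U/C:L/I:L/A:N.
import math

AV  = {"N": 0.85, "A": 0.62, "L": 0.55, "P": 0.20}   # Attack Vector
AC  = {"L": 0.77, "H": 0.44}                          # Attack Complexity
PR  = {"N": 0.85, "L": 0.62, "H": 0.27}               # Privileges Required (scope unchanged)
UI  = {"N": 0.85, "R": 0.62}                          # User Interaction
CIA = {"H": 0.56, "L": 0.22, "N": 0.0}                # Confidentiality / Integrity / Availability

def roundup(x: float) -> float:
    """CVSS Roundup: smallest one-decimal value >= x (spec's float guard omitted)."""
    return math.ceil(x * 10) / 10

def base_score(av, ac, pr, ui, c, i, a) -> float:
    iss = 1 - (1 - CIA[c]) * (1 - CIA[i]) * (1 - CIA[a])
    impact = 6.42 * iss                                # scope-unchanged impact
    exploitability = 8.22 * AV[av] * AC[ac] * PR[pr] * UI[ui]
    if impact <= 0:
        return 0.0
    return roundup(min(impact + exploitability, 10))

# Network-reachable, low complexity, no privileges, user interaction required,
# low confidentiality and integrity impact -> prints 5.4.
print(base_score("N", "L", "N", "R", c="L", i="L", a="N"))
```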
5. DroidEnemy: Battling adversarial example attacks for Android malware detection
Authors: Neha Bala, Aemun Ahmar, Wenjia Li, Fernanda Tovar, Arpit Battu, Prachi Bambarkar. Digital Communications and Networks (SCIE, CSCD), 2022, No. 6, pp. 1040-1047 (8 pages).
In recent years, we have witnessed a surge in mobile devices such as smartphones, tablets, and smart watches, most of which are based on the Android operating system. However, because these Android-based mobile devices are becoming increasingly popular, they are now the primary target of mobile malware, which could lead to both privacy leakage and property loss. To address the rapidly deteriorating security issues caused by mobile malware, various research efforts have been made to develop novel and effective detection mechanisms to identify and combat them. Nevertheless, in order to avoid being caught by these malware detection mechanisms, malware authors are inclined to initiate adversarial example attacks by tampering with mobile applications. In this paper, several types of adversarial example attacks are investigated and a feasible approach is proposed to fight against them. First, we look at adversarial example attacks on the Android system and prior solutions that have been proposed to address these attacks. Then, we focus specifically on the data poisoning attack and evasion attack models, which may mutate various application features, such as API calls, permissions, and the class label, to produce adversarial examples. Next, we propose and design a malware detection approach that is resistant to adversarial examples. To observe and investigate how the malware detection system is influenced by adversarial example attacks, we conduct experiments on real Android application datasets composed of both malware and benign applications. Experimental results clearly indicate that the performance of Android malware detection is severely degraded when facing adversarial example attacks.
Keywords: security; malware detection; adversarial example attack; data poisoning attack; evasion attack; machine learning; Android
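The experiments summarized above show detection performance collapsing under data poisoning. The toy sketch below reproduces that effect in miniature with synthetic permission-style binary features and a targeted label-flip attack; the feature layout, poisoning rate, and classifier choice are invented for illustration and are not taken from the paper's datasets or DroidEnemy itself.

```python
# Toy demonstration (synthetic data, invented parameters) of how a targeted
# label-flip poisoning attack degrades a malware classifier trained on binary
# permission / API-call features.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score, recall_score

rng = np.random.default_rng(0)
n_apps, n_features = 2000, 40

# Malware (label 1) enables "risky" features more often than benign apps.
y = rng.integers(0, 2, n_apps)
feature_prob = np.where(y[:, None] == 1, 0.6, 0.2)
X = (rng.random((n_apps, n_features)) < feature_prob).astype(int)

X_train, X_test, y_train, y_test = X[:1500], X[1500:], y[:1500], y[1500:]

def evaluate(train_labels):
    clf = RandomForestClassifier(n_estimators=100, random_state=0)
    clf.fit(X_train, train_labels)
    pred = clf.predict(X_test)
    return accuracy_score(y_test, pred), recall_score(y_test, pred)

# Poisoning: the adversary relabels 60% of the training-set malware as benign.
poisoned = y_train.copy()
malware_idx = np.flatnonzero(y_train == 1)
flipped = rng.choice(malware_idx, size=int(0.6 * len(malware_idx)), replace=False)
poisoned[flipped] = 0

print("clean labels    (accuracy, malware recall):", evaluate(y_train))
print("poisoned labels (accuracy, malware recall):", evaluate(poisoned))
```

With this targeted flipping, the classifier learns to call malware-like feature patterns benign, so malware recall drops sharply even though the test data are unchanged.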
6. An Anti-Poisoning Attack Method for Distributed AI System
Authors: Xuezhu Xin, Yang Bai, Haixin Wang, Yunzhen Mou, Jian Tan. Journal of Computer and Communications, 2021, No. 12, pp. 99-105 (7 pages).
In a distributed AI system, models trained on data from potentially unreliable sources can be attacked by manipulating the training data distribution, i.e., by inserting carefully crafted samples into the training set, which is known as data poisoning. Poisoning changes the model's behavior and reduces its performance. This paper proposes an algorithm that improves both the efficiency and the security of a distributed AI system against data poisoning. Past active-defense methods often perform a large number of unnecessary checks, which slows down the whole system, while passive defense suffers from missed data and slow localization of the error source. The proposed algorithm establishes a suspect hypothesis level to test and extend the verification of data packets and to estimate the risk of terminal data. It can improve the health of a distributed AI system by preventing poisoning attacks while keeping system operation efficient and safe.
Keywords: data poisoning; distributed AI system; credit probability mechanism; inspection module; suspect hypothesis level
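The abstract sketches a credit probability mechanism and a suspect hypothesis level that decide how aggressively to re-check data from each terminal, but gives no formulas. The snippet below is one plausible reading, assuming a per-terminal credit score that drives the probability of inspecting an incoming packet; the class, update rules, and constants are all hypothetical, not the paper's algorithm.

```python
# Hypothetical reading of a credit-probability inspection mechanism for a
# distributed AI system: terminals with a worse poisoning history are checked
# more often, so verification effort concentrates on the likely error sources.
import random

class CreditInspector:
    def __init__(self, min_check_prob=0.05, max_check_prob=1.0):
        self.credit = {}                    # terminal id -> credit in [0, 1]
        self.min_p = min_check_prob
        self.max_p = max_check_prob

    def check_probability(self, terminal: str) -> float:
        """Low-credit terminals are inspected with probability close to 1."""
        c = self.credit.setdefault(terminal, 0.5)
        return self.max_p - c * (self.max_p - self.min_p)

    def receive(self, terminal: str, packet, looks_poisoned) -> bool:
        """Return True if the packet is accepted into the training pool."""
        if random.random() < self.check_probability(terminal):
            if looks_poisoned(packet):      # any detector, e.g. a batch sanitizer
                self.credit[terminal] = max(0.0, self.credit[terminal] - 0.2)
                return False
            self.credit[terminal] = min(1.0, self.credit[terminal] + 0.05)
        return True
```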