
An Anti-Poisoning Attack Method for Distributed AI System

Abstract: In a distributed AI system, models trained on data from potentially unreliable sources can be attacked by manipulating the training data distribution, inserting carefully crafted samples into the training set; this is known as data poisoning. Poisoning changes model behavior and degrades model performance. This paper proposes an algorithm that improves both the efficiency and the security of a distributed AI system against data poisoning. Past active-defense methods often perform a large number of invalid checks, which slows down the operation of the whole system, while passive defense suffers from missing data and slow localization of the error source. The proposed algorithm establishes suspect hypothesis levels to test and extend the verification of data packets, and estimates the risk of terminal data. It improves the health of a distributed AI system by preventing poisoning attacks while ensuring efficient and safe system operation.
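The abstract does not spell out the algorithm's details, but the named ideas (a credit probability mechanism, suspect hypothesis levels, and extended packet verification) can be illustrated with a minimal sketch. The class and parameter names below (`CreditInspector`, `base_rate`, `max_suspect_level`) are hypothetical, and the packet check is a stand-in; this is one plausible reading, not the paper's actual implementation.

```python
import random

class CreditInspector:
    """Hypothetical sketch: each terminal carries a credit score; packets from
    low-credit or suspect terminals are inspected with higher probability, and
    a failed check escalates the terminal's suspect level, extending
    verification to its subsequent packets."""

    def __init__(self, base_rate=0.1, max_suspect_level=3):
        self.base_rate = base_rate          # inspection probability for fully trusted terminals
        self.max_suspect_level = max_suspect_level
        self.credit = {}                    # terminal id -> credit score in [0, 1]
        self.suspect_level = {}             # terminal id -> escalation level

    def _inspection_probability(self, terminal):
        credit = self.credit.setdefault(terminal, 1.0)
        level = self.suspect_level.setdefault(terminal, 0)
        # Lost credit or a raised suspect level pushes the rate toward 1.0.
        return min(1.0, self.base_rate + (1.0 - credit) + 0.3 * level)

    def receive(self, terminal, packet, is_poisoned):
        """Return True if the packet is accepted into the training set."""
        if random.random() < self._inspection_probability(terminal):
            if is_poisoned:                 # stand-in for a real packet check
                # Failed check: cut credit and escalate the suspect level.
                self.credit[terminal] = self.credit.get(terminal, 1.0) * 0.5
                self.suspect_level[terminal] = min(
                    self.max_suspect_level,
                    self.suspect_level.get(terminal, 0) + 1)
                return False
            # Passed check: slowly restore credit and relax the level.
            self.credit[terminal] = min(1.0, self.credit.get(terminal, 1.0) + 0.05)
            self.suspect_level[terminal] = max(0, self.suspect_level.get(terminal, 0) - 1)
        return True
```

Under this reading, the credit score steers the active-defense budget toward terminals with a history of failed checks, so the invalid checks the abstract complains about are concentrated on low-risk traffic far less often.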
Authors: Xuezhu Xin; Yang Bai; Haixin Wang; Yunzhen Mou; Jian Tan (Innovation Center for Intelligent System on Recognition and Decision, Beijing Jinghang Research Institute of Computing and Communication, Beijing, China; School of Digital Media and Art Design, Beijing University of Posts and Telecommunications, Beijing, China)
Source: Journal of Computer and Communications, 2021, No. 12, pp. 99-105 (7 pages)
Keywords: Data Poisoning; Distributed AI System; Credit Probability Mechanism; Inspection Module; Suspect Hypothesis Level