Secure Deep Neural Network Based on Defensive Distillation and Federated Learning
Abstract: Federated learning (FL) is a multi-party collaborative machine learning scheme proposed by Google in 2016, and deep neural network (DNN) algorithms have been shown to perform well in federated learning. However, recent studies have shown that, like other machine learning techniques, DNNs are vulnerable to adversarial samples. Such attacks seriously threaten the security of DNN-backed systems and can, in some cases, have catastrophic consequences. We combine DNNs with defensive distillation and federated learning to reduce the effect of adversarial samples on the DNN while protecting user privacy, and we use sparse ternary compression (STC) to reduce the communication overhead of training in federated learning. Experiments show that, compared with a scheme that does not use STC, the scheme using STC greatly reduces communication overhead while ensuring system and data security.
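The federated learning setting described above rests on server-side aggregation of client updates. The paper's exact protocol is not given in this abstract, so the following is a minimal NumPy sketch of federated averaging (FedAvg, McMahan et al., 2017), the baseline aggregation rule most FL schemes build on; the function name fedavg, the client count, and the weight values are all illustrative.

```python
import numpy as np

def fedavg(client_weights, client_sizes):
    # Federated averaging: the server combines client model weights,
    # weighted by each client's local dataset size.
    total = sum(client_sizes)
    return sum(w * (n / total) for w, n in zip(client_weights, client_sizes))

# Three hypothetical clients, each holding a flat weight vector.
clients = [np.random.randn(5) for _ in range(3)]
sizes = [100, 300, 600]  # local dataset sizes (illustrative)
global_w = fedavg(clients, sizes)
print(global_w)
```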
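Defensive distillation hardens a DNN by training it on soft labels produced with a temperature-scaled softmax. The sketch below assumes the standard formulation of Papernot et al. (2016) rather than anything specific to this paper; softmax_T and the logit values are illustrative.

```python
import numpy as np

def softmax_T(logits, T):
    # Temperature-scaled softmax: higher T yields a softer
    # probability distribution, the core of defensive distillation.
    z = logits / T
    z = z - z.max(axis=-1, keepdims=True)  # numerical stability
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

# Teacher logits for one example over 3 classes (hypothetical values).
teacher_logits = np.array([4.0, 1.0, 0.5])

hard = softmax_T(teacher_logits, T=1.0)   # near one-hot
soft = softmax_T(teacher_logits, T=20.0)  # soft labels for the student

print("T=1 :", np.round(hard, 3))
print("T=20:", np.round(soft, 3))
# The student is trained on the soft labels at the same temperature T
# and deployed at T=1; the smoothed decision surface shrinks the
# gradients an attacker can exploit to craft adversarial samples.
```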
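Sparse ternary compression reduces upstream communication by transmitting only the top-k largest-magnitude entries of each weight update, quantized to a single shared magnitude with a sign. The abstract does not spell out the paper's STC variant, so this sketch follows the common formulation (Sattler et al., 2019); stc_compress, stc_decompress, and the sparsity value are illustrative.

```python
import numpy as np

def stc_compress(delta, sparsity=0.01):
    # Keep the top-k entries of the update by magnitude and replace
    # each with sign * mean kept magnitude, so the update becomes
    # ternary {-mu, 0, +mu}: only indices, signs, and one float
    # need to be transmitted.
    k = max(1, int(sparsity * delta.size))
    flat = delta.ravel()
    idx = np.argpartition(np.abs(flat), -k)[-k:]   # top-k by magnitude
    mu = np.abs(flat[idx]).mean()                  # shared magnitude
    signs = np.sign(flat[idx]).astype(np.int8)
    return idx, signs, mu, delta.shape

def stc_decompress(idx, signs, mu, shape):
    # Rebuild the dense (approximate) update on the server.
    out = np.zeros(int(np.prod(shape)), dtype=np.float32)
    out[idx] = signs * mu
    return out.reshape(shape)

# Hypothetical client update: compress, transmit idx/signs/mu, rebuild.
update = np.random.randn(1000).astype(np.float32)
idx, signs, mu, shape = stc_compress(update, sparsity=0.01)
approx = stc_decompress(idx, signs, mu, shape)
print("nonzero entries sent:", len(idx), "of", update.size)
```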
Authors: XIAO Linsheng; QIAN Shenyi (College of Computer and Communication Engineering, Zhengzhou University of Light Industry, Zhengzhou 450002, China)
Source: Journal of Hubei Minzu University (Natural Science Edition), 2021, No. 2, pp. 168-174 (7 pages)
Funding: National Natural Science Foundation of China (61672470, 61802350); National Key R&D Program of China (2016YFE0100600, 2016YFE0100300)
Keywords: federated learning; deep neural network; adversarial samples; sparse ternary compression; communication overhead