
Backdoor Defense of Horizontal Federated Learning Based on Random Cutting and Gradient Clipping (cited by 2)
Abstract: Federated learning resolves the big-data dilemma in which user privacy conflicts with data sharing, embodying the idea that "data are available but not visible." However, the federated model is exposed to backdoor attacks during training: an attacker locally trains an attack model containing a backdoor task and amplifies its parameters by a certain ratio, thereby implanting the backdoor into the federated model. To counter this backdoor threat to horizontal federated learning, this paper proposes, from a game-theoretic perspective, a defense strategy and technical scheme that combines random cutting (random layer selection) with gradient clipping. After receiving the gradients submitted by the participants, the central server randomly determines the neural network layers to use from each participant, aggregates the participants' gradient contributions layer by layer, and clips the gradient parameters with a gradient threshold. Gradient clipping and random cutting weaken the influence of abnormal data from individual participants, so the federated model enters a plateau when learning backdoor features and fails to learn them for a long time, while the learning of normal tasks is unaffected. If the central server ends federated learning within this plateau, the backdoor attack is defended against. Experimental results show that the proposed method effectively defends against potential backdoor threats in federated learning while preserving model accuracy. It can therefore be applied in horizontal federated learning scenarios to safeguard the security of federated learning.
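The record gives no code for the aggregation step described above. The following is a minimal illustrative sketch in Python/NumPy of how a central server might combine per-participant random layer selection ("random cutting") with threshold-based gradient clipping. The function and parameter names (aggregate_with_random_cutting, keep_ratio, clip_threshold) are assumptions for illustration, not the authors' implementation or exact settings.

    # Illustrative sketch (not the authors' code): server-side aggregation that
    # combines per-participant random layer selection with gradient clipping.
    import numpy as np

    def clip_by_threshold(grad, threshold):
        """Clip each gradient entry to the range [-threshold, threshold]."""
        return np.clip(grad, -threshold, threshold)

    def aggregate_with_random_cutting(participant_grads, keep_ratio=0.5,
                                      clip_threshold=0.1, rng=None):
        """participant_grads: list of dicts {layer_name: np.ndarray}, one per participant.
        keep_ratio and clip_threshold are assumed hyperparameters."""
        rng = rng or np.random.default_rng()
        layer_names = list(participant_grads[0].keys())
        # For each participant, randomly choose which layers contribute this round.
        selected = [
            set(rng.choice(layer_names,
                           size=max(1, int(keep_ratio * len(layer_names))),
                           replace=False))
            for _ in participant_grads
        ]
        aggregated = {}
        for name in layer_names:
            # Collect clipped contributions only from participants whose random
            # selection includes this layer.
            contribs = [
                clip_by_threshold(grads[name], clip_threshold)
                for grads, sel in zip(participant_grads, selected)
                if name in sel
            ]
            # If no participant was selected for this layer, skip the update.
            aggregated[name] = (np.mean(contribs, axis=0)
                                if contribs else np.zeros_like(participant_grads[0][name]))
        return aggregated

    if __name__ == "__main__":
        # Example: two honest participants and one attacker submitting amplified
        # (potentially backdoored) gradients; clipping bounds the attacker's influence.
        shapes = {"conv1": (3, 3), "fc": (4,)}
        rng = np.random.default_rng(0)
        honest = [{k: rng.normal(0, 0.01, s) for k, s in shapes.items()} for _ in range(2)]
        attacker = {k: rng.normal(0, 0.01, s) * 50 for k, s in shapes.items()}
        update = aggregate_with_random_cutting(honest + [attacker], rng=rng)
        print({k: float(np.abs(v).max()) for k, v in update.items()})

In this sketch, clipping bounds the magnitude of any single participant's update, while random cutting ensures no participant contributes to every layer in every round; the abstract credits this combination with delaying backdoor learning without harming the normal task.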
Authors: XU Wentao (许文韬); WANG Binjun (王斌君) (College of Information and Cyber Security, People's Public Security University of China, Beijing 100038, China)
Source: Computer Science (《计算机科学》), CSCD, Peking University Core Journal, 2023, No. 11, pp. 356-363 (8 pages)
Funding: Key Project of the National Social Science Fund of China (20AZD114).
Keywords: Horizontal federated learning; Backdoor attack; Random cutting; Gradient clipping