
PRFL: A Privacy-preserving Robust Aggregation Method for Federated Learning
Abstract: Federated learning allows users to train a model jointly by exchanging model parameters, which reduces the risk of data leakage. However, studies have found that user privacy can still be inferred from model parameters, and many privacy-preserving model aggregation methods have been proposed in response. Moreover, malicious users can corrupt federated aggregation by submitting carefully crafted poisoned models, and when models are aggregated under privacy protection, such users can mount even stealthier poisoning attacks. To achieve privacy protection while resisting poisoning attacks, this paper proposes PRFL, a privacy-preserving robust aggregation method for federated learning. PRFL not only effectively defends against poisoning attacks launched by Byzantine users, but also guarantees the privacy of local models as well as the accuracy and efficiency of the global model. First, a lightweight privacy-preserving model aggregation method under a dual-server architecture is proposed, which aggregates models privately while preserving global-model accuracy and introducing no significant overhead. Then, a ciphertext-domain model distance computation method is proposed, which allows the two servers to compute distances between models without exposing local model parameters; based on this method and the Local Outlier Factor (LOF) algorithm, a poisoned-model detection method is designed. Finally, the security of PRFL is analyzed. Experimental results on two real image datasets show that, in the absence of attacks, PRFL achieves model accuracy close to that of FedAvg, and that under both independent identically distributed (IID) and non-IID data settings, PRFL effectively defends against three state-of-the-art poisoning attacks and outperforms the existing Krum, Median, and Trimmed mean methods.
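The abstract does not spell out the aggregation protocol itself, but a common lightweight construction for the dual-server setting it describes is additive secret sharing between two non-colluding servers. The sketch below is a minimal illustration under that assumption only; all names are hypothetical, and production schemes typically share values over a finite ring rather than masking floats as done here.

```python
import numpy as np

def share_update(update, rng):
    """Client-side: split a model update into two additive shares.
    Server A gets (update - mask), server B gets mask; each share
    alone reveals nothing useful about the update."""
    mask = rng.standard_normal(update.shape)
    return update - mask, mask

def server_sum(shares):
    """Server-side: each server sums the shares it holds, locally."""
    return np.sum(shares, axis=0)

rng = np.random.default_rng(0)
updates = [rng.standard_normal(4) for _ in range(3)]  # toy local updates

shares_a, shares_b = zip(*(share_update(u, rng) for u in updates))
partial_a = server_sum(shares_a)  # computed by server A only
partial_b = server_sum(shares_b)  # computed by server B only

# Recombining the two partial sums yields the exact plaintext average,
# so aggregation accuracy is unaffected by the masking.
global_update = (partial_a + partial_b) / len(updates)
assert np.allclose(global_update, np.mean(updates, axis=0))
```

Because each server only ever sees one masked share per client, neither learns an individual local model, yet the recombined result equals the plaintext average, consistent with the abstract's claim that privacy is achieved without degrading global-model accuracy.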
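For the detection stage, the abstract states that poisoned models are flagged by applying the Local Outlier Factor (LOF) algorithm to model distances. Below is a minimal sketch of that stage using scikit-learn's LocalOutlierFactor on a precomputed distance matrix. In PRFL the distances would come from the paper's secure two-server distance protocol; here they are computed in plaintext purely for illustration, and the neighbor count is an arbitrary choice.

```python
import numpy as np
from sklearn.neighbors import LocalOutlierFactor

def detect_poisoned(distances, n_neighbors=5):
    """Flag updates whose local density deviates from their neighbors'.
    `distances` is an (n, n) pairwise model-distance matrix; returns
    the indices that LOF labels as outliers (-1)."""
    lof = LocalOutlierFactor(n_neighbors=n_neighbors, metric="precomputed")
    labels = lof.fit_predict(distances)  # -1 = outlier, 1 = inlier
    return np.where(labels == -1)[0]

# Toy example: 9 benign updates form a tight cluster, 1 lies far away.
rng = np.random.default_rng(0)
models = np.vstack([rng.normal(0.0, 0.1, (9, 8)),
                    rng.normal(5.0, 0.1, (1, 8))])
distances = np.linalg.norm(models[:, None] - models[None, :], axis=-1)
print(detect_poisoned(distances))  # expected: [9]
```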
Authors: GAO Qi, SUN Yi, GAI Xinmao, WANG Youhe, YANG Fan (School of Cryptography Engineering, Information Engineering University, Zhengzhou 450001, China; Unit 61623, Beijing 100036, China; Unit 93216, Beijing 100085, China)
Source: Computer Science (《计算机科学》), CSCD, Peking University Core Journal, 2024, Issue 11, pp. 356-367 (12 pages)
Keywords: Federated learning; Privacy protection; Poisoning attack; Robust aggregation; Outlier