
Application of Parameter Decoupling in Differentially Private Federated Learning
Abstract: Federated learning (FL) is an advanced privacy-preserving machine learning technique in which multiple parties collaboratively train a shared model by exchanging model parameters, without centrally aggregating raw data. Although FL participants never share their data explicitly, many studies have shown that FL remains vulnerable to a variety of privacy inference attacks that leak private information. To address this problem, the research community has proposed numerous solutions. One approach with strict privacy guarantees applies local differential privacy (LDP) to federated learning: each participant adds random noise to its model parameters before uploading them, which effectively resists inference attacks by malicious adversaries. However, the noise introduced by LDP degrades model performance, and recent work attributes this degradation to the additional heterogeneity that LDP introduces across clients. To counter the performance loss that LDP causes in FL, this paper proposes PD-LDPFL, a parameter-decoupling federated learning scheme under local differential privacy: in addition to the base model distributed by the server, each client learns personalized input and output models locally. During transmission, a client uploads only the noised parameters of the base model, while the personalized models remain local and adaptively reshape the input and output distributions of the client's local data, mitigating the extra heterogeneity introduced by LDP and reducing accuracy loss. Moreover, the study finds that even under a relatively high privacy budget, the scheme naturally resists some gradient-based privacy inference attacks, such as deep leakage from gradients. Experiments on three commonly used datasets, MNIST, FMNIST, and CIFAR-10, show that compared with traditional differentially private federated learning methods, the proposed scheme not only achieves better performance but also provides additional security.
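As a hedged illustration of the two mechanisms the abstract describes, the sketch below combines LDP perturbation of uploaded parameters with parameter decoupling. The function names, the L1-clipping strategy, the Laplace noise calibration, and the `DecoupledClient` class are all illustrative assumptions for exposition; the paper's actual mechanism and model split are not specified here.

```python
import numpy as np

def ldp_perturb(params, epsilon, clip_norm=1.0):
    """Clip the parameter list to total L1 norm <= clip_norm, then add
    per-coordinate Laplace noise with scale 2*clip_norm/epsilon.
    With this clipping, the L1 distance between any two admissible
    parameter vectors is at most 2*clip_norm, so the Laplace mechanism
    yields epsilon-LDP. (Illustrative calibration only.)"""
    l1 = sum(np.abs(p).sum() for p in params)
    scale = min(1.0, clip_norm / (l1 + 1e-12))
    noise_scale = 2.0 * clip_norm / epsilon
    return [p * scale + np.random.laplace(0.0, noise_scale, p.shape)
            for p in params]

class DecoupledClient:
    """Hypothetical client following the decoupling idea: only the shared
    base model is perturbed and uploaded; the personalized input and
    output models never leave the client."""
    def __init__(self, input_params, base_params, output_params):
        self.input_params = input_params    # personalized, stays local
        self.base_params = base_params      # shared with the server
        self.output_params = output_params  # personalized, stays local

    def upload(self, epsilon):
        # Only the noised base-model parameters are ever transmitted.
        return ldp_perturb(self.base_params, epsilon)
```

Under this sketch, the server aggregates only the uploaded base-model parameters (e.g. by averaging), while each client composes input, base, and output models locally, letting the personalized layers absorb the distribution shift that the LDP noise induces.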
Authors: 王梓行 (WANG Zihang), 杨敏 (YANG Min), 魏子重 (WEI Zichong) — Key Laboratory of Aerospace Information Security and Trusted Computing, Ministry of Education, School of Cyber Science and Engineering, Wuhan University, Wuhan 430072, China; Inspur Group Scientific Research Institute, Jinan 250101, China
Source: Computer Science (《计算机科学》), CSCD / Peking University Core Journal, 2024, Issue 11, pp. 379-388 (10 pages)
Funding: National Natural Science Foundation of China (62172308); National Key Research and Development Program of China (2021YFB2700200)
Keywords: Federated learning; Differential privacy; Heterogeneity; Parameter decoupling; Privacy preserving