Abstract
In federated learning, each distributed client keeps its training data local while a central server collects gradients to jointly train a global model, which offers both good performance and strong privacy protection. However, research has shown that the transmitted gradients can leak private training data. To address the shortcomings of existing secure federated learning algorithms, such as poor model learning performance, high computational cost, and defense against only a single type of attack, this paper proposes a privacy-enhanced federated learning algorithm that resists inference attacks. First, an optimization problem is formulated that maximizes the distance between the training data recovered by inversion and the true training data; solving it with a quasi-Newton method yields new features with resistance to inference attacks. Second, gradients are regenerated from these new features to achieve gradient reconstruction, and the model parameters are updated with the reconstructed gradients, which improves the model's privacy protection capability. Finally, simulation results show that the proposed algorithm can resist two types of inference attacks simultaneously, and that it outperforms other secure schemes in both protection effect and convergence speed.
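The core step described above, perturbing the features so that gradient inversion recovers data far from the true inputs, with the search carried out by a quasi-Newton solver, can be illustrated with a minimal toy sketch. This is not the paper's algorithm or notation: the names (`true_x`, `neg_distance`), the box bounds, and the use of SciPy's L-BFGS-B routine (a standard limited-memory quasi-Newton method) as the solver are illustrative assumptions.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
true_x = rng.normal(size=8)  # toy stand-in for one sample's true features

def neg_distance(z):
    # Quasi-Newton routines minimize, so we negate the squared distance
    # between the candidate feature z and the true data, which we want
    # to maximize.
    return -np.sum((z - true_x) ** 2)

# L-BFGS-B is a limited-memory quasi-Newton method with box constraints;
# the bounds keep the perturbed feature in a plausible range instead of
# letting the unbounded maximization diverge.
res = minimize(
    neg_distance,
    x0=true_x + 0.01,  # start near the true features
    method="L-BFGS-B",
    bounds=[(-3.0, 3.0)] * true_x.size,
)
new_features = res.x  # far from true_x; used to regenerate gradients
```

In the actual algorithm, `new_features` would then replace the true features when computing the gradients sent to the server, so that an attacker inverting those gradients recovers data distant from the real training samples.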
Authors
ZHAO Yuhao; CHEN Siguang; SU Jian (School of Internet of Things, Nanjing University of Posts and Telecommunications, Nanjing 210003, China; School of Computer Science, Nanjing University of Information Science and Technology, Nanjing 210044, China)
Source
Computer Science (《计算机科学》), 2023, Issue 9, pp. 62-67 (6 pages); indexed in CSCD and the Peking University Core Journals list (北大核心)
Funding
National Natural Science Foundation of China (61971235)
Jiangsu Province "333 High-Level Talents Training Project"
China Postdoctoral Science Foundation, First-Class General Program (2018M630590)
Jiangsu Postdoctoral Research Funding Program (2021K501C)
"1311" Talent Plan of Nanjing University of Posts and Telecommunications
Keywords
Federated learning
Inference attack
Privacy preservation
Gradient perturbation