
Privacy-preserving federated learning framework with irregular-majority users

Cited by: 2
Abstract: To address two problems in federated learning, reduced aggregation efficiency when handling a majority of irregular users and parameter-privacy leakage caused by plaintext communication, a privacy-preserving federated learning framework robust to irregular users, PPRFL (privacy-preserving robust federated learning), was proposed based on a designed secure division protocol. The framework outsources model-related computation to two edge servers to reduce the high computational overhead of homomorphic encryption. It not only allows the model and its related information to be aggregated in ciphertext on the edge servers, but also lets users compute model reliability locally, avoiding the extra communication overhead that conventional methods incur by adopting a secure multiplication protocol. On top of this framework, to evaluate model generalization more accurately, each user, after updating the local model parameters, computes the model loss jointly on a validation set issued by the edge server and a validation set held locally, and then dynamically updates the model reliability, combined with the historical loss values, to serve as the model weight. Further, the model weight is scaled under the guidance of the reliability prior, and the ciphertext model and ciphertext weight are sent to the edge servers to aggregate and update the global model parameters, ensuring that changes to the global model are contributed mainly by users with high-quality data and improving convergence speed. Security analysis via the hybrid argument model shows that PPRFL effectively protects the privacy of the model parameters and of intermediate interaction parameters, including user reliability. Experimental results show that when all participants in the federated aggregation task are irregular users, PPRFL still achieves 92% accuracy and converges 1.4 times faster than PPFDL (privacy-preserving federated deep learning with irregular users); when 80% of the users hold only noisy training data, PPRFL still achieves 89% accuracy and converges 2.3 times faster than PPFDL.
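The reliability-weighted aggregation described in the abstract can be sketched in plaintext as follows. This is a minimal illustration only: the paper's secure division protocol, homomorphic encryption, and two-server ciphertext aggregation are omitted, and the reliability-update rule (`update_reliability`, the blending factor `alpha`, and the loss-to-score mapping) is a hypothetical stand-in, since the abstract does not give the exact formula.

```python
import numpy as np

def update_reliability(prev_rel, loss, alpha=0.5):
    # Hypothetical update rule: blend the previous reliability
    # (the loss-value history) with a score from the current
    # validation loss; lower loss yields a higher score.
    score = 1.0 / (1.0 + loss)
    return alpha * prev_rel + (1.0 - alpha) * score

def aggregate(models, reliabilities):
    # Reliability-weighted average of user model parameters,
    # standing in for the ciphertext aggregation on edge servers.
    w = np.asarray(reliabilities, dtype=float)
    w = w / w.sum()  # normalize reliabilities into weights
    return sum(wi * m for wi, m in zip(w, models))

# Toy run: three users, the third "irregular" (large validation loss).
models = [np.array([1.0, 1.0]), np.array([1.1, 0.9]), np.array([5.0, 5.0])]
prev_rel = [0.5, 0.5, 0.5]
losses = [0.1, 0.2, 2.0]
rel = [update_reliability(p, l) for p, l in zip(prev_rel, losses)]
global_model = aggregate(models, rel)
```

The irregular user's large loss lowers its reliability, so the global update is dominated by the two users with high-quality data, which is the mechanism the abstract credits for the improved convergence speed.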
Authors: CHEN Qianxin, BI Renwan, LIN Jie, JIN Biao, XIONG Jinbo (College of Computer and Cyber Security, Fujian Normal University, Fuzhou 350117, China; Fujian Provincial Key Laboratory of Network Security and Cryptology, Fujian Normal University, Fuzhou 350007, China)
Source: Chinese Journal of Network and Information Security (《网络与信息安全学报》), 2022, No. 1, pp. 139-150
Funding: National Natural Science Foundation of China (61872088, 61872090, U1905211); Natural Science Foundation of Fujian Province (2019J01276)
Keywords: federated learning; privacy-preserving; secure aggregation; irregular-majority users; secure division protocol