Secure Federated Learning over Wireless Communication Networks with Model Compression

Abstract: Although federated learning (FL) has become very popular recently, it is vulnerable to gradient leakage attacks. Recent studies have shown that attackers can reconstruct clients' private data from shared models or gradients. Many existing works add privacy protection mechanisms, such as differential privacy (DP) and homomorphic encryption, to prevent user privacy leakage. These defenses may increase computation and communication costs or degrade FL performance, and they do not consider the impact of wireless network resources on the FL training process. Herein, we propose weight compression, a defense method against gradient leakage attacks for FL over wireless networks. The gradient compression matrix is determined by the user's location and channel conditions, and Gaussian noise is added to the compressed gradients to strengthen the defense. The joint design of wireless resource allocation and the weight compression matrix is formulated as an optimization problem that minimizes the FL loss function. To solve it, we first analyze the convergence rate of FL and quantify the effect of the weight matrix on FL convergence. We then find the optimal resource block (RB) allocation by exhaustive search or ant colony optimization (ACO), and use the CVX toolbox to obtain the optimal weight matrix. Simulation results show that the optimized RB allocation accelerates the convergence of FL.
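The core defense step described above, compressing each client's gradient with a matrix and then perturbing it with Gaussian noise, can be sketched as follows. This is a minimal illustrative sketch: the names `compress_and_perturb`, the random matrix `W`, and the noise scale `sigma` are placeholders, whereas in the paper the compression matrix and noise are derived from the user's channel conditions and the joint optimization.

```python
import numpy as np

rng = np.random.default_rng(0)

def compress_and_perturb(grad, W, sigma):
    """Compress a gradient with matrix W, then add Gaussian noise.

    grad  : length-d gradient vector from a client
    W     : k x d compression matrix (k < d reduces what an attacker sees)
    sigma : standard deviation of the additive Gaussian noise
    """
    compressed = W @ grad                          # k-dimensional result
    noise = rng.normal(0.0, sigma, size=compressed.shape)
    return compressed + noise

d, k = 8, 4                                        # original / compressed sizes
grad = rng.normal(size=d)                          # a client's local gradient
W = rng.normal(size=(k, d)) / np.sqrt(d)           # placeholder compression matrix
protected = compress_and_perturb(grad, W, sigma=0.1)
print(protected.shape)                             # (4,)
```

The server aggregates only the protected vectors, so an attacker observing the uplink sees a lower-dimensional, noisy projection rather than the raw gradient.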
Source: ZTE Communications (中兴通讯技术英文版), 2023, No. 1, pp. 46-54.