Abstract
Although federated learning (FL) has become very popular recently, it is vulnerable to gradient leakage attacks. Recent studies have shown that attackers can reconstruct clients' private data from shared models or gradients. Many existing works add privacy-protection mechanisms, such as differential privacy (DP) and homomorphic encryption, to prevent leakage of user privacy. However, these defenses may increase computation and communication costs or degrade the performance of FL, and they do not consider the impact of wireless network resources on the FL training process. Herein, we propose weight compression, a defense against gradient leakage attacks for FL over wireless networks. The gradient compression matrix is determined by each user's location and channel conditions, and we additionally add Gaussian noise to the compressed gradients to strengthen the defense. This joint design of wireless resource allocation and the weight compression matrix is formulated as an optimization problem whose objective is to minimize the FL loss function. To solve it, we first analyze the convergence rate of FL and quantify the effect of the weight matrix on FL convergence. We then find the optimal resource block (RB) allocation by exhaustive search or ant colony optimization (ACO), and use the CVX toolbox to obtain the optimal weight matrix that minimizes the objective function. Simulation results show that the optimized RB allocation accelerates the convergence of FL.
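To make the core defense concrete, the following is a minimal illustrative sketch (not the paper's exact formulation) of the two operations the abstract describes: masking a client's gradient with a compression matrix and adding Gaussian noise to the entries that are transmitted. The function name `compress_and_perturb`, the binary mask, and the 50% keep-rate are all hypothetical choices for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

def compress_and_perturb(grad, mask, noise_std):
    """Sketch of the defense: element-wise gradient compression plus Gaussian noise.

    grad      -- flat gradient vector from one client
    mask      -- hypothetical 0/1 compression vector (entries with 0 are dropped)
    noise_std -- standard deviation of the additive Gaussian noise
    """
    compressed = grad * mask                             # zero out dropped entries
    noise = rng.normal(0.0, noise_std, size=grad.shape)  # Gaussian perturbation
    return compressed + noise * mask                     # perturb only kept entries

grad = rng.normal(size=8)
mask = (rng.random(8) < 0.5).astype(float)  # hypothetical random 50% keep-rate
protected = compress_and_perturb(grad, mask, noise_std=0.1)
# Dropped entries are exactly zero; kept entries are noisy versions of grad.
```

In the paper's setting the mask would instead be derived from each user's location and channel conditions rather than drawn at random.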