Abstract
Federated learning is an ideal solution to the limitation that conventional edge computing does not preserve users' privacy information. In federated learning, the cloud aggregates local model updates from the devices to generate a global model. To protect devices' privacy, the cloud is designed to have no visibility into how these updates are generated, which makes detecting and defending against malicious model updates a challenging task. Unlike existing works that struggle to tolerate adversarial attacks, this paper excludes malicious updates from the aggregation of the global model. This paper focuses on Byzantine attacks and backdoor attacks in the federated learning setting. We propose a federated learning framework, which we call Federated Reconstruction Error Probability Distribution (FREPD). FREPD uses a variational autoencoder (VAE) to compute the reconstruction error of each update. Updates whose reconstruction errors exceed the average reconstruction error are deemed malicious and removed. Meanwhile, we apply the Kolmogorov-Smirnov test to choose a proper probability distribution function and tune its parameters to fit the distribution of reconstruction errors from observed benign updates. We then use the fitted distribution function to estimate the probability that an unseen reconstruction error belongs to the benign reconstruction error distribution. Based on this probability, we classify model updates as benign or malicious, and only benign updates are used to aggregate the global model. FREPD is tested with extensive experiments on independent and identically distributed (IID) and non-IID federated benchmarks, showing competitive performance over existing aggregation methods under Byzantine and backdoor attacks.
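To make the filtering pipeline described above concrete, the following is a minimal Python sketch of the two-stage update screening: removing updates with above-average VAE reconstruction error, fitting a distribution to the remaining (benign) errors via the Kolmogorov-Smirnov statistic, and then scoring each update by its probability under that distribution. The function name `filter_updates`, the candidate distribution families, the use of the upper-tail probability as the benign-membership estimate, and the 0.05 threshold are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np
from scipy import stats

def filter_updates(recon_errors, candidate_dists=("norm", "gamma", "lognorm"),
                   prob_threshold=0.05):
    """Screen per-client VAE reconstruction errors; return indices kept as benign.

    recon_errors: 1-D array, one reconstruction error per local model update.
    Hypothetical sketch of the FREPD-style filtering described in the abstract.
    """
    errors = np.asarray(recon_errors, dtype=float)

    # Stage 1: updates whose reconstruction error exceeds the mean are treated
    # as suspicious; the rest form the observed benign sample.
    benign_errors = errors[errors <= errors.mean()]

    # Stage 2: choose the distribution family that best fits the benign errors,
    # using the Kolmogorov-Smirnov statistic as the goodness-of-fit criterion.
    best = None
    for name in candidate_dists:
        dist = getattr(stats, name)
        params = dist.fit(benign_errors)
        ks_stat, _ = stats.kstest(benign_errors, name, args=params)
        if best is None or ks_stat < best[0]:
            best = (ks_stat, dist, params)
    _, dist, params = best

    # Score every update: a low upper-tail probability under the fitted benign
    # distribution marks the update as malicious (assumed scoring rule).
    tail_prob = dist.sf(errors, *params)
    return np.where(tail_prob >= prob_threshold)[0]
```

The indices returned by such a filter would then select the local updates that participate in the global aggregation step; all other updates are discarded for that round.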
Funding
This research is supported by the Education Ministry-China Mobile Research Funding under Grant No. MCM20170404.