Abstract
Federated learning is widely used to address data decentralization and can provide privacy protection for data owners. However, because federated learning requires multiple participants, it gives attackers opportunities to compromise the training process. Byzantine attacks pose a great threat to federated learning: Byzantine attackers upload maliciously crafted local models to the server to degrade the prediction performance and training speed of the global model. To defend against Byzantine attacks, we propose a Byzantine-robust federated learning scheme based on backdoor triggers. In our scheme, backdoor triggers are embedded into benign data samples, and malicious local models can then be identified by the server using its validation dataset. Furthermore, we calculate an adjustment factor for each local model according to the parameters of its final layer, which is used to defend against data poisoning-based Byzantine attacks. To further enhance the robustness of our scheme, each local model is weighted and aggregated according to the number of times it has been identified as malicious. Experimental results show that our scheme is effective against Byzantine attacks in both independent identically distributed (IID) and non-independent identically distributed (non-IID) scenarios.
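To make the aggregation idea in the abstract concrete, the following is a minimal sketch, not the authors' implementation. It assumes local models are represented as flattened parameter vectors, that the server already has each model's accuracy on its trigger-embedded validation samples, and that the final-layer adjustment factor is instantiated as cosine similarity to the median final layer; the threshold, the inverse-count history weighting, and all function names are illustrative assumptions.

```python
# Hypothetical sketch of trigger-based Byzantine-robust aggregation (illustration only).
import numpy as np

def aggregate(local_models, final_layers, trigger_accuracies,
              malicious_counts, acc_threshold=0.5):
    """Weighted aggregation of flattened local model parameter vectors.

    local_models       : list of 1-D np.ndarray, flattened local model parameters
    final_layers       : list of 1-D np.ndarray, parameters of each model's final layer
    trigger_accuracies : accuracy of each local model on the server's
                         trigger-embedded validation samples (assumed given)
    malicious_counts   : running count of how often each client was flagged malicious
    """
    # 1) Flag models that behave abnormally on trigger-embedded validation data.
    for i, acc in enumerate(trigger_accuracies):
        if acc < acc_threshold:
            malicious_counts[i] += 1

    # 2) Adjustment factor from final-layer parameters: here, cosine similarity to the
    #    median final layer (one plausible instantiation, assumed for this sketch).
    median_final = np.median(np.stack(final_layers), axis=0)

    def cos_sim(a, b):
        return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

    adjust = np.array([max(cos_sim(f, median_final), 0.0) for f in final_layers])

    # 3) Down-weight clients in proportion to how often they have been flagged.
    history = 1.0 / (1.0 + np.array(malicious_counts, dtype=float))

    weights = adjust * history
    if weights.sum() == 0:
        weights = np.ones(len(local_models))
    weights /= weights.sum()

    # 4) Weighted average of local models -> new global model parameters.
    return np.sum([w * m for w, m in zip(weights, np.stack(local_models))], axis=0)
```

In this sketch, a client repeatedly flagged on the trigger-embedded validation set sees its history weight shrink over rounds, so its updates contribute less to the global model even if a single round's check is inconclusive.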
Funding
in part by the National Social Science Foundation of China under Grant 20BTQ058;
in part by the Natural Science Foundation of Hunan Province under Grant 2023JJ50033.