Journal Articles — 1 result found
DEFEAT: A decentralized federated learning against gradient attacks
Authors: Guangxi Lu, Zuobin Xiong, Ruinian Li, Nael Mohammad, Yingshu Li, Wei Li. 《High-Confidence Computing》, 2023, Issue 3, pp. 22-29 (8 pages)
Abstract: As one of the most promising machine learning frameworks to emerge in recent years, federated learning (FL) has received a lot of attention. The main idea of centralized FL is to train a global model by aggregating local model parameters while keeping users' private data local. However, recent studies have shown that traditional centralized federated learning is vulnerable to various attacks, such as gradient attacks, in which a malicious server collects local model gradients and uses them to recover the private data stored on the client. In this paper, we propose a decentralized federated learning against aTtacks (DEFEAT) framework and use it to defend against gradient attacks. The decentralized structure adopted in this paper uses a peer-to-peer network to transmit, aggregate, and update local models. In DEFEAT, participating clients only need to communicate with their single-hop neighbors to learn the global model, so that model accuracy and communication cost during training are well balanced. Through a series of experiments and detailed case studies on real datasets, we evaluate the model performance of DEFEAT and its capability to preserve privacy against gradient attacks.
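The abstract describes a peer-to-peer protocol in which each client trains on its own data and then exchanges model parameters only with its single-hop neighbors, with no central server ever seeing raw gradients. The Python sketch below illustrates that general idea with a hypothetical ring topology, toy linear models, and plain neighbor averaging; the topology, update rule, and all names are assumptions for illustration, not the authors' DEFEAT implementation.

```python
# Illustrative sketch of decentralized, neighbor-averaging federated learning.
# Assumptions (not from the paper): ring topology, linear regression clients,
# synchronous rounds, simple parameter averaging with single-hop neighbors.
import numpy as np

NUM_CLIENTS = 5
# Hypothetical ring topology: each client talks only to its single-hop neighbors.
neighbors = {i: [(i - 1) % NUM_CLIENTS, (i + 1) % NUM_CLIENTS] for i in range(NUM_CLIENTS)}

# Toy private datasets and one linear model per client; there is no central server.
rng = np.random.default_rng(0)
data = [(rng.normal(size=(20, 3)), rng.normal(size=20)) for _ in range(NUM_CLIENTS)]
weights = [np.zeros(3) for _ in range(NUM_CLIENTS)]

def local_step(w, X, y, lr=0.05):
    """One gradient-descent step on the client's private data (data never leaves the client)."""
    grad = X.T @ (X @ w - y) / len(y)
    return w - lr * grad

for rnd in range(50):
    # 1) Each client updates its model on its own local data.
    weights = [local_step(w, X, y) for w, (X, y) in zip(weights, data)]
    # 2) Each client averages parameters with its single-hop neighbors only,
    #    so model information spreads through the peer-to-peer network
    #    without a central aggregator collecting gradients.
    weights = [
        np.mean([weights[i]] + [weights[j] for j in neighbors[i]], axis=0)
        for i in range(NUM_CLIENTS)
    ]

print("client 0 weights after decentralized training:", weights[0])
```

Over repeated rounds, neighbor averaging propagates each client's update across the whole network, which is why a client can approach the global model while only ever communicating one hop away.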
Keywords: Federated learning; Peer-to-peer network; Privacy protection