Journal Articles
2 articles found
1. FedDAA: a robust federated learning framework to protect privacy and defend against adversarial attack
Authors: Shiwei LU, Ruihu LI, Wenbin LIU. Frontiers of Computer Science (SCIE, EI, CSCD), 2024, Issue 2, pp. 107-122 (16 pages)
Federated learning (FL) has emerged to break data silos and protect clients' privacy in the field of artificial intelligence. However, the deep leakage from gradient (DLG) attack can fully reconstruct clients' data from the submitted gradient, which threatens the fundamental privacy of FL. Although cryptology and differential privacy prevent privacy leakage from gradients, they impose a negative effect on communication overhead or model performance. Moreover, the original distribution of the local gradient is changed in these schemes, which makes it difficult to defend against adversarial attacks. In this paper, we propose a novel federated learning framework with model decomposition, aggregation and assembling (FedDAA), along with a training algorithm, to train the federated model, where the local gradient is decomposed into multiple blocks and sent to different proxy servers to complete aggregation. To give FedDAA better privacy protection performance, an indicator based on image structural similarity is designed to measure privacy leakage under the DLG attack, and an optimization method is given to protect privacy with the fewest proxy servers. In addition, we give defense schemes against adversarial attacks in FedDAA and design an algorithm to verify the correctness of the aggregated results. Experimental results demonstrate that FedDAA can reduce the structural similarity between the reconstructed image and the original image to 0.014 while maintaining model convergence accuracy at 0.952, thus achieving the best privacy protection performance and model training effect. More importantly, the defense schemes against adversarial attacks are compatible with privacy protection in FedDAA, and the defense effects are no weaker than those in traditional FL. Moreover, the verification algorithm for aggregation results brings negligible overhead to FedDAA.
Keywords: federated learning; privacy protection; adversarial attacks; aggregated rule; correctness verification
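The abstract's core mechanism, decomposing each local gradient into blocks and aggregating each block on a separate proxy server, can be sketched as follows. This is a minimal illustration only: it assumes a shared random disjoint partition of gradient coordinates and plain mean (FedAvg-style) aggregation at each proxy. The paper's actual decomposition rule and aggregation details are not given in the abstract, and the function names here (`make_partition`, `split_gradient`, `aggregate_and_assemble`) are hypothetical.

```python
import numpy as np

def make_partition(grad_size, num_proxies, seed=0):
    # All clients share this partition so that the block each proxy
    # receives covers the same coordinates for every client.
    rng = np.random.default_rng(seed)
    return np.array_split(rng.permutation(grad_size), num_proxies)

def split_gradient(grad, partition):
    # Decompose one client's flat gradient into disjoint blocks;
    # block p is sent to proxy server p.
    return [grad[idx] for idx in partition]

def aggregate_and_assemble(all_client_blocks, partition, grad_size):
    # Each proxy averages its own block across clients (a simple
    # mean is assumed here), then the server assembles the blocks
    # back into the full aggregated gradient.
    out = np.empty(grad_size)
    for p, idx in enumerate(partition):
        out[idx] = np.mean([blocks[p] for blocks in all_client_blocks], axis=0)
    return out

# Two toy clients, gradient dimension 10, three proxy servers.
grads = [np.ones(10), 3 * np.ones(10)]
part = make_partition(10, 3)
blocks_per_client = [split_gradient(g, part) for g in grads]
agg = aggregate_and_assemble(blocks_per_client, part, 10)  # every entry is 2.0
```

The privacy intuition this sketch captures: no single proxy ever observes a client's full gradient, so a DLG-style reconstruction mounted at any one proxy only has access to a fraction of the coordinates.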
2. Defense against local model poisoning attacks to byzantine-robust federated learning (Cited by: 2)
Authors: Shiwei LU, Ruihu LI, Xuan CHEN, Yuena MA. Frontiers of Computer Science (SCIE, EI, CSCD), 2022, Issue 6, pp. 171-173 (3 pages)
1 Introduction
As a new mode of distributed learning, federated learning (FL) helps multiple organizations or clients jointly train an artificial intelligence model without sharing their own datasets. Compared with the model trained by each client alone, a high-accuracy federated model can be obtained after multiple communication rounds in FL. Due to its characteristics of privacy protection and distributed learning, FL has been applied in many fields, such as the prognosis of pandemic diseases, smart manufacturing systems, etc.
Keywords: client; jointly; model