Journal articles: 1 result found
MPHM: Model poisoning attacks on federal learning using historical information momentum (Cited by: 1)
Authors: Lei Shi, Zhen Chen, Yucheng Shi, Lin Wei, Yongcai Tao, Mengyang He, Qingxian Wang, Yuan Zhou, Yufei Gao. Security and Safety, 2023, Issue 4, pp. 6-18 (13 pages).
Federated learning (FL) has developed rapidly as individuals and industry place increasing emphasis on data. Federated learning allows participants to jointly train a global model without sharing local data, which significantly enhances data privacy. However, federated learning is vulnerable to poisoning attacks by malicious participants. Because the server has no access to the participants' training process, attackers can compromise the global model by uploading carefully crafted malicious local updates under the guise of normal participants. Current model poisoning attacks usually add small perturbations to the trained local model to craft harmful local updates, with the attacker searching for a perturbation size that bypasses robust detection methods while corrupting the global model as much as possible. In contrast, we propose a novel model poisoning attack based on the momentum of historical information (MPHM): the attacker crafts new malicious updates by dynamically constructing perturbations from the historical information of local training, which makes the malicious updates more effective and stealthy. Our attack aims to indiscriminately reduce the testing accuracy of the global model while requiring minimal information. Experiments show that under classical defenses, our attack degrades the accuracy of the global model significantly more than other advanced poisoning attacks.
Keywords: Federated learning; Poisoning attacks; Security; Privacy
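The abstract only sketches the core idea of building perturbations from the momentum of historical local updates. As a rough, hypothetical illustration (the function name, update rule, and parameters below are assumptions for exposition, not the paper's actual MPHM algorithm), a momentum-style perturbation of a local update could look like this in Python:

```python
import numpy as np

def craft_malicious_update(benign_update, history, momentum=0.9, scale=1.0):
    """Illustrative sketch only: accumulate historical update information
    with a momentum term, then perturb the benign local update against
    that accumulated direction before uploading it to the server.
    This is an assumed update rule, not the MPHM paper's method."""
    # Exponential moving average over past local updates (the "history").
    history = momentum * history + (1.0 - momentum) * benign_update
    # Normalized perturbation along the accumulated direction.
    perturbation = scale * history / (np.linalg.norm(history) + 1e-12)
    # Push the uploaded update away from the benign direction.
    malicious_update = benign_update - perturbation
    return malicious_update, history

# Toy usage: a 5-dimensional "model update" over three training rounds.
rng = np.random.default_rng(0)
hist = np.zeros(5)
for rnd in range(3):
    benign = rng.normal(size=5)
    mal, hist = craft_malicious_update(benign, hist)
    print(f"round {rnd}: benign {benign.round(2)} -> malicious {mal.round(2)}")
```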