The development of Intelligent Railway Transportation Systems necessitates incorporating privacy-preserving mechanisms into AI models to protect sensitive information and enhance system efficiency. Federated learning offers a promising solution by allowing multiple clients to train models collaboratively without sharing private data. However, despite its privacy benefits, federated learning systems are vulnerable to poisoning attacks, where adversaries alter local model parameters on compromised clients and send malicious updates to the server, potentially compromising the global model’s accuracy. In this study, we introduce PMM (Perturbation coefficient Multiplied by Maximum value), a new poisoning attack that perturbs model updates layer by layer, demonstrating the threat poisoning attacks pose to federated learning. Extensive experiments across three distinct datasets demonstrate PMM’s ability to significantly reduce the global model’s accuracy. Additionally, we propose an effective defense method, CLBL (Cluster Layer By Layer); experimental results on the same three datasets confirm CLBL’s effectiveness.
Funding: Supported by the Systematic Major Project of China State Railway Group Corporation Limited (Grant Number: P2023W002).
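To make the attack concrete, below is a minimal sketch of a layer-wise perturbation in the spirit of PMM, reconstructed from the abstract alone: the name suggests each layer’s malicious update is a perturbation coefficient multiplied by that layer’s maximum value. Everything else here (the function name pmm_attack, the coefficient gamma, and flipping the sign against the mean benign direction) is an illustrative assumption, not the paper’s exact formulation.

```python
import numpy as np

def pmm_attack(benign_updates, gamma=10.0):
    """Hypothetical PMM-style sketch: craft a malicious update layer by layer.

    benign_updates: list of per-client updates, each a list of per-layer
    numpy arrays. For each layer, the malicious value is the perturbation
    coefficient gamma multiplied by that layer's maximum benign magnitude,
    signed to oppose the average benign direction (an assumed heuristic).
    """
    n_layers = len(benign_updates[0])
    malicious = []
    for layer in range(n_layers):
        stack = np.stack([u[layer] for u in benign_updates])  # (clients, ...)
        mean_dir = stack.mean(axis=0)          # average benign direction
        max_val = np.abs(stack).max()          # layer-wise Maximum value
        # Perturbation coefficient Multiplied by the Maximum value,
        # pushed against the benign consensus for this layer.
        malicious.append(-np.sign(mean_dir) * gamma * max_val)
    return malicious
```

Under these assumptions, a compromised client would submit the returned list in place of its honest update, scaling the damage through gamma.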
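Correspondingly, a hedged sketch of a layer-by-layer clustering defense in the spirit of CLBL: for each layer, client updates are clustered into two groups and only the majority cluster is averaged, on the assumption that malicious updates are a minority that separates from benign ones. The function name clbl_aggregate, the choice of k-means with two clusters, and the majority-keep rule are all assumptions for illustration; the paper’s actual CLBL procedure may differ.

```python
import numpy as np
from sklearn.cluster import KMeans

def clbl_aggregate(updates):
    """Hypothetical CLBL-style sketch: cluster client updates layer by layer
    and aggregate only the majority cluster for each layer.

    updates: list of per-client updates, each a list of per-layer arrays.
    Returns one aggregated update (list of per-layer arrays).
    """
    n_layers = len(updates[0])
    aggregated = []
    for layer in range(n_layers):
        flat = np.stack([u[layer].ravel() for u in updates])  # (clients, params)
        labels = KMeans(n_clusters=2, n_init=10).fit_predict(flat)
        majority = np.bincount(labels).argmax()   # index of the larger cluster
        kept = flat[labels == majority]           # drop the suspected outliers
        aggregated.append(kept.mean(axis=0).reshape(updates[0][layer].shape))
    return aggregated
```

In a federated round, the server would run this aggregation over the collected client updates before applying the result to the global model, so that per-layer outliers such as PMM-style updates are filtered out rather than averaged in.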