In crowdsourced federated learning, differential privacy is commonly used to prevent the aggregation server from recovering training data from the models uploaded by clients, thereby preserving privacy. However, improper privacy budget settings and perturbation methods can severely degrade model performance. To strike a balance between privacy preservation and model performance, we propose a novel architecture for crowdsourced federated learning with personalized privacy preservation. In our architecture, to avoid poor model performance caused by excessive privacy preservation requirements, we establish a two-stage dynamic game between the task requestor and the clients to formulate the optimal privacy preservation strategy, allowing each client to independently control its privacy preservation level. Additionally, we design a differential privacy perturbation mechanism based on weight priorities: it partitions the weights according to their relevance to the local data and applies a different level of perturbation to each type of weight. Finally, we evaluate the proposed perturbation mechanism experimentally, and the results indicate that our approach achieves better global model performance under the same privacy budget.
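To illustrate the kind of weight-priority perturbation described above, the following is a minimal NumPy sketch, not the paper's exact mechanism. The function name priority_perturb, the use of local gradient magnitude as the relevance score, the top-fraction split, the budget-split ratio, and the choice to give high-relevance weights less noise (to protect accuracy) are all illustrative assumptions; only the standard Gaussian-mechanism noise calibration is taken from the differential privacy literature.

```python
import numpy as np

def priority_perturb(weights, relevance, epsilon_total, sensitivity=1.0,
                     high_fraction=0.5, budget_split=0.7, delta=1e-5):
    """Perturb a flat weight vector with Gaussian noise, giving weights that
    are more relevant to the local data a larger share of the privacy budget
    (hence less noise). Hypothetical sketch, not the paper's exact mechanism.
    """
    # Rank parameters by relevance and mark the top fraction as high priority.
    k = max(1, int(high_fraction * weights.size))
    high_idx = np.argsort(relevance)[-k:]
    high_mask = np.zeros(weights.size, dtype=bool)
    high_mask[high_idx] = True

    # Split the total budget: high-priority weights get the larger share,
    # so their Gaussian noise scale is smaller.
    eps_high = budget_split * epsilon_total
    eps_low = (1.0 - budget_split) * epsilon_total

    def gaussian_sigma(eps):
        # Standard Gaussian-mechanism calibration:
        # sigma = sqrt(2 ln(1.25/delta)) * sensitivity / eps
        return np.sqrt(2.0 * np.log(1.25 / delta)) * sensitivity / eps

    noisy = weights.copy()
    noisy[high_mask] += np.random.normal(0.0, gaussian_sigma(eps_high), int(high_mask.sum()))
    noisy[~high_mask] += np.random.normal(0.0, gaussian_sigma(eps_low), int((~high_mask).sum()))
    return noisy

# Toy usage: relevance approximated by gradient magnitude on local data
# (an assumed proxy for "relevance to local data").
rng = np.random.default_rng(0)
w = rng.normal(size=1000)
grad = rng.normal(size=1000)
w_noisy = priority_perturb(w, relevance=np.abs(grad), epsilon_total=1.0)
```

In a full system, the per-group budgets would also have to be composed across training rounds; the sketch only shows a single perturbation step.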
Funding: This work was supported by the National Natural Science Foundation of China (No. 62271072) and the Beijing Natural Science Foundation (No. 4232009).