Abstract
Federated learning has become a widely used distributed learning approach in recent years. However, although training shifts from collecting raw data to aggregating parameters, privacy violations may still occur when models are published and shared. A dynamic approach is proposed to add Gaussian noise more effectively and apply differential privacy to federated deep learning. Concretely, it abandons the traditional strategy of distributing the privacy budget ε equally across rounds and instead adjusts the budget dynamically to suit gradient-descent-based federated learning, with the relevant parameters derived by computation so that manually chosen hyperparameters do not affect the algorithm. It also incorporates adaptive threshold clipping to control the sensitivity. Finally, the moments accountant is used to track the ε consumed on privacy preservation, and learning stops only when the ε_total set by the clients is reached, allowing the privacy budget to be fully exploited for model training. Experimental results on real datasets show that the proposed method trains models nearly as accurate as non-private learning, and significantly outperforms the differential privacy method provided by TensorFlow.
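The mechanism outlined above combines three ingredients: per-gradient norm clipping with an adaptive threshold, Gaussian noise scaled to that threshold, and a running privacy-budget account that halts training at ε_total. The sketch below illustrates these steps in Python with NumPy. It is a minimal illustration, not the paper's implementation: the median-norm clipping heuristic, the function names (`adaptive_clip_norm`, `dp_average`, `train_with_budget`), and the simple additive accounting of per-round ε values are all assumptions for demonstration; the paper's actual adaptive rule and moments-accountant composition are more involved.

```python
import numpy as np

def adaptive_clip_norm(per_example_grads):
    # Adaptive threshold: use the median per-example gradient norm.
    # (A common heuristic; the paper's exact adaptive rule is not shown here.)
    norms = [np.linalg.norm(g) for g in per_example_grads]
    return float(np.median(norms))

def dp_average(per_example_grads, clip_norm, noise_multiplier, rng):
    # Clip each gradient to clip_norm, average, then add Gaussian noise
    # whose scale is tied to the sensitivity clip_norm / n.
    n = len(per_example_grads)
    clipped = [g * min(1.0, clip_norm / max(np.linalg.norm(g), 1e-12))
               for g in per_example_grads]
    mean = np.mean(clipped, axis=0)
    noise = rng.normal(0.0, noise_multiplier * clip_norm / n, size=mean.shape)
    return mean + noise

def train_with_budget(rounds_eps, eps_total):
    # Stop training once the accumulated privacy cost would exceed eps_total,
    # standing in for the moments-accountant stopping condition.
    spent, completed = 0.0, 0
    for eps_t in rounds_eps:
        if spent + eps_t > eps_total:
            break
        spent += eps_t
        completed += 1
    return completed, spent
```

With a decaying per-round schedule (rather than an equal split of ε), `train_with_budget` runs more rounds early, when gradients are large, and spends the remaining budget on later fine-tuning rounds.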
Funding
Supported by the National Natural Science Foundation of China under Grants No. 62062020, No. 72161005, No. 62002081, and No. 62062017; the Technology Foundation of Guizhou Province (Grant No. QianKeHeJiChu-ZK[2022]-General 184); and the Guizhou Provincial Science and Technology Projects [2020]1Y265.