Abstract
Recent years have witnessed rapid progress in federated learning, which coordinates model training across multiple participants while protecting their data privacy. However, low communication efficiency is a bottleneck when deploying federated learning on edge computing and IoT devices, because co-training requires transmitting a huge number of parameters. In this paper, we verify that the outputs of the last hidden layer can capture the characteristics of the training data. Accordingly, we propose a communication-efficient strategy based on model splitting and representation aggregation. Specifically, each client uploads the outputs of its last hidden layer instead of all model parameters when participating in aggregation, and the server distributes gradients computed from the global information to revise the local models. Empirical evidence from experiments verifies that our method completes training while uploading less than one-tenth of the model parameters and preserves the usability of the model.
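The abstract describes a protocol in which clients upload last-hidden-layer representations rather than parameters, and the server returns gradients that revise the local models. The following is a minimal sketch of that idea under stated assumptions: the class names, layer sizes, squared-error output head, and learning rates are all illustrative choices of ours, not the authors' actual implementation.

```python
import numpy as np

# Hedged sketch of split-and-aggregate federated training: clients upload
# last-hidden-layer outputs (representations), the server trains a shared
# output head on the aggregated representations and sends back gradients.
# All names and dimensions below are illustrative assumptions.

rng = np.random.default_rng(0)

class Client:
    """Private data plus a local hidden layer; raw data never leaves the client."""
    def __init__(self, n=32, d_in=8, d_hid=4):
        self.x = rng.normal(size=(n, d_in))            # private samples
        self.y = self.x @ rng.normal(size=(d_in, 1))   # private labels (toy task)
        self.W = rng.normal(size=(d_in, d_hid)) * 0.1  # local hidden weights

    def upload(self):
        # Only the n x d_hid representation matrix is uploaded, which is far
        # smaller than a deep model's full parameter set.
        return np.tanh(self.x @ self.W), self.y

    def revise(self, g_repr, lr=0.1):
        # Back-propagate the server-sent gradient through the local layer.
        h = np.tanh(self.x @ self.W)
        self.W -= lr * self.x.T @ (g_repr * (1.0 - h ** 2))

class Server:
    """Shared output head trained on the aggregated representations."""
    def __init__(self, d_hid=4, d_out=1):
        self.V = rng.normal(size=(d_hid, d_out)) * 0.1

    def round(self, uploads, lr=0.1):
        h = np.vstack([r for r, _ in uploads])
        y = np.vstack([t for _, t in uploads])
        err = h @ self.V - y                  # squared-error residual
        self.V -= lr * h.T @ err / len(h)     # update the shared head
        g = err @ self.V.T / len(h)           # gradient w.r.t. representations
        cuts = np.cumsum([r.shape[0] for r, _ in uploads])[:-1]
        return np.split(g, cuts), float((err ** 2).mean())

# A few co-training rounds with two clients; the loss should decrease.
clients, server = [Client() for _ in range(2)], Server()
losses = []
for _ in range(50):
    grads, loss = server.round([c.upload() for c in clients])
    for c, g in zip(clients, grads):
        c.revise(g)
    losses.append(loss)
```

Note the communication pattern this sketch mirrors: per round, each client sends an n × d_hid matrix and receives a gradient of the same shape, independent of the depth of the local feature extractor.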
Funding
Supported by Shenzhen Basic Research (General Project) under Grant No. JCYJ20190806142601687
Shenzhen Stable Supporting Program (General Project) under Grant No. GXWD20201230155427003-20200821160539001
Guangdong Provincial Key Laboratory of Novel Security Intelligence Technologies under Grant No. 2022B1212010005
Shenzhen Basic Research (Key Project) under Grant No. JCYJ20200109113405927