Abstract
When data are non-independently distributed, aggregating local models into a global model in each communication round of federated learning introduces heterogeneity and poses a major challenge to training. In addition, partial participation, caused by differences in network bandwidth and in the training speed of client devices, exacerbates this heterogeneity. Simple weighted aggregation of only a subset of local models deviates further from the true global model, which then converges to a local extreme point and oscillates during convergence. Based on these observations, this paper proposes a novel aggregation algorithm, the federated accumulated learning algorithm (FedAcc), in which the server aggregates the gradient information of delayed clients to guide the next round of server-side gradient updates, yielding more robust and accurate aggregation. Empirical results show that FedAcc achieves better overall performance than several strong federated learning algorithms, including FedAvg, FedAdam, and FedAsync.
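The abstract only outlines the mechanism, so the following is a minimal, hypothetical Python sketch of the server-side idea it describes: gradients from on-time clients are averaged as in FedAvg, while gradients that arrive after the round deadline are accumulated and folded into the next round's global update. The function names, the decay factor, and the specific update rule are illustrative assumptions, not the paper's actual FedAcc algorithm.

```python
import numpy as np

def fedacc_server_round(global_weights, on_time_grads, accumulator,
                        lr=1.0, beta=0.5):
    """One server round (illustrative sketch, not the paper's rule).

    on_time_grads: gradients received before this round's deadline.
    accumulator:   accumulated gradients from previously delayed clients,
                   used here to guide the current global update.
    """
    avg_grad = np.mean(on_time_grads, axis=0)
    # Delayed information from earlier rounds guides this update.
    guided_grad = avg_grad + beta * accumulator
    return global_weights - lr * guided_grad

def accumulate_delayed(accumulator, delayed_grads, decay=0.9):
    """Fold late-arriving gradients into the accumulator for the next round."""
    if delayed_grads:
        accumulator = decay * accumulator + np.mean(delayed_grads, axis=0)
    return accumulator

# Toy usage: 4-dimensional model, 3 on-time clients, 1 delayed client.
w = np.zeros(4)
acc = np.zeros(4)
on_time = [np.random.randn(4) for _ in range(3)]
w = fedacc_server_round(w, on_time, acc)
acc = accumulate_delayed(acc, [np.random.randn(4)])
w = fedacc_server_round(w, on_time, acc)  # delayed gradient now guides the update
```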
Authors
ZOU Min-hao (邹敏浩); GAN Zhong-xue (甘中学)
Academy for Engineering and Technology, Fudan University, Shanghai 200433, China
Source
Journal of Chinese Computer Systems (《小型微型计算机系统》)
Indexed in CSCD and the Peking University Core Journals list
2023, No. 6, pp. 1121-1127 (7 pages)
Funding
Supported by the Shanghai Science Research Plan Project (19511132000) and the Shanghai Science and Technology Major Project (2021SHZDZX0103).
Keywords
federated learning
mobile edge computing
distributed machine learning
artificial neural network