Abstract
This paper proposes a personalized federated learning method based on dynamic clustering to address the problem of data heterogeneity in federated learning. The method combines optimization target vectors with an agglomerative clustering algorithm, dynamically partitioning clients with significantly different data into separate clusters while conserving computing resources. In addition, so that trained models remain reusable, a module-composition strategy is further proposed: a new client only needs to combine previously trained models to obtain an initial model suited to its local task, and a small amount of further training on this initial model is enough to apply it locally. On the CIFAR-10 and MNIST datasets, the resulting models achieve higher accuracy than models retrained locally from scratch.
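For illustration only, the clustering step summarized above might be sketched as follows. This is an assumption-laden sketch, not the authors' implementation: scikit-learn's AgglomerativeClustering (version 1.2 or later, where the parameter is metric rather than the older affinity) stands in for the paper's agglomerative procedure with cosine distance and a distance threshold, and the cluster_clients helper, the per-client update_vectors, and the toy data are hypothetical names introduced here.

# Minimal sketch (assumed, not the paper's code): group clients whose local
# optimization target vectors point in similar directions, using agglomerative
# clustering so the number of clusters emerges from a distance threshold.
import numpy as np
from sklearn.cluster import AgglomerativeClustering

def cluster_clients(update_vectors, distance_threshold=0.5):
    # update_vectors: (n_clients, dim) array, e.g. flattened per-client update directions.
    clustering = AgglomerativeClustering(
        n_clusters=None,                        # number of clusters decided by the threshold
        distance_threshold=distance_threshold,
        metric="cosine",                        # compare directions rather than magnitudes
        linkage="average",
    )
    return clustering.fit_predict(update_vectors)

# Toy usage: six hypothetical clients whose update vectors form two groups.
rng = np.random.default_rng(0)
group_a = rng.normal(loc=1.0, scale=0.1, size=(3, 8))
group_b = rng.normal(loc=-1.0, scale=0.1, size=(3, 8))
labels = cluster_clients(np.vstack([group_a, group_b]))
print(labels)  # e.g. [0 0 0 1 1 1]: clients with similar data share a cluster

In the paper's setting such vectors would come from each client's local training; the random data here merely illustrates how the threshold-based clustering separates dissimilar clients without fixing the number of clusters in advance.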
Authors
ZHOU Hongwei; MA Yuan; MA Xu (Qufu Normal University, Qufu 273165, China)
Source
《现代信息科技》 (Modern Information Technology)
2024, No. 13, pp. 61-64 and 69 (5 pages)
Keywords
Federated Learning
personalization
Deep Neural Network
combinatorial
dynamic clustering