Abstract
User heterogeneity poses a significant challenge to federated learning (FL): it can cause the global model to drift and slow convergence. To address this problem, this paper proposes FedLSG, a federated learning method that combines knowledge distillation with a latent space generator. A central server learns a generative model equipped with a latent space generator, which extracts and simulates the probability distributions of sample labels across user devices and then generates richer and more diverse pseudo-samples to guide the training of user-side models, thereby mitigating user heterogeneity in FL. Theoretical analysis and experiments show that FedLSG generally achieves about 1% higher test accuracy than the existing FedGen method, outperforms FedGen in communication efficiency over the first 20 rounds, and provides a degree of user privacy protection.
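To make the described mechanism concrete, the following is a minimal, hypothetical PyTorch sketch of the idea in the abstract: a server-side generator conditioned on the clients' aggregated label distribution produces pseudo-samples in latent space, which guide each client's classifier through a distillation-style loss. All names here (`LatentSpaceGenerator`, `distillation_loss`, the layer sizes) are illustrative assumptions, not the paper's actual FedLSG implementation.

```python
# Hypothetical sketch of the FedLSG idea from the abstract; not the authors' code.
import torch
import torch.nn as nn
import torch.nn.functional as F

class LatentSpaceGenerator(nn.Module):
    """Maps a class label plus Gaussian noise to a pseudo-sample in latent space."""
    def __init__(self, num_classes: int, noise_dim: int, latent_dim: int):
        super().__init__()
        self.embed = nn.Embedding(num_classes, noise_dim)
        self.net = nn.Sequential(
            nn.Linear(noise_dim * 2, 128),
            nn.ReLU(),
            nn.Linear(128, latent_dim),
        )

    def forward(self, labels: torch.Tensor) -> torch.Tensor:
        noise = torch.randn(labels.size(0), self.embed.embedding_dim)
        return self.net(torch.cat([self.embed(labels), noise], dim=1))

def sample_labels(label_counts: torch.Tensor, batch_size: int) -> torch.Tensor:
    """Draw labels from the label distribution aggregated over user devices."""
    probs = label_counts.float() / label_counts.sum()
    return torch.multinomial(probs, batch_size, replacement=True)

def distillation_loss(client_head: nn.Module,
                      generator: LatentSpaceGenerator,
                      label_counts: torch.Tensor,
                      batch_size: int = 32) -> torch.Tensor:
    """Regularize a client's classifier head with generated pseudo-samples."""
    labels = sample_labels(label_counts, batch_size)
    pseudo_latent = generator(labels)       # pseudo-samples in latent space
    logits = client_head(pseudo_latent)     # client predicts on pseudo-samples
    return F.cross_entropy(logits, labels)  # distillation-style guidance term

# Example usage (shapes only; counts would come from the clients' reports):
# gen = LatentSpaceGenerator(num_classes=10, noise_dim=32, latent_dim=64)
# head = nn.Linear(64, 10)
# counts = torch.arange(1, 11, dtype=torch.float)
# loss = distillation_loss(head, gen, counts)
```

As in FedGen-style methods, the generator output is assumed to live in the feature space of the shared model, so only the lightweight classifier head consumes the pseudo-samples; whether FedLSG samples labels per client or from a global aggregate is an assumption of this sketch.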
Authors
Wang Hu; Wang Xiaofeng; Li Ke (School of Computer Science & Engineering, North Minzu University, Yinchuan 750021, China; The Key Laboratory of Images & Graphics Intelligent Processing of State Ethnic Affairs Commission, North Minzu University, Yinchuan 750021, China)
Source
Application Research of Computers (《计算机应用研究》)
Indexed in CSCD and the Peking University Core Journal list (北大核心)
2024, No. 11, pp. 3281-3287 (7 pages)
Funding
National Natural Science Foundation of China (62062001)
Ningxia Young Top Talent Project (2021)
Keywords
user heterogeneity
federated learning
knowledge distillation
latent space generator
probability distribution