
Personalized Federated Learning Method Based on Coalition Game and Knowledge Distillation

Cited by: 2
Abstract: To overcome the limitation that Federated Learning (FL) requires client data and models to be homogeneous, and to improve training accuracy, this paper proposes a personalized federated learning algorithm based on a coalition game and knowledge distillation (pFedCK). In this algorithm, each client uploads the soft predictions obtained by training on a public dataset to a central server, and downloads the k most similar soft predictions (by cosine similarity) to form a coalition. The Shapley Value (SV) from cooperative game theory is then used to measure the multi-wise influence among clients, quantifying the cumulative contribution of each downloaded soft prediction to local personalized learning; this determines the optimal aggregation coefficient for each coalition member and yields a better aggregated model. Finally, Knowledge Distillation (KD) transfers the aggregated model's knowledge to the local model, which is then trained on the private dataset. Simulation results show that, compared with other algorithms, pFedCK improves personalization accuracy by about 10%.
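The pipeline described in the abstract (cosine-similarity coalition selection, Shapley-value aggregation coefficients, KD transfer) can be sketched as follows. This is a minimal illustrative sketch, not the paper's implementation: all function names, the exact Shapley enumeration, and the toy utility in the usage note are assumptions.

```python
import math
from itertools import combinations

import numpy as np

def cosine_similarity(a, b):
    """Cosine similarity between two flattened soft-prediction matrices."""
    a, b = a.ravel(), b.ravel()
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def top_k_coalition(own_pred, peer_preds, k):
    """Pick the k peers whose public-dataset soft predictions are closest to ours."""
    sims = {cid: cosine_similarity(own_pred, p) for cid, p in peer_preds.items()}
    return sorted(sims, key=sims.get, reverse=True)[:k]

def shapley_weights(own_pred, coalition_preds, utility):
    """Exact Shapley value of each coalition member under `utility`
    (a stand-in for the paper's personalization gain), normalized
    into aggregation coefficients."""
    ids = list(coalition_preds)
    n = len(ids)
    sv = {cid: 0.0 for cid in ids}
    for cid in ids:
        rest = [c for c in ids if c != cid]
        for r in range(len(rest) + 1):
            for subset in combinations(rest, r):
                # Shapley weight for a subset of size r out of n players.
                w = math.factorial(r) * math.factorial(n - r - 1) / math.factorial(n)
                with_c = utility(own_pred, [coalition_preds[c] for c in subset + (cid,)])
                without_c = utility(own_pred, [coalition_preds[c] for c in subset])
                sv[cid] += w * (with_c - without_c)
    total = sum(sv.values())
    return ({c: v / total for c, v in sv.items()} if total > 0
            else {c: 1.0 / n for c in ids})

def aggregate(coalition_preds, weights):
    """Shapley-weighted aggregation of the downloaded soft predictions."""
    return sum(weights[c] * coalition_preds[c] for c in weights)

def kd_loss(student_logits, teacher_logits, temperature=2.0):
    """KL(teacher || student) distillation loss at the given temperature."""
    def soften(x):
        e = np.exp((x - x.max(axis=1, keepdims=True)) / temperature)
        return e / e.sum(axis=1, keepdims=True)
    p, q = soften(teacher_logits), soften(student_logits)
    return float((p * np.log(p / q)).sum(axis=1).mean()) * temperature ** 2
```

A toy utility for `shapley_weights` (again an assumption, in place of the paper's measured personalization gain) could score how well the coalition's mean soft prediction matches the client's own: `lambda own, preds: cosine_similarity(own, np.mean(preds, axis=0)) if preds else 0.0`. Exact Shapley enumeration is exponential in coalition size, which is why the algorithm restricts the coalition to the k most similar clients.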
Authors: SUN Yanhua, SHI Yahui, LI Meng, YANG Ruizhe, SI Pengbo (Faculty of Information Technology, Beijing University of Technology, Beijing 100124, China; Beijing Laboratory of Advanced Information Networks, Beijing University of Technology, Beijing 100124, China)
Published in: Journal of Electronics &amp; Information Technology (indexed in EI, CSCD, Peking University Core), 2023, No. 10, pp. 3702-3709 (8 pages)
Funding: Science and Technology Program of the Beijing Municipal Education Commission (KM202010005017)
Keywords: Personalized Federated Learning (PFL); Coalition game; Knowledge Distillation (KD); Heterogeneity