Funding: supported by the National Natural Science Foundation of China (Nos. U20A20387, 62006207, and 62037001), the Young Elite Scientists Sponsorship Program by the China Association for Science and Technology (No. 2021QNRC001), the Zhejiang Provincial Natural Science Foundation, China (No. LQ21F020020), the Project by Shanghai AI Laboratory, China (No. P22KS00111), the Program of Zhejiang Province Science and Technology (No. 2022C01044), the StarryNight Science Fund of Zhejiang University Shanghai Institute for Advanced Study, China (No. SN-ZJU-SIAS-0010), and the Fundamental Research Funds for the Central Universities, China (Nos. 226-2022-00142 and 226-2022-00051).
Abstract: Federated learning (FL) is a novel technique in deep learning that enables clients to collaboratively train a shared model while retaining their decentralized data. However, researchers working on FL face several unique challenges, especially in the context of heterogeneity. Heterogeneity in data distributions, computational capabilities, and scenarios among clients necessitates the development of customized models and objectives in FL. Unfortunately, existing works such as FedAvg may not effectively accommodate the specific needs of each client. To address the challenges arising from heterogeneity in FL, we provide an overview of the heterogeneities in data, model, and objective (DMO). Furthermore, we propose a novel framework called federated mutual learning (FML), which enables each client to train a personalized model that accounts for data heterogeneity (DH). A “meme model” serves as an intermediary between the personalized and global models to address model heterogeneity (MH). We introduce a knowledge distillation technique called deep mutual learning (DML) to transfer knowledge between these two models on local data. To overcome objective heterogeneity (OH), we design the shared global model to include only the common parts, while each personalized model remains task-specific and is enhanced through mutual learning with the meme model. We evaluate the performance of FML in addressing DMO heterogeneities through experiments and compare it with other commonly used FL methods in similar scenarios. The results demonstrate that FML outperforms these methods and effectively addresses the DMO challenges encountered in the FL setting.
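To make the local training step described in the abstract concrete, the sketch below illustrates deep mutual learning (DML) between a client's personalized model and its meme model: each model is optimized with a cross-entropy loss on the local labels plus a KL-divergence loss toward the other model's predictions, and only the meme model is sent back for server-side aggregation. This is a minimal illustrative sketch, not the authors' implementation; the function name, the alpha/beta loss weights, and the optimizer settings are assumptions introduced here.

```python
# Minimal sketch of an FML-style local update via deep mutual learning (DML).
# Assumptions (not from the paper): function name, alpha/beta weights, SGD settings.
import torch
import torch.nn.functional as F

def fml_local_update(personal_model, meme_model, loader, device,
                     alpha=0.5, beta=0.5, lr=0.01, epochs=1):
    """One client's local round: mutual distillation between the two models on local data."""
    personal_model.to(device).train()
    meme_model.to(device).train()
    opt_p = torch.optim.SGD(personal_model.parameters(), lr=lr)
    opt_m = torch.optim.SGD(meme_model.parameters(), lr=lr)

    for _ in range(epochs):
        for x, y in loader:
            x, y = x.to(device), y.to(device)
            logits_p = personal_model(x)   # personalized (client-specific) model
            logits_m = meme_model(x)       # meme model shared with the server

            # Personalized model: supervised loss + KL toward the meme model's outputs.
            loss_p = (1 - alpha) * F.cross_entropy(logits_p, y) + alpha * F.kl_div(
                F.log_softmax(logits_p, dim=1),
                F.softmax(logits_m.detach(), dim=1),
                reduction="batchmean")

            # Meme model: supervised loss + KL toward the personalized model's outputs.
            loss_m = (1 - beta) * F.cross_entropy(logits_m, y) + beta * F.kl_div(
                F.log_softmax(logits_m, dim=1),
                F.softmax(logits_p.detach(), dim=1),
                reduction="batchmean")

            opt_p.zero_grad(); loss_p.backward(); opt_p.step()
            opt_m.zero_grad(); loss_m.backward(); opt_m.step()

    # The personalized model stays on the client; only the meme model's weights
    # would be uploaded for aggregation into the global model.
    return meme_model.state_dict()
```

Because only the meme model's parameters are exchanged, the personalized model's architecture can differ across clients (addressing MH), while the mutual KL terms let it still absorb globally aggregated knowledge on the client's own data distribution (addressing DH).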