Abstract: In the context of edge computing environments in general and the metaverse in particular, federated learning (FL) has emerged as a distributed machine learning paradigm that allows multiple users to collaborate on training a shared machine learning model locally, eliminating the need to upload raw data to a central server. It is perhaps the only training paradigm that preserves the privacy of user data, which is essential for computing environments as personal as the metaverse. However, the originally proposed FL architecture does not scale to the large number of user devices in the metaverse community. To mitigate this problem, hierarchical federated learning (HFL) has been introduced as a general distributed learning paradigm, inspiring a number of research works. In this paper, we present several types of HFL architectures, with a special focus on the three-layer client-edge-cloud HFL architecture, which is the most pertinent to the delay-sensitive metaverse. We also examine works that take advantage of the natural layered organization of three-layer client-edge-cloud HFL to tackle some of the most challenging problems in FL within the metaverse. Finally, we outline some future research directions for HFL in the metaverse.
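To make the three-layer client-edge-cloud organization described above concrete, the following is a minimal sketch of hierarchical FedAvg in Python. It assumes plain NumPy parameter vectors, synthetic linear-regression data, and sample-count-weighted averaging at both the edge and the cloud; all function names, data, and hyperparameters are illustrative and not taken from the surveyed papers.

```python
import numpy as np

def local_update(w_init, X, y, lr=0.1, epochs=5):
    """Client-side training: a few gradient steps of linear regression on private data."""
    w = w_init.copy()
    for _ in range(epochs):
        grad = X.T @ (X @ w - y) / len(y)   # gradient of (1/2n)||Xw - y||^2
        w -= lr * grad
    return w, len(y)                         # only parameters and a sample count leave the client

def weighted_average(models, sizes):
    """FedAvg-style aggregation: parameters averaged by local sample counts."""
    sizes = np.asarray(sizes, dtype=float)
    return np.average(np.stack(models), axis=0, weights=sizes), sizes.sum()

def hfl_round(w_cloud, edges, edge_rounds=2):
    """One cloud round: each edge runs `edge_rounds` client-edge rounds, then the cloud aggregates."""
    edge_models, edge_sizes = [], []
    for clients in edges:                    # `edges`: list of edge servers, each a list of (X, y) client datasets
        w_edge, n_edge = w_cloud.copy(), 0
        for _ in range(edge_rounds):
            updates = [local_update(w_edge, X, y) for X, y in clients]
            w_edge, n_edge = weighted_average(*zip(*updates))
        edge_models.append(w_edge)
        edge_sizes.append(n_edge)
    w_new, _ = weighted_average(edge_models, edge_sizes)
    return w_new

# Toy run: 2 edge servers, 3 clients each, 5-dimensional model.
rng = np.random.default_rng(0)
w_true = rng.normal(size=5)

def make_client(n=50):
    X = rng.normal(size=(n, 5))
    return X, X @ w_true + 0.01 * rng.normal(size=n)

edges = [[make_client() for _ in range(3)] for _ in range(2)]
w = np.zeros(5)
for _ in range(20):
    w = hfl_round(w, edges)
print("distance to ground-truth parameters:", np.linalg.norm(w - w_true))
```

The key property this sketch illustrates is that raw data never leaves the clients: only model parameters travel to the edge, and only edge-aggregated parameters travel to the cloud, which is what relieves the cloud uplink and makes the scheme attractive for delay-sensitive settings.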
Funding: Sponsored in part by the National Key R&D Program of China under Grant No. 2020YFB1806605, the National Natural Science Foundation of China under Grant Nos. 62022049, 62111530197, and 61871254, and OPPO; supported by the Fundamental Research Funds for the Central Universities under Grant No. 2022JBXT001.
Abstract: Federated learning (FL) is a distributed machine learning (ML) framework in which several clients cooperatively train an ML model by exchanging model parameters without directly sharing their local data. In FL, the limited number of participants for model aggregation and the communication latency are two major bottlenecks. Hierarchical federated learning (HFL), with a cloud-edge-client hierarchy, can leverage the large coverage of cloud servers and the low transmission latency of edge servers. There is growing research interest in implementing FL in vehicular networks due to the requirement of timely ML training for intelligent vehicles. However, the limited number of participants in vehicular networks and vehicle mobility degrade the performance of FL training. In this context, HFL, which stands out for its lower latency, wider coverage, and larger number of participants, is promising in vehicular networks. In this paper, we begin with the background and motivation of HFL and the feasibility of implementing HFL in vehicular networks. Then, the architecture of HFL is illustrated. Next, we clarify new issues in HFL and review several existing solutions. Furthermore, we introduce some typical use cases in vehicular networks, as well as our initial efforts on implementing HFL in vehicular networks. Finally, we conclude with future research directions.
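The mobility problem the abstract raises can be made tangible with a small, hypothetical simulation (not a scheme from the paper): vehicles re-associate with the nearest roadside edge server each aggregation round, so the set and number of clients each edge server can aggregate over keeps changing. All positions, speeds, and counts below are made up for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)
edge_positions = np.array([0.5, 1.5, 2.5])          # three roadside edge servers on a 3 km road
vehicle_pos = rng.uniform(0.0, 3.0, size=12)         # 12 vehicles acting as FL clients
vehicle_speed = rng.uniform(0.05, 0.15, size=12)     # km travelled per aggregation round

for rnd in range(5):
    # Each vehicle associates with its nearest edge server for this round.
    assignment = np.abs(vehicle_pos[:, None] - edge_positions[None, :]).argmin(axis=1)
    counts = np.bincount(assignment, minlength=len(edge_positions))
    print(f"round {rnd}: clients per edge server = {counts.tolist()}")
    # ...each edge server would run its client-edge aggregation over its current clients here...
    vehicle_pos = (vehicle_pos + vehicle_speed) % 3.0  # vehicles keep moving between rounds
```

The fluctuating per-edge client counts printed by this toy loop are exactly the participation instability that HFL designs for vehicular networks need to cope with.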