Funding: sponsored in part by the National Key R&D Program of China under Grant No. 2020YFB1806605, the National Natural Science Foundation of China under Grant Nos. 62022049, 62111530197, and 61871254, and OPPO; supported by the Fundamental Research Funds for the Central Universities under Grant No. 2022JBXT001.
Abstract: Federated learning (FL) is a distributed machine learning (ML) framework in which several clients cooperatively train an ML model by exchanging model parameters without directly sharing their local data. In FL, the limited number of participants for model aggregation and the communication latency are two major bottlenecks. Hierarchical federated learning (HFL), with a cloud-edge-client hierarchy, can leverage the large coverage of cloud servers and the low transmission latency of edge servers. There is growing research interest in implementing FL in vehicular networks due to the requirement of timely ML training for intelligent vehicles. However, the limited number of participants in vehicular networks and vehicle mobility degrade the performance of FL training. In this context, HFL, which stands out for its lower latency, wider coverage, and larger number of participants, is promising in vehicular networks. In this paper, we begin with the background and motivation of HFL and the feasibility of implementing HFL in vehicular networks. Then, the architecture of HFL is illustrated. Next, we clarify new issues in HFL and review several existing solutions. Furthermore, we introduce some typical use cases in vehicular networks as well as our initial efforts toward implementing HFL in vehicular networks. Finally, we conclude with future research directions.
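The cloud-edge-client hierarchy described above can be sketched as two tiers of weighted averaging: each edge server aggregates its clients' models, and the cloud then aggregates the edge models. The sketch below is a minimal illustration, not the paper's actual algorithm; the function names and the representation of models as flat parameter vectors are assumptions for clarity.

```python
import numpy as np

def weighted_average(models, weights):
    """Average model parameter vectors, weighted by local sample counts."""
    w = np.asarray(weights, dtype=float)
    w = w / w.sum()
    return sum(wi * m for wi, m in zip(w, models))

def hfl_round(edge_groups):
    """One illustrative HFL aggregation round.

    edge_groups: one list per edge server, each containing
    (model_vector, num_samples) pairs for its associated clients.
    Each edge server averages its clients' models; the cloud then
    averages the edge models, weighted by total samples per edge.
    """
    edge_models, edge_sizes = [], []
    for clients in edge_groups:
        models = [m for m, _ in clients]
        sizes = [n for _, n in clients]
        edge_models.append(weighted_average(models, sizes))
        edge_sizes.append(sum(sizes))
    return weighted_average(edge_models, edge_sizes)
```

With sample-count weighting at both tiers, the hierarchical result coincides with a flat weighted average over all clients, so the hierarchy reduces communication latency without changing the aggregation outcome.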
Abstract: To meet the national "dual-carbon" strategy of carbon peaking and carbon neutrality, mobile communications and networks must substantially reduce network-wide energy consumption while still satisfying ever-growing traffic demands, which calls for theories and techniques to send more information bits with less energy (SMILE). Improvements in wireless transmission techniques and hardware implementation alone are far from sufficient to meet this challenge; mechanisms and methods for energy-efficient utilization must be explored from the system and network perspectives. This paper proceeds along two dimensions, energy "saving" and energy "sourcing", and further offers solutions to the growing energy consumption of computing. Specifically, a hyper-cellular network architecture is introduced to provide flexible coverage and elastic access, allowing traffic base stations and edge servers to enter sleep mode when the traffic load is low and thereby reducing wasted energy (i.e., "saving"). Meanwhile, renewable green energy is introduced at scale (i.e., "sourcing"), and the intelligent matching of energy flows with information flows substantially reduces energy drawn from the grid. Furthermore, green computing and artificial intelligence algorithms are realized through network function virtualization, energy-efficient coordination of communication and computing resources, and distributed computing and cooperation among mobile intelligent agents.
Funding: supported in part by the National Key R&D Program of China under Grant No. 2018YFB1800800 and the Natural Science Foundation of China under Grant Nos. 61871254, 91638204, and 61861136003.
Abstract: Due to the increasing need for massive data analysis and machine learning model training at the network edge, as well as rising concerns about data privacy, a new distributed training framework called federated learning (FL) has emerged and attracted much attention from both academia and industry. In FL, participating devices iteratively update their local models based on their own data and contribute to the global training by uploading model updates until the training converges. Therefore, the computation capabilities of mobile devices can be utilized and data privacy can be preserved. However, deploying FL in resource-constrained wireless networks encounters several challenges, including the limited energy of mobile devices, weak onboard computing capability, and scarce wireless bandwidth. To address these challenges, recent solutions have been proposed to maximize the convergence rate or minimize the energy consumption under heterogeneous constraints. In this overview, we first introduce the background and fundamentals of FL. Then, the key challenges in deploying FL in wireless networks are discussed, and several existing solutions are reviewed. Finally, we highlight the open issues and future research directions in FL scheduling.
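The iterative local-update-then-upload loop described above can be made concrete with a minimal federated-averaging sketch: each device runs a few gradient steps on its own data and uploads only the resulting parameters, which the server averages by sample count. This is a toy least-squares instance under assumed hyperparameters (learning rate, epoch count), not the scheduling schemes the overview surveys.

```python
import numpy as np

def local_update(w, data, lr=0.1, epochs=5):
    """Device-side step: a few gradient iterations on local
    least-squares data (X, y); raw data never leaves the device."""
    X, y = data
    for _ in range(epochs):
        grad = X.T @ (X @ w - y) / len(y)
        w = w - lr * grad
    return w

def fl_round(w_global, device_data):
    """One FL round: every device refines the broadcast global model
    on its own data, then the server averages the uploaded models,
    weighted by each device's number of samples."""
    updates = [local_update(w_global.copy(), d) for d in device_data]
    sizes = [len(d[1]) for d in device_data]
    return sum(n * u for n, u in zip(sizes, updates)) / sum(sizes)
```

Repeating `fl_round` drives the global model toward the joint optimum while exchanging only parameter vectors, which is precisely why bandwidth and device energy, rather than raw-data transfer, become the binding constraints.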