
Stochastic Gradient Compression for Federated Learning over Wireless Network

Abstract: As a mature distributed machine learning paradigm, federated learning enables wireless edge devices to collaboratively train a shared AI model by stochastic gradient descent (SGD). However, during training, devices need to upload high-dimensional stochastic gradients to the edge server, which causes a severe communication bottleneck. To address this problem, we compress the communication by sparsifying and quantizing the stochastic gradients of the edge devices. We first derive a closed form of the communication compression in terms of the sparsification and quantization factors. Then, the convergence rate of this communication-compressed system is analyzed and several insights are obtained. Finally, we formulate and solve the quantization resource allocation problem with the goal of minimizing the convergence upper bound, under the constraint of multiple-access channel capacity. Simulations show that the proposed scheme outperforms the benchmarks.
Source: China Communications (SCIE, CSCD), 2024, No. 4, pp. 230-247 (18 pages). 中国通信(英文版)
Funding: Supported in part by the National Key Research and Development Program of China under Grant 2020YFB1807700, and in part by the National Science Foundation of China under Grant U200120122.
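
The abstract names two compression primitives applied to the uplink gradients: sparsification and quantization. The paper's closed-form compression expression and its quantization-bit allocation scheme are not reproduced here; as a rough illustration only, the following Python sketch applies a top-k sparsifier followed by an unbiased uniform stochastic quantizer to a stochastic gradient before transmission. The function names and the specific sparsifier/quantizer choices are assumptions for illustration, not the authors' exact design.

```python
import numpy as np

def sparsify_topk(grad: np.ndarray, k: int) -> np.ndarray:
    """Keep only the k largest-magnitude entries of the gradient (top-k sparsification)."""
    out = np.zeros_like(grad)
    idx = np.argpartition(np.abs(grad), -k)[-k:]  # indices of the k largest |grad_i|
    out[idx] = grad[idx]
    return out

def quantize_stochastic(grad: np.ndarray, bits: int) -> np.ndarray:
    """Uniform stochastic quantization to `bits` bits; rounding is randomized so the
    quantized gradient is an unbiased estimate of the input (a common assumption in
    compressed-SGD analyses, not necessarily the scheme used in this paper)."""
    scale = np.max(np.abs(grad))
    if scale == 0:
        return grad
    levels = 2 ** (bits - 1) - 1          # symmetric quantization levels around zero
    scaled = grad / scale * levels        # map entries into [-levels, levels]
    floor = np.floor(scaled)
    # Round up with probability equal to the fractional part -> E[q] = scaled.
    q = floor + (np.random.rand(*grad.shape) < (scaled - floor))
    return q / levels * scale

# Example: compress a 10^4-dimensional stochastic gradient before uplink transmission.
g = np.random.randn(10_000)
g_compressed = quantize_stochastic(sparsify_topk(g, k=500), bits=4)
```

Under this kind of scheme, the device only needs to send the k surviving indices plus a `bits`-bit code and one scale per gradient, rather than k full-precision values; the paper's resource allocation problem then chooses the quantization bits per device subject to the multiple-access channel capacity.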