Funding: supported in part by the National Key Research and Development Program of China under Grant 2020YFB1807700, and in part by the National Science Foundation of China under Grant U200120122.
Abstract: As a mature distributed machine learning paradigm, federated learning enables wireless edge devices to collaboratively train a shared AI model by stochastic gradient descent (SGD). However, devices need to upload high-dimensional stochastic gradients to the edge server during training, which causes a severe communication bottleneck. To address this problem, we compress the communication by sparsifying and quantizing the stochastic gradients of the edge devices. We first derive a closed-form expression for the communication compression in terms of the sparsification and quantization factors. Then, the convergence rate of this communication-compressed system is analyzed and several insights are obtained. Finally, we formulate and solve the quantization resource allocation problem of minimizing the convergence upper bound under the constraint of multiple-access channel capacity. Simulations show that the proposed scheme outperforms the benchmarks.
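To make the compression step concrete, below is a minimal NumPy sketch of one plausible instantiation: top-k sparsification followed by uniform scalar quantization of the surviving gradient entries. The function name, the top-k/uniform choices, and the parameters k and bits are illustrative assumptions, not the paper's exact operators.

```python
import numpy as np

def compress_gradient(grad, k, bits):
    """Sparsify a stochastic gradient to its top-k entries, then
    uniformly quantize the surviving values with the given bit budget.
    (Illustrative sketch; not the paper's exact compression operator.)"""
    flat = grad.ravel().copy()
    # Top-k sparsification: keep the k largest-magnitude entries.
    idx = np.argpartition(np.abs(flat), -k)[-k:]
    values = flat[idx]
    # Uniform quantization of the surviving values to 2^bits levels.
    levels = 2 ** bits - 1
    vmin, vmax = values.min(), values.max()
    scale = (vmax - vmin) / levels if vmax > vmin else 1.0
    q = np.round((values - vmin) / scale).astype(np.int64)
    # Dequantize (as the server would) and scatter back into a dense vector.
    recovered = np.zeros_like(flat)
    recovered[idx] = q * scale + vmin
    return recovered.reshape(grad.shape)

# Example: compress a 10^4-dimensional gradient to 1% density at 4 bits.
g = np.random.randn(10_000)
g_hat = compress_gradient(g, k=100, bits=4)
```

In this setup only the k index-value pairs are transmitted, so the uplink cost is governed jointly by the sparsification factor k and the quantization factor bits, matching the two knobs the abstract's analysis trades off.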
Abstract: The federated self-supervised framework is a distributed machine learning method that combines federated learning and self-supervised learning, and it effectively addresses the difficulty traditional federated learning has in processing large-scale unlabeled data. Existing federated self-supervised frameworks suffer from low communication efficiency and high communication latency between the clients and the central server. We therefore add edge servers to the framework to relieve the pressure that frequent communication between the two ends places on the central server. A communication compression scheme based on gradient quantization and sparsification is proposed to optimize communication across the entire framework, and the algorithm of the sparse communication compression module is improved. Experiments show that the learning-rate changes of the improved sparse communication compression module are smoother and more stable, and that our communication compression scheme effectively reduces the overall communication overhead.
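As an illustration of how inserting edge servers cuts traffic to the central server, here is a small sketch of hierarchical aggregation in the spirit of this framework: clients send their (already compressed) updates to an edge server, each edge forwards a single average, and the central server combines the edge averages weighted by client count. The function names and the plain-averaging rule are assumptions for illustration, not the paper's exact protocol.

```python
import numpy as np

def edge_aggregate(client_updates):
    """An edge server averages the compressed updates of its local
    clients, so the central server receives one message per edge
    server instead of one per client."""
    return np.mean(client_updates, axis=0)

def central_aggregate(edge_updates, edge_weights):
    """The central server combines edge-level averages, weighted by
    the number of clients behind each edge server."""
    w = np.asarray(edge_weights, dtype=float)
    w /= w.sum()
    return np.tensordot(w, np.stack(edge_updates), axes=1)

# Example: two edge servers fronting 3 and 5 clients, respectively.
edge_a = edge_aggregate([np.random.randn(8) for _ in range(3)])
edge_b = edge_aggregate([np.random.randn(8) for _ in range(5)])
global_update = central_aggregate([edge_a, edge_b], edge_weights=[3, 5])
```

The point of the design is that per-round traffic at the central server scales with the number of edge servers rather than the (much larger) number of clients.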
Funding: supported by the Qingdao People's Livelihood Science and Technology Plan (Grant 19-6-1-88-nsh).
Abstract: Image semantic segmentation has become an essential part of autonomous driving. To further improve the generalization ability and robustness of semantic segmentation algorithms, a lightweight network based on the Squeeze-and-Excitation attention mechanism (SE) and Depthwise Separable Convolution (DSC) is designed. Meanwhile, Adam-GC, an Adam optimization algorithm based on Gradient Compression (GC), is proposed to improve the training speed, segmentation accuracy, generalization ability, and stability of the network. To verify and compare the effectiveness of the proposed network, the trained model is evaluated on the Cityscapes semantic segmentation dataset. The validation and comparison results show that the network achieves 78.02% MIoU on the Cityscapes validation set, outperforming the baseline network and other recent semantic segmentation networks. Besides meeting the stability and accuracy requirements, this result is of particular significance for the development of image semantic segmentation.
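For readers unfamiliar with the two building blocks named above, the following PyTorch sketch combines a depthwise separable convolution with an SE channel-attention stage. The block layout, channel sizes, and reduction ratio are assumptions for illustration; the paper's actual architecture may arrange these components differently.

```python
import torch
import torch.nn as nn

class SEDepthwiseBlock(nn.Module):
    """Depthwise separable convolution followed by a Squeeze-and-
    Excitation channel-attention stage (illustrative arrangement)."""
    def __init__(self, in_ch, out_ch, reduction=16):
        super().__init__()
        # Depthwise separable conv = per-channel 3x3 + pointwise 1x1,
        # which is far cheaper than a dense 3x3 convolution.
        self.depthwise = nn.Conv2d(in_ch, in_ch, 3, padding=1, groups=in_ch)
        self.pointwise = nn.Conv2d(in_ch, out_ch, 1)
        # SE: squeeze spatial dims to 1x1, then excite per-channel weights.
        self.se = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(out_ch, out_ch // reduction, 1),
            nn.ReLU(inplace=True),
            nn.Conv2d(out_ch // reduction, out_ch, 1),
            nn.Sigmoid(),
        )

    def forward(self, x):
        x = self.pointwise(self.depthwise(x))
        # Rescale each channel by its learned attention weight.
        return x * self.se(x)

# Example: a 64->128 channel block applied to a 32x32 feature map.
block = SEDepthwiseBlock(64, 128)
y = block(torch.randn(1, 64, 32, 32))
```

DSC keeps the network lightweight while SE recovers representational power by reweighting channels, which is why the two are commonly paired in efficient segmentation backbones.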