A Weighted Average Consensus Approach for Decentralized Federated Learning
Authors: Alessandro Giuseppi, Sabato Manfredi, Antonio Pietrabissa. Machine Intelligence Research (EI, CSCD), 2022, Issue 4, pp. 319-330 (12 pages)
Federated learning (FedL) is a machine learning (ML) technique utilized to train deep neural networks (DeepNNs) in a distributed way without the need to share data among the federated training clients. FedL was proposed for edge computing and Internet of things (IoT) tasks in which a centralized server was responsible for coordinating and governing the training process. To remove the design limitation implied by the centralized entity, this work proposes two different solutions to decentralize existing FedL algorithms, enabling the application of FedL on networks with arbitrary communication topologies, and thus extending the domain of application of FedL to more complex scenarios and new tasks. Of the two proposed algorithms, one, called FedLCon, is developed based on results from discrete-time weighted average consensus theory and is able to reconstruct the performances of the standard centralized FedL solutions, as also shown by the reported validation tests.
Keywords: federated learning (FedL), deep learning, federated averaging (FedAvg), machine learning (ML), artificial intelligence, discrete-time consensus, distributed systems
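The abstract's building block, discrete-time weighted average consensus, can be illustrated with a short sketch. This is not the paper's FedLCon algorithm; it is a generic toy example, with a made-up 4-client ring topology, Metropolis mixing weights, and scalar "model parameters". Each client repeatedly mixes its value with its neighbors' via x(k+1) = W x(k); running consensus on both the size-weighted parameters and the sizes themselves recovers the FedAvg-style weighted average without any central server.

```python
import numpy as np

def metropolis_weights(adj):
    """Build a symmetric, doubly stochastic mixing matrix W from an
    undirected adjacency matrix (Metropolis-Hastings weights)."""
    n = adj.shape[0]
    deg = adj.sum(axis=1)
    W = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            if adj[i, j]:
                W[i, j] = 1.0 / (1.0 + max(deg[i], deg[j]))
        W[i, i] = 1.0 - W[i].sum()  # self-weight absorbs the remainder
    return W

def consensus(values, W, steps=200):
    """Iterate x(k+1) = W x(k); every entry converges to the average."""
    x = values.copy()
    for _ in range(steps):
        x = W @ x
    return x

# Hypothetical setup: 4 clients on a ring, each with a scalar parameter
# and a local dataset size (the FedAvg aggregation weight).
adj = np.array([[0, 1, 0, 1],
                [1, 0, 1, 0],
                [0, 1, 0, 1],
                [1, 0, 1, 0]], dtype=float)
W = metropolis_weights(adj)
params = np.array([1.0, 2.0, 3.0, 4.0])
sizes = np.array([10.0, 20.0, 30.0, 40.0])

# Weighted average via two plain consensus runs:
# mean(sizes * params) / mean(sizes) = sum(sizes * params) / sum(sizes)
num = consensus(sizes * params, W)
den = consensus(sizes, W)
theta_bar = num / den  # every client ends up with the weighted average 3.0
```

Because W is doubly stochastic, each consensus run converges to the plain arithmetic mean of its initial values, and the ratio of the two means equals the dataset-size-weighted average that centralized FedAvg would compute.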