Abstract
Distributed optimization has developed rapidly in recent years owing to its wide applications in machine learning and signal processing. In this paper, we investigate distributed optimization for minimizing a global objective that is the sum of smooth and strongly convex local cost functions distributed over an undirected network of n nodes. In contrast to existing works, we incorporate a distributed heavy-ball term to improve the convergence performance of the proposed algorithm. To accelerate the convergence of existing distributed stochastic first-order gradient methods, the momentum term is combined with a gradient-tracking technique. We show that the proposed algorithm achieves better acceleration than GT-SAGA without increasing the complexity. Extensive experiments on real-world datasets verify the effectiveness and correctness of the proposed algorithm.
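For concreteness, one plausible per-node recursion combining gradient tracking with a heavy-ball term is sketched below; the abstract does not state the exact update, so the doubly stochastic mixing matrix W = (w_{ij}), step size \alpha, momentum parameter \beta, and SAGA-type local stochastic gradient estimator g_i^k are illustrative assumptions, not the authors' stated method:

% Hypothetical sketch (assumed form, not taken from the paper):
% x_i^k is node i's estimate, y_i^k tracks the network-average gradient.
\begin{aligned}
x_i^{k+1} &= \sum_{j=1}^{n} w_{ij}\, x_j^{k} - \alpha\, y_i^{k} + \beta\,\bigl(x_i^{k} - x_i^{k-1}\bigr), \\
y_i^{k+1} &= \sum_{j=1}^{n} w_{ij}\, y_j^{k} + g_i^{k+1} - g_i^{k}.
\end{aligned}

Here \beta (x_i^k - x_i^{k-1}) is the heavy-ball momentum term, while the y-update is the standard gradient-tracking correction that propagates changes in the local stochastic gradient estimates across the network.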
Funding
Project supported by the Open Research Fund Program of Data Recovery Key Laboratory of Sichuan Province,China(No.DRN2001)
the National Natural Science Foundation of China(Nos.61773321 and 61762020)
the Science and Technology Top-Notch Talents Support Project of Colleges and Universities in Guizhou Province,China(No.QJHKY2016065)
the Science and Technology Foundation of Guizhou Province,China(No.QKHJC20181083)
the Science and Technology Talents Fund for Excellent Young of Guizhou Province, China (No. QKHPTRC20195669).