Journal Articles
1 article found
Push-Pull Finite-Time Convergence Distributed Optimization Algorithm (cited by: 1)
Authors: Xiaobiao Chen, Kaixin Yan, Yu Gao, Xuefeng Xu, Kang Yan, Jing Wang. American Journal of Computational Mathematics, 2020, Issue 1, pp. 118-146 (29 pages).
Abstract: With the widespread application of distributed systems, many problems urgently need to be solved, and the design of distributed optimization strategies has become a research hotspot. This article focuses on the convergence rate of distributed convex optimization algorithms. Each agent in the network has its own convex cost function; we consider a gradient-based distributed method and use a push-pull gradient algorithm to minimize the total cost function. Inspired by current multi-agent consensus protocols for distributed convex optimization, a distributed convex optimization algorithm with finite-time convergence is proposed and studied. Finally, for a fixed undirected network topology, a fast-converging distributed cooperative learning method based on a linearly parameterized neural network is proposed. Unlike existing distributed convex optimization algorithms, which achieve at best exponential convergence, the proposed algorithm achieves finite-time convergence, and its convergence is guaranteed by the Lyapunov method. Simulation examples illustrate the effectiveness of the algorithm, and comparisons show it is competitive with other algorithms.
Keywords: distributed optimization; finite-time convergence; linearly parameterized neural network; push-pull algorithm; undirected graph
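To make the push-pull idea in the abstract concrete, the following is a minimal sketch of the baseline push-pull / gradient-tracking iteration it builds on, not the paper's finite-time or neural-network variant. All specifics here are illustrative assumptions: four agents with quadratic costs f_i(x) = 0.5*(x - b_i)^2 on an undirected ring, and a single doubly stochastic mixing matrix W (on an undirected graph, one matrix can serve both the "pull" step on decision variables and the "push" step on gradient trackers).

```python
import numpy as np

# Hypothetical setup (not from the paper): 4 agents, each holding a private
# quadratic cost f_i(x) = 0.5 * (x - b_i)^2; the minimizer of sum_i f_i
# is the mean of the b_i.
b = np.array([1.0, 3.0, 5.0, 7.0])
n = len(b)

def grad(x):
    """Stacked local gradients: component i is f_i'(x_i) = x_i - b_i."""
    return x - b

# Doubly stochastic mixing matrix for an undirected ring (Metropolis-style
# weights). Row-stochasticity drives consensus on x ("pull"); column-
# stochasticity preserves the sum of the trackers y ("push").
W = np.array([[0.5,  0.25, 0.0,  0.25],
              [0.25, 0.5,  0.25, 0.0 ],
              [0.0,  0.25, 0.5,  0.25],
              [0.25, 0.0,  0.25, 0.5 ]])

alpha = 0.1          # step size (assumed, tuned for this toy problem)
x = np.zeros(n)      # local decision variables
y = grad(x)          # gradient trackers, initialized to the local gradients

for _ in range(200):
    x_next = W @ (x - alpha * y)          # pull: mix, then step along tracker
    y = W @ y + grad(x_next) - grad(x)    # push: track the average gradient
    x = x_next

print(x)  # each agent's estimate approaches the mean of b, i.e. 4.0
```

Because W is column-stochastic, the sum of the trackers y always equals the sum of the current local gradients, so each y_i converges to the average gradient; this is the mechanism that lets every agent descend the global objective while only exchanging values with its neighbors. The paper's contribution, per the abstract, is replacing this exponential-rate scheme with one whose convergence is finite-time.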