
Distributed gradient-based consensus optimization algorithm and convergence analysis

Cited by: 1
Abstract: A distributed optimization problem is solved cooperatively by a network of agents and has significant applications in many fields of science and engineering, such as metallurgical engineering. Complex industrial processes with multi-level characteristics, varying working conditions, and long process flows involve numerous micro- and macro-level optimization and decision-making problems, such as product quality control, production planning, scheduling, and comprehensive energy deployment. The theory and methods of distributed optimization are key to promoting strategic decision-making for the integration of industrialization and the new-generation industrial revolution. Their development enhances the ability to handle large-scale, complex big-data problems, which has important practical value and economic benefits. In this study, consensus optimization with set constraints in multi-agent networks was explored, and a distributed algorithm with a fixed step size was proposed on the basis of a primal-dual gradient scheme. Parameters of the algorithm, including the step size, affect its convergence; therefore, convergence should be analyzed first, and appropriate parameters should then be set in accordance with the convergence conditions. Existing works construct different Lyapunov functions by exploiting the specific iteration scheme of a given algorithm and then analyze its convergence. In contrast, this study proposes a convergence analysis paradigm based on a Lyapunov function for general fixed-step-size iteration schemes, which is similar to the Lyapunov stability analysis method for general differential equations.
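The fixed-step Lyapunov paradigm described in the abstract can be illustrated with a toy numerical check. The objective, the candidate function V, and the step-size threshold below are illustrative assumptions, not taken from the paper: the point is only that requiring V to decrease along the iterates of a fixed-step scheme is what yields an admissible step-size range.

```python
# Sketch of a Lyapunov-style convergence check for a general fixed-step
# iteration x_{k+1} = T(x_k): pick a candidate Lyapunov function V and verify
# that V decreases along the iterates. Here T is plain gradient descent on a
# toy quadratic and V(x) = f(x); both are illustrative, not the paper's choices.

def f(x):
    # smooth convex objective with L-Lipschitz gradient, L = 4
    return 2 * x**2

def grad(x):
    return 4 * x

def check_decrease(alpha, x0=5.0, iters=50):
    """Return True if V(x) = f(x) strictly decreases at every iteration."""
    x = x0
    for _ in range(iters):
        x_next = x - alpha * grad(x)
        if f(x_next) >= f(x) and f(x) > 1e-12:  # ignore round-off near optimum
            return False
        x = x_next
    return True

# For gradient descent on f, the admissible range is 0 < alpha < 2/L = 0.5
print(check_decrease(0.2))   # inside the admissible range -> True
print(check_decrease(0.6))   # step too large, V fails to decrease -> False
```

The analytical version of this check, carried out symbolically on a suitably constructed V, is what produces the parameter setting range referred to in the abstract.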
A suitable Lyapunov function was constructed for the distributed gradient algorithm under consideration, and a parameter setting range was obtained in accordance with the convergence conditions. The proposed method avoids tedious and complicated case-by-case analysis of algorithm convergence and parameter assignment. The theory and method presented in this study also provide a framework and a systematic line of argument for other types of distributed algorithms, and may be regarded as a direction for future work in distributed optimization.
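To make the setting concrete, the following is a minimal sketch of a fixed-step, projected primal-dual gradient iteration for set-constrained consensus optimization. The quadratic local objectives, the box constraint, the ring graph, and the step size are all illustrative assumptions; the paper's exact iteration and parameter conditions may differ.

```python
import numpy as np

# Minimal sketch: n agents cooperatively minimize sum_i f_i(x) over a set X,
# with f_i(x) = 0.5*(x - b_i)^2 and X = [0, 10] (both illustrative).
# Consensus is enforced through the graph Laplacian L in a primal-dual scheme.

def primal_dual_consensus(b, L, alpha=0.1, iters=2000):
    n = len(b)
    x = np.zeros(n)        # primal variables, one per agent
    lam = np.zeros(n)      # dual variables for the consensus constraint L x = 0
    for _ in range(iters):
        grad = x - b                           # local gradients of f_i
        x_new = x - alpha * (grad + L @ x + L @ lam)
        x_new = np.clip(x_new, 0.0, 10.0)      # projection onto the set X
        lam = lam + alpha * (L @ x)            # dual ascent step
        x = x_new
    return x

# Ring graph over 4 agents
A = np.array([[0, 1, 0, 1],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [1, 0, 1, 0]], dtype=float)
L = np.diag(A.sum(axis=1)) - A
b = np.array([1.0, 3.0, 5.0, 7.0])
x = primal_dual_consensus(b, L)
# all agents approach the common minimizer mean(b) = 4.0
```

For this toy instance the fixed step alpha = 0.1 lies inside the convergent range; choosing it too large makes the iteration diverge, which is precisely the behavior the Lyapunov-based parameter conditions rule out.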
Authors: LIANG Shu; PENG Kaixiang (Key Laboratory of Knowledge Automation for Industrial Processes, Ministry of Education, Beijing 100083, China; School of Automation, University of Science and Technology Beijing, Beijing 100083, China)
Source: Chinese Journal of Engineering (工程科学学报), indexed in EI and CSCD, Peking University Core Journal, 2020, No. 4, pp. 434-440 (7 pages)
Funding: Supported by the National Natural Science Foundation of China (61903027, 61873024).
Keywords: distributed optimization; primal-dual; convergence analysis; Lyapunov function; consensus