Journal Articles: 6 results found
An accelerated augmented Lagrangian method for linearly constrained convex programming with the rate of convergence O(1/k^2) (Cited by: 1)
Authors: KE Yi-fen, MA Chang-feng. Applied Mathematics (A Journal of Chinese Universities), SCIE, CSCD, 2017, No. 1, pp. 117-126.
In this paper, we propose and analyze an accelerated augmented Lagrangian method (denoted by AALM) for solving linearly constrained convex programming problems. We show that the convergence rate of AALM is O(1/k^2), while the convergence rate of the classical augmented Lagrangian method (ALM) is O(1/k). Numerical experiments on the linearly constrained l1-l2 minimization problem are presented to demonstrate the effectiveness of AALM.
Keywords: convex, augmented, constrained, minimization, accelerated, Lagrangian, linearly, iteration, sparse, stopping
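To make the acceleration idea concrete, here is a minimal Python sketch of an accelerated multiplier scheme in the spirit of AALM: the classical ALM multiplier step is combined with a Nesterov-type extrapolation, for a toy quadratic objective whose augmented-Lagrangian subproblem has a closed-form solution. Everything here (function name, parameter choices, the quadratic test problem) is an illustrative assumption, not the authors' exact algorithm.

```python
# Sketch of an accelerated augmented Lagrangian iteration for
#   min 0.5*||x - c||^2  s.t.  A x = b,
# with Nesterov-type extrapolation on the multiplier (illustrative only).
import numpy as np

def aalm_sketch(A, b, c, beta=1.0, iters=200):
    m, n = A.shape
    lam = lam_hat = np.zeros(m)                    # multiplier and extrapolated copy
    t = 1.0                                        # momentum parameter
    M = np.linalg.inv(np.eye(n) + beta * A.T @ A)  # fixed subproblem solver
    for _ in range(iters):
        # x-update: closed-form minimizer of the augmented Lagrangian in x
        x = M @ (c - A.T @ lam_hat + beta * A.T @ b)
        lam_new = lam_hat + beta * (A @ x - b)     # multiplier ascent step
        t_new = 0.5 * (1.0 + np.sqrt(1.0 + 4.0 * t * t))
        lam_hat = lam_new + ((t - 1.0) / t_new) * (lam_new - lam)  # extrapolate
        lam, t = lam_new, t_new
    return x, lam

# Tiny usage example on random data
rng = np.random.default_rng(0)
A = rng.standard_normal((3, 8)); b = rng.standard_normal(3)
c = rng.standard_normal(8)
x, lam = aalm_sketch(A, b, c)
print("feasibility residual:", np.linalg.norm(A @ x - b))
```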
Global and Local Convergence of a New Affine Scaling Trust Region Algorithm for Linearly Constrained Optimization (Cited by: 1)
Authors: Chao GU, De Tong ZHU. Acta Mathematica Sinica, English Series, SCIE, CSCD, 2016, No. 10, pp. 1203-1213.
Chen and Zhang [Sci. China, Ser. A, 45, 1390-1397 (2002)] introduced an affine scaling trust region algorithm for linearly constrained optimization and analyzed its global convergence. In this paper, we derive a new affine scaling trust region algorithm with a dwindling filter for linearly constrained optimization. Different from Chen and Zhang's work, the trial points generated by the new algorithm are accepted if they improve the objective function or improve the first-order necessary optimality conditions. Under mild conditions, we discuss both the global and local convergence of the new algorithm. Preliminary numerical results are reported.
Keywords: linearly constrained optimization, affine scaling, trust region, dwindling filter, convergence
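The acceptance rule described in the abstract, accepting a trial point if it improves either the objective or a first-order optimality measure, is the essence of a filter method. Below is a toy Python sketch of such a filter test with a small margin; it illustrates the mechanism only and is not the paper's exact dwindling-filter rule (the function name and the margin gamma are assumptions).

```python
# Toy filter acceptance test: a trial pair (f, h) is accepted unless it is
# dominated, within margin gamma, by some pair already stored in the filter.
# Here f is the objective value and h a first-order optimality measure.
def filter_accept(f_trial, h_trial, filter_pairs, gamma):
    """Accept if (f_trial, h_trial) is not dominated by any stored pair."""
    for f_i, h_i in filter_pairs:
        if f_trial >= f_i - gamma * h_i and h_trial >= h_i - gamma * h_i:
            return False   # dominated by an existing filter entry
    return True

# Usage: maintain the filter across iterations
filter_pairs = [(10.0, 1.0)]
gamma = 1e-3
print(filter_accept(9.5, 0.8, filter_pairs, gamma))   # True: improves both measures
print(filter_accept(10.5, 1.1, filter_pairs, gamma))  # False: dominated
```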
A BLOCK-COORDINATE DESCENT METHOD FOR LINEARLY CONSTRAINED MINIMIZATION PROBLEM (Cited by: 1)
Authors: Xuefang Liu, Zheng Peng. Annals of Applied Mathematics, 2018, No. 2, pp. 138-152.
In this paper, a block coordinate descent method is developed to solve a linearly constrained separable convex optimization problem. The proposed method divides the decision variable into a few blocks based on certain rules, and the candidate solution is then obtained iteratively by updating one block at each iteration. The question of whether or not there are overlapping regions between two immediately adjacent blocks is investigated. The global convergence of the proposed method is established under suitable assumptions. Numerical results show that the proposed method is effective compared with some "full-type" methods.
Keywords: linearly constrained optimization, block coordinate descent, Gauss-Seidel fashion
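A minimal sketch of the Gauss-Seidel block-update pattern: each sweep minimizes a convex quadratic exactly over one block while the other blocks are held at their freshest values. The paper treats linearly constrained separable problems; this unconstrained toy only illustrates the block-splitting and cyclic-update mechanism, and all names and parameters are assumptions.

```python
# Gauss-Seidel block coordinate descent on 0.5*x'Qx - p'x:
# one exact block minimization per block, per sweep (illustrative only).
import numpy as np

def bcd_quadratic(Q, p, blocks, sweeps=50):
    n = Q.shape[0]
    x = np.zeros(n)
    for _ in range(sweeps):
        for idx in blocks:                      # Gauss-Seidel: use fresh values
            idx = np.asarray(idx)
            rest = np.setdiff1d(np.arange(n), idx)
            # Exact minimization over this block with the others fixed
            rhs = p[idx] - Q[np.ix_(idx, rest)] @ x[rest]
            x[idx] = np.linalg.solve(Q[np.ix_(idx, idx)], rhs)
    return x

rng = np.random.default_rng(1)
M = rng.standard_normal((6, 6)); Q = M.T @ M + 6 * np.eye(6)  # SPD matrix
p = rng.standard_normal(6)
x = bcd_quadratic(Q, p, blocks=[[0, 1], [2, 3], [4, 5]])
print("gradient norm:", np.linalg.norm(Q @ x - p))
```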
A POTENTIAL REDUCTION ALGORITHM FOR LINEARLY CONSTRAINED CONVEX PROGRAMMING
Author: Liang Ximing (College of Information Science & Engineering, Central South Univ., Changsha 410083). Applied Mathematics (A Journal of Chinese Universities), SCIE, CSCD, 2001, No. 4, pp. 439-445.
A potential reduction algorithm is proposed for optimization of a convex function subject to linear constraints. At each step of the algorithm, a system of linear equations is solved to get a search direction, and Armijo's rule is used to determine a stepsize. It is proved that the algorithm is globally convergent. Computational results are reported.
Keywords: potential reduction algorithm, linearly constrained convex programming, global convergence, numerical experiments
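The Armijo stepsize rule mentioned in the abstract is standard and easy to sketch: starting from a unit step, backtrack geometrically until the objective decreases by at least a fixed fraction of the predicted linear decrease. The following Python snippet is a generic illustration, not tied to the paper's particular potential function.

```python
# Armijo backtracking line search along a descent direction d from point x.
import numpy as np

def armijo(phi, grad_phi_x, x, d, alpha=1.0, rho=0.5, sigma=1e-4):
    """Shrink alpha until the sufficient-decrease condition holds."""
    phi_x = phi(x)
    slope = grad_phi_x @ d          # directional derivative; should be < 0
    while phi(x + alpha * d) > phi_x + sigma * alpha * slope:
        alpha *= rho                # shrink the step geometrically
    return alpha

# Usage on a simple convex function
phi = lambda x: np.sum(x**2)
x = np.array([2.0, -1.0]); g = 2 * x; d = -g
step = armijo(phi, g, x, d)
print("Armijo step:", step, "new value:", phi(x + step * d))
```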
A ROBUST SUPERLINEARLY CONVERGENT ALGORITHM FOR LINEARLY CONSTRAINED OPTIMIZATION PROBLEMS UNDER DEGENERACY
Authors: 曾庆光, 贺国平, 吴方. Acta Mathematicae Applicatae Sinica, SCIE, CSCD, 1998, No. 4, pp. 363-373.
In this paper, the problem of minimizing a convex function subject to general linear constraints is considered. An algorithm which extends the method described in [4] is presented, and a new dual simplex procedure with a lexicographic scheme is proposed to deal with the degenerate case, in the sense that the gradients of the active constraints at the iteration point are dependent. Unlike other methods, the new algorithm possesses the following important property: at any iteration point generated by the algorithm, one can choose a most suitable basis and, from it, drop all constraints that can be relaxed, not just one constraint at a time. This property helps decrease the computational cost of the algorithm. The global convergence and superlinear convergence of the algorithm are proved without any assumption of linear independence of the gradients of the active constraints.
Keywords: linearly constrained optimization problem, degeneracy, dual simplex method, superlinear convergence
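The lexicographic scheme is a classical anti-cycling device for degenerate pivots: when several rows tie in the usual ratio test, the rows scaled by their pivot entries are compared lexicographically and the lexico-smallest one wins. The Python toy below illustrates only that tie-breaking step, under assumed tableau conventions; the paper's full dual simplex procedure is considerably richer.

```python
# Lexicographic tie-breaking in a simplex ratio test (illustrative only).
import numpy as np

def lexico_ratio_test(T, col):
    """Pick the pivot row for entering column `col` of tableau T.

    T holds the right-hand side in column 0; only rows with a positive
    pivot entry are eligible. Ties in b_i / a_i are broken by comparing
    the scaled rows lexicographically.
    """
    eligible = [i for i in range(T.shape[0]) if T[i, col] > 1e-12]
    scaled = {i: T[i, :] / T[i, col] for i in eligible}
    return min(eligible, key=lambda i: tuple(scaled[i]))

T = np.array([[2.0, 1.0, 1.0],
              [2.0, 1.0, 0.0],   # same ratio in column 1: a degenerate tie
              [5.0, 1.0, 2.0]])
print("pivot row:", lexico_ratio_test(T, col=1))  # row 1 is lexico-smallest
```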
PPA-Like Contraction Methods for Convex Optimization: A Framework Using Variational Inequality Approach (Cited by: 1)
Author: Bing-Sheng He. Journal of the Operations Research Society of China, EI, CSCD, 2015, No. 4, pp. 391-420.
Linearly constrained convex optimization has many applications. The first-order optimality condition of the linearly constrained convex optimization problem is a monotone variational inequality (VI). For solving VIs, the proximal point algorithm (PPA) in the Euclidean norm is classical but abstract; hence, the classical PPA plays an important theoretical role but is rarely used in practical scientific computation. In this paper, we give a review of the recently developed customized PPA in the H-norm (H is a positive definite matrix). In the framework of the customized PPA, it is easy to construct contraction-type methods for convex optimization with different linear constraints. In each iteration of the proposed methods, we need only solve proximal subproblems which have closed-form solutions or can be efficiently solved to high precision. Some novel applications and numerical experiments are reported. Additionally, the original primal-dual hybrid gradient method is modified into a convergent algorithm by using a prediction-correction uniform framework. Using the variational inequality approach, the contractive convergence and convergence-rate proofs of the framework are more general and quite simple.
Keywords: linearly constrained convex optimization, variational inequality, proximal point algorithm
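A minimal sketch of a customized-PPA-style primal-dual iteration for an equality-constrained quadratic: a proximal primal step followed by a dual step at the extrapolated point 2*x_new - x_old, with parameters chosen so that the implicit metric H is positive definite (r*s > ||A||^2). This is an illustrative instance of the framework's flavor, assuming a simple quadratic objective, not the paper's general method.

```python
# Customized-PPA-style primal-dual iteration for
#   min 0.5*||x - c||^2  s.t.  A x = b   (illustrative sketch only).
import numpy as np

def customized_ppa(A, b, c, iters=500):
    m, n = A.shape
    norm_A = np.linalg.norm(A, 2)          # spectral norm of A
    r = s = norm_A * 1.01                  # ensures r*s > ||A||^2
    x, lam = np.zeros(n), np.zeros(m)
    for _ in range(iters):
        # Primal proximal step (closed form for this quadratic objective)
        x_new = (r * x - A.T @ lam + c) / (r + 1.0)
        # Dual step at the extrapolated primal point 2*x_new - x
        lam = lam + (A @ (2 * x_new - x) - b) / s
        x = x_new
    return x, lam

rng = np.random.default_rng(2)
A = rng.standard_normal((3, 7)); b = rng.standard_normal(3)
c = rng.standard_normal(7)
x, lam = customized_ppa(A, b, c)
print("||Ax - b|| =", np.linalg.norm(A @ x - b))
```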