Chen and Zhang [Sci. China, Ser. A, 45, 1390-1397 (2002)] introduced an affine scaling trust region algorithm for linearly constrained optimization and analyzed its global convergence. In this paper, we derive a new affine scaling trust region algorithm with a dwindling filter for linearly constrained optimization. Unlike Chen and Zhang's work, the trial points generated by the new algorithm are accepted if they improve either the objective function or the first-order necessary optimality conditions. Under mild conditions, we discuss both the global and local convergence of the new algorithm. Preliminary numerical results are reported.
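To make the acceptance rule above concrete, here is a minimal Python sketch of a filter-style acceptance test: a trial point is kept if it improves either the objective value or a first-order stationarity measure. The function names, the margin parameter, and the choice of stationarity measure are illustrative assumptions; the paper's dwindling-filter test is more refined.

```python
# Minimal sketch of filter-style acceptance for a trust-region trial point:
# accept if the objective OR a first-order stationarity measure improves.
# (Illustrative only; the paper's dwindling filter is more refined.)
import numpy as np

def accept_trial(f_old, f_new, stat_old, stat_new, margin=1e-8):
    """Accept when the objective or the stationarity measure improves."""
    return (f_new <= f_old - margin) or (stat_new <= stat_old - margin)

# Toy usage: f(x) = ||x||^2 with stationarity measured by ||grad f(x)||.
f = lambda z: float(z @ z)
stat = lambda z: float(np.linalg.norm(2.0 * z))
x = np.array([1.0, -2.0])
trial = 0.5 * x                        # candidate step toward the minimizer
print(accept_trial(f(x), f(trial), stat(x), stat(trial)))   # True
```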
In this paper, the problem of minimizing a convex function subject to general linear constraints is considered. An algorithm that extends the method described in [4] is presented, and a new dual simplex procedure with a lexicographic scheme is proposed to handle the degenerate case in which the gradients of the active constraints at the iteration point are dependent. Unlike other methods, the new algorithm has the important property that, at any iteration point it generates, one can choose a most suitable basis and, from it, drop all constraints that can be relaxed, rather than only one constraint at a time. This property helps reduce the computational cost of the algorithm. The global convergence and superlinear convergence of the algorithm are proved without any assumption of linear independence of the gradients of the active constraints.
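The "drop all relaxable constraints at once" property can be illustrated with a small Python sketch: given multiplier estimates for the active constraints, every constraint whose multiplier is negative leaves the active set in a single step, instead of one per iteration as in a textbook active-set method. This is a generic illustration under that reading of the abstract, not the paper's dual simplex procedure.

```python
# Sketch of dropping several relaxable constraints at once: all active
# constraints with negative multiplier estimates leave the active set in
# one step. (Generic illustration, not the paper's dual simplex procedure.)
import numpy as np

def drop_relaxable(active_idx, multipliers, tol=1e-10):
    """Return the active set with every negative-multiplier constraint removed."""
    return [i for i, lam in zip(active_idx, multipliers) if lam >= -tol]

print(drop_relaxable([0, 2, 5], np.array([0.3, -1.2, -0.4])))   # [0]
```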
In this paper, a primal-dual path-following interior-point algorithm for linearly constrained convex optimization (LCCO) is presented. The algorithm is based on a new technique for finding a class of search directions and on the strategy of the central path. At each iteration, only full Newton steps are used. Finally, the favorable polynomial complexity bound for the algorithm with the small-update method is derived, namely O(√n log(n/ε)).
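For a sense of what the small-update bound means in practice, the snippet below evaluates O(√n log(n/ε)) for a few problem sizes, with the hidden constant set to 1 as an assumption made purely for illustration:

```python
# Evaluate the small-update iteration bound sqrt(n) * log(n / eps) for a
# few sizes; the leading constant is set to 1 purely for illustration.
import math

def ipm_iteration_bound(n, eps):
    return math.sqrt(n) * math.log(n / eps)

for n in (10, 100, 1000):
    print(n, round(ipm_iteration_bound(n, 1e-6), 1))
```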
In this paper, a block coordinate descent method is developed to solve a linearly constrained separable convex optimization problem. The proposed method divides the decision variable into a few blocks based on certain rules, and the candidate solution is obtained iteratively by updating one block at each iteration. Whether or not there are overlapping regions between two immediately adjacent blocks is also investigated. The global convergence of the proposed method is established under suitable assumptions. Numerical results show that the proposed method is effective compared with some "full-type" methods.
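The basic update pattern of block coordinate descent can be sketched as follows: split the variable into blocks and exactly minimize over one block per step with the others fixed. The least-squares objective, the two-block split, and the sweep count are assumptions for illustration; the paper's block-selection rules and constraint handling are omitted.

```python
# Cyclic block coordinate descent on f(x) = ||Mx - c||^2: each inner step
# exactly minimizes over one coordinate block with the others held fixed.
# (Illustrative; the paper's block rules and constraint handling are omitted.)
import numpy as np

rng = np.random.default_rng(0)
M = rng.standard_normal((20, 6))
c = rng.standard_normal(20)
blocks = [list(range(0, 3)), list(range(3, 6))]   # two coordinate blocks
x = np.zeros(6)

for sweep in range(50):
    for idx in blocks:
        # Residual with the current block's contribution removed.
        r = c - M @ x + M[:, idx] @ x[idx]
        # Exact block minimization is a small least-squares solve.
        x[idx], *_ = np.linalg.lstsq(M[:, idx], r, rcond=None)

print(np.linalg.norm(M @ x - c))   # approaches the full least-squares residual
```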
Linearly constrained convex optimization has many applications. The first-order optimality condition of a linearly constrained convex optimization problem is a monotone variational inequality (VI). For solving VIs, the proximal point algorithm (PPA) in the Euclidean norm is classical but abstract; hence, the classical PPA plays mainly a theoretical role and is rarely used in practical scientific computation. In this paper, we review the recently developed customized PPA in the H-norm (where H is a positive definite matrix). Within the customized PPA framework, it is easy to construct contraction-type methods for convex optimization with different linear constraints. In each iteration of the proposed methods, one only needs to solve proximal subproblems that have closed-form solutions or can be solved efficiently to high precision. Some novel applications and numerical experiments are reported. Additionally, the original primal-dual hybrid gradient (PDHG) method is modified into a convergent algorithm by means of a prediction-correction uniform framework. Using the variational inequality approach, the contraction-type convergence and convergence-rate proofs for the framework are more general and quite simple.
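As one concrete instance of the customized-PPA viewpoint, the sketch below runs a primal-dual iteration with extrapolation (the prediction-correction flavor mentioned above) on min (1/2)||x - d||² subject to Ax = b, where the proximal x-subproblem has a closed-form solution. The step-size rule r·s > ||A||² is a standard sufficient condition for this kind of scheme; the concrete parameters here are assumptions for illustration, not the paper's tuned choices.

```python
# Primal-dual iteration with extrapolation (prediction-correction flavor)
# for min (1/2)||x - d||^2  s.t.  Ax = b; the x-subproblem is closed form.
# Step sizes satisfy r*s > ||A||_2^2, a standard sufficient condition.
# (Illustrative parameterization, not the paper's tuned choice.)
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((3, 8))
b = rng.standard_normal(3)
d = rng.standard_normal(8)

r = s = 1.1 * np.linalg.norm(A, 2)     # then r*s = 1.21 * ||A||_2^2
x, lam = np.zeros(8), np.zeros(3)

for k in range(500):
    x_old = x
    # Primal step: closed-form minimizer of the proximal subproblem
    #   argmin_x (1/2)||x - d||^2 - lam^T A x + (r/2)||x - x_old||^2.
    x = (d + A.T @ lam + r * x_old) / (1.0 + r)
    # Dual step uses the extrapolated point 2x - x_old.
    lam = lam + (b - A @ (2 * x - x_old)) / s

print(np.linalg.norm(A @ x - b))       # feasibility residual, should be small
```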
Funding (first paper): Supported by the National Natural Science Foundation of China (Grant Nos. 11201304 and 11371253) and the Innovation Program of Shanghai Municipal Education Commission.
Funding (third paper): Supported by the Shanghai Pujiang Program (Grant No. 06PJ14039) and the Science Foundation of Shanghai Municipal Commission of Education (Grant No. 06NS031).
Funding (fourth paper): Supported by the Natural Science Foundation of China (Grant No. 11571074) and the Natural Science Foundation of Fujian Province (Grant No. 2015J01010).