Abstract: This paper proposes two kinds of approximate proximal point algorithms (APPA) for monotone variational inequalities, both of which can be viewed as extended versions of Solodov and Svaiter's APPA in the paper "Error bounds for proximal point subproblems and associated inexact proximal point algorithms" published in 2000. Both are prediction-correction methods that use the same inexactness restriction; the only difference is that they use different search directions in the correction steps. The paper also chooses an optimal step size in the two versions of the APPA to improve the progress made at each iteration. The analysis shows that the two APPAs are globally convergent under appropriate assumptions, and that Algorithm 2 can be expected to make more progress per iteration than Algorithm 1. Numerical experiments indicate that Algorithm 2 is more efficient than Algorithm 1 with the same correction step size.
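As a rough illustration of the prediction-correction structure described in this abstract, the sketch below runs a generic projection-based predictor followed by a correction step whose length is the standard contraction-maximizing step size used in many APPA variants. It is a minimal sketch, not the paper's exact scheme; the map F, the projection project, the parameter beta, and the toy linear variational inequality are all assumptions introduced here for illustration.

```python
import numpy as np

def appa_step(x, F, project, beta=1.0):
    """One hedged prediction-correction iteration for VI(Omega, F).

    Prediction: a projection-type proximal step.
    Correction: move along an extra-gradient-type direction with the
    ||d||^2-normalized step size that maximizes the guaranteed decrease
    of the distance to the solution set (illustrative only; not the
    exact scheme of the cited paper).
    """
    # Prediction step: x_tilde approximately solves the proximal subproblem.
    x_tilde = project(x - beta * F(x))

    # Search direction combining the residual and the operator change.
    d = (x - x_tilde) - beta * (F(x) - F(x_tilde))

    # "Optimal" step size under monotonicity of F (guarded against d = 0).
    alpha = np.dot(x - x_tilde, d) / max(np.dot(d, d), 1e-16)

    return x - alpha * d


# Toy usage on a linear monotone VI over the nonnegative orthant (assumed data).
M = np.array([[2.0, 1.0], [-1.0, 2.0]])   # symmetric part is positive definite, so F is monotone
q = np.array([-1.0, -1.0])
F = lambda x: M @ x + q
project = lambda y: np.maximum(y, 0.0)    # projection onto Omega = R^n_+

x = np.zeros(2)
for _ in range(200):
    x = appa_step(x, F, project, beta=0.3)
print(x)                                  # approaches the solution (0.2, 0.6)
```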
Fund: Project (No. 1027054) supported by the National Natural Science Foundation of China.
Abstract: Proximal point algorithms (PPA) are attractive methods for solving monotone variational inequalities (MVI). Since solving the subproblem exactly in each iteration is costly or sometimes impossible, various approximate versions of PPA (APPA) have been developed for practical applications. In this paper, we compare two APPA methods, both of which can be viewed as prediction-correction methods; the only difference is that they use different search directions in the correction step. By extending the general forward-backward splitting methods, we obtain Algorithm Ⅰ; in the same way, Algorithm Ⅱ is obtained by extending the general extra-gradient methods. Our analysis explains theoretically why Algorithm Ⅱ usually outperforms Algorithm Ⅰ. For computational practice, we consider a class of MVI with a special structure and implement an extended Algorithm Ⅱ, inspired by the Gauss-Seidel iteration's idea of making full use of the latest information. In particular, self-adaptive techniques are adopted to adjust the relevant parameters for faster convergence. Finally, some numerical experiments on the separable MVI are reported. The numerical results show that the extended Algorithm Ⅱ is feasible and easy to implement, with a relatively low computational load.
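The contrast between the two correction directions, together with a self-adaptive adjustment of the proximal parameter, can be sketched as follows. This is a schematic reading of the abstract, assuming the usual forward-backward versus extra-gradient directions and a commonly used ratio test for beta; the constants nu, shrink, and grow are hypothetical, and the paper's actual rule may differ.

```python
import numpy as np

def correction_directions(x, x_tilde, Fx, Fx_tilde, beta):
    """The two kinds of search directions contrasted in the abstract
    (a schematic reading, not the paper's exact definitions)."""
    d1 = x - x_tilde                              # forward-backward flavour (Algorithm I)
    d2 = (x - x_tilde) - beta * (Fx - Fx_tilde)   # extra-gradient flavour (Algorithm II)
    return d1, d2

def self_adaptive_beta(beta, x, x_tilde, Fx, Fx_tilde, nu=0.9, shrink=0.8, grow=1.25):
    """One common self-adaptive rule: keep the ratio
    r = beta * ||F(x) - F(x_tilde)|| / ||x - x_tilde|| near nu.
    (Hypothetical constants; the paper may use a different rule.)"""
    denom = np.linalg.norm(x - x_tilde)
    if denom == 0.0:
        return beta                    # x already solves the proximal subproblem
    r = beta * np.linalg.norm(Fx - Fx_tilde) / denom
    if r > nu:
        return beta * shrink           # subproblem too "stiff": decrease beta
    if r < 0.4:
        return beta * grow             # overly conservative: increase beta
    return beta
```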
Fund: This work was supported by the National Natural Science Foundation of China (No. 11571171).
Abstract: The problem of simultaneously correcting the mass and stiffness matrices of a finite element model of undamped structural systems using vibration tests is considered in this paper. The desired matrix properties, including satisfaction of the characteristic equation, symmetry, positive semidefiniteness, and sparsity, are imposed as side constraints to form the optimal matrix pencil approximation problem. Using partial Lagrangian multipliers, we transform the nonlinearly constrained optimization problem into an equivalent matrix linear variational inequality, develop a proximal point-like method for solving the matrix linear variational inequality, and analyze its global convergence. Numerical results are included to illustrate the performance and application of the proposed method.
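A schematic statement of the kind of constrained matrix pencil approximation problem described above is the following, assuming measured eigendata (X, Λ) from the vibration tests and analytic mass/stiffness matrices (M_a, K_a); the Frobenius-norm objective, the equal weighting, and the sparsity set S are assumptions introduced here, and the paper's exact formulation may differ.

```latex
\begin{aligned}
\min_{M,\,K}\quad & \tfrac12\|M - M_a\|_F^2 + \tfrac12\|K - K_a\|_F^2 \\
\text{s.t.}\quad  & K X = M X \Lambda
   && \text{(measured characteristic equation)} \\
 & M = M^{\mathsf T},\quad K = K^{\mathsf T}
   && \text{(symmetry)} \\
 & M \succeq 0,\quad K \succeq 0
   && \text{(positive semidefiniteness)} \\
 & M_{ij} = 0,\ K_{ij} = 0 \ \text{ for } (i,j)\notin \mathcal{S}
   && \text{(sparsity pattern } \mathcal{S}\text{)}.
\end{aligned}
```

In the paper, the side constraints other than the characteristic equation are kept implicit while the latter is dualized with partial Lagrangian multipliers, which is what yields the equivalent matrix linear variational inequality.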
Fund: Supported by the National Natural Science Foundation of China (Grant Nos. 11001124 and 91130007), the Doctoral Fund of the Ministry of Education of China (Grant No. 20110091110004), and the General Research Fund from the Hong Kong Research Grants Council (Grant No. HKBU 203712).
Abstract: The alternating direction method of multipliers (ADMM) is a benchmark for solving convex programming problems with separable objective functions and linear constraints. In the literature it has been illustrated as an application of the proximal point algorithm (PPA) to the dual problem of the model under consideration. This paper shows that ADMM can also be regarded as an application of PPA to the primal model with a customized choice of the proximal parameter. This primal illustration of ADMM is thus complementary to its dual illustration in the literature. This PPA revisit of ADMM from the primal perspective also enables us to easily recover the generalized ADMM proposed by Eckstein and Bertsekas. A worst-case O(1/t) convergence rate in the ergodic sense is established for a slight extension of Eckstein and Bertsekas's generalized ADMM.
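For reference, the sketch below is textbook scaled ADMM with an over/under-relaxation factor alpha that schematically plays the role of the Eckstein-Bertsekas generalization mentioned in the abstract. It is an illustrative implementation, not the paper's customized-PPA derivation; the callables prox_x and prox_z are assumed to solve the two subproblems, and all parameter names are introduced here.

```python
import numpy as np

def admm(prox_x, prox_z, A, B, b, rho=1.0, alpha=1.0, iters=200, x0=None, z0=None):
    """Scaled ADMM for  min f(x) + g(z)  s.t.  Ax + Bz = b.

    prox_x(v, rho): returns argmin_x f(x) + (rho/2)||Ax + v||^2
    prox_z(v, rho): returns argmin_z g(z) + (rho/2)||Bz + v||^2
    alpha = 1 gives plain ADMM; alpha in (0, 2) corresponds, schematically,
    to the Eckstein-Bertsekas generalized ADMM discussed in the abstract.
    """
    x = x0 if x0 is not None else np.zeros(A.shape[1])
    z = z0 if z0 is not None else np.zeros(B.shape[1])
    u = np.zeros(b.shape[0])                                  # scaled dual variable
    for _ in range(iters):
        x = prox_x(B @ z - b + u, rho)                        # x-subproblem
        Ax_hat = alpha * (A @ x) - (1 - alpha) * (B @ z - b)  # relaxation step
        z = prox_z(Ax_hat - b + u, rho)                       # z-subproblem
        u = u + Ax_hat + B @ z - b                            # multiplier (dual) update
    return x, z, u
```

With alpha = 1 this reduces to plain ADMM, while alpha in (0, 2) corresponds to the generalized variant whose slight extension carries the worst-case O(1/t) ergodic rate established in the paper.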
Fund: Supported by the Natural Science Foundation of China under Grant No. 11001287 and the Natural Science Foundation Project of CSTC under Grant No. 2010BB9254.
Abstract: This paper introduces and considers a new system of generalized mixed variational inequalities in a Hilbert space, which includes many new and known systems of variational inequalities and generalized variational inequalities as special cases. Using the two concepts of η-subdifferential and η-proximal mappings of a proper function, the authors demonstrate that the system of generalized mixed variational inequalities is equivalent to a fixed point problem. By applying this equivalence, a new η-proximal point algorithm for finding approximate solutions of the system of generalized mixed variational inequalities is suggested and analyzed. The convergence of the new iterative method is also studied under much weaker conditions. The results can be viewed as a refinement and improvement of previously known results for variational inequalities.
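The fixed point reformulation behind such proximal point schemes can be written, for a single mixed variational inequality with operator T and proper convex function φ, in the standard resolvent form below. This is only a schematic backdrop: the paper treats a system of inequalities and replaces ∂φ and its resolvent by the η-subdifferential and the η-proximal mapping.

```latex
\begin{aligned}
&\text{find } x^* \text{ such that }\
\langle T(x^*),\, y - x^* \rangle + \varphi(y) - \varphi(x^*) \ \ge\ 0
\quad \forall\, y, \\
&\Longleftrightarrow\quad
x^* \;=\; J_{\rho}^{\varphi}\!\bigl(x^* - \rho\,T(x^*)\bigr),
\qquad J_{\rho}^{\varphi} := (I + \rho\,\partial\varphi)^{-1},\ \rho > 0.
\end{aligned}
```

The corresponding iteration x^{k+1} = J_ρ^φ(x^k − ρ T(x^k)) is the prototype that the paper's η-proximal point algorithm generalizes to the system setting.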