Abstract: Conjugate gradient optimization algorithms depend on their search directions, which vary with the choice of parameters in the direction formula. In this note, by combining the good numerical performance of the PR and HS methods with the global convergence property of the class of conjugate gradient methods presented by Hu and Storey (1991), a class of new restarting conjugate gradient methods is presented. Global convergence of the new method is proved under two kinds of common line searches. Firstly, it is shown, using the inverse modulus of continuity function and a forcing function, that the new method for unconstrained optimization works for a continuously differentiable function under Curry-Altman's step-size rule and a bounded level set. Secondly, by a comparison technique, some general convergence properties of the new method under another kind of step-size rule are established. Numerical experiments show that the new method is efficient compared with the FR conjugate gradient method.
Funding: Supported by the National Natural Science Foundation of China (10571106) and the Fundamental Research Funds for the Central Universities (10CX04044A).
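For reference, the FR (Fletcher-Reeves), PR (Polak-Ribière) and HS (Hestenes-Stiefel) parameters referred to above are, in standard notation, with gradient g_k and search direction d_k:

```latex
\begin{aligned}
d_0 &= -g_0, \qquad d_k = -g_k + \beta_k d_{k-1},\\[4pt]
\beta_k^{FR} &= \frac{\|g_k\|^2}{\|g_{k-1}\|^2}, \qquad
\beta_k^{PR} = \frac{g_k^{T}(g_k - g_{k-1})}{\|g_{k-1}\|^2}, \qquad
\beta_k^{HS} = \frac{g_k^{T}(g_k - g_{k-1})}{d_{k-1}^{T}(g_k - g_{k-1})}.
\end{aligned}
```

The methods differ only in the choice of β_k; restarting periodically resets d_k to the steepest-descent direction −g_k.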
Abstract: In this note, by combining the good numerical performance of the PR and HS methods with the global convergence property of the FR method, a class of new restarting three-term conjugate gradient methods is presented. Global convergence properties of the new method under two kinds of common line searches are proved.
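As an illustration of the general idea of combining PR-type performance with FR-type safeguards under restarts (this is a sketch, not the exact method of the papers above; the function names are hypothetical), the following uses the classical hybrid parameter β = max(0, min(β_PR, β_FR)) together with a periodic restart to steepest descent:

```python
import numpy as np

def hybrid_restarted_cg(f, grad, x0, max_iter=200, tol=1e-8, restart_every=None):
    """Restarted nonlinear CG with a hybrid PR/FR parameter.

    Illustrative sketch only -- not the exact method of the cited papers.
    Uses a backtracking (Armijo) line search and restarts to steepest
    descent every `restart_every` iterations.
    """
    x = np.asarray(x0, dtype=float)
    g = grad(x)
    d = -g
    restart_every = restart_every or len(x)
    for k in range(max_iter):
        if np.linalg.norm(g) < tol:
            break
        # Backtracking (Armijo) line search along d.
        t, c = 1.0, 1e-4
        while f(x + t * d) > f(x) + c * t * (g @ d):
            t *= 0.5
        x_new = x + t * d
        g_new = grad(x_new)
        beta_fr = (g_new @ g_new) / (g @ g)
        beta_pr = (g_new @ (g_new - g)) / (g @ g)
        beta = max(0.0, min(beta_pr, beta_fr))  # hybrid: PR clipped into [0, FR]
        if (k + 1) % restart_every == 0:
            beta = 0.0                          # periodic restart
        d = -g_new + beta * d
        if g_new @ d >= 0:                      # safeguard: keep d a descent direction
            d = -g_new
        x, g = x_new, g_new
    return x
```

On a strictly convex quadratic this reduces to (restarted) linear CG behavior; the restart and the β clipping are what the convergence analyses of such hybrid methods typically exploit.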
Abstract: In this paper we present a new type of restarted Krylov method for calculating peripheral eigenvalues of symmetric matrices. The new framework avoids the Lanczos tridiagonalization process and the use of polynomial filtering. This simplifies the restarting mechanism and allows the introduction of several modifications. Convergence is assured by a monotonicity property that pushes the eigenvalues toward their limits. The Krylov matrices that we use lead to a fast rate of convergence. Numerical experiments illustrate the usefulness of the proposed approach.
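A minimal sketch of a restarted Krylov scheme of this flavor (assuming a dense symmetric NumPy array; this is not the authors' algorithm): build an m-dimensional Krylov subspace without tridiagonalization, perform a Rayleigh-Ritz projection, and restart from the best Ritz vector, so the largest Ritz value increases monotonically across restarts:

```python
import numpy as np

def restarted_krylov_eig(A, m=10, n_restarts=100, tol=1e-12, seed=0):
    """Estimate the largest eigenvalue of a symmetric matrix A.

    Simplified illustration of a restarted Krylov method: since the
    previous Ritz vector is kept in the new subspace, the largest Ritz
    value can only increase from one restart to the next.
    """
    rng = np.random.default_rng(seed)
    n = A.shape[0]
    v = rng.standard_normal(n)
    v /= np.linalg.norm(v)
    lam = -np.inf
    for _ in range(n_restarts):
        # Krylov basis [v, Av, A^2 v, ...]; columns normalized for conditioning.
        K = np.empty((n, m))
        K[:, 0] = v
        for j in range(1, m):
            w = A @ K[:, j - 1]
            K[:, j] = w / np.linalg.norm(w)
        Q, _ = np.linalg.qr(K)                  # orthonormal basis of the subspace
        theta, Y = np.linalg.eigh(Q.T @ A @ Q)  # Ritz values, ascending order
        v = Q @ Y[:, -1]                        # restart from the best Ritz vector
        v /= np.linalg.norm(v)
        if theta[-1] - lam < tol:               # monotone improvement has stalled
            lam = theta[-1]
            break
        lam = theta[-1]
    return lam, v
```

The smallest eigenvalue can be obtained the same way by taking `Y[:, 0]` and `theta[0]`.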
Abstract: In this paper, we use an efficient algorithm, the Restarted Adomian Decomposition Method (RADM), to obtain an analytic approximation of Volterra's model for the population growth of a species within a closed system. The numerical results illustrate that RADM achieves good accuracy.
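For reference, Volterra's population model in its commonly used nondimensional integro-differential form is

```latex
\kappa \,\frac{du}{dt} = u - u^{2} - u \int_{0}^{t} u(s)\,ds, \qquad u(0) = u_{0},
```

where u(t) is the scaled population and κ is a nondimensional parameter; the integral term models the toxin accumulated in the closed system.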
Abstract: The purpose of this paper is to employ the Adomian Decomposition Method (ADM) and the Restarted Adomian Decomposition Method (RADM), together with new useful techniques, to solve Bratu's boundary value problem using a new integral operator. The solutions obtained in this way incorporate the boundary conditions directly. The results indicate that the new techniques give more suitable and accurate solutions for the Bratu-type problem than the standard ADM and its modification.
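The Bratu boundary value problem referred to here is usually written as

```latex
u''(x) + \lambda\, e^{u(x)} = 0, \quad 0 < x < 1, \qquad u(0) = u(1) = 0,
```

which has two solutions for 0 < λ < λ_c, exactly one for λ = λ_c, and none for λ > λ_c, where λ_c ≈ 3.513830.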
Abstract: The Galerkin and least-squares methods are two of the most popular classes of Krylov subspace methods for solving large linear systems of equations. Unfortunately, both methods may suffer from serious breakdowns of the same type: in a breakdown situation the Galerkin method is unable to compute an approximate solution, while the least-squares method, although it does not actually break down, fails to reduce the norm of its residual. In this paper we first establish a unified theorem that gives a relationship between breakdowns in the two methods. We further illustrate, theoretically and experimentally, that if the coefficient matrix of a linear system is highly defective with the associated eigenvalues less than 1, then the restarted Galerkin and least-squares methods run a great risk of complete breakdown. These findings may help to explain phenomena observed in practice and to derive treatments for breakdowns of this type.
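The relationship between the two classes can be seen from a single Arnoldi decomposition: the Galerkin (FOM-type) iterate solves a square Hessenberg system, which fails when that system is singular, while the least-squares (GMRES-type) iterate solves a small least-squares problem, which always exists but may stagnate. A self-contained restarted sketch (illustrative only; lucky breakdown is not handled):

```python
import numpy as np

def arnoldi(A, r, m):
    """Arnoldi process: orthonormal basis V of K_m(A, r) and Hessenberg H
    satisfying A V[:, :m] = V H (lucky breakdown not handled in this sketch)."""
    n = len(r)
    V = np.zeros((n, m + 1))
    H = np.zeros((m + 1, m))
    V[:, 0] = r / np.linalg.norm(r)
    for j in range(m):
        w = A @ V[:, j]
        for i in range(j + 1):            # modified Gram-Schmidt
            H[i, j] = V[:, i] @ w
            w = w - H[i, j] * V[:, i]
        H[j + 1, j] = np.linalg.norm(w)
        V[:, j + 1] = w / H[j + 1, j]
    return V, H

def restarted_solve(A, b, m=20, n_restarts=50, tol=1e-10, method="gmres"):
    """Restarted Krylov solver sketch: 'fom' applies the Galerkin condition
    (square Hessenberg solve, undefined when that matrix is singular),
    'gmres' the least-squares condition (always defined, may stagnate)."""
    x = np.zeros_like(b, dtype=float)
    for _ in range(n_restarts):
        r = b - A @ x
        beta = np.linalg.norm(r)
        if beta < tol:
            break
        V, H = arnoldi(A, r, m)
        e1 = np.zeros(m + 1)
        e1[0] = beta
        if method == "fom":
            y = np.linalg.solve(H[:m, :m], e1[:m])      # Galerkin projection
        else:
            y, *_ = np.linalg.lstsq(H, e1, rcond=None)  # residual minimization
        x = x + V[:, :m] @ y
    return x
```

Both variants share the Arnoldi step; only the small (m-dimensional) subproblem differs, which is exactly where the paper's unified breakdown analysis applies.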
Funding: Sponsored by the National Natural Science Foundation of China (Grant No. 11461021) and the Natural Science Basic Research Plan in Shaanxi Province of China (Grant No. 2017JM1014).
Abstract: Two new versions of accelerated first-order methods for minimizing convex composite functions are proposed. We first present an accelerated first-order method that chooses the step size 1/L_k to be 1/L_0 at the beginning of each iteration and preserves the computational simplicity of the fast iterative shrinkage-thresholding algorithm (FISTA). This first algorithm is non-monotone; to avoid this behavior, we also present an accelerated monotone first-order method. Both proposed methods are proved to have an improved convergence rate for minimizing convex composite functions, and numerical results demonstrate their efficiency.
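For context, the baseline FISTA that such methods modify can be sketched as follows for the lasso problem min_x 0.5*||Ax - b||^2 + lam*||x||_1, with a fixed step 1/L (the paper's variants adapt the step size and enforce monotonicity; this is not their exact algorithm):

```python
import numpy as np

def soft_threshold(v, tau):
    """Proximal operator of tau * ||.||_1."""
    return np.sign(v) * np.maximum(np.abs(v) - tau, 0.0)

def fista(A, b, lam, n_iter=500):
    """Baseline FISTA for min_x 0.5*||Ax - b||^2 + lam*||x||_1
    with a fixed step 1/L. Sketch of the class of accelerated
    first-order methods discussed above, not the proposed variants."""
    L = np.linalg.norm(A, 2) ** 2        # Lipschitz constant of the smooth gradient
    x = np.zeros(A.shape[1])
    y = x.copy()
    t = 1.0
    for _ in range(n_iter):
        grad = A.T @ (A @ y - b)         # gradient of the smooth part at y
        x_new = soft_threshold(y - grad / L, lam / L)
        t_new = 0.5 * (1.0 + np.sqrt(1.0 + 4.0 * t * t))
        y = x_new + ((t - 1.0) / t_new) * (x_new - x)   # momentum extrapolation
        x, t = x_new, t_new
    return x
```

The momentum step is what makes the objective non-monotone in general, which is exactly the behavior the monotone variant is designed to avoid.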