Abstract: Conjugate gradient optimization algorithms are distinguished by the choice of parameters in their search directions. In this note, by combining the good numerical performance of the PR and HS methods with the global convergence property of the class of conjugate gradient methods presented by Hu and Storey (1991), a class of new restarting conjugate gradient methods is presented. Global convergence of the new method under two kinds of common line searches is proved. First, using the reverse modulus-of-continuity function and a forcing function, it is shown that the new method for unconstrained optimization works for a continuously differentiable function with Curry-Altman's step-size rule and a bounded level set. Second, by a comparison technique, some general convergence properties of the new method with another kind of step-size rule are established. Numerical experiments show that the new method is efficient in comparison with the FR conjugate gradient method.
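As a rough illustration of the kind of restarting PR-type iteration described above, the following sketch uses the Polak-Ribiere beta with a non-negativity restart rule and a backtracking Armijo line search standing in for the Curry-Altman rule; the function names are hypothetical, and this is not the paper's exact method.

```python
import numpy as np

def cg_pr_restart(f, grad, x0, max_iter=200, tol=1e-8):
    """Nonlinear conjugate gradient with a Polak-Ribiere (PR) beta and a
    simple restart rule: fall back to steepest descent whenever the PR
    parameter is non-positive (one common restart criterion)."""
    x = np.asarray(x0, dtype=float)
    g = grad(x)
    d = -g
    for _ in range(max_iter):
        if np.linalg.norm(g) < tol:
            break
        # Backtracking (Armijo) line search in place of the Curry-Altman rule.
        alpha, c, rho = 1.0, 1e-4, 0.5
        while f(x + alpha * d) > f(x) + c * alpha * g.dot(d):
            alpha *= rho
        x_new = x + alpha * d
        g_new = grad(x_new)
        # PR formula: beta = g_new . (g_new - g) / ||g||^2
        beta = g_new.dot(g_new - g) / g.dot(g)
        beta = max(beta, 0.0)   # restart: beta <= 0 -> steepest descent step
        d = -g_new + beta * d
        x, g = x_new, g_new
    return x
```

On a convex quadratic this reduces to the familiar linear CG behavior and converges in a handful of steps.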
Funding: Supported by the National High Technology Research and Development Programme of China (No. 2011AA7014061).
Abstract: In this paper, by utilizing the angles of arrival (AOAs) and imprecise sensor positions, a novel modified Levenberg-Marquardt algorithm for the source localization problem is proposed. Conventional source localization algorithms, such as the Gauss-Newton and conjugate gradient algorithms, suffer from local minima and depend on a good initial guess. This paper presents a new optimization technique for finding descent directions that avoid divergence, and a trust-region method is introduced to accelerate convergence. Compared with conventional methods, the new algorithm is more stable and more robust, tolerating stronger nonlinearity and a wider convergence region. Simulation results demonstrate that the proposed algorithm improves on the typical methods in both speed and robustness and is able to avoid local minima.
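A minimal sketch of Levenberg-Marquardt applied to 2-D AOA localization is shown below, assuming exact bearing measurements; the paper's modified descent directions and trust-region rule are replaced here by the classic damping update, and all names are hypothetical.

```python
import numpy as np

def lm_aoa_localize(sensors, thetas, x0, max_iter=50):
    """Levenberg-Marquardt sketch for 2-D AOA source localization.
    sensors: (N, 2) sensor positions; thetas: (N,) measured bearings
    (radians) from each sensor to the source."""
    sensors = np.asarray(sensors, dtype=float)
    thetas = np.asarray(thetas, dtype=float)
    x = np.asarray(x0, dtype=float)
    lam = 1e-3                                    # damping parameter

    def residual(p):
        d = p - sensors
        r = np.arctan2(d[:, 1], d[:, 0]) - thetas
        return np.arctan2(np.sin(r), np.cos(r))   # wrap to (-pi, pi]

    for _ in range(max_iter):
        d = x - sensors
        r2 = (d ** 2).sum(axis=1)
        # Jacobian of arctan2(dy, dx) w.r.t. (x, y): (-dy/r2, dx/r2)
        J = np.column_stack([-d[:, 1] / r2, d[:, 0] / r2])
        r = residual(x)
        # Damped normal equations: (J^T J + lam I) step = -J^T r
        step = np.linalg.solve(J.T @ J + lam * np.eye(2), -J.T @ r)
        if (residual(x + step) ** 2).sum() < (r ** 2).sum():
            x, lam = x + step, lam * 0.5          # accept, reduce damping
        else:
            lam *= 2.0                            # reject, increase damping
    return x
```

With three well-placed sensors and noise-free bearings, the iteration recovers the source position from a nearby initial guess.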
Funding: the National Natural Science Foundation of China (19801033, 10171104).
Abstract: Conjugate gradient methods are very important for unconstrained optimization, especially for large-scale problems. In this paper, we propose a new conjugate gradient method in which a nonmonotone line search technique is used. Under mild assumptions, we prove the global convergence of the method. Some numerical results are also presented.
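The key ingredient above is the nonmonotone line search; a Grippo-Lampariello-Lucidi style version compares the Armijo test against the maximum of the last M objective values rather than the current one, allowing occasional increases. The sketch below pairs it with plain steepest descent for simplicity (the paper uses conjugate gradient directions), and the exact conditions in the paper may differ.

```python
import numpy as np
from collections import deque

def nonmonotone_armijo(f, x, d, g, history, c=1e-4, rho=0.5):
    """Nonmonotone Armijo backtracking: sufficient decrease is measured
    against the worst of the recent objective values, not f(x)."""
    f_max = max(history)                  # reference value over last M steps
    alpha = 1.0
    while f(x + alpha * d) > f_max + c * alpha * g.dot(d):
        alpha *= rho
    return alpha

def descent_with_nonmonotone_search(f, grad, x0, iters=100, M=10):
    x = np.asarray(x0, dtype=float)
    history = deque([f(x)], maxlen=M)     # rolling window of M objective values
    for _ in range(iters):
        g = grad(x)
        if np.linalg.norm(g) < 1e-10:
            break
        d = -g                            # steepest descent stand-in for CG
        alpha = nonmonotone_armijo(f, x, d, g, history)
        x = x + alpha * d
        history.append(f(x))
    return x
```

Because the reference value is a running maximum, a full step that briefly raises f can still be accepted, which helps on ill-conditioned or nonconvex objectives.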
Funding: supported by the National Natural Science Foundation of China (Grant No. 11171299).
Abstract: The matrix rank minimization problem arises in many engineering applications. As this problem is NP-hard, a nonconvex relaxation, the Schatten-p quasi-norm minimization (0 < p < 1), has been developed to approximate the rank function closely. We study the performance of the projected gradient descent algorithm for solving the Schatten-p quasi-norm minimization (0 < p < 1) problem. Based on the matrix restricted isometry property (M-RIP), we give a convergence guarantee and an error bound for this algorithm and show that it is robust to noise, with an exponential convergence rate.
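A projected-gradient iteration of the general flavor studied above can be sketched as follows. Note an important simplification: the paper analyzes the Schatten-p quasi-norm (0 < p < 1), while this sketch substitutes the hard rank-r projection (truncated SVD), i.e. the p → 0 limit, since that projection has a simple closed form. All names are hypothetical.

```python
import numpy as np

def rank_r_project(X, r):
    """Project onto the set of matrices of rank <= r via truncated SVD."""
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    return (U[:, :r] * s[:r]) @ Vt[:r]

def projected_gradient_lowrank(A, y, shape, r, step, iters=300):
    """Projected gradient descent for min ||A vec(X) - y||^2 subject to
    rank(X) <= r. A is a dense (num_measurements x m*n) sensing matrix."""
    m, n = shape
    X = np.zeros(shape)
    for _ in range(iters):
        g = A.T @ (A @ X.ravel() - y)     # gradient of the least-squares term
        X = rank_r_project(X - step * g.reshape(m, n), r)
    return X
```

With a well-conditioned Gaussian sensing matrix (the setting where M-RIP-type conditions hold) and a step size of 1 over the squared spectral norm of A, the iterates converge rapidly to the planted low-rank matrix.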