Abstract: In this paper, we present a family of general Newton-like methods with a parametric function for finding a zero of a univariate function, permitting f′(x)=0 at some points. The case of multiple roots is not treated. The methods are proved to be quadratically convergent under a weak condition, and thus they remove the severe requirement f′(x)≠0. Based on the general form of the Newton-like methods, a family of new iterative methods with a variable parameter is developed.
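The paper's own parametric family is not reproduced in this listing; as a hedged illustration of the general idea, the sketch below implements one classical Newton-like variant, x_{k+1} = x_k − f(x_k)/(f′(x_k) + p·f(x_k)), in which the parametric term p·f(x_k) keeps the iteration well defined even where f′(x_k) = 0. The function names and the choice of p are illustrative assumptions only.

```python
def newton_like(f, fprime, x0, p=1.0, tol=1e-12, max_iter=100):
    """Generic Newton-like iteration x_{k+1} = x_k - f(x_k)/(f'(x_k) + p*f(x_k)).

    The parametric term p*f(x_k) keeps the denominator nonzero even at points
    where f'(x_k) = 0, as long as f(x_k) itself is not yet zero.
    """
    x = float(x0)
    for _ in range(max_iter):
        fx = f(x)
        if abs(fx) < tol:
            return x
        denom = fprime(x) + p * fx
        if denom == 0.0:              # flip the parameter's sign as a fallback
            denom = fprime(x) - p * fx
        x = x - fx / denom
    return x

# f(x) = x**3 - 1 has f'(0) = 0, yet the iteration started at x0 = 0 still
# reaches the root x = 1.
print(newton_like(lambda x: x**3 - 1.0, lambda x: 3.0 * x**2, x0=0.0))
```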
Abstract: The paper develops the local convergence of the Inexact Newton-Like Method (INLM) for approximating solutions of nonlinear equations in a Banach space setting. We employ weak Lipschitz and center-weak Lipschitz conditions to perform the error analysis. The obtained results compare favorably with earlier ones such as [7, 13, 14, 18, 19]. A numerical example is also provided.
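For orientation only (the paper's precise "weak" and "center-weak" variants may differ), Lipschitz and center-Lipschitz conditions on the Fréchet derivative F′ are commonly stated as

\[
\|F'(x) - F'(y)\| \le L\,\|x - y\|, \qquad \|F'(x) - F'(x_0)\| \le L_0\,\|x - x_0\|, \qquad L_0 \le L,
\]

where x_0 is the starting point; working with the (usually smaller) center constant L_0 is what typically enlarges the convergence ball and tightens the error bounds relative to analyses that use L alone.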
Abstract: A conic Newton method is attractive because it converges to a local minimizer rapidly from any sufficiently good initial guess. However, it may be expensive to solve the conic Newton equation at each iterate. In this paper we consider an inexact conic Newton method, which solves the conic Newton equation only approximately and in some unspecified manner. Furthermore, we show that such a method is locally convergent, and we characterize its order of convergence in terms of the rate of convergence of the relative residuals.
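The relative residuals mentioned above are, in the standard (non-conic) inexact Newton setting, controlled by a forcing sequence {η_k}: the step s_k is accepted once

\[
\| \nabla f(x_k) + B_k s_k \| \;\le\; \eta_k\, \| \nabla f(x_k) \| , \qquad 0 \le \eta_k < 1 ,
\]

with B_k the model Hessian, and the local order of convergence is governed by how fast η_k tends to zero; for instance, η_k → 0 yields superlinear and η_k = O(‖∇f(x_k)‖) yields quadratic convergence. The paper carries out the analogous analysis for the conic Newton equation.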
Funding: Supported by the National Natural Science Foundation of China (No. 72071202) and the Key Laboratory of Mathematics and Engineering Applications, Ministry of Education.
Abstract: As a generalization of the two-term conjugate gradient method (CGM), the spectral CGM is one of the effective methods for solving unconstrained optimization. In this paper, we enhance the JJSL conjugate parameter, initially proposed by Jiang et al. (Computational and Applied Mathematics, 2021, 40: 174), through a convex combination technique. This improvement allows for an adaptive search direction by integrating a newly constructed spectral gradient-type restart strategy. We then develop a new spectral CGM by employing an inexact line search to determine the step size. With the weak Wolfe line search, we establish the sufficient descent property of the proposed search direction. Moreover, under general assumptions, including the use of the strong Wolfe line search for the step size, we prove the global convergence of the new algorithm. Finally, results on standard unconstrained optimization test problems show that the new algorithm is effective.
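The JJSL parameter and its convex-combination modification are defined in the paper itself; the sketch below only illustrates the generic shape of a spectral conjugate gradient iteration with a Wolfe line search, using a placeholder conjugate parameter (Fletcher-Reeves) and a placeholder spectral scalar. Every name and parameter choice here is an assumption for illustration, not the authors' method.

```python
import numpy as np
from scipy.optimize import line_search, rosen, rosen_der

def spectral_cg(f, grad, x0, tol=1e-6, max_iter=500):
    """Skeleton of a spectral CG method: d_k = -theta_k * g_k + beta_k * d_{k-1}.

    Placeholder choices: a Fletcher-Reeves conjugate parameter and a
    Barzilai-Borwein-type spectral scalar; the step size comes from SciPy's
    (strong) Wolfe line search.  This is only the generic shape shared by
    spectral CG methods, not the JJSL-based algorithm of the paper.
    """
    x = np.asarray(x0, dtype=float)
    g = grad(x)
    d = -g
    for _ in range(max_iter):
        if np.linalg.norm(g) < tol:
            break
        alpha = line_search(f, grad, x, d, gfk=g)[0]
        if alpha is None:                 # line search failed: restart along -g
            d = -g
            alpha = line_search(f, grad, x, d, gfk=g)[0] or 1e-4
        x_new = x + alpha * d
        g_new = grad(x_new)
        s, y = x_new - x, g_new - g
        beta = (g_new @ g_new) / (g @ g)                 # Fletcher-Reeves (placeholder)
        theta = (s @ s) / (s @ y) if s @ y > 0 else 1.0  # BB-type spectral scalar
        d = -theta * g_new + beta * d                    # spectral CG direction
        x, g = x_new, g_new
    return x

# Usage sketch: minimize the Rosenbrock test function.
print(spectral_cg(rosen, rosen_der, np.array([-1.2, 1.0])))
```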
Funding: Project supported by the Key Industrial Projects of the Major Science and Technology Projects of Zhejiang (No. 2009C11023), the Foundation of the Zhejiang Educational Committee (No. Y200907886), and the Major High-Tech Industrialization Project of Jiaxing (No. 2009BY10004).
Funding: Supported by the National Natural Science Foundation of China (No. 51275348) and the College Students Innovation and Entrepreneurship Training Program of Tianjin University (No. 201210056339).
Abstract: In this paper, a unified matrix recovery model was proposed for diverse corrupted matrices. Owing to the separable structure of the proposed model, the convex optimization problem can be solved efficiently by an inexact augmented Lagrange multiplier (IALM) method. Additionally, a random projection acceleration technique (IALM+RP) was adopted to improve the success rate. Preliminary numerical comparisons indicated that, for the standard robust principal component analysis (PCA) problem, IALM+RP was at least two to six times faster than IALM with an insignificant reduction in accuracy; for the outlier pursuit (OP) problem, IALM+RP was at least 6.9 times faster, and up to 8.3 times faster when the matrix size was 2000×2000.
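For readers unfamiliar with the baseline, a minimal inexact augmented Lagrange multiplier (IALM) iteration for the standard robust PCA model min ‖L‖_* + λ‖S‖_1 subject to L + S = D alternates singular-value thresholding and elementwise shrinkage. The sketch below shows only that baseline, without the unified model or the random-projection acceleration proposed in the paper, and the parameter schedule is a common heuristic rather than the authors' setting.

```python
import numpy as np

def soft_threshold(X, tau):
    """Elementwise shrinkage (soft-thresholding) operator."""
    return np.sign(X) * np.maximum(np.abs(X) - tau, 0.0)

def ialm_rpca(D, lam=None, mu=None, rho=1.5, tol=1e-7, max_iter=500):
    """Baseline inexact ALM for robust PCA: split D into low-rank L plus sparse S."""
    m, n = D.shape
    lam = lam if lam is not None else 1.0 / np.sqrt(max(m, n))
    mu = mu if mu is not None else 1.25 / np.linalg.norm(D, 2)
    Y = D / max(np.linalg.norm(D, 2), np.abs(D).max() / lam)   # dual variable init
    S = np.zeros_like(D)
    for _ in range(max_iter):
        # L-update: singular value thresholding of D - S + Y/mu
        U, sig, Vt = np.linalg.svd(D - S + Y / mu, full_matrices=False)
        L = (U * soft_threshold(sig, 1.0 / mu)) @ Vt
        # S-update: elementwise shrinkage
        S = soft_threshold(D - L + Y / mu, lam / mu)
        R = D - L - S                    # primal residual
        Y = Y + mu * R                   # dual ascent step
        mu *= rho                        # inexact ALM: monotonically increase penalty
        if np.linalg.norm(R, 'fro') <= tol * np.linalg.norm(D, 'fro'):
            break
    return L, S

# Usage sketch: L_hat, S_hat = ialm_rpca(M) for an observed corrupted matrix M.
```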
Funding: Project supported by the National Natural Science Foundation of China (No. 10871130), the Ph.D. Programs Foundation of the Ministry of Education of China (No. 20093127110005), and the Shanghai Leading Academic Discipline Project (No. T0401).
Abstract: This paper proposes an inexact Newton method via the Lanczos decomposition technique for solving box-constrained nonlinear systems. An iterative direction is obtained by solving an affine scaling quadratic model with the Lanczos decomposition technique. An acceptable trial step length along this direction is then found by an interior backtracking line search. The global convergence and the fast local convergence rate of the proposed algorithm are established under some reasonable conditions. Furthermore, numerical experiments show the effectiveness of the proposed algorithm.
Funding: Supported by the National Natural Science Foundation of China (Grant No. NSFC-11971118).
Abstract: Classical quasi-Newton methods are widely used to solve nonlinear problems in which the first-order information is exact. In some practical problems, we can only obtain approximate values of the objective function and its gradient, so it is necessary to design optimization algorithms that can use inexact first-order information. In this paper, we propose an adaptive regularized quasi-Newton method to solve such problems. Under some mild conditions, we prove the global convergence and establish the convergence rate of the adaptive regularized quasi-Newton method. Detailed implementations of our method, including a subspace technique that reduces the amount of computation, are presented. Encouraging numerical results demonstrate that the adaptive regularized quasi-Newton method is promising and can use inexact first-order information effectively.
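The paper's adaptive rule and subspace technique are not reproduced here; the following sketch only shows the generic shape of a regularized quasi-Newton step, where the direction solves (B_k + σ_k I) d = −g̃_k with an inexact gradient g̃_k, and σ_k is enlarged or shrunk from a trust-region-like ratio test on the observed (inexact) decrease. All names and constants are assumptions for illustration.

```python
import numpy as np

def regularized_qn_step(B, g_approx, sigma):
    """Regularized quasi-Newton direction from (B + sigma*I) d = -g_approx,
    where g_approx is the available (inexact) gradient."""
    return np.linalg.solve(B + sigma * np.eye(B.shape[0]), -g_approx)

def adapt_sigma(sigma, ratio, eta1=0.25, eta2=0.75, gamma=2.0):
    """Adjust the regularization parameter from a trust-region-like ratio test.

    `ratio` compares the (inexactly measured) actual decrease of the objective
    with the decrease predicted by the quadratic model built from B and g_approx.
    """
    if ratio < eta1:
        return gamma * sigma                # poor agreement: damp the step more
    if ratio > eta2:
        return max(sigma / gamma, 1e-12)    # good agreement: allow longer steps
    return sigma
```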
Funding: Financially supported by the National Important and Special Project on Science and Technology (No. 2011ZX05005-005-007HZ) and the National Natural Science Foundation of China (No. 41274116).
Abstract: In full waveform inversion (FWI), Hessian information of the misfit function is of vital importance for accelerating the convergence of the inversion; however, it is usually not feasible to directly calculate the Hessian matrix and its inverse. Although the limited-memory Broyden-Fletcher-Goldfarb-Shanno (L-BFGS) and Hessian-free inexact Newton (HFN) methods are able to use approximate Hessian information, the information they collect is limited. The two methods can be interlaced because they can provide Hessian information for each other; however, the performance of the hybrid iterative method depends on an effective switch between the two methods. We have designed a new scheme to realize a dynamic switch between the two methods based on the decrease ratio (DR) of the misfit (objective) function, and we propose a modified hybrid iterative optimization method. In the new scheme, we compare the DR of the two methods for a given computational cost and choose the method with the faster DR, so the modified method always runs the more efficient method. Tests on the Marmousi and overthrust models indicate that convergence with our modified method is significantly faster than with the L-BFGS method, with no loss of inversion quality. Moreover, our modified method slightly outperforms the enriched method in convergence speed, and it is more efficient than the HFN method.
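The decrease-ratio switch can be summarized as: run each candidate method for a comparable computational budget, measure how much each reduces the misfit per unit cost, and continue with the faster one. The sketch below is a schematic of that control logic only; the solver interfaces, cost accounting, and thresholds are assumptions, not the authors' implementation.

```python
def decrease_ratio(f_old, f_new, cost):
    """Reduction of the misfit (objective) function per unit of computational cost."""
    return (f_old - f_new) / max(cost, 1e-30)

def hybrid_inversion(model, misfit, lbfgs_stage, hfn_stage, n_stages=20):
    """Schematic dynamic switch between L-BFGS and Hessian-free inexact Newton (HFN).

    `lbfgs_stage` and `hfn_stage` are assumed to return (new_model, new_misfit, cost)
    after running the respective method for a comparable computational budget.
    """
    f_curr = misfit(model)
    for _ in range(n_stages):
        m_l, f_l, c_l = lbfgs_stage(model)      # try an L-BFGS stage
        m_h, f_h, c_h = hfn_stage(model)        # try an HFN stage
        # Continue from whichever stage reduced the misfit faster per unit cost.
        if decrease_ratio(f_curr, f_l, c_l) >= decrease_ratio(f_curr, f_h, c_h):
            model, f_curr = m_l, f_l
        else:
            model, f_curr = m_h, f_h
    return model
```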
Abstract: In this paper, the non-quasi-Newton family with inexact line search applied to unconstrained optimization problems is studied. A new update formula for the non-quasi-Newton family is proposed. It is proved that the resulting algorithm, with either a Wolfe-type or an Armijo-type line search, converges globally and Q-superlinearly if the function to be minimized has a Lipschitz continuous gradient.
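The specific non-quasi-Newton update formula is given in the paper; for reference, Wolfe-type and Armijo-type acceptance tests for the step size have the standard form sketched below, with conventional constants rather than the authors' exact parameters.

```python
# x, d, g are NumPy arrays (current point, search direction, gradient at x);
# f and grad are callables returning the objective value and gradient.

def armijo_ok(f, x, d, alpha, g, c1=1e-4):
    """Armijo-type (sufficient decrease) condition."""
    return f(x + alpha * d) <= f(x) + c1 * alpha * (g @ d)

def wolfe_ok(f, grad, x, d, alpha, g, c1=1e-4, c2=0.9):
    """Weak Wolfe-type conditions: sufficient decrease plus a curvature condition."""
    return armijo_ok(f, x, d, alpha, g, c1) and grad(x + alpha * d) @ d >= c2 * (g @ d)
```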
Abstract: A trust-region algorithm is presented for a nonlinear equality-constrained optimization problem. The algorithm is characterized by its use of inexact gradient information. Global convergence results are established provided the gradient values obey a simple relative error condition.
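The paper's exact assumption is not quoted in this listing; a relative error condition of the kind referred to typically bounds the inexact gradient g_k used by the algorithm against the true gradient, for example

\[
\| g_k - \nabla f(x_k) \| \;\le\; \varepsilon\, \| g_k \| , \qquad 0 \le \varepsilon < 1 ,
\]

so that the absolute gradient error is allowed to shrink automatically as the iterates approach a stationary point.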
Abstract: Bilevel programming problems are a class of optimization problems with hierarchical structure, in which one of the constraints is itself an optimization problem. Inexact restoration methods were introduced for solving nonlinear programming problems a few years ago. They generate a sequence of generally infeasible iterates, with intermediate iterations that consist of inexactly restored points. In this paper we present a software environment for solving bilevel programming problems using an inexact restoration technique without replacing the lower-level problem by its KKT optimality conditions. With this strategy we maintain the minimization structure of the lower-level problem and avoid spurious solutions. The environment is a user-friendly set of Fortran 90 modules which is easily and highly configurable. It is prepared to use two well-tested minimization solvers and different formulations in one of the minimization subproblems. We validate our implementation on a set of test problems from the literature, comparing the different formulations and minimization solvers.
Funding: Supported by the State Key Laboratory of Scientific/Engineering Computing, Chinese Academy of Sciences, and the National Natural Science Foundation of China (Nos. 10571059, 10571060).
Abstract: Inexact Newton methods are constructed by combining Newton's method with another iterative method that is used to solve the Newton equations inexactly. In this paper, we establish two semilocal convergence theorems for inexact Newton methods. When these two theorems are specialized to Newton's method, we obtain a different Newton-Kantorovich theorem for Newton's method. When the iterative method for solving the Newton equations is taken to be a splitting method, we obtain two estimates of the number of iteration steps for the resulting special inexact Newton methods.
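As a concrete instance of the construction described above, the sketch below pairs Newton's method with a Jacobi splitting as the inner linear solver, stopping the inner iteration once a prescribed relative residual is reached. The splitting choice, tolerances, and test problem are illustrative assumptions, not the specific schemes analyzed in the paper.

```python
import numpy as np

def jacobi_solve(A, b, eta, max_inner=200):
    """Approximately solve A s = b with the Jacobi splitting A = D - (D - A),
    stopping once the relative residual drops below the forcing term eta.
    (Jacobi converges, e.g., when A is strictly diagonally dominant.)"""
    D_inv = 1.0 / np.diag(A)
    s = np.zeros_like(b)
    for _ in range(max_inner):
        r = b - A @ s
        if np.linalg.norm(r) <= eta * np.linalg.norm(b):
            break
        s = s + D_inv * r              # one Jacobi sweep
    return s

def inexact_newton(F, J, x0, eta=0.1, tol=1e-10, max_outer=50):
    """Inexact Newton: the Newton equation J(x) s = -F(x) is solved only up to
    the relative tolerance eta by the splitting iteration above."""
    x = np.asarray(x0, dtype=float)
    for _ in range(max_outer):
        Fx = F(x)
        if np.linalg.norm(Fx) < tol:
            break
        x = x + jacobi_solve(J(x), -Fx, eta)
    return x

# Example with a diagonally dominant Jacobian, so the Jacobi inner solver converges.
F = lambda x: np.array([x[0]**2 + 4*x[0] + x[1] - 1.0,
                        x[0] + x[1]**2 + 4*x[1] - 2.0])
J = lambda x: np.array([[2*x[0] + 4.0, 1.0],
                        [1.0, 2*x[1] + 4.0]])
print(inexact_newton(F, J, x0=[0.0, 0.0]))   # prints an approximate root
```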