Funding: Supported by the National Natural Science Foundation of China (11401126, 71471140 and 11361018), the Guangxi Natural Science Foundation (2016GXNSFBA380102 and 2014GXNSFFA118001), the Guangxi Key Laboratory of Cryptography and Information Security (GCIS201618), and the Guangxi Key Laboratory of Automatic Detecting Technology and Instruments (YQ15112 and YQ16112), China.
Abstract: In this paper, we present a nonmonotone smoothing Newton algorithm for solving the circular cone programming (CCP) problem, in which a linear function is minimized or maximized over the intersection of an affine space with the circular cone. Based on the relationship between the circular cone and the second-order cone (SOC), we reformulate the CCP problem as a second-order cone problem (SOCP). By extending the nonmonotone line search for unconstrained optimization to the CCP, a nonmonotone smoothing Newton method is proposed for solving the CCP. Under suitable assumptions, the proposed algorithm is shown to be globally and locally quadratically convergent. Some preliminary numerical results indicate the effectiveness of the proposed algorithm for solving the CCP.
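The reformulation rests on the fact that the circular cone is a linear image of the second-order cone. A minimal statement of this relationship, using the standard definitions (the notation below is ours, not taken from the paper), is:

```latex
% Circular cone and second-order cone (standard definitions; notation ours):
\mathcal{L}_\theta = \{(x_1,x_2)\in\mathbb{R}\times\mathbb{R}^{n-1} : \|x_2\| \le x_1\tan\theta\},
\qquad
\mathcal{K}^n = \{(x_1,x_2) : \|x_2\| \le x_1\}.
% With A = diag(tan(theta), I_{n-1}) and theta in (0, pi/2):
x \in \mathcal{L}_\theta \iff A x \in \mathcal{K}^n ,
% so minimizing a linear function over an affine set intersected with L_theta
% can be rewritten as an SOCP in the transformed variable Ax.
```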
Funding: This research is partly supported by the National Outstanding Young Investigator Grant (70225005) of the National Natural Science Foundation of China and Project (70471088) of the National Natural Science Foundation of China.
Abstract: This paper presents a unified framework of nonmonotone convex combination algorithms (such as the Frank-Wolfe algorithm) for solving traffic assignment problems. Global convergence results are established under mild conditions. The line search procedure used in our algorithm includes the nonmonotone Armijo rule, the nonmonotone Goldstein rule and the nonmonotone Wolfe rule as special cases. So, the new algorithm can be viewed as a generalization of the regular convex combination algorithm.
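As one concrete instance of the line search rules mentioned above, a backtracking nonmonotone Armijo rule of Grippo-Lampariello-Lucidi type can be sketched as follows; this is a generic illustration, not the paper's exact procedure, and the function and parameter names are ours.

```python
import numpy as np

def nonmonotone_armijo(f, x, d, g, f_history, gamma=1e-4, beta=0.5, M=10):
    """Backtracking nonmonotone Armijo rule (Grippo et al. style sketch).

    Accept a step alpha along the descent direction d when
        f(x + alpha*d) <= max of the last M function values + gamma*alpha*g'd.
    Illustrative only; the paper's rule may differ in details.
    """
    f_ref = max(f_history[-M:])          # nonmonotone reference value
    gtd = g @ d                          # directional derivative, assumed < 0
    alpha = 1.0
    while f(x + alpha * d) > f_ref + gamma * alpha * gtd:
        alpha *= beta                    # backtrack
        if alpha < 1e-16:                # safeguard against infinite loops
            break
    return alpha
```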
Funding: Supported by the Fundamental Research Funds for the Central Universities (2014JBM044).
Abstract: The minimax optimization model introduced in this paper is an important model which has received some attention over the past years. In this paper, the application of the minimax model to selecting the distribution center location is first introduced. Then a new algorithm with nonmonotone line search to solve the non-decomposable minimax optimization is proposed. We prove that the new algorithm is globally convergent. Numerical results show that the proposed algorithm is effective.
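For reference, the non-decomposable finite minimax problem that such algorithms target has the generic form below; the concrete distribution-center location model is not reproduced in the abstract, so this is only the standard template.

```latex
% Generic finite minimax problem (standard template, not the paper's model):
\min_{x\in\mathbb{R}^n}\ \max_{1\le i\le m} f_i(x)
% The pointwise maximum of smooth f_i is typically nonsmooth, which is why
% specialized minimax algorithms are used instead of plain smooth solvers.
```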
Funding: Supported by NSFC 10001031 and 70472074, and by NSERC Grant 283103.
Abstract: The self-scaling quasi-Newton method solves an unconstrained optimization problem by scaling the Hessian approximation matrix before it is updated at each iteration, to avoid possibly large eigenvalues in the Hessian approximation matrices of the objective function. It has been proved in the literature that this method has global and superlinear convergence when the objective function is convex (or even uniformly convex). We propose to solve unconstrained nonconvex optimization problems by a self-scaling BFGS algorithm with nonmonotone line search. Nonmonotone line search has been recognized in numerical practice as a competitive approach for solving large-scale nonlinear problems. We consider two different nonmonotone line search forms and study the global convergence of these nonmonotone self-scaling BFGS algorithms. We prove that, under conditions weaker than those in the literature, both forms of the self-scaling BFGS algorithm are globally convergent for unconstrained nonconvex optimization problems.
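A minimal sketch of a self-scaling BFGS update of the inverse Hessian approximation is given below, using the classical Oren-Luenberger scaling factor; the paper's modified rule and its nonmonotone safeguards are not reproduced here, and the function name is ours.

```python
import numpy as np

def self_scaling_bfgs_update(H, s, y):
    """One self-scaling BFGS update of the inverse Hessian approximation H.

    Sketch only: applies the classical Oren-Luenberger scaling factor
    tau = (s^T y) / (y^T H y) before the standard BFGS update;
    the paper's modified rule may differ.
    """
    sy = s @ y
    if sy <= 1e-12:                      # skip update if curvature condition fails
        return H
    tau = sy / (y @ H @ y)               # self-scaling factor
    H = tau * H
    rho = 1.0 / sy
    I = np.eye(len(s))
    V = I - rho * np.outer(s, y)
    return V @ H @ V.T + rho * np.outer(s, s)
```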
Abstract: In this paper, a new mixed algorithm for constrained optimization is presented, combining a nonmonotone line search, systems of linear equations for higher-order modification, and sequential quadratic programming. Under some weaker assumptions, without the strict complementarity condition, the algorithm is globally and superlinearly convergent.
Funding: Supported by the National Science Foundation of China (10171055).
Abstract: In [3], Liu et al. investigated the global convergence of conjugate gradient methods. In that paper they allowed β_k to be selected in a wider range, and the global convergence of the corresponding algorithm without the sufficient decrease condition was proved. This paper investigates the global convergence of the nonmonotone conjugate gradient method under the same conditions.
Funding: Supported by the National Natural Science Foundation of China (10161002) and the Guangxi Natural Science Foundation (0135004).
Abstract: This paper discusses the global convergence of a class of nonmonotone conjugate gradient methods (NM methods) for nonconvex objective functions. This class of methods includes the nonmonotone counterparts of the modified Polak-Ribière method and the modified Hestenes-Stiefel method as special cases.
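For orientation, the unmodified Polak-Ribière and Hestenes-Stiefel parameters are recalled below; the paper's modified variants differ in details not given in the abstract.

```latex
% Conjugate gradient direction and classical parameters (g_k = grad f(x_k)):
d_{k+1} = -g_{k+1} + \beta_k d_k, \qquad
\beta_k^{PR} = \frac{g_{k+1}^{\top}(g_{k+1}-g_k)}{\|g_k\|^2}, \qquad
\beta_k^{HS} = \frac{g_{k+1}^{\top}(g_{k+1}-g_k)}{d_k^{\top}(g_{k+1}-g_k)}.
```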
Abstract: In this paper, an alternating direction nonmonotone approximate Newton algorithm (ADNAN) based on nonmonotone line search is developed for solving inverse problems. It is shown that ADNAN converges to a solution of the inverse problem, and numerical results demonstrate the effectiveness of the proposed algorithm.
Funding: ANPCyT (Nos. PICT 2016-0921 and PICT 2019-02172), Argentina.
Abstract: In this work we consider an extension of the classical scalar-valued projected gradient method for multiobjective problems on convex sets. As in Fazzio et al. (Optim Lett 13:1365-1379, 2019), a parameter which controls the step length is considered, and an updating rule based on the spectral gradient method from the scalar case is proposed. In the present paper, we consider an extension of the traditional nonmonotone approach of Grippo et al. (SIAM J Numer Anal 23:707-716, 1986), based on the maximum of some previous function values, as suggested in Mita et al. (J Glob Optim 75:539-559, 2019) for unconstrained multiobjective optimization problems. We prove that the accumulation points of sequences generated by the proposed algorithm, if they exist, are stationary points of the original problem. Numerical experiments are reported.
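The two ingredients named above, a spectral (Barzilai-Borwein) step parameter and a max-type nonmonotone reference value, can be sketched in the scalar-valued case as follows; the multiobjective projection step itself is omitted, and all names are ours.

```python
import numpy as np

def spectral_step(s, y, lam_min=1e-10, lam_max=1e10):
    """Barzilai-Borwein (spectral) step parameter, safeguarded into
    [lam_min, lam_max]. Scalar-case sketch of the rule the abstract refers to,
    not the multiobjective algorithm itself."""
    sy = s @ y
    if sy <= 0.0:
        return lam_max                    # fall back when curvature is nonpositive
    return min(max((s @ s) / sy, lam_min), lam_max)

def nonmonotone_reference(f_history, M=10):
    """Max-type nonmonotone reference value of Grippo et al.: the line search
    compares trial points against the maximum of the last M function values."""
    return max(f_history[-M:])
```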
Funding: The National Natural Science Foundation of China (10471062) and the Natural Science Foundation of Jiangsu Province (BK2006184).
Abstract: A new limited memory symmetric rank one algorithm is proposed. It combines a modified self-scaled symmetric rank one (SSR1) update with the limited memory and nonmonotone line search techniques. In this algorithm, the descent search direction is generated by an inverse limited memory SSR1 update, thus simplifying the computation. A numerical comparison of the algorithm with the famous limited memory BFGS algorithm is given. The comparison results indicate that the new algorithm can handle a class of large-scale unconstrained optimization problems.
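The abstract does not state the update itself; for reference, the classical symmetric rank one update of an inverse Hessian approximation reads as follows (the paper uses a modified self-scaled, limited memory variant).

```latex
% Classical SR1 update of the inverse Hessian approximation
% (s_k = x_{k+1}-x_k, y_k = g_{k+1}-g_k); in practice the update is skipped
% when the denominator is too small:
H_{k+1} = H_k + \frac{(s_k - H_k y_k)(s_k - H_k y_k)^{\top}}{(s_k - H_k y_k)^{\top} y_k}.
```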
Funding: The National Natural Science Foundation of China (19801033, 10171104).
Abstract: Conjugate gradient methods are very important methods for unconstrained optimization, especially for large-scale problems. In this paper, we propose a new conjugate gradient method, in which the technique of nonmonotone line search is used. Under mild assumptions, we prove the global convergence of the method. Some numerical results are also presented.
Funding: Supported by the National Natural Science Foundation (No. 11561005) and the National Science Foundation of Guangxi (No. 2016GXNSFAA380248).
Abstract: In this paper, we present a QP-free algorithm without a penalty function or a filter for nonlinear semidefinite programming. At each iteration, two systems of linear equations with the same coefficient matrix are solved to determine the search direction; the nonmonotone line search ensures that the objective function or the constraint violation function is sufficiently reduced. There is no feasibility restoration phase in our algorithm, which is necessary for traditional filter methods. The proposed algorithm is globally convergent under some mild conditions. Preliminary numerical results indicate that the proposed algorithm is comparable with existing methods.
Funding: Supported by the National Natural Science Foundation of China (Nos. 12171051 and 11871115).
Abstract: Gradient methods are popular for solving large-scale problems. In this work, the cyclic gradient methods for quadratic function minimization are extended to general smooth unconstrained optimization problems. Combined with a nonmonotone line search, we prove their global convergence. Furthermore, the proposed algorithms have a sublinear convergence rate for general convex functions and an R-linear convergence rate for strongly convex problems. Numerical experiments show that the proposed methods are effective compared with the state of the art.
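A schematic of the cyclic idea, recomputing a Barzilai-Borwein step length only at the start of each cycle of m iterations and reusing it in between, is sketched below; this is a plain illustration under our own naming, and it omits the nonmonotone line search safeguard that the paper adds for global convergence.

```python
import numpy as np

def cyclic_gradient(grad, x0, m=4, max_iter=1000, tol=1e-6):
    """Sketch of a cyclic gradient method: the BB1 step length is refreshed
    only every m iterations and reused within the cycle. Illustrative only;
    the nonmonotone line search safeguard is omitted."""
    x = x0.copy()
    g = grad(x)
    alpha = 1.0                                   # initial step length
    s = y = None
    for k in range(max_iter):
        if np.linalg.norm(g) < tol:
            break
        if k % m == 0 and s is not None:          # new cycle: refresh step length
            alpha = (s @ s) / max(s @ y, 1e-12)   # BB1 step length
        x_new = x - alpha * g
        g_new = grad(x_new)
        s, y = x_new - x, g_new - g
        x, g = x_new, g_new
    return x
```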
Funding: The authors thank the editor and the anonymous referees for their valuable comments, which helped improve the quality of this paper. We would like to thank Professor Dai Y.H. and Dr. Kou C.X. for their CGOPT code, and Professors Hager W.W. and Zhang H.C. for their CG_DESCENT (5.3) code. This research is supported by the National Science Foundation of China (No. 11901561), the China Postdoctoral Science Foundation (2019M660833) and the Guangxi Natural Science Foundation (No. 2018GXNSFBA281180).
Abstract: A new adaptive subspace minimization three-term conjugate gradient algorithm with nonmonotone line search is introduced and analyzed in this paper. The search directions are computed by minimizing a quadratic approximation of the objective function on special subspaces, and we also propose an adaptive rule for choosing different search directions at each iteration. We show that each choice of the search direction satisfies the sufficient descent condition. With the nonmonotone line search used, we prove that the new algorithm is globally convergent for general nonlinear functions under some mild assumptions. Numerical experiments show that the proposed algorithm is promising for the given test problem set.
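The subspace step described above can be summarized generically as follows; the particular subspaces and the adaptive switching rule of the paper are not specified in the abstract, so this is only the common template from the subspace minimization conjugate gradient literature.

```latex
% Generic subspace minimization step (B_k approximates the Hessian;
% Omega_k is a low-dimensional subspace, e.g. spanned by the current
% gradient and one or two previous directions):
d_k = \arg\min_{d\in\Omega_k}\ g_k^{\top} d + \tfrac{1}{2}\, d^{\top} B_k d .
```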