Funding: Supported by the State Foundations of Ph.D. Units (20020141013) and by the NSF of China (10001007)
Abstract: Some properties of a class of quasi-differentiable functions (the difference of two finite convex functions) are considered in this paper, and the convergence of the steepest descent algorithm for both unconstrained and constrained quasi-differentiable programming is proved.
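The paper's algorithm works with quasi-differentials of DC functions rather than ordinary gradients, and its descent direction is not reproduced here. As a point of reference only, a minimal sketch of classical steepest descent with an Armijo backtracking line search for a smooth objective might look as follows; the Rosenbrock test function, the tolerances, and all parameter values are illustrative assumptions.

```python
import numpy as np

def steepest_descent(f, grad, x0, tol=1e-6, max_iter=10_000,
                     alpha0=1.0, rho=0.5, c1=1e-4):
    """Classical steepest descent with Armijo backtracking.

    Illustrative baseline only; the paper's algorithm uses
    quasi-differentials of DC functions, not plain gradients.
    """
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        g = grad(x)
        if np.linalg.norm(g) < tol:
            break
        d = -g                      # steepest descent direction
        alpha = alpha0
        # Backtrack until the Armijo sufficient-decrease condition holds.
        while f(x + alpha * d) > f(x) + c1 * alpha * (g @ d):
            alpha *= rho
        x = x + alpha * d
    return x

# Hypothetical usage on the Rosenbrock function.
f = lambda x: (1 - x[0])**2 + 100 * (x[1] - x[0]**2)**2
grad = lambda x: np.array([-2 * (1 - x[0]) - 400 * x[0] * (x[1] - x[0]**2),
                           200 * (x[1] - x[0]**2)])
x_star = steepest_descent(f, grad, np.array([-1.2, 1.0]))
```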
Abstract: In this paper, we extend a descent algorithm without line search for solving unconstrained optimization problems. Under mild conditions, its global convergence is established. Further, we generalize the search direction to a more general form and also obtain the global convergence of the corresponding algorithm. Numerical results illustrate that the new algorithm is effective.
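The abstract does not state the stepsize formula that removes the line search. A common generic way to obtain a descent method without any line search is to take a constant stepsize 1/L when the gradient is L-Lipschitz; the sketch below illustrates that generic idea only and is not the rule proposed in the paper. The quadratic test problem and the value of L are assumptions.

```python
import numpy as np

def fixed_step_descent(grad, x0, L, max_iter=1000, tol=1e-8):
    """Gradient descent with the line-search-free stepsize 1/L.

    Generic illustration of a descent method without line search;
    this is not the stepsize formula proposed in the paper.
    """
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        g = grad(x)
        if np.linalg.norm(g) < tol:
            break
        x = x - g / L               # descent step when the gradient is L-Lipschitz
    return x

# Assumed quadratic test problem f(x) = 0.5 x^T A x - b^T x, with L = ||A||_2.
A = np.array([[3.0, 1.0], [1.0, 2.0]])
b = np.array([1.0, 1.0])
x_star = fixed_step_descent(lambda x: A @ x - b, np.zeros(2), L=np.linalg.norm(A, 2))
```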
Abstract: In this paper, a new nonlinear conjugate gradient method is proposed for large-scale unconstrained optimization. The sufficient descent property holds without any line search. We use a steplength technique that ensures the Zoutendijk condition holds, and the method is proved to be globally convergent. Finally, we improve the method and give further analysis.
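For reference, the sufficient descent property and the Zoutendijk condition invoked in the abstract are usually written as follows, with g_k = ∇f(x_k), d_k the search direction, and c > 0 a constant; this is the standard textbook form, not notation taken from the paper.

```latex
% Sufficient descent condition (uniform in k):
g_k^{\top} d_k \le -c \, \|g_k\|^2, \qquad c > 0.
% Zoutendijk condition; together with sufficient descent it is the usual
% ingredient for proving \liminf_{k\to\infty} \|g_k\| = 0:
\sum_{k \ge 0} \frac{(g_k^{\top} d_k)^2}{\|d_k\|^2} < \infty .
```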
Funding: Supported by the Fund of the Chongqing Education Committee (KJ091104)
Abstract: In this paper, an efficient conjugate gradient method is given for solving general unconstrained optimization problems. It guarantees the sufficient descent property and global convergence under the strong Wolfe line search conditions. Numerical results show that the new method is efficient and stable in comparison with the PRP+ method, so it can be widely used in scientific computation.
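The strong Wolfe line search conditions referred to above are standard; a minimal checker is sketched below, with the usual parameter ranges 0 < c1 < c2 < 1 and the default values chosen only for illustration.

```python
import numpy as np

def satisfies_strong_wolfe(f, grad, x, d, alpha, c1=1e-4, c2=0.1):
    """Return True if stepsize `alpha` along direction `d` satisfies the
    strong Wolfe conditions at `x` (with 0 < c1 < c2 < 1)."""
    g0_d = grad(x) @ d
    # Sufficient decrease (Armijo) condition.
    sufficient_decrease = f(x + alpha * d) <= f(x) + c1 * alpha * g0_d
    # Strong curvature condition.
    curvature = abs(grad(x + alpha * d) @ d) <= c2 * abs(g0_d)
    return sufficient_decrease and curvature
```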
Funding: Supported by the Youth Project Foundation of Chongqing Three Gorges University (13QN17) and by the Fund of Scientific Research in Southeast University (the Support Project of Fundamental Research)
Abstract: Y. Liu and C. Storey (1992) proposed the well-known LS conjugate gradient method, which has good numerical performance. However, the LS method has only very weak convergence under a Wolfe-type line search. In this paper, we give a new descent gradient method based on the LS method. It guarantees the sufficient descent property at each iteration and global convergence under the strong Wolfe line search. Finally, we present extensive preliminary numerical experiments that show the efficiency of the proposed method in comparison with the well-known PRP+ method.
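For background, the classical Liu-Storey update is beta_k^LS = g_k^T y_{k-1} / (-d_{k-1}^T g_{k-1}) with y_{k-1} = g_k - g_{k-1}. The sketch below computes one direction update with that original rule; the paper's modified formula that guarantees sufficient descent is not reproduced here, and the safeguard for a near-zero denominator is an added assumption.

```python
import numpy as np

def ls_direction(g_new, g_old, d_old, eps=1e-12):
    """One step of the classical Liu-Storey (1992) CG direction update:
        d_k = -g_k + beta_k^LS * d_{k-1},
        beta_k^LS = g_k^T (g_k - g_{k-1}) / (-d_{k-1}^T g_{k-1}).
    This is the original LS rule, not the modified descent rule of the paper.
    """
    y = g_new - g_old
    denom = -(d_old @ g_old)
    beta = (g_new @ y) / denom if abs(denom) > eps else 0.0
    return -g_new + beta * d_old
```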
Funding: Supported by the Research Council of Semnan University
Abstract: A hybridization of the three-term conjugate gradient method proposed by Zhang et al. and the nonlinear conjugate gradient method proposed by Polak and Ribière, and by Polyak, is suggested. Based on an eigenvalue analysis, it is shown that the search directions of the proposed method satisfy the sufficient descent condition, independent of the line search and of the convexity of the objective function. Global convergence of the method is established under an Armijo-type line search condition. Numerical experiments show the practical efficiency of the proposed method.
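As background, the three-term direction of Zhang et al. combined with the PRP parameter is commonly written as d_k = -g_k + beta_k^PRP d_{k-1} - theta_k y_{k-1}, with beta_k^PRP = g_k^T y_{k-1}/||g_{k-1}||^2 and theta_k = g_k^T d_{k-1}/||g_{k-1}||^2, which makes g_k^T d_k = -||g_k||^2 hold regardless of the line search. The hybridization analyzed in the paper is not reproduced; the sketch below implements only this assumed classical three-term update.

```python
import numpy as np

def three_term_prp_direction(g_new, g_old, d_old):
    """Three-term PRP direction (assumed classical form of Zhang et al.):
        d_k = -g_k + beta_k^PRP * d_{k-1} - theta_k * y_{k-1},
    which yields g_k^T d_k = -||g_k||^2 independently of the line search.
    The hybrid scheme of the paper is not reproduced here.
    """
    y = g_new - g_old
    gnorm2_old = g_old @ g_old
    beta = (g_new @ y) / gnorm2_old
    theta = (g_new @ d_old) / gnorm2_old
    return -g_new + beta * d_old - theta * y
```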
Abstract: The convergence of quasi-Newton methods for unconstrained optimization has attracted much attention. Powell proved a global convergence result for the BFGS algorithm using an inexact line search that satisfies the Wolfe conditions. Byrd, Nocedal, and Yuan extended this result to the convex Broyden class of quasi-Newton methods, except for the DFP method. However, the global convergence of the DFP method, the first quasi-Newton method, under the same line search strategy is still an open question (see ref. [2]).
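For reference, with s_k = x_{k+1} - x_k and y_k = g_{k+1} - g_k, the two updates contrasted in this abstract are the standard BFGS and DFP formulas below, where B_k approximates the Hessian and H_k approximates its inverse.

```latex
% BFGS update of the Hessian approximation B_k:
B_{k+1} = B_k - \frac{B_k s_k s_k^{\top} B_k}{s_k^{\top} B_k s_k}
              + \frac{y_k y_k^{\top}}{y_k^{\top} s_k},
% DFP update of the inverse-Hessian approximation H_k:
H_{k+1} = H_k - \frac{H_k y_k y_k^{\top} H_k}{y_k^{\top} H_k y_k}
              + \frac{s_k s_k^{\top}}{y_k^{\top} s_k}.
```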
Abstract: The steepest descent method is the simplest gradient method for optimization. It is well known that exact line searches along each steepest descent direction may converge very slowly. An important result was given by Barzilai and Borwein, whose method is proved to be superlinearly convergent for convex quadratics in two-dimensional space and performs quite well for high-dimensional problems. The BB method is not monotone, so it is not easy to generalize to general nonlinear functions unless certain nonmonotone techniques are applied. Therefore, it is very desirable to find stepsize formulae that enable fast convergence and possess the monotone property. Such a stepsize αk for the steepest descent method is suggested in this paper. An algorithm that uses this new stepsize in even iterations and exact line search in odd iterations is proposed. Numerical results are presented, which confirm that the new method finds the exact solution within three iterations for two-dimensional problems. The new method is very efficient for small-scale problems. A modified version of the new method is also presented, in which the new technique for selecting the stepsize is used after every two exact line searches. The modified algorithm is comparable to the Barzilai-Borwein method for large-scale problems and better for small-scale problems.
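The Barzilai-Borwein stepsizes mentioned above are standard: with s_{k-1} = x_k - x_{k-1} and y_{k-1} = g_k - g_{k-1}, the BB1 choice is α_k = s_{k-1}^T s_{k-1} / (s_{k-1}^T y_{k-1}). The sketch below is the plain, non-monotone BB1 gradient method for reference; it is not the new monotone stepsize proposed in the paper, and the fallback when s^T y ≤ 0 is an added assumption.

```python
import numpy as np

def bb_gradient_method(grad, x0, alpha0=1.0, tol=1e-8, max_iter=10_000):
    """Plain (non-monotone) Barzilai-Borwein gradient method using the
    BB1 stepsize alpha_k = s^T s / s^T y. Shown for reference only; the
    paper proposes a different, monotone stepsize formula.
    """
    x = np.asarray(x0, dtype=float)
    g = grad(x)
    alpha = alpha0
    for _ in range(max_iter):
        if np.linalg.norm(g) < tol:
            break
        x_new = x - alpha * g
        g_new = grad(x_new)
        s, y = x_new - x, g_new - g
        sty = s @ y
        alpha = (s @ s) / sty if sty > 0 else alpha0   # BB1 stepsize with safeguard
        x, g = x_new, g_new
    return x
```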
Funding: Project supported by the National Key Research and Development Program of China (No. 2018YFC0830300) and the National Natural Science Foundation of China (No. 61571312)
Abstract: We introduce the fractional-order global optimal backpropagation machine, which is trained by an improved fractional-order steepest descent method (FSDM). This is a fractional-order backpropagation neural network (FBPNN), a state-of-the-art fractional-order branch of the family of backpropagation neural networks (BPNNs), different from the majority of previous classic first-order BPNNs, which are trained by the traditional first-order steepest descent method. The reverse incremental search of the proposed FBPNN proceeds in the negative directions of the approximate fractional-order partial derivatives of the square error. First, the theoretical concept of an FBPNN trained by an improved FSDM is described mathematically. Then, the mathematical proof of fractional-order global optimal convergence, an assumption on the structure, and the fractional-order multi-scale global optimization of the FBPNN are analyzed in detail. Finally, we perform three types of experiments to compare the performance of an FBPNN with that of a classic first-order BPNN: example function approximation, fractional-order multi-scale global optimization, and a comparison of global search and error-fitting abilities on real data. The higher search ability of an FBPNN in determining the global optimal solution is the major advantage that makes the FBPNN superior to a classic first-order BPNN.
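The abstract does not give the FSDM update rule. A common first-order approximation of the Caputo fractional derivative, D^α_c f(x) ≈ f'(x)(x - c)^(1-α)/Γ(2-α) for 0 < α < 1, yields a simple fractional-order gradient step of the kind sketched below; the lower terminal c, the order α, the absolute-value regularization, and the learning rate are all illustrative assumptions and not the training rule of the FBPNN.

```python
import numpy as np
from math import gamma

def fractional_gradient_step(grad, x, c, alpha=0.9, lr=0.01, eps=1e-8):
    """One fractional-order gradient step using the first-term Caputo
    approximation  D^alpha f(x) ~= f'(x) * |x - c|**(1 - alpha) / Gamma(2 - alpha).
    Illustrative only: the lower terminal c, the order alpha, and the learning
    rate are assumptions, not the FSDM rule used to train the FBPNN.
    """
    g = grad(x)
    frac_g = g * (np.abs(x - c) + eps) ** (1.0 - alpha) / gamma(2.0 - alpha)
    return x - lr * frac_g
```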