Journal Articles
47 articles found
1. Bayesian network learning algorithm based on unconstrained optimization and ant colony optimization (Cited by 3)
Authors: Chunfeng Wang, Sanyang Liu, Mingmin Zhu. Journal of Systems Engineering and Electronics (SCIE, EI, CSCD), 2012, Issue 5, pp. 784-790.
Abstract: Structure learning of Bayesian networks is a well-researched but computationally hard task. This paper proposes an improved algorithm based on unconstrained optimization and ant colony optimization (U-ACO-B) to address the drawbacks of ant colony optimization (ACO-B). The algorithm first solves an unconstrained optimization problem to obtain an undirected skeleton, and then uses the ACO algorithm to orient the edges, yielding the final structure. Experiments comparing the proposed algorithm with ACO-B show that the method is effective and converges considerably faster.
Keywords: Bayesian network, structure learning, ant colony optimization, unconstrained optimization
Download PDF
2. GLOBAL CONVERGENCE OF THE NON-QUASI-NEWTON METHOD FOR UNCONSTRAINED OPTIMIZATION PROBLEMS (Cited by 2)
Authors: Liu Hongwei, Wang Mingjie, Li Jinshan, Zhang Xiangsun. Applied Mathematics (A Journal of Chinese Universities) (SCIE, CSCD), 2006, Issue 3, pp. 276-288.
Abstract: This paper studies the non-quasi-Newton family with inexact line search applied to unconstrained optimization problems. A new update formula for the non-quasi-Newton family is proposed. It is proved that the resulting algorithm, with either a Wolfe-type or an Armijo-type line search, converges globally and Q-superlinearly if the function to be minimized has a Lipschitz continuous gradient.
Keywords: non-quasi-Newton method, inexact line search, global convergence, unconstrained optimization, superlinear convergence
Download PDF
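The Armijo-type inexact line search mentioned in this abstract can be sketched as follows. This is a minimal generic backtracking implementation; the parameter values (`c`, `rho`) are illustrative defaults, not those used in the paper.

```python
import numpy as np

def armijo_backtracking(f, grad, x, d, c=1e-4, rho=0.5, max_iter=50):
    """Backtracking line search enforcing the Armijo condition
    f(x + t*d) <= f(x) + c * t * grad(x)^T d."""
    t = 1.0
    fx, gTd = f(x), grad(x) @ d
    for _ in range(max_iter):
        if f(x + t * d) <= fx + c * t * gTd:
            return t
        t *= rho  # shrink the step until sufficient decrease holds
    return t

# Minimize f(x) = x^T x along the steepest-descent direction.
f = lambda x: x @ x
g = lambda x: 2 * x
x0 = np.array([2.0, -1.0])
t = armijo_backtracking(f, g, x0, -g(x0))
```

With these inputs the full step t = 1 overshoots, so one halving is taken and t = 0.5 lands exactly at the minimizer.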
3. New type of conjugate gradient algorithms for unconstrained optimization problems
Authors: Caiying Wu, Guoqing Chen. Journal of Systems Engineering and Electronics (SCIE, EI, CSCD), 2010, Issue 6, pp. 1000-1007.
Abstract: Two new formulas for the main parameter βk of the conjugate gradient method are presented, which can be seen as modifications of the HS and PRP methods, respectively. In comparison with classic conjugate gradient methods, the new methods use both gradient and function value information. Furthermore, their modifications are proposed. These methods are shown to be globally convergent under some assumptions. Numerical results are also reported.
Keywords: conjugate gradient, unconstrained optimization, global convergence, conjugacy condition
Download PDF
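The classical HS and PRP choices of βk that this paper modifies can be written down directly. This sketch shows only the standard baseline formulas, not the paper's new variants:

```python
import numpy as np

def beta_hs(g_new, g_old, d_old):
    """Classical Hestenes-Stiefel parameter: g_new^T y / (d_old^T y)."""
    y = g_new - g_old
    return (g_new @ y) / (d_old @ y)

def beta_prp(g_new, g_old):
    """Classical Polak-Ribiere-Polyak parameter: g_new^T y / ||g_old||^2."""
    y = g_new - g_old
    return (g_new @ y) / (g_old @ g_old)

# Next search direction: d_new = -g_new + beta * d_old
g_old = np.array([3.0, -1.0])
g_new = np.array([1.0, 0.5])
d_old = -g_old
b = beta_prp(g_new, g_old)
d_new = -g_new + b * d_old
```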
4. Subspace Minimization Conjugate Gradient Method Based on Cubic Regularization Model for Unconstrained Optimization
Authors: Ting Zhao, Hongwei Liu. Journal of Harbin Institute of Technology (New Series) (CAS), 2021, Issue 5, pp. 61-69.
Abstract: Many methods have been put forward to solve unconstrained optimization problems, among which the conjugate gradient (CG) method is very important. With the increasing emergence of large-scale problems, subspace techniques have become particularly important and widely used in the field of optimization. In this study, a new CG method is put forward that combines a subspace technique with a cubic regularization model. A special scaled norm in the cubic regularization model is also analyzed. Under certain conditions, significant properties of the search direction are given and the convergence of the algorithm is established. Numerical comparisons on 145 test functions from the CUTEr library show that the proposed method outperforms two classical CG methods and two recent subspace conjugate gradient methods.
Keywords: cubic regularization model, conjugate gradient method, subspace technique, unconstrained optimization
Download PDF
5. An Improved Quasi-Newton Method for Unconstrained Optimization
Authors: Fei Pusheng, Chen Zhong (Department of Mathematics, Wuhan University, Wuhan 430072, China). Wuhan University Journal of Natural Sciences (CAS), 1996, Issue 1, pp. 35-37.
Abstract: We present an improved quasi-Newton method. Assuming that the objective function is twice continuously differentiable and uniformly convex, we discuss the global and superlinear convergence of the improved method.
Keywords: quasi-Newton method, superlinear convergence, unconstrained optimization
Download PDF
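For context on quasi-Newton updates like the one this entry improves, here is the standard BFGS update of the inverse-Hessian approximation. This is the textbook baseline only; the paper's improved update differs in detail.

```python
import numpy as np

def bfgs_update(H, s, y):
    """BFGS update of the inverse-Hessian approximation H.
    s = x_new - x_old, y = grad_new - grad_old.
    The updated H satisfies the secant condition H @ y == s."""
    rho = 1.0 / (y @ s)
    I = np.eye(len(s))
    V = I - rho * np.outer(s, y)
    return V @ H @ V.T + rho * np.outer(s, s)

# One update starting from the identity.
H = np.eye(2)
s = np.array([1.0, 0.0])
y = np.array([2.0, 0.0])
H = bfgs_update(H, s, y)
```

The secant condition is the defining property of quasi-Newton updates; the non-quasi-Newton family elsewhere in this listing deliberately relaxes it.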
6. Global Convergence of Curve Search Methods for Unconstrained Optimization
Authors: Zhiwei Xu, Yongning Tang, Zhen-Jun Shi. Applied Mathematics, 2016, Issue 7, pp. 721-735.
Abstract: In this paper we propose a new family of curve search methods for unconstrained optimization problems. These methods search for a new iterate along a curve through the current iterate at each iteration, whereas line search methods find the new iterate on a line starting from the current iterate. The global convergence and linear convergence rate of these curve search methods are investigated under mild conditions. Numerical results show that some curve search methods are stable and effective for solving large-scale minimization problems.
Keywords: unconstrained optimization, curve search method, global convergence, convergence rate
Download PDF
7. CURVILINEAR PATHS AND TRUST REGION METHODS WITH NONMONOTONIC BACK TRACKING TECHNIQUE FOR UNCONSTRAINED OPTIMIZATION (Cited by 26)
Authors: De-tong Zhu (Department of Mathematics, Shanghai Normal University, Shanghai 200234, China). Journal of Computational Mathematics (SCIE, EI, CSCD), 2001, Issue 3, pp. 241-258.
Abstract: This study examines the modification of approximate trust region methods via two curvilinear paths for unconstrained optimization. It covers the properties of the curvilinear paths, describes a method that combines a line search technique with an approximate trust region algorithm, presents a convergence analysis, and reports numerical experiments.
Keywords: curvilinear paths, trust region methods, nonmonotonic technique, unconstrained optimization
Full Text Delivery
8. TESTING DIFFERENT CONJUGATE GRADIENT METHODS FOR LARGE-SCALE UNCONSTRAINED OPTIMIZATION (Cited by 10)
Authors: Yu-hong Dai, Qin Ni. Journal of Computational Mathematics (SCIE, CSCD), 2003, Issue 3, pp. 311-320.
Abstract: In this paper we test different conjugate gradient (CG) methods for solving large-scale unconstrained optimization problems. The methods are divided into two groups: the first group includes five basic CG methods and the second five hybrid CG methods. A collection of medium-scale and large-scale test problems is drawn from a standard code of test problems, CUTE. The conjugate gradient methods are ranked according to the numerical results, and some remarks are given.
Keywords: conjugate gradient methods, large-scale, unconstrained optimization, numerical tests
Full Text Delivery
9. NON-QUASI-NEWTON UPDATES FOR UNCONSTRAINED OPTIMIZATION (Cited by 25)
Authors: Y. X. Yuan, R. H. Byrd. Journal of Computational Mathematics (SCIE, CSCD), 1995, Issue 2, pp. 95-107.
Abstract: In this report we present some new numerical methods for unconstrained optimization. These methods apply update formulae that do not satisfy the quasi-Newton equation. We derive these new formulae by considering different techniques for approximating the objective function. Theoretical analyses are given to show the advantages of using non-quasi-Newton updates. Under mild conditions we prove that the new update formulae preserve global convergence properties. Numerical results are also presented.
Keywords: non-quasi-Newton updates, unconstrained optimization
Full Text Delivery
10. A NEW FAMILY OF TRUST REGION ALGORITHMS FOR UNCONSTRAINED OPTIMIZATION (Cited by 5)
Authors: Yuhong Dai, Dachuan Xu (State Key Laboratory of Scientific/Engineering Computing, Institute of Computational Mathematics and Scientific/Engineering Computing, Academy of Mathematics and System Sciences, Chinese Academy of Sciences, P.O. Box 2719, Beijing 100080, China). Journal of Computational Mathematics (SCIE, CSCD), 2003, Issue 2, pp. 221-228.
Abstract: Trust region (TR) algorithms are a class of recently developed algorithms for nonlinear optimization. A new family of TR algorithms for unconstrained optimization, which extends the usual TR method, is presented in this paper. When the objective function is bounded below and continuously differentiable, and the norm of the Hessian approximations increases at most linearly with the iteration number, we prove the global convergence of the algorithms. Limited numerical results are reported, which indicate that the new TR algorithm is competitive.
Keywords: trust region method, global convergence, quasi-Newton method, unconstrained optimization, nonlinear programming
Full Text Delivery
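The "usual TR method" that this family extends follows a standard radius-update loop. The sketch below is the generic textbook scheme, with the subproblem solved crudely by a truncated Cauchy step; the thresholds (0.25, 0.75) and radius cap are illustrative, not the paper's.

```python
import numpy as np

def trust_region_step(f, g, B, x, delta, eta=0.1):
    """One iteration of a basic trust region method with a Cauchy-point
    subproblem solve: step along -g, truncated to the region radius."""
    gx = g(x)
    gBg = gx @ B @ gx
    tau = min(delta / np.linalg.norm(gx),
              (gx @ gx) / gBg if gBg > 0 else np.inf)
    s = -tau * gx
    pred = -(gx @ s + 0.5 * s @ B @ s)   # model-predicted decrease
    ared = f(x) - f(x + s)               # actual decrease
    rho = ared / pred
    if rho < 0.25:
        delta *= 0.25                    # poor model fit: shrink region
    elif rho > 0.75:
        delta = min(2 * delta, 10.0)     # good fit: expand (capped)
    x_new = x + s if rho > eta else x    # accept only sufficiently good steps
    return x_new, delta

f = lambda x: x @ x
g = lambda x: 2 * x
B = 2 * np.eye(2)                        # exact Hessian of f
x, delta = np.array([3.0, 4.0]), 1.0
x, delta = trust_region_step(f, g, B, x, delta)
```

Here the quadratic model is exact, so rho = 1 and the region expands to 2.0.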
11. A QUASI-NEWTON ALGORITHM WITHOUT CALCULATING DERIVATIVES FOR UNCONSTRAINED OPTIMIZATION (Cited by 1)
Authors: Sun Lin-ping (Department of Mathematics, Nanjing University, Jiangsu, China). Journal of Computational Mathematics (SCIE, CSCD), 1994, Issue 4, pp. 380-386.
Abstract: A new algorithm for unconstrained optimization is developed using the product form of the OCSSR1 update. The implementation is especially useful when gradient information is estimated by difference formulae. Preliminary tests show that the new algorithm performs well.
Keywords: quasi-Newton algorithm, derivative-free optimization, unconstrained optimization
Full Text Delivery
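The "difference formulae" mentioned here refer to estimating gradients from function values alone. A minimal forward-difference sketch (step size `h` is an illustrative choice, not from the paper):

```python
import numpy as np

def fd_gradient(f, x, h=1e-6):
    """Forward-difference gradient estimate: g_i ~ (f(x + h*e_i) - f(x)) / h,
    accurate to O(h) per component."""
    fx = f(x)
    g = np.zeros_like(x)
    for i in range(len(x)):
        e = np.zeros_like(x)
        e[i] = h
        g[i] = (f(x + e) - fx) / h
    return g

f = lambda x: x[0] ** 2 + 3 * x[1] ** 2
g = fd_gradient(f, np.array([1.0, 2.0]))  # exact gradient is [2, 12]
```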
12. Unconstrained Optimization Reformulations of Equilibrium Problems
Authors: Li Ping Zhang, Ji Ye Han. Acta Mathematica Sinica, English Series (SCIE, CSCD), 2009, Issue 3, pp. 343-354.
Abstract: We generalize the D-gap function developed in the literature for variational inequalities to a general equilibrium problem (EP). Through the D-gap function, the equilibrium problem is cast as an unconstrained minimization problem. We give conditions under which any stationary point of the D-gap function is a solution of EP, and conditions under which it provides a global error bound for EP. Finally, these results are applied to box-constrained EP, where weaker conditions are established to obtain the desired results.
Keywords: equilibrium problems, D-gap function, error bound, unconstrained optimization
Full Text Delivery
13. A New Restarting Adaptive Trust-Region Method for Unconstrained Optimization
Authors: Morteza Kimiaei, Susan Ghaderi. Journal of the Operations Research Society of China (EI, CSCD), 2017, Issue 4, pp. 487-507.
Abstract: In this paper, we present a new adaptive trust-region method for solving nonlinear unconstrained optimization problems. More precisely, a trust-region radius based on a nonmonotone technique uses an adaptively chosen approximation of the Hessian. The method produces a suitable trust-region radius, preserves global convergence to first-order critical points under classical assumptions, and improves practical performance compared with other existing variants. Moreover, a quadratic convergence rate is established under suitable conditions. Computational results on the CUTEst collection of unconstrained test problems show the effectiveness of the proposed algorithm compared with some existing methods.
Keywords: unconstrained optimization, trust-region methods, nonmonotone technique, adaptive radius, theoretical convergence
Full Text Delivery
14. A NEW NONMONOTONE TRUST REGION ALGORITHM FOR SOLVING UNCONSTRAINED OPTIMIZATION PROBLEMS
Authors: Jinghui Liu, Changfeng Ma. Journal of Computational Mathematics (SCIE, CSCD), 2014, Issue 4, pp. 476-490.
Abstract: Based on the nonmonotone line search technique proposed by Gu and Mo (Appl. Math. Comput. 55, (2008) pp. 2158-2172), a new nonmonotone trust region algorithm is proposed for solving unconstrained optimization problems. The new algorithm is developed by resetting the ratio ρk used to evaluate the trial step dk whenever it is acceptable. The global and superlinear convergence of the algorithm are proved under suitable conditions. Numerical results show that the new algorithm is effective for solving unconstrained optimization problems.
Keywords: unconstrained optimization problems, nonmonotone trust region method, global convergence, superlinear convergence
Full Text Delivery
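The core idea behind nonmonotone techniques like those in this entry is to accept trial points against a reference value built from recent history rather than the current objective value alone. The sketch below uses the simplest max-over-window scheme; note that the Gu-Mo technique the paper builds on uses a weighted average rather than a plain maximum.

```python
from collections import deque

class NonmonotoneMemory:
    """Keeps the last m objective values; acceptance tests compare a trial
    point against their maximum instead of the current value only, which
    lets the method take occasional uphill steps and escape narrow valleys."""
    def __init__(self, m=5):
        self.values = deque(maxlen=m)

    def push(self, fval):
        self.values.append(fval)

    def reference(self):
        return max(self.values)

mem = NonmonotoneMemory(m=3)
for v in [10.0, 7.0, 8.5, 6.0]:
    mem.push(v)
ref = mem.reference()  # max of the last 3 values retained: 7.0, 8.5, 6.0
```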
15. A randomized nonmonotone adaptive trust region method based on the simulated annealing strategy for unconstrained optimization
Authors: Saman Babaie-Kafaki, Saeed Rezaee. International Journal of Intelligent Computing and Cybernetics (EI), 2019, Issue 3, pp. 389-399.
Abstract: Purpose – To employ stochastic techniques to increase the efficiency of classical algorithms for solving nonlinear optimization problems. Design/methodology/approach – The well-known simulated annealing strategy is employed to search successive neighborhoods of the classical trust region (TR) algorithm. Findings – An adaptive formula for computing the TR radius is suggested based on an eigenvalue analysis of the memoryless Broyden-Fletcher-Goldfarb-Shanno updating formula, and a (heuristic) randomized adaptive TR algorithm is developed for solving unconstrained optimization problems. Results of computational experiments on a set of CUTEr test problems show that the proposed randomization scheme can enhance the efficiency of TR methods. Practical implications – The algorithm can be effectively used for solving optimization problems that arise in engineering, economics, management, industry and other areas. Originality/value – The proposed randomization scheme reduces the computational costs of the classical TR algorithm; in particular, the suggested algorithm avoids repeatedly re-solving the TR subproblems.
Keywords: nonlinear programming, simulated annealing, adaptive radius, trust region method, unconstrained optimization
Full Text Delivery
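The simulated annealing strategy this entry borrows rests on the Metropolis acceptance rule: always accept an improvement, and accept a worse point with a probability that decays with the temperature. This shows only that core rule, not how the paper couples it with the TR radius.

```python
import math
import random

def sa_accept(f_old, f_new, temperature, rng=random.random):
    """Metropolis acceptance rule: accept any improvement outright;
    accept a deterioration with probability exp(-(f_new - f_old) / T)."""
    if f_new <= f_old:
        return True
    return rng() < math.exp(-(f_new - f_old) / temperature)
```

A deterministic `rng` stub makes the rule easy to test: at a tiny temperature almost no uphill move is accepted, at a huge temperature almost every one is.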
16. A Regularized Newton Method with Correction for Unconstrained Convex Optimization
Authors: Liming Li, Mei Qin, Heng Wang. Open Journal of Optimization, 2016, Issue 1, pp. 44-52.
Abstract: In this paper, we present a regularized Newton method with correction (M-RNM) for minimizing a convex function whose Hessian matrices may be singular. At every iteration, not only a regularized Newton step but also two correction steps are computed. We show that if the objective function is LC², then the method is globally convergent. Numerical results show that the new algorithm performs very well.
Keywords: regularized Newton method, correction technique, trust region technique, unconstrained convex optimization
Download PDF
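The basic regularized Newton step underlying this method shifts the Hessian to keep the linear system solvable when the Hessian is singular. This sketch shows only the step computation; the paper's correction steps and the choice of the shift are omitted.

```python
import numpy as np

def regularized_newton_step(g, H, mu):
    """Regularized Newton step: solve (H + mu*I) d = -g.  The shift mu > 0
    makes the system nonsingular even when H itself is singular."""
    n = len(g)
    return np.linalg.solve(H + mu * np.eye(n), -g)

# A singular Hessian: a plain Newton solve would fail here.
H = np.array([[2.0, 0.0], [0.0, 0.0]])
g = np.array([4.0, 1.0])
d = regularized_newton_step(g, H, mu=1.0)
```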
17. A NEW DERIVATIVE FREE OPTIMIZATION METHOD BASED ON CONIC INTERPOLATION MODEL (Cited by 9)
Authors: Qin Ni, Shuhua Hu. Acta Mathematica Scientia (SCIE, CSCD), 2004, Issue 2, pp. 281-290.
Abstract: In this paper, a new derivative-free trust region method is developed based on the conic interpolation model for unconstrained optimization. The conic interpolation model is built by means of the quadratic model function, the collinear scaling formula, quadratic approximation, and interpolation. All the parameters in this model are determined by objective function interpolation conditions. A new derivative-free method is developed based on this model, and its global convergence is proved without any gradient information.
Keywords: derivative-free optimization method, conic interpolation model, quadratic interpolation model, trust region method, unconstrained optimization
Download PDF
18. A New Subdivision Algorithm for the Bernstein Polynomial Approach to Global Optimization (Cited by 6)
Authors: P. S. V. Nataraj, M. Arounassalame. International Journal of Automation and Computing (EI), 2007, Issue 4, pp. 342-352.
Abstract: In this paper, an improved algorithm is proposed for unconstrained global optimization of non-convex nonlinear multivariate polynomial programming problems. The proposed algorithm is based on the Bernstein polynomial approach. Its novel features are a new rule for the selection of the subdivision point, modified rules for the selection of the subdivision direction, and a new acceleration device that avoids some unnecessary subdivisions. The performance of the proposed algorithm is numerically tested on a collection of 16 test problems. The results show the proposed algorithm to be superior to the existing Bernstein algorithm in terms of the chosen performance metrics.
Keywords: Bernstein polynomials, global optimization, nonlinear optimization, polynomial optimization, unconstrained optimization
Download PDF
19. Chaotic Aquila Optimization Algorithm for Solving Phase Equilibrium Problems and Parameter Estimation of Semi-empirical Models
Authors: Oguz Emrah Turgut, Mert Sinan Turgut, Erhan Kırtepe. Journal of Bionic Engineering (SCIE, EI, CSCD), 2024, Issue 1, pp. 486-526.
Abstract: This study aims to enhance the optimization performance of the recently proposed Aquila Optimization algorithm by incorporating chaotic sequences rather than uniformly generated Gaussian random numbers. The work employs 25 different chaotic maps within the framework of the Aquila Optimizer and considers the ten best chaotic variants for performance evaluation on multidimensional test functions composed of unimodal and multimodal problems, which have not previously been studied in the literature. The Ikeda chaotic map was found to yield the best-performing variant, which becomes the leading method in most cases. To test the effectiveness of this chaotic variant on real-world optimization problems, it is applied to two constrained engineering design problems, and its effectiveness is verified. Finally, phase equilibrium and semi-empirical parameter estimation problems are solved by the proposed method, and the solutions are compared with those obtained from state-of-the-art optimizers. It is observed that CH01 successfully copes with the restrictive nonlinearities and nonconvexities of parameter estimation and phase equilibrium problems, yielding minimum prediction error values of no more than 0.05 compared with the remaining algorithms in the benchmarking process.
Keywords: Aquila optimization algorithm, chaotic maps, parameter estimation, phase equilibrium, unconstrained optimization
Full Text Delivery
20. A modified three-term conjugate gradient method with sufficient descent property (Cited by 1)
Authors: Saman Babaie-Kafaki. Applied Mathematics (A Journal of Chinese Universities) (SCIE, CSCD), 2015, Issue 3, pp. 263-272.
Abstract: A hybridization of the three-term conjugate gradient method proposed by Zhang et al. and the nonlinear conjugate gradient method proposed by Polak, Ribière, and Polyak is suggested. Based on an eigenvalue analysis, it is shown that the search directions of the proposed method satisfy the sufficient descent condition, independent of the line search and of the objective function's convexity. Global convergence of the method is established under an Armijo-type line search condition. Numerical experiments show the practical efficiency of the proposed method.
Keywords: unconstrained optimization, conjugate gradient method, eigenvalue, sufficient descent condition, global convergence
Download PDF
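A three-term PRP direction in the style attributed to Zhang et al. (one ingredient of the hybrid, shown here as a sketch rather than the paper's exact method) adds a third term that makes the descent property hold by construction, independent of the line search:

```python
import numpy as np

def three_term_direction(g_new, g_old, d_old):
    """Three-term PRP-type direction:
        d = -g_new + beta * d_old - theta * y,   y = g_new - g_old,
    with beta = g_new^T y / ||g_old||^2 and theta = g_new^T d_old / ||g_old||^2.
    Expanding g_new^T d shows the beta and theta terms cancel, giving
    g_new^T d = -||g_new||^2 exactly (a sufficient descent direction)."""
    y = g_new - g_old
    denom = g_old @ g_old
    beta = (g_new @ y) / denom
    theta = (g_new @ d_old) / denom
    return -g_new + beta * d_old - theta * y

g_old = np.array([3.0, -1.0])
g_new = np.array([1.0, 0.5])
d_old = -g_old
d = three_term_direction(g_new, g_old, d_old)
```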