Journal Articles
1,003 articles found
1. A Two-Layer Encoding Learning Swarm Optimizer Based on Frequent Itemsets for Sparse Large-Scale Multi-Objective Optimization
Authors: Sheng Qi, Rui Wang, Tao Zhang, Xu Yang, Ruiqing Sun, Ling Wang. IEEE/CAA Journal of Automatica Sinica (SCIE, EI, CSCD), 2024, No. 6, pp. 1342-1357 (16 pages).
Traditional large-scale multi-objective optimization algorithms (LSMOEAs) encounter difficulties when dealing with sparse large-scale multi-objective optimization problems (SLMOPs) where most decision variables are zero. As a result, many algorithms use a two-layer encoding approach to optimize the binary variable Mask and the real variable Dec separately. Nevertheless, existing optimizers often focus on locating non-zero variable positions to optimize the binary variables Mask. However, approximating the sparse distribution of real Pareto optimal solutions does not necessarily mean that the objective function is optimized. In data mining, it is common to mine frequent itemsets appearing together in a dataset to reveal the correlation between data. Inspired by this, we propose a novel two-layer encoding learning swarm optimizer based on frequent itemsets (TELSO) to address these SLMOPs. TELSO mines the frequent items of multiple particles with better objective values to find mask combinations that can obtain better objective values for fast convergence. Experimental results on five real-world problems and eight benchmark sets demonstrate that TELSO outperforms existing state-of-the-art sparse large-scale multi-objective evolutionary algorithms (SLMOEAs) in terms of performance and convergence speed.
Keywords: evolutionary algorithms; learning swarm optimization; sparse large-scale optimization; sparse large-scale multi-objective problems; two-layer encoding.
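To make the two-layer encoding concrete, here is a minimal sketch with hypothetical values; Mask and Dec follow the naming in the abstract, and the composition rule is the standard element-wise product used by sparse LSMOP solvers:

```python
import numpy as np

# Minimal sketch of two-layer encoding: a binary Mask selects which
# decision variables are non-zero, a real-valued Dec supplies their values.
dim = 10
mask = np.array([1, 0, 0, 1, 0, 0, 0, 0, 1, 0])  # binary layer (Mask)
dec = np.random.uniform(-1.0, 1.0, dim)          # real layer (Dec)
x = mask * dec                                   # decoded sparse solution
print(x)  # zero wherever mask is 0, so most variables stay zero
```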
2. Large-Scale Multi-Objective Optimization Algorithm Based on Weighted Overlapping Grouping of Decision Variables
Authors: Liang Chen, Jingbo Zhang, Linjie Wu, Xingjuan Cai, Yubin Xu. Computer Modeling in Engineering & Sciences (SCIE, EI), 2024, No. 7, pp. 363-383 (21 pages).
The large-scale multi-objective optimization algorithm (LSMOA), based on the grouping of decision variables, is an advanced method for handling high-dimensional decision variables. However, in practical problems the interaction among decision variables is intricate, leading to large group sizes and suboptimal optimization effects; hence a large-scale multi-objective optimization algorithm based on weighted overlapping grouping of decision variables (MOEAWOD) is proposed in this paper. Initially, the decision variables are perturbed and categorized into convergence and diversity variables; subsequently, the convergence variables are subdivided into groups based on the interactions among different decision variables. If the size of a group surpasses the set threshold, that group undergoes a process of weighting and overlapping grouping. Specifically, the interaction strength is evaluated based on the interaction frequency and the number of objectives involved among the various decision variables. The decision variable with the highest interaction in the group is identified and set aside, and the remaining variables are then reclassified into subgroups; finally, this most strongly interacting variable is added to each subgroup. MOEAWOD minimizes the interactivity between different groups and maximizes the interactivity of decision variables within groups, which steers each group toward its own direction of convergence or diversity exploration. MOEAWOD was tested on 18 benchmark large-scale optimization problems, and the experimental results demonstrate the effectiveness of our method; compared with the other algorithms, it remains at an advantage.
Keywords: decision variable grouping; large-scale multi-objective optimization algorithms; weighted overlapping grouping; direction-guided evolution.
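The grouping step relies on detecting interactions between decision variables. A common finite-difference test in the differential-grouping spirit is sketched below; this is an illustrative assumption, not necessarily the paper's exact criterion:

```python
import numpy as np

def interact(f, x, i, j, delta=1.0, eps=1e-8):
    """Variables i and j are deemed interacting if perturbing x[j]
    changes the effect that perturbing x[i] has on f."""
    e_i = np.zeros_like(x); e_i[i] = delta
    e_j = np.zeros_like(x); e_j[j] = delta
    d1 = f(x + e_i) - f(x)              # effect of moving x[i] alone
    d2 = f(x + e_i + e_j) - f(x + e_j)  # same move after perturbing x[j]
    return abs(d1 - d2) > eps

# Example: the term x0*x3 couples variables 0 and 3, while x1 is separable
f = lambda x: x[0] * x[3] + x[1] ** 2
x = np.ones(4)
print(interact(f, x, 0, 3))  # True
print(interact(f, x, 0, 1))  # False
```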
3. Testing Different Conjugate Gradient Methods for Large-Scale Unconstrained Optimization [cited by 10]
Authors: Yu-Hong Dai, Qin Ni. Journal of Computational Mathematics (SCIE, CSCD), 2003, No. 3, pp. 311-320 (10 pages).
In this paper we test different conjugate gradient (CG) methods for solving large-scale unconstrained optimization problems. The methods are divided into two groups: the first group includes five basic CG methods and the second includes five hybrid CG methods. A collection of medium-scale and large-scale test problems is drawn from a standard code of test problems, CUTE. The conjugate gradient methods are ranked according to the numerical results, and some remarks are given.
Keywords: conjugate gradient methods; large-scale unconstrained optimization; numerical tests.
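For context, the generic nonlinear CG iteration and three classical choices of the parameter β_k (the ten variants actually tested in the paper are not reproduced here):

$$x_{k+1} = x_k + \alpha_k d_k, \qquad d_k = -g_k + \beta_k d_{k-1}, \qquad d_0 = -g_0,$$

$$\beta_k^{FR} = \frac{\|g_k\|^2}{\|g_{k-1}\|^2}, \qquad \beta_k^{PRP} = \frac{g_k^{T}(g_k - g_{k-1})}{\|g_{k-1}\|^2}, \qquad \beta_k^{HS} = \frac{g_k^{T}(g_k - g_{k-1})}{d_{k-1}^{T}(g_k - g_{k-1})}.$$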
4. Global Convergence of the Non-Quasi-Newton Method for Unconstrained Optimization Problems [cited by 6]
Authors: Liu Hongwei, Wang Mingjie, Li Jinshan, Zhang Xiangsun. Applied Mathematics (A Journal of Chinese Universities) (SCIE, CSCD), 2006, No. 3, pp. 276-288 (13 pages).
In this paper, the non-quasi-Newton family with inexact line search applied to unconstrained optimization problems is studied. A new update formula for the non-quasi-Newton family is proposed. It is proved that the resulting algorithm, with either a Wolfe-type or an Armijo-type line search, converges globally and Q-superlinearly if the function to be minimized has a Lipschitz continuous gradient.
Keywords: non-quasi-Newton method; inexact line search; global convergence; unconstrained optimization; superlinear convergence.
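For reference, the two inexact line-search rules named in the abstract, for a step size α along a descent direction d_k with constants 0 < c₁ < c₂ < 1: the Armijo condition requires

$$f(x_k + \alpha d_k) \le f(x_k) + c_1 \alpha\, g_k^{T} d_k,$$

and the Wolfe conditions add the curvature requirement

$$g(x_k + \alpha d_k)^{T} d_k \ge c_2\, g_k^{T} d_k$$

(the strong Wolfe variant, used by several later entries, replaces this with $|g(x_k + \alpha d_k)^{T} d_k| \le c_2 |g_k^{T} d_k|$).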
5. Bayesian Network Learning Algorithm Based on Unconstrained Optimization and Ant Colony Optimization [cited by 3]
Authors: Chunfeng Wang, Sanyang Liu, Mingmin Zhu. Journal of Systems Engineering and Electronics (SCIE, EI, CSCD), 2012, No. 5, pp. 784-790 (7 pages).
Structure learning of Bayesian networks is a well-researched but computationally hard task. For learning Bayesian networks, this paper proposes an improved algorithm based on unconstrained optimization and ant colony optimization (U-ACO-B) to overcome the drawbacks of the ant colony optimization algorithm for Bayesian networks (ACO-B). In this algorithm, an unconstrained optimization problem is first solved to obtain an undirected skeleton, and then the ACO algorithm is used to orient the edges, returning the final structure. In the experimental part of the paper, we compare the performance of the proposed algorithm with the ACO-B algorithm. The experimental results show that our method is effective and converges considerably faster than ACO-B.
Keywords: Bayesian network; structure learning; ant colony optimization; unconstrained optimization.
6. Integrating Conjugate Gradients Into Evolutionary Algorithms for Large-Scale Continuous Multi-Objective Optimization [cited by 3]
Authors: Ye Tian, Haowen Chen, Haiping Ma, Xingyi Zhang, Kay Chen Tan, Yaochu Jin. IEEE/CAA Journal of Automatica Sinica (SCIE, EI, CSCD), 2022, No. 10, pp. 1801-1817 (17 pages).
Large-scale multi-objective optimization problems (LSMOPs) pose challenges to existing optimizers since a set of well-converged and diverse solutions should be found in huge search spaces. While evolutionary algorithms are good at solving small-scale multi-objective optimization problems, they are criticized for low efficiency in converging to the optimums of LSMOPs. By contrast, mathematical programming methods offer fast convergence on large-scale single-objective optimization problems, but they have difficulties in finding diverse solutions for LSMOPs. Currently, how to integrate evolutionary algorithms with mathematical programming methods to solve LSMOPs remains unexplored. In this paper, a hybrid algorithm is tailored for LSMOPs by coupling differential evolution and a conjugate gradient method. On the one hand, conjugate gradients and differential evolution are used to update different decision variables of a set of solutions, where the former drives the solutions to quickly converge towards the Pareto front and the latter promotes the diversity of the solutions to cover the whole Pareto front. On the other hand, the objective decomposition strategy of evolutionary multi-objective optimization is used to differentiate the conjugate gradients of solutions, and the line search strategy of mathematical programming is used to ensure that each offspring is of higher quality than its parent. In comparison with state-of-the-art evolutionary algorithms, mathematical programming methods, and hybrid algorithms, the proposed algorithm exhibits better convergence and diversity performance on a variety of benchmark and real-world LSMOPs.
Keywords: conjugate gradient; differential evolution; evolutionary computation; large-scale multi-objective optimization; mathematical programming.
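A schematic of the division of labor the abstract describes: gradient-based moves drive convergence while DE mutation maintains diversity. The per-variable split and the plain gradient step below are illustrative assumptions, not the paper's actual operators (which couple conjugate gradients with decomposition and line search):

```python
import numpy as np

def hybrid_step(pop, grad, step=1e-2, F=0.5):
    """One schematic generation: gradient moves on half the variables
    (convergence), DE rand/1 mutation on the other half (diversity)."""
    n, d = pop.shape
    half = d // 2
    new_pop = pop.copy()
    for i in range(n):
        g = grad(pop[i])
        new_pop[i, :half] -= step * g[:half]          # convergence part
        a, b, c = pop[np.random.choice(n, 3, replace=False)]
        new_pop[i, half:] = a[half:] + F * (b[half:] - c[half:])  # diversity part
    return new_pop

# Example on a toy objective f(x) = ||x||^2, whose gradient is 2x
pop = np.random.randn(10, 6)
pop = hybrid_step(pop, grad=lambda x: 2 * x)
```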
7. Subspace Minimization Conjugate Gradient Method Based on Cubic Regularization Model for Unconstrained Optimization [cited by 1]
Authors: Ting Zhao, Hongwei Liu. Journal of Harbin Institute of Technology (New Series) (CAS), 2021, No. 5, pp. 61-69 (9 pages).
Many methods have been put forward to solve unconstrained optimization problems, among which the conjugate gradient (CG) method is very important. With the increasing emergence of large-scale problems, subspace techniques have become particularly important and widely used in the field of optimization. In this study, a new CG method is put forward which combines subspace techniques with a cubic regularization model; in addition, a special scaled norm in the cubic regularization model is analyzed. Under certain conditions, some significant characteristics of the search direction are given and the convergence of the algorithm is established. Numerical comparisons show that, on the 145 test functions from the CUTEr library, the proposed method is better than two classical CG methods and two new subspace conjugate gradient methods.
Keywords: cubic regularization model; conjugate gradient method; subspace technique; unconstrained optimization.
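The cubic regularization model at the heart of such methods augments the usual quadratic model with a cubic term, where σ_k > 0 is the regularization parameter (the special scaled norm analyzed in the paper is not shown here):

$$m_k(d) = f(x_k) + g_k^{T} d + \tfrac{1}{2}\, d^{T} B_k d + \tfrac{\sigma_k}{3}\,\|d\|^{3}.$$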
8. A Line Search Algorithm for Unconstrained Optimization [cited by 1]
Authors: Gonglin Yuan, Sha Lu, Zengxin Wei. Journal of Software Engineering and Applications, 2010, No. 5, pp. 503-509 (7 pages).
It is well known that line search methods play a very important role in optimization. In this paper a new line search method is proposed for solving unconstrained optimization. Under weak conditions, this method possesses global convergence for nonconvex functions and R-linear convergence for convex functions. Moreover, the given search direction has the sufficient descent property and belongs to a trust region without carrying out any line search rule. Numerical results show that the new method is effective.
Keywords: line search; unconstrained optimization; global convergence; R-linear convergence.
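The sufficient descent property claimed for the search direction is the standard requirement that, for some constant c > 0 independent of k,

$$g_k^{T} d_k \le -c\,\|g_k\|^{2}.$$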
9. A Derivative-Free Algorithm for Unconstrained Optimization [cited by 1]
Authors: Peng Yehui, Liu Zhenhai. Applied Mathematics (A Journal of Chinese Universities) (SCIE, CSCD), 2005, No. 4, pp. 491-498 (8 pages).
In this paper, a hybrid algorithm combining the pattern search method with a genetic algorithm for unconstrained optimization is presented. The algorithm is a deterministic pattern search algorithm, but in the search step the trial points are produced in the manner of a genetic algorithm: at each iteration, a finite set of points can be generated by reproduction, crossover, and mutation. In theory, the algorithm is globally convergent. Most strikingly, numerical results show that it can find the global minimizer for some problems on which other pattern search algorithms fail.
Keywords: unconstrained optimization; pattern search method; genetic algorithm; derivative-free algorithm.
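A sketch of the hybrid's flavor: a deterministic compass/pattern search whose trial set is enriched with randomly mutated points, GA-style. This is an illustrative reconstruction under the assumptions of mesh halving and mutation-only trial generation, not the paper's exact reproduction/crossover/mutation scheme:

```python
import numpy as np

def hybrid_pattern_search(f, x0, step=1.0, tol=1e-8, extra=8, max_iter=1000):
    """Pattern search with GA-style randomized trial points (sketch)."""
    x = np.asarray(x0, dtype=float)
    fx = f(x)
    n = x.size
    for _ in range(max_iter):
        if step < tol:
            break
        # Poll set: deterministic coordinate pattern moves ...
        trials = [x + step * e for e in np.eye(n)]
        trials += [x - step * e for e in np.eye(n)]
        # ... plus GA-style mutated points around the incumbent
        trials += [x + step * np.random.uniform(-1, 1, n) for _ in range(extra)]
        cand = min(trials, key=f)
        fc = f(cand)
        if fc < fx:            # successful poll: move, keep the mesh
            x, fx = cand, fc
        else:                  # unsuccessful poll: refine the mesh
            step *= 0.5
    return x, fx

# Example: a multimodal 2-D function with minimizers at (±1, 0)
x_best, f_best = hybrid_pattern_search(lambda v: (v[0]**2 - 1)**2 + v[1]**2,
                                       [2.0, 2.0])
```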
10. A Filled Function with Adjustable Parameters for Unconstrained Global Optimization [cited by 1]
Authors: Shang You-lin, Li Xiao-yan. Chinese Quarterly Journal of Mathematics (CSCD), 2004, No. 3, pp. 232-239 (8 pages).
A filled function with adjustable parameters is suggested in this paper for finding a global minimum point of a general class of nonlinear programming problems with a bounded and closed domain. This function has two adjustable parameters. We discuss the properties of the proposed filled function, and give conditions on this function and on the values of the parameters so that the constructed function has the desired properties of the traditional filled function.
Keywords: nonlinear programming; geometric programming; filled function; global optimization.
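For orientation, the classical two-parameter filled function of Ge, which work of this kind builds on (the paper's own construction may differ), is, at a current local minimizer x₁* of f:

$$P(x, r, \rho) = \frac{1}{r + f(x)}\,\exp\!\left(-\frac{\|x - x_1^{*}\|^{2}}{\rho^{2}}\right),$$

where r and ρ are the adjustable parameters; minimizing P escapes the basin of x₁* and leads the search toward lower minimizers of f.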
11. New Type of Conjugate Gradient Algorithms for Unconstrained Optimization Problems
Authors: Caiying Wu, Guoqing Chen. Journal of Systems Engineering and Electronics (SCIE, EI, CSCD), 2010, No. 6, pp. 1000-1007 (8 pages).
Two new formulas for the main parameter βk of the conjugate gradient method are presented, which can be seen as modifications of the HS and PRP methods, respectively. In comparison with classic conjugate gradient methods, the new methods use both the available gradient and function value information. Furthermore, their modifications are proposed. These methods are shown to be globally convergent under some assumptions. Numerical results are also reported.
Keywords: conjugate gradient; unconstrained optimization; global convergence; conjugacy condition.
12. A New Nonlinear Conjugate Gradient Method for Unconstrained Optimization Problems [cited by 1]
Authors: Liu Jin-kui, Wang Kai-rong, Song Xiao-qian, Du Xiang-lin. Chinese Quarterly Journal of Mathematics (CSCD), 2010, No. 3, pp. 444-450 (7 pages).
In this paper, an efficient conjugate gradient method is given for solving general unconstrained optimization problems. It guarantees the sufficient descent property and global convergence under the strong Wolfe line search conditions. Numerical results show that the new method is effective and stable in comparison with the PRP+ method, and it can therefore be widely used in scientific computation.
Keywords: unconstrained optimization; conjugate gradient method; strong Wolfe line search; sufficient descent property; global convergence.
13. On the Global Convergence of the Perry-Shanno Method for Nonconvex Unconstrained Optimization Problems
Authors: Linghua Huang, Qingjun Wu, Gonglin Yuan. Applied Mathematics, 2011, No. 3, pp. 315-320 (6 pages).
In this paper, we prove the global convergence of the Perry-Shanno memoryless quasi-Newton (PSMQN) method with a new inexact line search when applied to nonconvex unconstrained minimization problems. Preliminary numerical results show that the PSMQN method with the particular line search conditions is very promising.
Keywords: unconstrained optimization; nonconvex optimization; global convergence.
14. A Retrospective Filter Trust Region Algorithm for Unconstrained Optimization
Authors: Yue Lu, Zhongwen Chen. Applied Mathematics, 2010, No. 3, pp. 179-188 (10 pages).
In this paper, we propose a retrospective filter trust region algorithm for unconstrained optimization, which is based on the framework of the retrospective trust region method and combined with the technique of the multi-dimensional filter. The new algorithm gives a good estimate of the trust region radius and relaxes the condition for accepting a trial step relative to the usual trust region methods. Under reasonable assumptions, we analyze the global convergence of the new method and report preliminary numerical tests. We compare the results with those of the basic trust region algorithm, the filter trust region algorithm, and the retrospective trust region algorithm, which shows the effectiveness of the new algorithm.
Keywords: unconstrained optimization; retrospective trust region method; multi-dimensional filter technique.
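Trust region methods accept a trial step d_k and resize the radius using the ratio of actual to predicted reduction; the retrospective variant re-evaluates this ratio with the updated model m_{k+1} after the step. A sketch of the standard quantities:

$$\rho_k = \frac{f(x_k) - f(x_k + d_k)}{m_k(0) - m_k(d_k)},$$

with the step accepted when ρ_k exceeds a threshold η, and the radius Δ_k enlarged or shrunk according to how close ρ_k is to 1.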
15. Modified LS Method for Unconstrained Optimization
Authors: Jinkui Liu, Li Zheng. Applied Mathematics, 2011, No. 6, pp. 779-782 (4 pages).
In this paper, a new conjugate gradient formula and its algorithm for solving unconstrained optimization problems are proposed. The given formula satisfies the descent condition. Under the Grippo-Lucidi line search, the global convergence of the given method is proved. The numerical results show that the new method is efficient on the given test problems.
Keywords: unconstrained optimization; conjugate gradient method; Grippo-Lucidi line search; global convergence.
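For context, the Liu-Storey (LS) formula being modified is, with y_{k-1} = g_k - g_{k-1}:

$$\beta_k^{LS} = \frac{g_k^{T} y_{k-1}}{-d_{k-1}^{T} g_{k-1}}.$$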
16. A New Descent Nonlinear Conjugate Gradient Method for Unconstrained Optimization
Authors: Hao Fan, Zhibin Zhu, Anwa Zhou. Applied Mathematics, 2011, No. 9, pp. 1119-1123 (5 pages).
In this paper, a new nonlinear conjugate gradient method is proposed for large-scale unconstrained optimization. The sufficient descent property holds without any line search. We use a steplength technique which ensures that the Zoutendijk condition holds, and the method is proved to be globally convergent. Finally, we improve it and carry out further analysis.
Keywords: large-scale unconstrained optimization; conjugate gradient method; sufficient descent property; global convergence.
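The Zoutendijk condition invoked here is the summability bound which, together with the sufficient descent property, forces lim inf_{k→∞} ||g_k|| = 0:

$$\sum_{k \ge 0} \frac{(g_k^{T} d_k)^{2}}{\|d_k\|^{2}} < \infty.$$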
17. An Improved Quasi-Newton Method for Unconstrained Optimization
Authors: Fei Pusheng, Chen Zhong (Department of Mathematics, Wuhan University, Wuhan 430072, China). Wuhan University Journal of Natural Sciences (CAS), 1996, No. 1, pp. 35-37 (3 pages).
We present an improved quasi-Newton method. Assuming that the objective function is twice continuously differentiable and uniformly convex, we discuss the global and superlinear convergence of the improved method.
Keywords: quasi-Newton method; superlinear convergence; unconstrained optimization.
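For context, the classical BFGS update that quasi-Newton improvements of this kind start from, with s_k = x_{k+1} - x_k and y_k = g_{k+1} - g_k:

$$B_{k+1} = B_k - \frac{B_k s_k s_k^{T} B_k}{s_k^{T} B_k s_k} + \frac{y_k y_k^{T}}{y_k^{T} s_k}.$$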
18. A Sparse Subspace Truncated Newton Method for Large-Scale Bound Constrained Nonlinear Optimization
Authors: Ni Qin. Numerical Mathematics: A Journal of Chinese Universities (English Series) (SCIE), 1997, No. 1, pp. 27-37 (11 pages).
In this paper we report a sparse truncated Newton algorithm for handling large-scale simple-bound nonlinear constrained minimization problems. The truncated Newton method is used to update the variables with indices outside the active set, while the projected gradient method is used to update the active variables. At each iteration, the search direction consists of three parts: a subspace truncated Newton direction, a subspace gradient direction, and a modified gradient direction. The subspace truncated Newton direction is obtained by solving a sparse system of linear equations. The global convergence and the quadratic convergence rate of the algorithm are proved, and some numerical tests are given.
Keywords: truncated Newton method; large-scale sparse problems; bound constrained nonlinear optimization.
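The "truncated" part means the Newton system is solved only approximately. A minimal sketch of the standard inner loop, linear conjugate gradients on H d = -g stopped early (illustrative; the paper solves a sparse subspace system and handles the bounds via an active set):

```python
import numpy as np

def truncated_newton_direction(hess_vec, g, tol=0.1, max_cg=50):
    """Approximately solve H d = -g by linear CG, truncating early.
    hess_vec(v) returns H @ v; stops on a relative residual test or on
    negative curvature (falling back to steepest descent if needed)."""
    d = np.zeros_like(g)
    r = -g.copy()            # residual of H d = -g at d = 0
    p = r.copy()
    rs = r @ r
    for _ in range(max_cg):
        Hp = hess_vec(p)
        curv = p @ Hp
        if curv <= 0:        # negative curvature: truncate here
            return d if d.any() else -g
        alpha = rs / curv
        d += alpha * p
        r -= alpha * Hp
        rs_new = r @ r
        if np.sqrt(rs_new) <= tol * np.linalg.norm(g):
            break            # inexact ("truncated") solve is good enough
        p = r + (rs_new / rs) * p
        rs = rs_new
    return d

# Example: quadratic model with H = diag(1..5), g = ones
H = np.diag(np.arange(1.0, 6.0))
d = truncated_newton_direction(lambda v: H @ v, np.ones(5))
```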
19. Global Convergence of an Extended Descent Algorithm without Line Search for Unconstrained Optimization
Authors: Cuiling Chen, Liling Luo, Caihong Han, Yu Chen. Journal of Applied Mathematics and Physics, 2018, No. 1, pp. 130-137 (8 pages).
In this paper, we extend a descent algorithm without line search for solving unconstrained optimization problems. Under mild conditions, its global convergence is established. Further, we generalize the search direction to a more general form and obtain the global convergence of the corresponding algorithm. The numerical results illustrate that the new algorithm is effective.
Keywords: unconstrained optimization; descent method; line search; global convergence.
20. New Variants of Newton's Method for Nonlinear Unconstrained Optimization Problems
Authors: V. Kanwar, Kapil K. Sharma, Ramandeep Behl. Intelligent Information Management, 2010, No. 1, pp. 40-45 (6 pages).
In this paper, we propose new variants of Newton's method based on quadrature formulas and the power mean for solving nonlinear unconstrained optimization problems. It is proved that the order of convergence of the proposed family is three. Numerical comparisons are made to show the performance of the presented methods. Furthermore, numerical experiments demonstrate that the logarithmic-mean Newton's method outperforms the classical Newton's method and other variants. MSC: 65H05.
Keywords: unconstrained optimization; Newton's method; order of convergence; power means; initial guess.
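A sketch of the idea for one-dimensional minimization (find f'(x) = 0). The arithmetic-mean variant shown is illustrative only; the paper's family also covers other power means, such as the logarithmic mean, which is not reproduced here:

```python
def newton_min(df, d2f, x, tol=1e-10, max_iter=100):
    """Classical Newton iteration for 1-D minimization: x+ = x - f'(x)/f''(x)."""
    for _ in range(max_iter):
        step = df(x) / d2f(x)
        x -= step
        if abs(step) < tol:
            break
    return x

def mean_newton_min(df, d2f, x, tol=1e-10, max_iter=100):
    """Arithmetic-mean variant (illustrative): average the curvature at x
    and at the classical Newton predictor y before taking the step."""
    for _ in range(max_iter):
        y = x - df(x) / d2f(x)                  # Newton predictor
        step = 2 * df(x) / (d2f(x) + d2f(y))    # mean of curvatures
        x -= step
        if abs(step) < tol:
            break
    return x

# Example: minimize f(x) = x^4 - 3x^2 + 2, so f' = 4x^3 - 6x, f'' = 12x^2 - 6
x_star = mean_newton_min(lambda x: 4*x**3 - 6*x, lambda x: 12*x**2 - 6, x=2.0)
```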