Journal Articles
1,023 articles found
1. A Two-Layer Encoding Learning Swarm Optimizer Based on Frequent Itemsets for Sparse Large-Scale Multi-Objective Optimization
Authors: Sheng Qi, Rui Wang, Tao Zhang, Xu Yang, Ruiqing Sun, Ling Wang. IEEE/CAA Journal of Automatica Sinica, SCIE EI CSCD, 2024, Issue 6, pp. 1342-1357 (16 pages)
Traditional large-scale multi-objective optimization algorithms (LSMOEAs) encounter difficulties when dealing with sparse large-scale multi-objective optimization problems (SLMOPs), where most decision variables are zero. As a result, many algorithms use a two-layer encoding approach to optimize the binary variable Mask and the real variable Dec separately. Nevertheless, existing optimizers often focus on locating non-zero variable positions to optimize the binary variables Mask. However, approximating the sparse distribution of real Pareto optimal solutions does not necessarily mean that the objective function is optimized. In data mining, it is common to mine frequent itemsets appearing together in a dataset to reveal the correlation between data. Inspired by this, we propose a novel two-layer encoding learning swarm optimizer based on frequent itemsets (TELSO) to address these SLMOPs. TELSO mines the frequent items of multiple particles with better objective values to find Mask combinations that yield better objective values for fast convergence. Experimental results on five real-world problems and eight benchmark sets demonstrate that TELSO outperforms existing state-of-the-art sparse large-scale multi-objective evolutionary algorithms (SLMOEAs) in terms of performance and convergence speed.
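To make the two-layer encoding concrete, here is a minimal sketch (illustrative names only, not TELSO's actual operators): a candidate solution is the element-wise product of a binary Mask and a real-valued Dec vector.

```python
import numpy as np

# Minimal sketch of the two-layer encoding used by sparse LSMOEAs:
# the binary mask selects which variables are non-zero, and the real
# vector dec supplies their values.
def decode(mask: np.ndarray, dec: np.ndarray) -> np.ndarray:
    return mask * dec  # zeros out every variable the mask switches off

rng = np.random.default_rng(0)
n = 10                                      # toy decision-space size
mask = (rng.random(n) < 0.3).astype(float)  # sparse: ~30% non-zero
dec = rng.uniform(-1.0, 1.0, n)
print(decode(mask, dec))
```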
Keywords: evolutionary algorithms, learning swarm optimization, sparse large-scale optimization, sparse large-scale multi-objective problems, two-layer encoding
2. Enhancing Evolutionary Algorithms With Pattern Mining for Sparse Large-Scale Multi-Objective Optimization Problems
Authors: Sheng Qi, Rui Wang, Tao Zhang, Weixiong Huang, Fan Yu, Ling Wang. IEEE/CAA Journal of Automatica Sinica, SCIE EI CSCD, 2024, Issue 8, pp. 1786-1801 (16 pages)
Sparse large-scale multi-objective optimization problems (SLMOPs) are common in science and engineering. However, large scale implies a high-dimensional decision space, requiring algorithms to traverse a vast expanse with limited computational resources. Furthermore, in the sparse setting, most variables in Pareto optimal solutions are zero, making it difficult for algorithms to identify non-zero variables efficiently. This paper is dedicated to addressing the challenges posed by SLMOPs. To start, we introduce innovative objective functions customized to mine maximum and minimum candidate sets, which substantially improves the efficacy of frequent pattern mining. In this way, selecting candidate sets is no longer based on the quantity of non-zero variables they contain but on a higher proportion of non-zero variables within specific dimensions. Additionally, we unveil a novel approach to association rule mining, which delves into the intricate relationships between non-zero variables. This methodology aids in identifying sparse distributions that can potentially expedite reductions in the objective function value. We extensively tested our algorithm across eight benchmark problems and four real-world SLMOPs. The results demonstrate that our approach achieves competitive solutions across various challenges.
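The candidate-set idea can be sketched roughly as follows (a toy stand-in for the paper's mining procedure, with illustrative names and thresholds):

```python
import numpy as np

# Toy frequent-pattern mining on masks: count how often each dimension
# is non-zero among the better candidate solutions and keep those whose
# support clears a threshold.
def frequent_dimensions(masks: np.ndarray, min_support: float = 0.6) -> np.ndarray:
    support = masks.mean(axis=0)  # fraction of candidates using each dimension
    return np.flatnonzero(support >= min_support)

masks = np.array([[1, 0, 1, 0, 1],
                  [1, 0, 1, 0, 0],
                  [1, 1, 1, 0, 0]], dtype=float)
print(frequent_dimensions(masks))  # dimensions 0 and 2 are frequent items
```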
Keywords: evolutionary algorithms, pattern mining, sparse large-scale multi-objective problems (SLMOPs), sparse large-scale optimization
3. Large-Scale Multi-Objective Optimization Algorithm Based on Weighted Overlapping Grouping of Decision Variables
Authors: Liang Chen, Jingbo Zhang, Linjie Wu, Xingjuan Cai, Yubin Xu. Computer Modeling in Engineering & Sciences, SCIE EI, 2024, Issue 7, pp. 363-383 (21 pages)
The large-scale multi-objective optimization algorithm (LSMOA), based on the grouping of decision variables, is an advanced method for handling high-dimensional decision variables. However, in practical problems the interaction among decision variables is intricate, leading to large group sizes and suboptimal optimization effects; hence a large-scale multi-objective optimization algorithm based on weighted overlapping grouping of decision variables (MOEAWOD) is proposed in this paper. Initially, the decision variables are perturbed and categorized into convergence and diversity variables; subsequently, the convergence variables are subdivided into groups based on the interactions among different decision variables. If the size of a group surpasses the set threshold, that group undergoes weighted and overlapping grouping. Specifically, the interaction strength is evaluated based on the interaction frequency and the number of objectives among the various decision variables. The decision variable with the highest interaction in the group is identified and set aside, and the remaining variables are then reclassified into subgroups. Finally, the decision variable with the strongest interaction is added to each subgroup. MOEAWOD minimizes the interactivity between different groups and maximizes the interactivity of decision variables within groups, which contributes to optimizing the direction of convergence and diversity exploration within the different groups. MOEAWOD was tested on 18 benchmark large-scale optimization problems, and the experimental results demonstrate the effectiveness of our method; compared with the other algorithms, it remains at an advantage.
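The perturbation-based interaction test underlying such grouping schemes can be sketched generically (a differential-grouping-style check, not MOEAWOD's exact weighting criterion):

```python
import numpy as np

# Variables i and j are treated as interacting when perturbing x_i has a
# different effect on f depending on whether x_j has also been moved,
# i.e., f is non-additive in the pair (i, j).
def interacts(f, x, i, j, delta=1.0, tol=1e-8):
    e_i = np.zeros_like(x); e_i[i] = delta
    e_j = np.zeros_like(x); e_j[j] = delta
    d1 = f(x + e_i) - f(x)              # effect of moving x_i alone
    d2 = f(x + e_i + e_j) - f(x + e_j)  # same move after shifting x_j
    return abs(d1 - d2) > tol

f = lambda x: x[0] ** 2 + x[0] * x[1] + x[2] ** 2  # x0 and x1 interact
x0 = np.zeros(3)
print(interacts(f, x0, 0, 1), interacts(f, x0, 0, 2))  # True False
```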
Keywords: decision variable grouping, large-scale multi-objective optimization algorithms, weighted overlapping grouping, direction-guided evolution
4. TESTING DIFFERENT CONJUGATE GRADIENT METHODS FOR LARGE-SCALE UNCONSTRAINED OPTIMIZATION (Cited: 10)
Authors: Yu-hong Dai, Qin Ni. Journal of Computational Mathematics, SCIE CSCD, 2003, Issue 3, pp. 311-320 (10 pages)
In this paper we test different conjugate gradient (CG) methods for solving large-scale unconstrained optimization problems. The methods are divided into two groups: the first group includes five basic CG methods and the second includes five hybrid CG methods. A collection of medium-scale and large-scale test problems is drawn from a standard code of test problems, CUTE. The conjugate gradient methods are ranked according to the numerical results, and some remarks are given.
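For reference, the CG iteration and two of the classical choices of the parameter β_k typically included in such comparisons (standard textbook formulas, not this paper's ranking):

```latex
% CG iteration: x_{k+1} = x_k + \alpha_k d_k, with
%   d_k = -g_k + \beta_k d_{k-1},  g_k = \nabla f(x_k).
% Fletcher-Reeves and Polak-Ribiere-Polyak choices of \beta_k:
\beta_k^{FR} = \frac{\|g_k\|^2}{\|g_{k-1}\|^2}, \qquad
\beta_k^{PRP} = \frac{g_k^{\top}(g_k - g_{k-1})}{\|g_{k-1}\|^2}
```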
Keywords: conjugate gradient methods, large-scale unconstrained optimization, numerical tests
5. GLOBAL CONVERGENCE OF THE NON-QUASI-NEWTON METHOD FOR UNCONSTRAINED OPTIMIZATION PROBLEMS (Cited: 6)
Authors: Liu Hongwei, Wang Mingjie, Li Jinshan, Zhang Xiangsun. Applied Mathematics (A Journal of Chinese Universities), SCIE CSCD, 2006, Issue 3, pp. 276-288 (13 pages)
In this paper, the non-quasi-Newton family with inexact line search applied to unconstrained optimization problems is studied. A new update formula for the non-quasi-Newton family is proposed. It is proved that the resulting algorithm, with either Wolfe-type or Armijo-type line search, converges globally and Q-superlinearly if the function to be minimized has a Lipschitz continuous gradient.
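The Armijo-type rule mentioned here is realized in practice by backtracking until a sufficient-decrease condition holds; a generic sketch (not the paper's exact parameters):

```python
import numpy as np

# Backtracking line search enforcing the Armijo condition
#   f(x + a*d) <= f(x) + c1 * a * grad(x).dot(d),
# where d is a descent direction (grad(x).dot(d) < 0).
def armijo_backtracking(f, grad, x, d, a=1.0, c1=1e-4, shrink=0.5):
    fx, slope = f(x), grad(x).dot(d)
    while f(x + a * d) > fx + c1 * a * slope:
        a *= shrink
    return a

f = lambda x: 0.5 * x.dot(x)
grad = lambda x: x
x = np.array([3.0, -4.0])
print(armijo_backtracking(f, grad, x, -grad(x)))  # full step 1.0 is accepted
```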
Keywords: non-quasi-Newton method, inexact line search, global convergence, unconstrained optimization, superlinear convergence
6. Bayesian network learning algorithm based on unconstrained optimization and ant colony optimization (Cited: 3)
Authors: Chunfeng Wang, Sanyang Liu, Mingmin Zhu. Journal of Systems Engineering and Electronics, SCIE EI CSCD, 2012, Issue 5, pp. 784-790 (7 pages)
Structure learning of Bayesian networks is a well-researched but computationally hard task. For learning Bayesian networks, this paper proposes an improved algorithm based on unconstrained optimization and ant colony optimization (U-ACO-B) to overcome the drawbacks of the ant colony optimization algorithm (ACO-B). In this algorithm, an unconstrained optimization problem is first solved to obtain an undirected skeleton, and then the ACO algorithm is used to orient the edges, returning the final structure. In the experimental part of the paper, we compare the performance of the proposed algorithm with the ACO-B algorithm. The experimental results show that our method is effective and converges considerably faster than the ACO-B algorithm.
Keywords: Bayesian network, structure learning, ant colony optimization, unconstrained optimization
7. Integrating Conjugate Gradients Into Evolutionary Algorithms for Large-Scale Continuous Multi-Objective Optimization (Cited: 4)
Authors: Ye Tian, Haowen Chen, Haiping Ma, Xingyi Zhang, Kay Chen Tan, Yaochu Jin. IEEE/CAA Journal of Automatica Sinica, SCIE EI CSCD, 2022, Issue 10, pp. 1801-1817 (17 pages)
Large-scale multi-objective optimization problems (LSMOPs) pose challenges to existing optimizers since a set of well-converged and diverse solutions should be found in huge search spaces. While evolutionary algorithms are good at solving small-scale multi-objective optimization problems, they are criticized for low efficiency in converging to the optima of LSMOPs. By contrast, mathematical programming methods offer fast convergence on large-scale single-objective optimization problems, but they have difficulty finding diverse solutions for LSMOPs. Currently, how to integrate evolutionary algorithms with mathematical programming methods to solve LSMOPs remains unexplored. In this paper, a hybrid algorithm is tailored for LSMOPs by coupling differential evolution and a conjugate gradient method. On the one hand, conjugate gradients and differential evolution are used to update different decision variables of a set of solutions, where the former drives the solutions to converge quickly towards the Pareto front and the latter promotes the diversity of the solutions to cover the whole Pareto front. On the other hand, the objective decomposition strategy of evolutionary multi-objective optimization is used to differentiate the conjugate gradients of solutions, and the line search strategy of mathematical programming is used to ensure that each offspring is of higher quality than its parent. In comparison with state-of-the-art evolutionary algorithms, mathematical programming methods, and hybrid algorithms, the proposed algorithm exhibits better convergence and diversity performance on a variety of benchmark and real-world LSMOPs.
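The differential-evolution half of such a hybrid is typically the classic DE/rand/1/bin operator; a minimal sketch (generic DE, not the tailored variant of this paper):

```python
import numpy as np

# One DE/rand/1/bin step: mutate with a scaled difference of two other
# population members, then binomially cross over with the target vector.
def de_rand_1_bin(pop, i, F=0.5, CR=0.9, rng=np.random.default_rng(1)):
    n_pop, dim = pop.shape
    r1, r2, r3 = rng.choice([j for j in range(n_pop) if j != i], 3, replace=False)
    mutant = pop[r1] + F * (pop[r2] - pop[r3])
    cross = rng.random(dim) < CR
    cross[rng.integers(dim)] = True  # guarantee at least one mutant gene
    return np.where(cross, mutant, pop[i])

pop = np.random.default_rng(0).uniform(-5, 5, (10, 4))
print(de_rand_1_bin(pop, 0))
```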
Keywords: conjugate gradient, differential evolution, evolutionary computation, large-scale multi-objective optimization, mathematical programming
8. Modified Augmented Lagrange Multiplier Methods for Large-Scale Chemical Process Optimization (Cited: 6)
Author: 梁昔明 (Liang Ximing). Chinese Journal of Chemical Engineering, SCIE EI CAS CSCD, 2001, Issue 2, pp. 167-172 (6 pages)
Chemical process optimization can be described as large-scale nonlinear constrained minimization. The modified augmented Lagrange multiplier methods (MALMM) for large-scale nonlinear constrained minimization are studied in this paper. The Lagrange function contains penalty terms on equality and inequality constraints, and the methods can be applied to solve a series of bound-constrained sub-problems instead of a series of unconstrained sub-problems. The steps of the methods are examined in full detail. Numerical experiments are made for a variety of problems, from small to very large scale, which show the stability and effectiveness of the methods on large-scale problems.
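For orientation, the textbook augmented Lagrangian for equality constraints reads as follows (the paper's modification also handles inequalities and bound-constrained sub-problems):

```latex
% Augmented Lagrangian for  min f(x)  s.t.  c_i(x) = 0:
L_{\rho}(x, \lambda) = f(x) - \sum_i \lambda_i c_i(x)
                     + \frac{\rho}{2} \sum_i c_i(x)^2
% Each outer iteration minimizes L_\rho over x, then updates the
% multipliers: \lambda_i \leftarrow \lambda_i - \rho\, c_i(x).
```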
Keywords: modified augmented Lagrange multiplier methods, chemical engineering optimization, large-scale nonlinear constrained minimization, numerical experiment
9. Applying Analytical Derivative and Sparse Matrix Techniques to Large-Scale Process Optimization Problems (Cited: 2)
Authors: 仲卫涛 (Zhong Weitao), 邵之江 (Shao Zhijiang), 张余岳 (Zhang Yuyue), 钱积新 (Qian Jixin). Chinese Journal of Chemical Engineering, SCIE EI CAS CSCD, 2000, Issue 3, pp. 212-217 (6 pages)
The performance of analytical derivative and sparse matrix techniques applied to a traditional dense sequential quadratic programming (SQP) method is studied, and a strategy utilizing those techniques is presented. Computational results on two typical chemical optimization problems demonstrate significant enhancement in efficiency, which shows that this strategy is promising and suitable for large-scale process optimization problems.
Keywords: large-scale optimization, open-equation, sequential quadratic programming, analytical derivative, sparse matrix technique
10. A DERIVATIVE-FREE ALGORITHM FOR UNCONSTRAINED OPTIMIZATION (Cited: 1)
Authors: Peng Yehui, Liu Zhenhai. Applied Mathematics (A Journal of Chinese Universities), SCIE CSCD, 2005, Issue 4, pp. 491-498 (8 pages)
In this paper a hybrid algorithm which combines the pattern search method and the genetic algorithm for unconstrained optimization is presented. The algorithm is a deterministic pattern search algorithm, but in the search step of the pattern search algorithm the trial points are produced in a manner similar to the genetic algorithm: at each iterate, through reduplication, crossover and mutation, a finite set of points can be used. In theory, the algorithm is globally convergent. Most notably, the numerical results show that it can find the global minimizer for some problems on which other pattern search algorithms fail.
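A bare-bones compass poll step, into whose search step the GA-style trial points would be injected, might look like this (illustrative only):

```python
import numpy as np

# One iteration of compass pattern search: poll the 2n points
# x +/- step * e_i and move to the first that improves f; otherwise
# halve the step. A GA-style search step would propose extra trial
# points (crossover/mutation of past iterates) before this poll.
def pattern_search_step(f, x, step):
    fx = f(x)
    for i in range(len(x)):
        for s in (+1.0, -1.0):
            trial = x.copy(); trial[i] += s * step
            if f(trial) < fx:
                return trial, step  # successful poll
    return x, step / 2              # shrink on failure

f = lambda x: (x[0] - 1) ** 2 + (x[1] + 2) ** 2
x, step = np.array([0.0, 0.0]), 1.0
for _ in range(20):
    x, step = pattern_search_step(f, x, step)
print(x)  # converges to (1, -2)
```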
Keywords: unconstrained optimization, pattern search method, genetic algorithm, global minimizer
11. A Filled Function with Adjustable Parameters for Unconstrained Global Optimization (Cited: 1)
Authors: Shang You-lin, Li Xiao-yan. Chinese Quarterly Journal of Mathematics, CSCD, 2004, Issue 3, pp. 232-239 (8 pages)
A filled function with adjustable parameters is suggested in this paper for finding a global minimum point of a general class of nonlinear programming problems with a bounded and closed domain. This function has two adjustable parameters. We discuss the properties of the proposed filled function, and give conditions on this function and on the values of the parameters so that the constructed function has the desired properties of a traditional filled function.
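Ge's classical two-parameter filled function, which constructions of this kind generalize, gives the flavor (x_1^* denotes the current local minimizer; r and ρ are the adjustable parameters):

```latex
% Ge's two-parameter filled function around a local minimizer x_1^*:
P(x, r, \rho) = \frac{1}{r + f(x)}
                \exp\!\left(-\frac{\|x - x_1^*\|^2}{\rho^2}\right)
% Minimizing P from near x_1^* pushes the iterate out of the basin of
% x_1^* toward a point lying in a lower basin of f.
```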
Keywords: filled function, global optimization, global minimizer, unconstrained problem, basin, hill
12. A New Nonlinear Conjugate Gradient Method for Unconstrained Optimization Problems (Cited: 1)
Authors: Liu Jin-kui, Wang Kai-rong, Song Xiao-qian, Du Xiang-lin. Chinese Quarterly Journal of Mathematics, CSCD, 2010, Issue 3, pp. 444-450 (7 pages)
In this paper, an efficient conjugate gradient method is given to solve general unconstrained optimization problems, which can guarantee the sufficient descent property and global convergence under the strong Wolfe line search conditions. Numerical results show that the new method is efficient and stable in comparison with the PRP+ method, so it can be widely used in scientific computation.
Keywords: unconstrained optimization, conjugate gradient method, strong Wolfe line search, sufficient descent property, global convergence
13. Subspace Minimization Conjugate Gradient Method Based on Cubic Regularization Model for Unconstrained Optimization (Cited: 1)
Authors: Ting Zhao, Hongwei Liu. Journal of Harbin Institute of Technology (New Series), CAS, 2021, Issue 5, pp. 61-69 (9 pages)
Many methods have been put forward to solve unconstrained optimization problems, among which the conjugate gradient method (CG) is very important. With the increasing emergence of large-scale problems, subspace techniques have become particularly important and widely used in the field of optimization. In this study, a new CG method is put forward which combines subspace techniques with a cubic regularization model. Besides, a special scaled norm in the cubic regularization model is analyzed. Under certain conditions, some significant characteristics of the search direction are given and the convergence of the algorithm is established. Numerical comparisons show that, on the 145 test functions in the CUTEr library, the proposed method is better than two classical CG methods and two new subspace conjugate gradient methods.
Keywords: cubic regularization model, conjugate gradient method, subspace technique, unconstrained optimization
14. A Line Search Algorithm for Unconstrained Optimization (Cited: 1)
Authors: Gonglin Yuan, Sha Lu, Zengxin Wei. Journal of Software Engineering and Applications, 2010, Issue 5, pp. 503-509 (7 pages)
It is well known that line search methods play a very important role in optimization. In this paper a new line search method is proposed for solving unconstrained optimization. Under weak conditions, this method possesses global convergence and R-linear convergence for nonconvex and convex functions, respectively. Moreover, the given search direction has the sufficient descent property and belongs to a trust region without carrying out any line search rule. Numerical results show that the new method is effective.
Keywords: line search, unconstrained optimization, global convergence, R-linear convergence
15. New type of conjugate gradient algorithms for unconstrained optimization problems
Authors: Caiying Wu, Guoqing Chen. Journal of Systems Engineering and Electronics, SCIE EI CSCD, 2010, Issue 6, pp. 1000-1007 (8 pages)
Two new formulas for the main parameter βk of the conjugate gradient method are presented, which can be seen as modifications of the HS and PRP methods, respectively. In comparison with classic conjugate gradient methods, the new methods use both the available gradient and function value information. Furthermore, their modifications are proposed. These methods are shown to be globally convergent under some assumptions. Numerical results are also reported.
Keywords: conjugate gradient, unconstrained optimization, global convergence, conjugacy condition
16. On the Global Convergence of the PERRY-SHANNO Method for Nonconvex Unconstrained Optimization Problems
Authors: Linghua Huang, Qingjun Wu, Gonglin Yuan. Applied Mathematics, 2011, Issue 3, pp. 315-320 (6 pages)
In this paper, we prove the global convergence of the Perry-Shanno memoryless quasi-Newton (PSMQN) method with a new inexact line search when applied to nonconvex unconstrained minimization problems. Preliminary numerical results show that the PSMQN method with the particular line search conditions is very promising.
Keywords: unconstrained optimization, nonconvex optimization, global convergence
17. A Retrospective Filter Trust Region Algorithm for Unconstrained Optimization
Authors: Yue Lu, Zhongwen Chen. Applied Mathematics, 2010, Issue 3, pp. 179-188 (10 pages)
In this paper, we propose a retrospective filter trust region algorithm for unconstrained optimization, which is based on the framework of the retrospective trust region method and is combined with the multi-dimensional filter technique. The new algorithm gives a good estimate of the trust region radius and relaxes the condition for accepting a trial step relative to the usual trust region methods. Under reasonable assumptions, we analyze the global convergence of the new method and report preliminary results of numerical tests. We compare the results with those of the basic trust region algorithm, the filter trust region algorithm and the retrospective trust region algorithm, which shows the effectiveness of the new algorithm.
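The machinery being relaxed here is the standard ratio test between actual and predicted reduction; a generic sketch of the radius update (textbook rule, not the retrospective/filter variant):

```python
# Standard trust-region step acceptance and radius update: compare the
# actual reduction in f with the reduction predicted by the model m.
def update_radius(f_x, f_trial, m_x, m_trial, radius,
                  eta=0.1, shrink=0.5, grow=2.0, max_radius=10.0):
    rho = (f_x - f_trial) / (m_x - m_trial)  # actual vs. predicted reduction
    if rho < 0.25:
        radius *= shrink                     # poor model: shrink the region
    elif rho > 0.75:
        radius = min(grow * radius, max_radius)  # good model: expand
    return rho > eta, radius                 # (step accepted?, new radius)

print(update_radius(f_x=5.0, f_trial=4.0, m_x=5.0, m_trial=3.9, radius=1.0))
```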
Keywords: unconstrained optimization, retrospective trust region method, multi-dimensional filter technique
18. Modified LS Method for Unconstrained Optimization
Authors: Jinkui Liu, Li Zheng. Applied Mathematics, 2011, Issue 6, pp. 779-782 (4 pages)
In this paper, a new conjugate gradient formula and its algorithm for solving unconstrained optimization problems are proposed. The given formula satisfies the descent condition. Under the Grippo-Lucidi line search, the global convergence property of the given method is discussed. The numerical results show that the new method is efficient on the given test problems.
Keywords: unconstrained optimization, conjugate gradient method, Grippo-Lucidi line search, global convergence
19. A New Descent Nonlinear Conjugate Gradient Method for Unconstrained Optimization
Authors: Hao Fan, Zhibin Zhu, Anwa Zhou. Applied Mathematics, 2011, Issue 9, pp. 1119-1123 (5 pages)
In this paper, a new nonlinear conjugate gradient method is proposed for large-scale unconstrained optimization. The sufficient descent property holds without any line search. We use a steplength technique which ensures that the Zoutendijk condition holds, and the method is proved to be globally convergent. Finally, we improve it and carry out further analysis.
Keywords: large-scale unconstrained optimization, conjugate gradient method, sufficient descent property, globally convergent
20. An Improved Quasi-Newton Method for Unconstrained Optimization
Authors: Fei Pusheng, Chen Zhong (Department of Mathematics, Wuhan University, Wuhan 430072, China). Wuhan University Journal of Natural Sciences, CAS, 1996, Issue 1, pp. 35-37 (3 pages)
We present an improved quasi-Newton method. Assuming that the objective function is twice continuously differentiable and uniformly convex, we discuss its global and superlinear convergence.
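For context, the standard BFGS update on which such improved quasi-Newton methods build (s_k and y_k denote the step and the gradient change):

```latex
% BFGS update of the Hessian approximation B_k, with
%   s_k = x_{k+1} - x_k,  y_k = g_{k+1} - g_k:
B_{k+1} = B_k - \frac{B_k s_k s_k^{\top} B_k}{s_k^{\top} B_k s_k}
              + \frac{y_k y_k^{\top}}{y_k^{\top} s_k}
```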
Keywords: quasi-Newton method, superlinear convergence, unconstrained optimization