An improved genetic algorithm (IGA) based on a novel selection strategy for handling nonlinear programming problems is proposed. Each individual in the selection process is represented as a three-dimensional feature vector composed of the objective function value, the degree of constraint violation, and the number of violated constraints. Using this feature vector, excellent individuals are easily distinguished from ordinary ones. Additionally, a local search (LS) process is incorporated into the selection operation so as to find feasible solutions located in the neighborhoods of some infeasible solutions. The combination of IGA and LS offers the advantages of both solution quality and solution diversity. Experimental results on a set of benchmark problems demonstrate that IGA performs better than other algorithms.
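As a rough illustration of the selection idea, the Python sketch below builds such a three-component feature vector and compares two individuals with a feasibility-first rule. The comparison order and the tolerance tol are illustrative assumptions; the abstract does not specify the exact ranking used by IGA.

# Sketch of a three-component feature vector for constrained GA selection.
# The comparison rule (fewer violated constraints, then smaller total
# violation, then smaller objective) is an illustrative assumption, not
# necessarily the ordering used in the paper.
def feature_vector(x, f, constraints, tol=1e-8):
    """Return (objective value, degree of violation, number of violations)."""
    violations = [max(0.0, g(x)) for g in constraints]   # constraints g(x) <= 0 assumed
    degree = sum(violations)
    count = sum(v > tol for v in violations)
    return (f(x), degree, count)

def better(fv_a, fv_b):
    """Feasibility-first comparison of two feature vectors (assumed rule)."""
    obj_a, deg_a, cnt_a = fv_a
    obj_b, deg_b, cnt_b = fv_b
    if cnt_a != cnt_b:
        return cnt_a < cnt_b
    if deg_a != deg_b:
        return deg_a < deg_b
    return obj_a < obj_b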
In this paper, a class of augmented Lagrangians of Di Pillo and Grippo (DGALs) was considered for solving equality-constrained problems via unconstrained minimization techniques. The relationship was further discussed between the unconstrained minimizers of DGALs on the product space of problem variables and multipliers, and the solutions of the constrained problem together with the corresponding values of the Lagrange multipliers. The resulting properties indicate more precisely that this class of DGALs consists of exact multiplier penalty functions. Therefore, a solution of the equality-constrained problem and the corresponding values of the Lagrange multipliers can be found by performing a single unconstrained minimization of a DGAL on the product space of problem variables and multipliers.
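The following minimal sketch illustrates the product-space idea on a tiny equality-constrained example: a single unconstrained minimization over (x, lambda) of a Di Pillo-Grippo-type augmented Lagrangian recovers both the solution and the multiplier. The specific functional form, the penalty parameter c, and the weight eta are illustrative assumptions, not the exact DGAL studied in the paper.

# A minimal numerical sketch (not the exact DGAL of the paper): minimize an
# augmented Lagrangian jointly over (x, lambda) for min f(x) s.t. h(x) = 0.
import numpy as np
from scipy.optimize import minimize

def f(x):            # objective: a simple quadratic
    return x[0] ** 2 + x[1] ** 2

def h(x):            # single equality constraint h(x) = 0
    return np.array([x[0] + x[1] - 1.0])

def grad_f(x):
    return np.array([2 * x[0], 2 * x[1]])

def jac_h(x):
    return np.array([[1.0, 1.0]])

def dgal(z, c=10.0, eta=1.0):
    x, lam = z[:2], z[2:]
    grad_lag = grad_f(x) + jac_h(x).T @ lam          # gradient of the ordinary Lagrangian
    return (f(x) + lam @ h(x) + 0.5 * c * h(x) @ h(x)
            + 0.5 * eta * grad_lag @ grad_lag)

# One unconstrained minimization on the product space recovers both the
# solution x* = (0.5, 0.5) and the multiplier lambda* = -1.
res = minimize(dgal, np.zeros(3), method="BFGS")
print(res.x)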
In this paper, on the basis of the logarithmic barrier function and the KKT conditions, we propose a combined homotopy infeasible interior-point method (CHIIP) for convex nonlinear programming problems. For any convex nonlinear program, without requiring strict convexity of the logarithmic barrier function, we obtain solutions of the convex program in different cases by the CHIIP method.
A penalized interior point approach for constrained nonlinear programming is examined in this work. To overcome the difficulty of initializing the interior point method, a problem equivalent to the primal problem is constructed by incorporating an auxiliary variable. A combined logarithmic barrier and quadratic penalty approach is proposed to solve the problem. Based on Newton's method, the global convergence of the interior point and line search algorithm is proven. Only a finite number of iterations is required to reach an approximate optimal solution. Numerical tests are given to show the effectiveness of the method.
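A minimal sketch of the combined barrier-penalty idea is given below, assuming a single inequality constraint and an auxiliary slack variable that keeps the barrier well defined at an arbitrary starting point. The crude continuation loop and the Nelder-Mead inner solver are stand-ins for the Newton line-search algorithm of the paper; the merit form, the parameters mu and rho, and their update factors are illustrative assumptions.

# Illustrative sketch of a combined log-barrier / quadratic-penalty approach
# (not the exact algorithm of the paper): for min f(x) s.t. g(x) <= 0, an
# auxiliary slack s > 0 keeps the barrier well defined at arbitrary starting
# points, while a quadratic penalty drives g(x) + s toward 0.
import numpy as np
from scipy.optimize import minimize

def f(x):
    return (x[0] - 2.0) ** 2 + (x[1] - 1.0) ** 2

def g(x):                      # single constraint g(x) <= 0
    return x[0] + x[1] - 2.0

def merit(z, mu, rho):
    x, s = z[:2], z[2]
    if s <= 0:                 # barrier requires s > 0
        return np.inf
    return f(x) - mu * np.log(s) + 0.5 * rho * (g(x) + s) ** 2

z = np.array([0.0, 0.0, 1.0])  # infeasibility of the start point is harmless
mu, rho = 1.0, 10.0
for _ in range(15):            # crude continuation: shrink mu, grow rho
    z = minimize(merit, z, args=(mu, rho), method="Nelder-Mead",
                 options={"xatol": 1e-9, "fatol": 1e-9}).x
    mu *= 0.5
    rho *= 2.0
print(z[:2])                   # close to the constrained solution (1.5, 0.5)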
By redefining the multiplier associated with each inequality constraint as a positive definite function of the originally defined multiplier, say u_i^2, i = 1, 2, ..., m, the nonnegativity constraints imposed on the inequality multipliers in the Karush-Kuhn-Tucker necessary conditions are removed. For constructing the Lagrange neural network and the Lagrange multiplier method, it is no longer necessary to convert inequality constraints into equality constraints via slack variables in order to reuse results dedicated to equality constraints; those results can be proved similarly with minor modification. Utilizing this technique, a new type of Lagrange neural network and a new type of Lagrange multiplier method are devised, both of which handle inequality constraints directly. Their stability and convergence are also analyzed rigorously.
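The toy simulation below illustrates the u_i^2 reformulation on a one-dimensional problem, min (x - 2)^2 subject to x <= 1, using simple gradient-flow ("Lagrange neural network"-style) dynamics. The dynamics, the step size, and the iteration count are illustrative assumptions; the paper's network and its stability proof are not reproduced here.

# Schematic illustration of handling an inequality constraint by writing the
# multiplier as u**2 (so no nonnegativity constraint on u is needed), for
#     min (x - 2)**2  subject to  x - 1 <= 0.
def simulate(x=0.0, u=1.0, dt=1e-3, steps=200_000):
    for _ in range(steps):
        dx = -(2.0 * (x - 2.0) + u * u)   # -dL/dx, with L = f + u^2 * g
        du = 2.0 * u * (x - 1.0)          # +dL/du (ascent on the multiplier)
        x += dt * dx
        u += dt * du
    return x, u * u                        # u**2 plays the role of the KKT multiplier

x_star, lam_star = simulate()
print(x_star, lam_star)                    # close to x = 1, lambda = 2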
An exact augmented Lagrangian function for nonlinear nonconvex programming problems with inequality constraints was discussed. Under suitable hypotheses, the relationship was established between the local unconstrained minimizers of the augmented Lagrangian function on the space of problem variables and the local minimizers of the original constrained problem. Furthermore, under some assumptions, the relationship was also established between the global solutions of the augmented Lagrangian function on a compact subset of the space of problem variables and the global solutions of the constrained problem. Therefore, from the theoretical point of view, a solution of the inequality-constrained problem and the corresponding values of the Lagrange multipliers can be found by the well-known method of multipliers, which resorts to the unconstrained minimization of the augmented Lagrangian function presented.
A universal numerical approach for nonlinear mathematical programming problems is presented, based on ratios of the first-order differentials/differences of the objective function to those of the constraint functions with respect to the design variables. This approach can be used efficiently to solve continuous and, in particular, discrete programming problems with arbitrary design variables and constraints. As a search method, it requires only evaluations of the functions and of their partial derivatives or differences with respect to the design variables, rather than the solution of any mathematical equations. The approach has been applied to many numerical examples as well as to classical operational problems such as one-dimensional and two-dimensional knapsack problems, one-dimensional and two-dimensional resource-distribution problems, working-reliability problems of composite systems, and machine loading problems, obtaining more efficient and reliable solutions than traditional methods. The approach can be used without limitation on the modeling scale of the problem. Optimal solutions are guaranteed as long as the objective function, the constraint functions, and their first-order derivatives/differences exist in the feasible domain or feasible set. No convergence failures or instabilities occur when this approach is adopted.
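For the discrete case, the simplest instance of such a ratio is the value-to-weight ratio in a 0/1 knapsack problem, where it equals the first-order difference of the objective divided by the first-order difference of the constraint when an item is added. The greedy pass below only shows how these ratios can drive a search step; it is not the full procedure of the paper, and the instance data are made up for illustration.

# One very simple use of difference ratios: rank knapsack items by the ratio
# of the objective increment to the constraint increment and add them while
# capacity allows. This greedy pass is only an illustration of the ratio idea.
values = [10, 40, 30, 50]        # example data (assumed)
weights = [5, 4, 6, 3]
capacity = 10

order = sorted(range(len(values)),
               key=lambda i: values[i] / weights[i], reverse=True)
chosen, load, total = [], 0, 0
for i in order:
    if load + weights[i] <= capacity:
        chosen.append(i)
        load += weights[i]
        total += values[i]
print(chosen, total)             # picks items 3 and 1 (value 90), optimal here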
In this paper, we improve the algorithm proposed by T. F. Coleman and A. R. Conn in [1]. It is shown that the improved algorithm possesses global convergence and, under some conditions, attains locally superlinear convergence, which the original algorithm does not possess.
In this paper, we propose a primal-dual interior point method for solving general constrained nonlinear programming problems. To avoid the possibility that the algorithm converges to a saddle point or a local maximum, we utilize a merit function to guide the iterates toward a local minimum. In particular, we add a parameter ε to the Newton system when calculating the descent directions. Global convergence is achieved through the decrease of the merit function. Furthermore, the numerical results confirm that the algorithm can solve this kind of problem efficiently.
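The sketch below shows one way a merit function can guide the iterates: a trial step is accepted only if it sufficiently decreases the merit value, otherwise the step length is backtracked. The barrier-plus-penalty merit form, the parameters mu, nu, sigma, and the example problem are illustrative assumptions rather than the exact merit function of the paper.

# Sketch of using a merit function to guide primal-dual iterates toward a
# local minimizer. The backtracking acceptance rule is a simplification.
import numpy as np

def merit(x, s, mu=1e-2, nu=10.0):
    # example problem: min f(x) s.t. g(x) <= 0, with slack s > 0 and g(x) + s = 0
    f = (x[0] - 2.0) ** 2 + (x[1] - 1.0) ** 2
    g = x[0] + x[1] - 2.0
    return f - mu * np.sum(np.log(s)) + nu * abs(g + s[0])

def accept(x, s, dx, ds, alpha=1.0, beta=0.5, sigma=1e-4):
    base = merit(x, s)
    while alpha > 1e-12:
        x_new, s_new = x + alpha * dx, s + alpha * ds
        if np.all(s_new > 0) and merit(x_new, s_new) <= base - sigma * alpha:
            return x_new, s_new, alpha
        alpha *= beta                      # backtracking line search
    return x, s, 0.0

x, s = np.array([0.0, 0.0]), np.array([2.0])
print(accept(x, s, np.array([1.0, 0.4]), np.array([-1.0])))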
A new preamble structure and design method for orthogonal frequency division multiplexing (OFDM) systems is described, which results in a training preamble two symbols long. The preamble contains four parts, the first of which is identical to the third. The four parts are computed using a nonlinear programming (NLP) model such that the moving correlation of the preamble yields a steep, rectangular-like pulse of a certain width, whose step-down indicates the timing offset. Simulation results in an AWGN channel are given to evaluate the performance of the proposed preamble design.
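A toy simulation of the repeated-part correlation idea is sketched below: because part 1 equals part 3, correlating samples spaced two part-lengths apart peaks at the preamble location. The random +/-1 parts, the part length L, and the peak-based detection are illustrative assumptions; the NLP-optimized preamble and the step-down detection of the paper are not reproduced.

# Toy illustration of a timing metric based on the repeated preamble parts
# (part 1 equals part 3). This is only a schematic correlation example.
import numpy as np

rng = np.random.default_rng(0)
L = 16                                      # length of one preamble part (assumed)
p1 = rng.choice([-1.0, 1.0], L)             # part 1 (= part 3)
p2 = rng.choice([-1.0, 1.0], L)             # part 2
p4 = rng.choice([-1.0, 1.0], L)             # part 4
preamble = np.concatenate([p1, p2, p1, p4])

offset = 40                                 # true timing offset
rx = np.concatenate([rng.normal(0, 0.1, offset), preamble,
                     rng.normal(0, 0.1, 100)])

# moving correlation between samples spaced 2L apart (part 1 vs part 3)
metric = np.array([np.abs(rx[n:n + L] @ rx[n + 2 * L:n + 3 * L])
                   for n in range(len(rx) - 3 * L)])
print(int(np.argmax(metric)))               # near the true offset of 40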
A method for solving nonlinear programming problems using a genetic algorithm is presented. To ensure that the new solutions produced by crossover and mutation in each generation are all feasible, we present a method in which the bounds of every variable in a solution are estimated beforehand according to the constraints. For the mutation operation, we present two methods: cube bounding and variable bounding. The experimental results are given and analyzed; they show that the method is efficient and obtains results in fewer generations.
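The sketch below illustrates the variable-bounding idea for mutation: before one coordinate is mutated, its feasible interval is computed from the constraints with the other coordinates held fixed, so the offspring remains feasible. The linear constraints and the box are made-up example data; the bound estimation in the paper is more general.

# Sketch of "variable bounding" mutation for linear constraints A x <= b plus
# a box. The data below are illustrative only.
import random

A = [[1.0, 2.0], [3.0, -1.0]]
b = [8.0, 9.0]
BOX = (0.0, 10.0)

def variable_bounds(x, i):
    lo, hi = BOX
    for a_row, b_k in zip(A, b):
        coef = a_row[i]
        rest = sum(a_row[j] * x[j] for j in range(len(x)) if j != i)
        if coef > 0:
            hi = min(hi, (b_k - rest) / coef)
        elif coef < 0:
            lo = max(lo, (b_k - rest) / coef)
    return lo, hi

def mutate(x, i):
    lo, hi = variable_bounds(x, i)
    y = list(x)
    y[i] = random.uniform(lo, hi)           # resample within the feasible interval
    return y

x = [1.0, 1.0]                  # feasible: 1 + 2 <= 8 and 3 - 1 <= 9
print(mutate(x, 0))             # mutated point still satisfies both constraints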
Sparse nonlinear programming (SNP) minimizes a general continuously differentiable function subject to sparsity, nonlinear equality, and inequality constraints. We first define two restricted constraint qualifications and show how these constraint qualifications can be applied to obtain the decomposition properties of the Fréchet, Mordukhovich, and Clarke normal cones to the sparsity-constrained feasible set. Based on the decomposition properties of the normal cones, we then present and analyze three classes of Karush-Kuhn-Tucker (KKT) conditions for the SNP. Finally, we establish the second-order necessary optimality condition and sufficient optimality condition for the SNP.
We propose a multidimensional filter SQP algorithm. The multidimensional filter technique proposed by Gould et al. [SIAM J. Optim., 2005] is extended to solve constrained optimization problems. In our algorithm, the constraints are partitioned into several parts, and each filter entry consists of these different parts. Not only are the criteria for accepting a trial step relaxed, but the individual behavior of each part of the constraints is also taken into account. One feature is that the undesirable link between the objective function and the constraint violation in the filter acceptance criteria disappears. Another is that feasibility restoration phases are unnecessary, because a consistent quadratic programming subproblem is used. We prove that our algorithm is globally convergent to KKT points under the constant positive generators (CPG) condition, which is weaker than the well-known Mangasarian-Fromovitz constraint qualification (MFCQ) and the constant positive linear dependence (CPLD) condition. Numerical results are presented to show the efficiency of the algorithm.
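A minimal sketch of a multidimensional filter is given below: each entry stores the violation of each constraint part separately, and a trial point is acceptable if it improves on every stored entry in at least one component by a small margin. The margin GAMMA, the dominance rule, and the separate treatment of the objective are illustrative assumptions, not the exact acceptance criteria of the algorithm.

# Minimal multidimensional filter sketch; entries hold the violation of each
# constraint part, and the objective is assumed to be handled outside the filter.
GAMMA = 1e-4      # acceptance margin (assumed)

class MultiFilter:
    def __init__(self):
        self.entries = []

    def acceptable(self, v):
        # v must improve on every stored entry in at least one component
        return all(any(vi < ei - GAMMA * abs(ei) for vi, ei in zip(v, e))
                   for e in self.entries)

    def add(self, v):
        if self.acceptable(v):
            # discard stored entries that the new point dominates
            self.entries = [e for e in self.entries
                            if any(ei < vi for ei, vi in zip(e, v))]
            self.entries.append(tuple(v))
            return True
        return False

flt = MultiFilter()
flt.add((2.0, 1.0))               # violations of constraint parts 1 and 2
print(flt.acceptable((2.5, 0.5))) # True: part 2 improves
print(flt.acceptable((2.5, 1.5))) # False: dominated by the stored entry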
In this paper, an optimality condition for nonlinear programming problems with box constraints is given by using a linear transformation and Lagrange interpolating polynomials. Based on this condition, two new local optimization methods are developed. The solution points obtained by the new local optimization methods improve on Karush-Kuhn-Tucker (KKT) points in general. Two global optimization methods are then proposed by combining the two new local optimization methods with a filled function method. Some numerical examples are reported to show the effectiveness of the proposed methods.
We propose a new method for finding local optimal points of constrained nonlinear programming problems by means of ordinary differential equations (ODEs), and we prove the asymptotic stability of the singular points with respect to part of the variables. A condition for overall uniform asymptotic stability is also given.
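As a simple illustration of the ODE viewpoint, the gradient flow below has its asymptotically stable equilibrium at a local minimizer; here the flow is applied to a quadratic-penalty form of a small equality-constrained example. The penalty weight c and the use of a plain gradient flow are illustrative assumptions; the ODE system constructed in the paper is different.

# Gradient-flow illustration: equilibria of dx/dt = -grad F(x) are candidate
# local optima, and asymptotic stability corresponds to a local minimizer.
import numpy as np
from scipy.integrate import solve_ivp

c = 100.0                                   # penalty weight (assumed)

def flow(t, x):
    # right-hand side -grad F(x) for F(x) = (x0-2)^2 + (x1-2)^2 + c/2*(x0+x1-2)^2
    gpen = c * (x[0] + x[1] - 2.0)
    return [-(2.0 * (x[0] - 2.0) + gpen),
            -(2.0 * (x[1] - 2.0) + gpen)]

sol = solve_ivp(flow, (0.0, 50.0), [0.0, 0.0], rtol=1e-9, atol=1e-12)
print(sol.y[:, -1])                         # close to (1, 1), up to the penalty bias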
An NGTN method was proposed for solving large-scale sparse nonlinear programming (NLP) problems. This is a hybrid method of a truncated Newton direction and a modified negative gradient direction, which is suitable for handling sparse data structures and possesses a Q-quadratic convergence rate. The global convergence of the new method is proved, the convergence rate is further analysed, and the detailed implementation is discussed in this paper. Some numerical tests on truss optimization and large sparse problems are reported. The theoretical and numerical results show that the new method is efficient for solving large-scale sparse NLP problems.
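The sketch below illustrates the hybrid-direction idea: a truncated Newton direction is computed by a few conjugate-gradient iterations on the Newton system and is used if it is a descent direction; otherwise the negative gradient is used. The CG iteration limit, the descent test, and the unit step length are illustrative simplifications of the NGTN rules.

# Schematic of a hybrid direction choice (not the exact NGTN rules).
import numpy as np

def hybrid_step(grad, hess_vec, n, cg_iters=10, tol=1e-10):
    # truncated CG for H d = -g
    d = np.zeros(n)
    r = -grad.copy()
    p = r.copy()
    for _ in range(cg_iters):
        hp = hess_vec(p)
        curv = p @ hp
        if curv <= tol:                    # nonpositive curvature: stop CG
            break
        alpha = (r @ r) / curv
        d += alpha * p
        r_new = r - alpha * hp
        beta = (r_new @ r_new) / (r @ r)
        p = r_new + beta * p
        r = r_new
    if d @ grad < -tol:                    # simplified descent check
        return d
    return -grad                           # fall back to the negative gradient

# tiny usage example on f(x) = 0.5 x^T A x - b^T x
A = np.array([[3.0, 1.0], [1.0, 2.0]])
b = np.array([1.0, 1.0])
x = np.zeros(2)
for _ in range(20):
    g = A @ x - b
    x = x + hybrid_step(g, lambda v: A @ v, 2)
print(x)                                   # approaches A^{-1} b = [0.2, 0.4]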
Provides information on a study that presented a trust region approach for solving nonlinear constrained optimization: the algorithm of the trust region approach; the global convergence of the algorithm; and numerical results of the study.
Since point-to-set maps were introduced by Zangwill in the study of conceptual algorithms, various sufficient conditions for the global convergence of such algorithms have been established. In this paper, the relations among all these conditions are illustrated by a unified approach. Moreover, unlike the sufficient conditions previously given in the literature, a new necessary condition is put forward at the end of the paper, which admits more applications.
The penalty function method, presented many years ago, is an important numerical method for mathematical programming problems. In this article, we propose a dual-relax penalty function approach, which differs significantly from existing penalty function approaches for bilevel programming, to solve nonlinear bilevel programs with a linear lower-level problem. Our algorithm lends itself to error analysis for computing an approximate solution of the bilevel program: an error estimate is obtained between the optimal objective function value of the dual-relax penalty problem and that of the original bilevel programming problem. An example illustrates the feasibility of the proposed approach.
In this paper, a canonical neural network with adaptively changing synaptic weights and activation function parameters is presented to solve general nonlinear programming problems. The basic part of the model is a sub-network used to solve quadratic programming problems with simple upper and lower bounds. By sequentially activating the sub-network under the control of an external computer, or of a special analog or digital processor that adjusts the weights and parameters, one can then solve general nonlinear programming problems. A convergence proof and numerical results are given.
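The following sketch shows a standard projection-type network for the box-constrained quadratic programming sub-problem, of the kind the sub-network is designed to solve; its equilibrium satisfies the optimality condition x = P(x - alpha*(Qx + c)). The data Q and c, the bounds, the gain alpha, and the Euler integration are illustrative assumptions; the adaptive weights and activation parameters of the canonical network are not modeled.

# A standard projection-type network for the box-constrained QP
#     min 0.5 x^T Q x + c^T x   s.t.  l <= x <= u.
import numpy as np

Q = np.array([[2.0, 0.5], [0.5, 1.0]])      # example data (assumed)
c = np.array([-3.0, -1.0])
l, u = np.zeros(2), np.ones(2)

def project(x):
    return np.minimum(np.maximum(x, l), u)  # projection onto the box

x = np.array([0.5, 0.5])
alpha, dt = 0.5, 0.05
for _ in range(2000):                        # Euler integration of the dynamics
    x = x + dt * (project(x - alpha * (Q @ x + c)) - x)
print(x)                                     # converges to the QP solution (1, 0.5)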