Abstract: In this paper, a new augmented Lagrangian penalty function for constrained optimization problems is studied. The dual properties of the augmented Lagrangian objective penalty function for constrained optimization problems are proved. Under some conditions, the saddle point of the augmented Lagrangian objective penalty function satisfies the first-order Karush-Kuhn-Tucker (KKT) condition. In particular, for convex programming the saddle point exists whenever the KKT condition holds. Based on the augmented Lagrangian objective penalty function, an algorithm is developed for finding a global solution to an inequality constrained optimization problem, and its global convergence is proved under some conditions.
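The abstract does not specify the exact form of the augmented Lagrangian objective penalty function, so the sketch below shows only a classical augmented Lagrangian (multiplier) method for an inequality constrained problem min f(x) subject to g_i(x) <= 0, as a point of reference rather than the paper's method. The functions `f`, `g` and the parameters `rho`, `iters` are illustrative assumptions.

```python
import numpy as np
from scipy.optimize import minimize

# Illustrative sketch of a classical augmented Lagrangian (multiplier) method for
# min f(x) subject to g_i(x) <= 0; the paper's objective penalty variant is not
# reproduced here.
def augmented_lagrangian(f, g, x0, rho=10.0, iters=20):
    x = np.asarray(x0, dtype=float)
    lam = np.zeros(len(g(x)))
    for _ in range(iters):
        def L(z, lam=lam, rho=rho):
            gz = np.asarray(g(z))
            # Standard augmented term for inequality constraints.
            return f(z) + np.sum(np.maximum(0.0, lam + rho * gz) ** 2 - lam ** 2) / (2 * rho)
        x = minimize(L, x).x                                  # inner unconstrained minimization
        lam = np.maximum(0.0, lam + rho * np.asarray(g(x)))   # multiplier update
    return x, lam

# Hypothetical example: minimize (x1-1)^2 + (x2-2)^2 subject to x1 + x2 - 2 <= 0.
f = lambda x: (x[0] - 1.0) ** 2 + (x[1] - 2.0) ** 2
g = lambda x: np.array([x[0] + x[1] - 2.0])
x_star, lam_star = augmented_lagrangian(f, g, [0.0, 0.0])     # x_star is near (0.5, 1.5)
```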
Abstract: In this paper, we present an algorithm to solve inequality constrained multi-objective programming (MP) using a penalty function with objective parameters and a constraint penalty parameter. First, the penalty function with objective parameters and constraint penalty parameter for MP and the corresponding unconstrained penalty optimization problem (UPOP) are defined. Under some conditions, a Pareto efficient solution (or a weakly efficient solution) to UPOP is proved to be a Pareto efficient solution (or a weakly efficient solution) to MP. The penalty function is proved to be exact under a stability condition. Then, we design an algorithm to solve MP and prove its convergence. Finally, numerical examples show that the algorithm may help decision makers find a satisfactory solution to MP.
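The abstract does not give the explicit penalty function, so the sketch below assumes one plausible member of this family: objective parameters M_i act as reference values for each objective f_i, a constraint penalty parameter rho penalizes violation of g_j(x) <= 0, and the resulting UPOP is solved for a sequence of progressively tighter targets. All function names, test functions, and parameter values are hypothetical.

```python
import numpy as np
from scipy.optimize import minimize

# Hedged sketch: one plausible penalty of the family described above, combining
# objective parameters M_i (reference values for each objective) with a constraint
# penalty parameter rho; the paper's exact functional form may differ.
def penalty_objective(fs, gs, M, rho):
    def F(x):
        obj = sum(max(0.0, fi(x) - Mi) ** 2 for fi, Mi in zip(fs, M))  # objective-parameter part
        con = rho * sum(max(0.0, gj(x)) ** 2 for gj in gs)             # constraint penalty part
        return obj + con
    return F

# Hypothetical bi-objective example: f1 = x^2, f2 = (x - 2)^2, constraint x - 3 <= 0.
fs = [lambda x: x[0] ** 2, lambda x: (x[0] - 2.0) ** 2]
gs = [lambda x: x[0] - 3.0]
x_k = np.array([5.0])
for M in ([4.0, 4.0], [2.0, 2.0], [1.0, 1.0]):   # progressively tighter objective parameters
    x_k = minimize(penalty_objective(fs, gs, M, rho=100.0), x_k).x   # solve the UPOP
```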
Abstract: Using the penalty function method with objective parameters, this paper presents an interactive algorithm for solving inequality constrained multi-objective programming (MP). The MP is transformed into a single-objective optimization problem (SOOP) with inequality constraints, and it is proved that, under some conditions, an optimal solution to SOOP is a Pareto efficient solution to MP. An interactive algorithm for MP is then designed accordingly. Numerical examples show that the algorithm can find a satisfactory solution to MP, with the objective weight values adjusted by the decision maker.
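As a rough illustration of the interactive scheme only: the sketch below scalarizes the objectives with weights w (stand-ins for the paper's objective parameters), solves the resulting single-objective problem subject to the original inequality constraints, and repeats for weight vectors a decision maker might choose. The weight-update rule and test functions are assumptions, not taken from the paper.

```python
import numpy as np
from scipy.optimize import minimize

# Interactive loop sketch: solve the SOOP for the current weights, then let the
# decision maker adjust the weights and resolve from the previous solution.
def solve_soop(fs, gs, w, x0):
    scalar = lambda x: sum(wi * fi(x) for wi, fi in zip(w, fs))
    cons = [{"type": "ineq", "fun": (lambda x, gj=gj: -gj(x))} for gj in gs]  # g_j(x) <= 0
    return minimize(scalar, x0, constraints=cons).x   # SLSQP is used when constraints are given

fs = [lambda x: x[0] ** 2 + x[1] ** 2, lambda x: (x[0] - 1.0) ** 2 + (x[1] - 1.0) ** 2]
gs = [lambda x: x[0] + x[1] - 1.5]
x = np.array([1.0, 1.0])
for w in ([0.5, 0.5], [0.8, 0.2]):   # weight vectors a decision maker might try
    x = solve_soop(fs, gs, w=w, x0=x)
```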
Funding: This research was supported in part by the National Natural Science Foundation of China under Grant Nos. 70971023 and 71001089, and in part by the Natural Science Foundation of Zhejiang Province under Grant No. Y60860040.
Abstract: This paper studies multi-period risk management problems by presenting a dynamic risk measure, defined as the sum of the conditional value-at-risk of each period. The authors model it by Markov decision processes and derive its optimality equation, which is further transformed equivalently into an analytically tractable one. The authors then apply the model and its results to a multi-period portfolio optimization problem in which the return rate vectors at each period form a Markov chain.
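For concreteness, the sketch below gives an empirical estimate of the dynamic risk measure described above, i.e. the sum over periods of the per-period conditional value-at-risk (CVaR), for simulated loss scenarios. The Markov chain of return rates, the Markov decision process formulation, and the portfolio optimization itself are omitted; the scenario data and confidence level are illustrative assumptions.

```python
import numpy as np

# Hedged sketch: empirical estimate of the multi-period risk measure as the sum of
# per-period CVaR values; scenario generation and the MDP optimization are omitted.
def cvar(losses, alpha=0.95):
    var = np.quantile(losses, alpha)        # value-at-risk at level alpha
    return losses[losses >= var].mean()     # average loss in the worst (1 - alpha) tail

def multi_period_cvar(loss_scenarios, alpha=0.95):
    # loss_scenarios: array of shape (n_scenarios, n_periods)
    return sum(cvar(loss_scenarios[:, t], alpha) for t in range(loss_scenarios.shape[1]))

rng = np.random.default_rng(0)
scenarios = rng.normal(0.0, 1.0, size=(10000, 4))   # 4 periods of simulated losses
total_risk = multi_period_cvar(scenarios, alpha=0.95)
```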