Abstract: Optimization under uncertainty is a challenging topic of practical importance in Process Systems Engineering. Since the solution of an optimization problem is generally highly sensitive to parameter variations, deterministic models that neglect parametric uncertainty are ill-suited to practical applications. This paper provides an overview of the key contributions and recent advances in process optimization under uncertainty over the past ten years and discusses their advantages and limitations in detail. The discussion focuses on three research areas, namely robust optimization, stochastic programming and chance constrained programming, on the basis of which a systematic analysis of their applications, developments and future directions is presented. The review shows that a more recent trend has been to integrate different optimization methods so as to leverage their respective strengths and compensate for their drawbacks. Moreover, data-driven optimization, which combines mathematical programming with machine learning, has emerged as a competitive tool for handling optimization problems under uncertainty on the basis of massive historical data.
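For concreteness, the three problem classes this review surveys can be written schematically as follows; the notation is generic and not tied to any particular model in the paper. Here x collects the decision variables, θ the uncertain parameters, Θ an uncertainty set, Q(x, θ) a second-stage recourse cost, and ε a risk tolerance.

```latex
\begin{align*}
\text{(RO)}\;  & \min_{x} f(x) \quad \text{s.t.}\;\; g(x,\theta)\le 0 \;\; \forall\,\theta\in\Theta
               && \text{(feasible for every realization in } \Theta)\\
\text{(SP)}\;  & \min_{x} \mathbb{E}_{\theta}\bigl[f(x,\theta)+Q(x,\theta)\bigr]
               && \text{(expected cost with recourse)}\\
\text{(CCP)}\; & \min_{x} f(x) \quad \text{s.t.}\;\; \mathbb{P}_{\theta}\{g(x,\theta)\le 0\}\ge 1-\varepsilon
               && \text{(constraints hold with high probability)}
\end{align*}
```

The trade-off among the three is the familiar one: (RO) guarantees feasibility but can be conservative, (SP) requires a probability distribution and recourse model, and (CCP) allows a controlled probability of constraint violation.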
Funding: supported in part by the National Natural Science Foundation of China (Grant Nos. 61374105, 61233001 and 61273140) and in part by the Beijing Natural Science Foundation (Grant No. 4132078).
Abstract: In this paper, a novel iterative Q-learning algorithm, called the "policy iteration based deterministic Q-learning algorithm", is developed to solve optimal control problems for discrete-time deterministic nonlinear systems. The idea is to use an iterative adaptive dynamic programming (ADP) technique to construct the iterative control law that optimizes the iterative Q function. Once the optimal Q function is obtained, the optimal control law can be derived by directly minimizing it, so that a mathematical model of the system is not required. A convergence analysis shows that the iterative Q function is monotonically non-increasing and converges to the solution of the optimality equation. It is also proven that every iterative control law is a stable control law. Neural networks are employed to implement the policy iteration based deterministic Q-learning algorithm by approximating the iterative Q function and the iterative control law, respectively. Finally, two simulation examples illustrate the performance of the developed algorithm.
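The sketch below illustrates the iteration just described on a discretized state-action grid, standing in for the paper's neural network approximators. The dynamics, stage cost, grid resolution and number of evaluation sweeps are illustrative choices of ours, not the authors'; as in the paper, the iteration starts from an admissible (stabilizing) control law.

```python
# Tabular sketch of policy-iteration-based deterministic Q-learning.
# Policy evaluation is approximated by a fixed number of backup sweeps
# rather than solved exactly.
import numpy as np

def pi_q_learning(f, r, X, U, policy0, n_iters=50, n_eval_sweeps=20):
    """f: deterministic dynamics x' = f(x, u); r: stage cost r(x, u);
    X, U: 1-D grids of states and controls; policy0: admissible initial law."""
    Q = np.zeros((len(X), len(U)))          # iterative Q function Q_i
    policy = policy0.copy()                 # iterative control law v_i
    # Precompute successor-state grid indices and stage costs.
    succ = np.array([[np.argmin(np.abs(X - f(x, u))) for u in U] for x in X])
    cost = np.array([[r(x, u) for u in U] for x in X])
    for _ in range(n_iters):
        # Policy evaluation: Q(x,u) = r(x,u) + Q(f(x,u), v_i(f(x,u)))
        for _ in range(n_eval_sweeps):
            Q = cost + Q[succ, policy[succ]]
        # Policy improvement: v_{i+1}(x) = argmin_u Q(x,u)
        policy = np.argmin(Q, axis=1)
    return Q, policy

# Example: scalar system x' = 0.9x + u with quadratic stage cost.
X = np.linspace(-1.0, 1.0, 41)
U = np.linspace(-0.5, 0.5, 21)
# Deadbeat initial law u = -0.9x (clipped to the control grid) is stabilizing.
policy0 = np.array([np.argmin(np.abs(U + 0.9 * x)) for x in X])
Q, policy = pi_q_learning(lambda x, u: 0.9 * x + u,
                          lambda x, u: x**2 + u**2, X, U, policy0)
print("u(0) =", U[policy[np.argmin(np.abs(X))]])  # near-zero control at the origin
```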
Funding: supported by the National Natural Science Foundation of China (Grant Nos. 11401124 and 71271021), the Scientific Research Projects for the Introduced Talents of Guizhou University (Grant No. 201343), and the Key Program of the National Natural Science Foundation of China (Grant No. 11431002).
Abstract: Regularized minimization problems with nonconvex, nonsmooth, even non-Lipschitz penalty functions have attracted much attention in recent years, owing to their wide applications in statistics, control, system identification and machine learning. In this paper, the non-Lipschitz ℓ_p (0 < p < 1) regularized matrix minimization problem is studied. A global necessary optimality condition for this non-Lipschitz optimization problem is first obtained: specifically, the global optimal solutions of the problem are fixed points of the so-called p-thresholding operator, which is matrix-valued and set-valued. A fixed point iterative scheme for the non-Lipschitz model is then proposed, and its convergence analysis is addressed in detail. Moreover, some acceleration techniques are adopted to improve the performance of the algorithm. The effectiveness of the proposed p-thresholding fixed point continuation (p-FPC) algorithm is demonstrated by numerical experiments on randomly generated and real matrix completion problems.
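A minimal sketch of such a fixed point scheme on a matrix completion instance is given below: a gradient step on the data-fit term followed by p-thresholding of the singular values. It is our simplification, not the authors' p-FPC implementation; in particular, the scalar threshold is computed by brute-force 1-D search (for p = 1/2 a closed form exists), the continuation schedule is an illustrative choice, and the paper's acceleration techniques are omitted.

```python
# Sketch of a fixed point continuation iteration for
#   min_X 0.5*||P_Omega(X) - P_Omega(M)||_F^2 + lam * sum_i sigma_i(X)^p.
import numpy as np

def scalar_p_threshold(y, lam, p, grid=2000):
    """Solve min_{t>=0} 0.5*(t - y)**2 + lam*t**p by dense 1-D search."""
    y = abs(y)
    ts = np.linspace(0.0, y, grid)
    obj = 0.5 * (ts - y) ** 2 + lam * ts ** p
    return ts[np.argmin(obj)]

def p_fpc(M_obs, mask, lam=1e-3, p=0.5, tau=1.0, n_iters=300):
    """M_obs: observed entries (zeros elsewhere); mask: boolean sampling set."""
    X = np.zeros_like(M_obs)
    lam_k = 100.0 * lam                  # continuation: start large, shrink to lam
    for _ in range(n_iters):
        lam_k = max(lam, 0.95 * lam_k)
        G = mask * (X - M_obs)           # gradient of the data-fit term
        Y = X - tau * G                  # forward (gradient) step
        U, s, Vt = np.linalg.svd(Y, full_matrices=False)
        # p-thresholding applied to each singular value
        s = np.array([scalar_p_threshold(si, tau * lam_k, p) for si in s])
        X = (U * s) @ Vt
    return X

# Example: recover a random rank-2 matrix from 50% of its entries.
rng = np.random.default_rng(0)
M = rng.standard_normal((30, 2)) @ rng.standard_normal((2, 30))
mask = rng.random(M.shape) < 0.5
X_hat = p_fpc(mask * M, mask)
print("relative error:", np.linalg.norm(X_hat - M) / np.linalg.norm(M))
```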