Funding: Supported by the National Creative Research Groups Science Foundation of China (60721062) and the National High Technology Research and Development Program of China (2007AA04Z162).
Abstract: An iterative learning model predictive control (ILMPC) technique is applied to a class of continuous/batch processes. Such processes are characterized by batch operations that generate strong periodic disturbances in the continuous processes, which traditional regulatory controllers are unable to eliminate. ILMPC combines the ability of iterative learning control (ILC) to handle repetitive signals with the flexibility of model predictive control (MPC). By monitoring the operating status of the batch processes on-line, an event-driven iterative learning algorithm for the batch-induced repetitive disturbances is initiated, and the soft constraints are adjusted in a timely manner when the feasible region moves away from the desired operating zone. The results of an industrial application show that the proposed ILMPC method is effective for this class of continuous/batch processes.
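The ILC mechanism at the heart of ILMPC can be illustrated with a minimal, self-contained sketch: a P-type learning update applied batch after batch to a toy first-order plant. The plant, gains, and reference below are illustrative assumptions, not the paper's process model; with the learning gain chosen so that gamma*b = 1, the lifted error operator is nilpotent and the tracking error vanishes after at most T trials.

```python
# Minimal P-type iterative learning control (ILC) sketch on a toy
# first-order plant y[t+1] = a*y[t] + b*u[t].  All numbers here are
# illustrative assumptions, not taken from the paper.

def run_trial(u, a=0.8, b=1.0, T=20):
    """Simulate one batch (trial) from a fixed initial state."""
    y = [0.0] * (T + 1)
    for t in range(T):
        y[t + 1] = a * y[t] + b * u[t]
    return y

def ilc(reference, gamma=1.0, trials=30, T=20):
    """P-type ILC update: u_{k+1}(t) = u_k(t) + gamma * e_k(t+1).

    With gamma*b = 1 the learning is deadbeat: the lifted error
    operator is nilpotent, so the error dies out within T trials."""
    u = [0.0] * T
    for _ in range(trials):
        y = run_trial(u, T=T)
        e = [reference[t] - y[t] for t in range(T + 1)]
        u = [u[t] + gamma * e[t + 1] for t in range(T)]
    return u, e

T = 20
ref = [1.0] * (T + 1)                  # constant set-point trajectory
u, e = ilc(ref, T=T)
# e[0] is fixed by the initial condition and is not learnable
max_err = max(abs(v) for v in e[1:])
print(f"max tracking error after learning: {max_err:.2e}")
```

In a full ILMPC scheme this learning update would be embedded in the MPC optimization with the soft constraints adjusted on-line; the sketch only shows the batch-to-batch error contraction that the ILC part contributes.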
Funding: Supported in part by the NSFC/RGC Joint Research Scheme (N-HKUST639/09), the National Natural Science Foundation of China (61104058, 61273101), the Guangzhou Scientific and Technological Project (2012J5100032), the Nansha District Independent Innovation Project (201103003), the China Postdoctoral Science Foundation (2012M511367, 2012M511368), and the Doctoral Scientific Research Foundation of Liaoning Province (20121046).
Abstract: Based on an equivalent two-dimensional Fornasini-Marchesini model of an industrial batch process, a closed-loop robust iterative learning fault-tolerant guaranteed cost control scheme is proposed for batch processes with actuator failures. This paper introduces the relevant concepts of fault-tolerant guaranteed cost control and formulates the robust iterative learning reliable guaranteed cost controller (ILRGCC). A significant advantage is that the proposed ILRGCC design method can be used for on-line optimization against batch-to-batch process uncertainties, realizing robust tracking of the set-point trajectory along both the time and batch-to-batch sequences. For ease of implementation, only the measured output errors of the current and previous cycles are used to design a synthetic controller for iterative learning control, consisting of dynamic output feedback plus feed-forward control. The proposed controller not only guarantees closed-loop convergence along both the time and cycle sequences but also satisfies an H∞ performance level and an upper-bounded cost function for all admissible uncertainties and any actuator failures. Sufficient conditions for the controller solution are derived in terms of linear matrix inequalities (LMIs), and design procedures, which formulate a convex optimization problem with LMI constraints, are presented. An injection molding example illustrates the effectiveness and advantages of the ILRGCC design approach.
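For intuition about the two-dimensional model underlying the design, the following scalar sketch propagates the second Fornasini-Marchesini model over a grid in which one index runs along time within a batch and the other across batches. The coefficients, input, and boundary conditions are illustrative assumptions, not the paper's identified model.

```python
# Scalar sketch of the second Fornasini-Marchesini (FM) 2D model used
# as an equivalent description of a batch process: index i can be read
# as the batch number and index j as time within a batch.  A1, A2, B
# and the boundary values below are illustrative assumptions.

A1, A2, B = 0.4, 0.4, 1.0      # |A1| + |A2| < 1 keeps the recursion stable

def fm_grid(rows, cols, u=0.0):
    """x[i+1][j+1] = A1*x[i+1][j] + A2*x[i][j+1] + B*u."""
    x = [[0.0] * cols for _ in range(rows)]
    for i in range(rows):
        x[i][0] = 1.0          # boundary condition along the batch axis
    for j in range(cols):
        x[0][j] = 1.0          # boundary condition along the time axis
    for i in range(rows - 1):
        for j in range(cols - 1):
            x[i + 1][j + 1] = A1 * x[i + 1][j] + A2 * x[i][j + 1] + B * u
    return x

x = fm_grid(6, 6)
print(x[2][2])                 # state decays away from the boundaries
```

Because the state at (i+1, j+1) depends on neighbors in both directions, stability and convergence of a batch process under ILC must be analyzed jointly along time and batch indices, which is exactly what the 2D FM formulation makes possible.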
Funding: Supported in part by the National Natural Science Foundation of China (Grant Nos. 61374105, 61233001, 61273140) and in part by the Beijing Natural Science Foundation (Grant No. 4132078).
Abstract: In this paper, a novel iterative Q-learning algorithm, called the "policy iteration based deterministic Q-learning algorithm", is developed to solve optimal control problems for discrete-time deterministic nonlinear systems. The idea is to use an iterative adaptive dynamic programming (ADP) technique to construct the iterative control law that optimizes the iterative Q function. Once the optimal Q function is obtained, the optimal control law can be derived by directly minimizing it, so a mathematical model of the system is not required. A convergence analysis shows that the iterative Q function is monotonically non-increasing and converges to the solution of the optimality equation. It is also proven that each of the iterative control laws is a stable control law. Neural networks are employed to implement the policy iteration based deterministic Q-learning algorithm by approximating the iterative Q function and the iterative control law, respectively. Finally, two simulation examples are presented to illustrate the performance of the developed algorithm.
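A tabular sketch conveys the flavor of policy-iteration-based Q-learning on a toy deterministic shortest-path system. The dynamics, cost, and finite-sweep evaluation below are illustrative assumptions, and the lookup table stands in for the neural-network approximators used in the paper.

```python
# Tabular policy iteration on the Q function for a toy deterministic
# chain (states 0..4, absorbing goal state 4).  The system and cost
# are illustrative assumptions, not the paper's example systems.

N_STATES, GOAL = 5, 4
ACTIONS = (-1, +1)             # move left / move right

def step(s, a):
    """Deterministic dynamics f(s, a); the goal state is absorbing."""
    return s if s == GOAL else min(max(s + a, 0), N_STATES - 1)

def cost(s, a):
    """Utility U(s, a): unit cost until the goal is reached."""
    return 0 if s == GOAL else 1

def evaluate(policy, sweeps=12):
    """Finite-sweep policy evaluation of the iterative Q function."""
    Q = [[0.0, 0.0] for _ in range(N_STATES)]
    for _ in range(sweeps):
        Q = [[cost(s, ACTIONS[i])
              + Q[step(s, ACTIONS[i])][policy[step(s, ACTIONS[i])]]
              for i in range(2)] for s in range(N_STATES)]
    return Q

policy = [0] * N_STATES        # start from the (poor) always-left policy
for _ in range(20):            # policy iteration: evaluate, then improve
    Q = evaluate(policy)
    new_policy = [min((0, 1), key=lambda i: Q[s][i]) for s in range(N_STATES)]
    if new_policy == policy:
        break
    policy = new_policy

print(Q[0][1])                 # optimal cost-to-go from state 0 moving right
```

Each outer iteration evaluates the Q function of the current control law and then improves the law by minimizing that Q function, which mirrors the evaluate/improve structure of the paper's algorithm; the paper replaces the table with neural networks so the scheme scales to continuous state spaces.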