Abstract
Learning control of dynamical systems based on Markov decision processes (MDPs) is an interdisciplinary research area spanning machine learning, control theory, and operations research. Its main objective is to realize data-driven multi-stage optimal control for complex or uncertain dynamical systems. This paper presents a comprehensive survey on the theory, algorithms, and applications of MDP-based learning control of dynamical systems. Emphasis is placed on recent advances in the theory and methods of reinforcement learning (RL) and adaptive/approximate dynamic programming (ADP), including temporal-difference learning theory, value function approximation for continuous state and action spaces, direct policy search, approximate policy iteration, and adaptive critic designs. Applications, as well as trends for future research and development in related fields, are also discussed.
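As a concrete illustration of the kind of MDP-based learning control the survey reviews, the following is a minimal sketch (not taken from the paper) of tabular TD(0) policy evaluation on a toy random-walk MDP; the toy MDP, the step size, and all variable names are assumptions chosen for illustration only.

```python
import numpy as np

# Illustrative sketch only: tabular TD(0) policy evaluation on a 5-state
# random-walk MDP (a standard textbook example), showing the basic
# temporal-difference update V(s) <- V(s) + alpha*(r + gamma*V(s') - V(s)).
n_states = 5          # non-terminal states 0..4; terminals at -1 and 5
gamma = 1.0           # undiscounted episodic task
alpha = 0.1           # step size (assumed value)
V = np.zeros(n_states)
rng = np.random.default_rng(0)

for episode in range(1000):
    s = n_states // 2                     # start in the middle state
    while True:
        s_next = s + (1 if rng.random() < 0.5 else -1)
        if s_next == n_states:            # right terminal: reward 1
            r, v_next, done = 1.0, 0.0, True
        elif s_next == -1:                # left terminal: reward 0
            r, v_next, done = 0.0, 0.0, True
        else:
            r, v_next, done = 0.0, V[s_next], False
        # TD(0) update toward the one-step bootstrapped target
        V[s] += alpha * (r + gamma * v_next - V[s])
        if done:
            break
        s = s_next

print(V)  # approaches the true values [1/6, 2/6, 3/6, 4/6, 5/6]
```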
Source
《自动化学报》 (Acta Automatica Sinica), 2012, No. 5, pp. 673-687 (15 pages)
Indexed in: EI, CSCD, Peking University Core Journals (北大核心)
Funding
Supported by the National Natural Science Foundation of China (61075072, 90820302, 60921061), the Fok Ying Tung Education Foundation Young Teachers Fund (114005), and the Program for New Century Excellent Talents in University of the Ministry of Education (NCET-10-0901).