Optimal Action Criterion and Algorithm Improvement of Real-Time Dynamic Programming (Cited by: 8)
Abstract: This paper aims to improve the efficiency of real-time dynamic programming (RTDP) algorithms for solving Markov decision problems. The convergence criteria used by several typical RTDP algorithms are compared and analyzed, and a new convergence criterion, called the optimal action criterion, is derived from upper and lower bounds on the value function, together with a branch-selection strategy better suited to real-time algorithms. The optimal action criterion certifies, earlier in the decision process, an action at the current state that is optimal to within the required precision, so the agent can execute it immediately; the new branch-selection strategy accelerates the satisfaction of this criterion. It can be proved that under certain conditions this incremental method yields an optimal policy with arbitrary precision. Based on these techniques, a bounded incremental real-time dynamic programming (BI-RTDP) algorithm is designed. In experiments on two typical real-time simulation environments, BI-RTDP outperforms the other state-of-the-art RTDP algorithms tested.
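The abstract's core idea, an "optimal action" test built from upper and lower value-function bounds, can be illustrated with a minimal sketch. This is not the paper's actual implementation; the dictionary-based MDP representation, function names, and the simple one-step certification rule (an action is accepted when its upper-bound Q-value is within eps of every alternative's lower-bound Q-value, for a cost-minimizing MDP) are all illustrative assumptions:

```python
def q_value(V, mdp, s, a):
    """One-step lookahead Q-value of action a in state s under value estimate V."""
    return mdp["cost"][(s, a)] + sum(
        p * V[s2] for s2, p in mdp["trans"][(s, a)].items()
    )

def epsilon_optimal_action(mdp, s, V_lo, V_hi, eps):
    """Return an action provably within eps of optimal at state s, else None.

    V_lo / V_hi are lower / upper bounds on the optimal cost-to-go, so
    Q computed from V_lo (V_hi) lower- (upper-) bounds the true Q-value.
    An action is certified when its upper-bound Q-value is no more than
    eps above the smallest lower-bound Q-value over all actions.
    """
    actions = mdp["actions"][s]
    q_hi = {a: q_value(V_hi, mdp, s, a) for a in actions}
    q_lo = {a: q_value(V_lo, mdp, s, a) for a in actions}
    best = min(actions, key=lambda a: q_hi[a])
    if q_hi[best] <= min(q_lo[a] for a in actions) + eps:
        return best  # safe to act now, before full convergence
    return None      # bounds still too loose; keep updating
```

The point of the criterion is exactly what the abstract claims: once the bound gap at the current state is small enough, the agent can commit to an action immediately instead of waiting for the value function to converge everywhere.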
Source: Journal of Software (《软件学报》), EI / CSCD / Peking University Core, 2008, Issue 11, pp. 2869-2878 (10 pages)
Funding: Supported by the National Natural Science Foundation of China under Grant No. 60745002, and the National Basic Research Program of China (973 Program) under Grant No. 2003CB317002
Keywords: Markov decision process (MDP); real-time dynamic programming (RTDP); convergence criterion; incremental solving; heuristic search
