
Global Path Planning for Tank CGF Based on the A* Algorithm  (Cited by: 1)

Abstract: This paper presents a method for global path planning of tank CGF (Computer Generated Forces) entities in a virtual battlefield using the A* heuristic search algorithm. The A* algorithm is further improved by organizing the OPEN list as a binary heap, which greatly accelerates the search. The proposed method can handle battlefield terrain of relatively large scale with high efficiency.
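The binary-heap OPEN list is the core of the reported speed-up: extracting the lowest-cost node costs O(log n) instead of a linear scan. The sketch below is a minimal, illustrative Python implementation of grid-based A* with a heap-backed OPEN list (via `heapq`); it is not the authors' code, and the occupancy-grid representation, 8-connectivity, octile heuristic, and the name `a_star` are assumptions made for demonstration only.

```python
import heapq
import itertools
import math

def a_star(grid, start, goal):
    """A* on a 2D occupancy grid (0 = passable, 1 = obstacle).

    The OPEN set is kept in a binary heap (heapq), so popping the node
    with the lowest f = g + h costs O(log n) instead of scanning OPEN.
    """
    rows, cols = len(grid), len(grid[0])
    tie = itertools.count()                       # tie-breaker for equal f

    def h(node):                                  # octile-distance heuristic
        dx, dy = abs(node[0] - goal[0]), abs(node[1] - goal[1])
        return max(dx, dy) + (math.sqrt(2) - 1) * min(dx, dy)

    open_heap = [(h(start), next(tie), 0.0, start, None)]
    g_cost = {start: 0.0}
    parents = {}                                  # doubles as the CLOSED set

    while open_heap:
        _, _, g, node, parent = heapq.heappop(open_heap)
        if node in parents:                       # stale heap entry, skip it
            continue
        parents[node] = parent
        if node == goal:                          # walk parents back to start
            path = []
            while node is not None:
                path.append(node)
                node = parents[node]
            return path[::-1]
        x, y = node
        for dx in (-1, 0, 1):                     # 8-connected neighbours
            for dy in (-1, 0, 1):
                nx, ny = x + dx, y + dy
                if (dx == dy == 0
                        or not (0 <= nx < rows and 0 <= ny < cols)
                        or grid[nx][ny]):
                    continue
                ng = g + math.hypot(dx, dy)
                if ng < g_cost.get((nx, ny), float("inf")):
                    g_cost[(nx, ny)] = ng
                    heapq.heappush(
                        open_heap,
                        (ng + h((nx, ny)), next(tie), ng, (nx, ny), node))
    return None                                   # goal unreachable

# Hypothetical toy terrain, only to show the call pattern:
grid = [[0, 0, 0, 1],
        [1, 1, 0, 1],
        [0, 0, 0, 0]]
print(a_star(grid, (0, 0), (2, 3)))   # [(0, 0), (0, 1), (1, 2), (2, 3)]
```

Because stale heap entries are skipped rather than updated in place, the sketch trades a little extra memory for the simplicity of `heapq`; a real CGF planner over large terrain would likely also cache the heuristic and prune impassable regions in advance.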
Affiliation: Bengbu Tank Institute (蚌埠坦克学院)
Source: Command Control & Simulation (《指挥控制与仿真》), 2008, No. 3, pp. 28-30, 35 (4 pages)
Keywords: A* algorithm; tank CGF; path planning
  • Related Literature

References (6)

  • 1 Phillip John McKerrow. Introduction to Robotics[M]. Addison-Wesley, 1991.
  • 2 Tomas Lozano-Perez. Spatial Planning: A Configuration Space Approach[J]. IEEE Transactions on Computers, 1983, 32(2): 108-120.
  • 3 Li Chen Fu, Dong Yueh Liu. An Efficient Algorithm for Finding a Collision-free Path Among Polyhedral Obstacles[J]. Journal of Robotic Systems, 1990, 7(1): 129-137.
  • 4 Wang Yongqing. Principles and Methods of Artificial Intelligence[M]. Xi'an: Xi'an Jiaotong University Press, 2002: 54-58.
  • 5 Wang Xiaodong. Design and Analysis of Computer Algorithms[M]. Beijing: Publishing House of Electronics Industry, 2002.
  • 6 Xu Jingming. Research on Heuristic Search Methods in State Space[J]. Microcomputer Development (微机发展), 2002, 12(4): 87-89. (Cited by: 4)

Co-citing Literature (18)

Co-cited Literature (13)

  • 1 SUTTON R S, BARTO A G. Reinforcement Learning: An Introduction[M]. London: MIT Press, 2005.
  • 2 BARTO A G, MAHADEVAN S. Recent advances in hierarchical reinforcement learning[J]. Discrete Event Dynamic Systems: Theory and Applications, 2003, 13(4): 41-77.
  • 3 SUTTON R S, PRECUP D, SINGH S P. Between MDPs and semi-MDPs: a framework for temporal abstraction in reinforcement learning[J]. Artificial Intelligence, 1999, 112(1/2): 181-211.
  • 4 PARR R. Hierarchical control and learning for Markov decision processes[D]. Berkeley: University of California, 1998.
  • 5 DIETTERICH T G. Hierarchical reinforcement learning with the MAXQ value function decomposition[J]. Journal of Artificial Intelligence Research, 2000, 13(1): 227-303.
  • 6 LITTMAN M L. Markov games as a framework for multi-agent reinforcement learning[C]//Proceedings of the 11th International Conference on Machine Learning. San Francisco: Morgan Kaufmann, 1994: 157-163.
  • 7 HU Junling, WELLMAN M P. Nash Q-learning for general-sum stochastic games[J]. Journal of Machine Learning Research, 2003(4): 1039-1069.
  • 8 TADEPALLI P, GIVAN R, DRIESSENS K. Relational reinforcement learning: an overview[C]//Proceedings of the ICML 2004 Workshop on Relational Reinforcement Learning, Banff, Canada, 2004.
  • 9 TAN M. Multi-agent reinforcement learning: independent vs cooperative agents[C]//Proceedings of the 10th International Conference on Machine Learning, 1993: 330-337.
  • 10 Sun Biao, Zhu Fan. Real-time Path Planning for UAVs Using Particle Swarm Optimization[J]. Electronics Optics & Control (电光与控制), 2008, 15(1): 35-38. (Cited by: 10)

Citing Literature (1)

Secondary Citing Literature (8)
