
一种基于多智能体系统的分布式规划系统 (A Distributed Planning System Based on Multi-Agent Systems). Cited by: 1

Distributed Planning System in Dynamic Uncertainty Environment
Abstract: To solve the planning problem of multi-agent systems in dynamic, uncertain environments, a new distributed planning system, MPOMDPRS, is proposed after an analysis of existing planning systems. MPOMDPRS retains the continuous planning mechanism of PRS to cope with environmental dynamics, retains the probability-distribution model of POMDP to handle uncertainty, and adds a communication set to meet the needs of multi-agent systems. Simulation experiments demonstrate the effectiveness and real-time performance of the system, showing that it meets the requirements of dynamic, uncertain environments.
Source: Journal of Shenzhen Polytechnic 《深圳职业技术学院学报》, CAS, 2009, No. 5, pp. 18-21 (4 pages).
Keywords: distributed planning system; MPOMDPRS; multi-agent system; dynamic uncertain environment
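
The abstract above describes MPOMDPRS only at a high level: a POMDP-style probability-distribution (belief) model for uncertainty, a PRS-style continuous planning loop for dynamics, and a communication set for the multi-agent setting. The Python sketch below is a minimal illustration of how those three pieces could fit together; it is not the authors' implementation. All names (Agent, CommunicationSet, belief_update), the toy two-state transition and observation tables, and the belief-averaging fusion step are hypothetical assumptions made for illustration only.

```python
import random
from collections import defaultdict

# Toy two-state POMDP, used only to illustrate the belief-update idea.
STATES = ["clear", "blocked"]

# P(s' | s, a): hypothetical transition model.
T = {
    ("clear", "advance"):   {"clear": 0.8, "blocked": 0.2},
    ("clear", "wait"):      {"clear": 0.9, "blocked": 0.1},
    ("blocked", "advance"): {"clear": 0.3, "blocked": 0.7},
    ("blocked", "wait"):    {"clear": 0.4, "blocked": 0.6},
}
# P(o | s'): hypothetical observation model.
O = {
    "clear":   {"see_clear": 0.85, "see_blocked": 0.15},
    "blocked": {"see_clear": 0.25, "see_blocked": 0.75},
}

def belief_update(belief, action, observation):
    """Standard POMDP belief update: b'(s') ~ O(o|s') * sum_s T(s'|s,a) * b(s)."""
    new_belief = {}
    for s_next in STATES:
        prior = sum(T[(s, action)][s_next] * belief[s] for s in STATES)
        new_belief[s_next] = O[s_next][observation] * prior
    norm = sum(new_belief.values()) or 1.0
    return {s: p / norm for s, p in new_belief.items()}

class CommunicationSet:
    """Hypothetical stand-in for the 'communication set': a shared mailbox
    through which agents broadcast their current beliefs to teammates."""
    def __init__(self):
        self.messages = defaultdict(list)

    def register(self, name):
        self.messages[name]  # create an empty inbox for this agent

    def broadcast(self, sender, belief):
        for other in self.messages:
            if other != sender:
                self.messages[other].append((sender, dict(belief)))

    def collect(self, name):
        msgs, self.messages[name] = self.messages[name], []
        return msgs

class Agent:
    """PRS-style continuous loop: sense, revise belief, communicate, re-select a plan."""
    def __init__(self, name, comms):
        self.name = name
        self.comms = comms
        self.belief = {"clear": 0.5, "blocked": 0.5}
        self.last_action = "wait"
        comms.register(name)

    def select_action(self):
        # Trivial reactive plan selection: advance only when confident the way is clear.
        return "advance" if self.belief["clear"] > 0.6 else "wait"

    def step(self, observation):
        # 1. Sense: fold the new observation into the belief via the POMDP update.
        self.belief = belief_update(self.belief, self.last_action, observation)
        # 2. Communicate: merge teammates' broadcast beliefs (simple averaging,
        #    an illustrative placeholder, not the fusion rule used in the paper).
        for _, other in self.comms.collect(self.name):
            self.belief = {s: 0.5 * (self.belief[s] + other[s]) for s in STATES}
        self.comms.broadcast(self.name, self.belief)
        # 3. Act: re-select a plan/action against the revised belief.
        self.last_action = self.select_action()
        return self.last_action

if __name__ == "__main__":
    comms = CommunicationSet()
    agents = [Agent("a1", comms), Agent("a2", comms)]
    for t in range(5):
        for agent in agents:
            obs = random.choice(["see_clear", "see_blocked"])  # simulated sensing
            act = agent.step(obs)
            print(t, agent.name, act, {s: round(p, 2) for s, p in agent.belief.items()})
```

In MPOMDPRS the communication set presumably carries richer, protocol-level content; the simple belief broadcast above is only a placeholder for whatever coordination machinery the paper actually defines.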
