
SIMULATION STUDY OF THE SELF-ORGANIZING ARCHITECTURE IRASO FOR AUTONOMOUS ROBOT VEHICLES

Cited by: 6
Abstract: Intelligence and fast response are the two key demands on architectures for autonomous robot vehicles; the central problem is striking the best trade-off between deliberative reasoning and real-time reaction. This paper proposes IRASO, a self-organizing architecture based on a distributed multi-agent system. Agents combine dynamically to adapt to changes in the environment, a bulletin board system assesses the surrounding situation and guides the organization of the agents, and spatial and temporal models are designed to coordinate the agents' work. A TCP/IP-based computing coherent field supports the cooperative operation of the heterogeneous distributed agents. Simulation results show that the architecture achieves the desired intelligence and responsiveness.
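The bulletin-board coordination pattern the abstract describes (agents post situation reports to a shared board, whose assessment then guides how the agents are organized) can be sketched roughly as follows. This is a minimal illustrative sketch only, not the paper's IRASO implementation: the class and function names (`BulletinBoard`, `post`, `assess`, `reorganize`), the threat scale, and the threshold separating deliberative from reactive operation are all hypothetical.

```python
from dataclasses import dataclass, field

@dataclass
class BulletinBoard:
    """Shared board on which agents post situation reports.

    Hypothetical stand-in for the paper's bulletin board system;
    the real IRASO assessment logic is not reproduced here.
    """
    reports: dict = field(default_factory=dict)

    def post(self, agent_id: str, threat_level: int) -> None:
        # Each agent overwrites its own latest situation report.
        self.reports[agent_id] = threat_level

    def assess(self) -> str:
        # Crude situation assessment: the highest posted threat
        # decides the operating mode (threshold 5 is arbitrary).
        if not self.reports:
            return "idle"
        peak = max(self.reports.values())
        return "reactive" if peak >= 5 else "deliberative"

def reorganize(board: BulletinBoard, agents: list) -> dict:
    """Regroup the agents into the mode the board's assessment suggests."""
    return {"mode": board.assess(), "agents": sorted(agents)}
```

In this toy version a single high threat report flips the whole group into a reactive mode, mirroring (very loosely) the abstract's idea that the board evaluates the environment and steers how agents recombine.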
Source: Journal of Computer Research and Development (计算机研究与发展; EI, CSCD, Peking University Core), 1999, No. 7, pp. 776-782 (7 pages).
Funding: Commission of Science, Technology and Industry for National Defense, "Ninth Five-Year Plan" key research project fund.
Keywords: autonomous robot vehicle, artificial intelligence, self-organizing architecture, IRASO, simulation, multi-agent system, distributed artificial intelligence, bulletin board system, situation assessment

References (1)

  • 1. Chun W H. Proc. SPIE Mobile Robots IX, 1995: 180.

