
Multi-Robot Cooperative Pursuit Algorithm in an Unknown Environment

Cited by: 4
Abstract: This paper presents an approach for the cooperative pursuit of multiple mobile targets by a multi-robot team in an unknown environment. The pursuit process is divided into three stages: building the environment map, forming the pursuit teams, and cooperatively capturing the targets. An abstract sensor model is designed for the pursuers to detect obstacles. A sample data set of attribute relationships is built by considering the factors relevant to capturing each evader, and association-rule data mining is then applied to this data set to form the pursuit teams. By predicting the targets' positions, the pursuit game is transformed into a multi-robot cooperative path-planning problem, and the optimal paths from the pursuit teams to the target positions are computed on the environment map. Simulation results show that the robots capture the mobile targets effectively and efficiently, demonstrating the feasibility and validity of the proposed method in complex dynamic environments.
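The pursuit pipeline the abstract describes (predict the target's next position, then plan a pursuer path on the mapped grid toward it) can be sketched roughly as follows. The linear-extrapolation predictor and the A* grid planner here are illustrative stand-ins under assumed representations (a 4-connected occupancy grid, targets observed as grid cells), not the paper's exact formulation:

```python
from heapq import heappush, heappop

def predict_position(track):
    # Extrapolate the target's next grid cell from its last two observed
    # positions -- a simple linear predictor standing in for the paper's
    # target-position prediction step.
    (x0, y0), (x1, y1) = track[-2], track[-1]
    return (2 * x1 - x0, 2 * y1 - y0)

def plan_path(grid, start, goal):
    # A* search on a 4-connected occupancy grid (0 = free, 1 = obstacle);
    # returns a list of cells from start to goal, or None if unreachable.
    rows, cols = len(grid), len(grid[0])
    frontier = [(0, start, [start])]  # (f-score, cell, path so far)
    visited = set()
    while frontier:
        _, (x, y), path = heappop(frontier)
        if (x, y) == goal:
            return path
        if (x, y) in visited:
            continue
        visited.add((x, y))
        for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nx, ny = x + dx, y + dy
            if 0 <= nx < rows and 0 <= ny < cols and grid[nx][ny] == 0:
                g = len(path)                              # steps taken so far
                h = abs(nx - goal[0]) + abs(ny - goal[1])  # Manhattan heuristic
                heappush(frontier, (g + h, (nx, ny), path + [(nx, ny)]))
    return None
```

On each replanning cycle a pursuer would call `predict_position` on the evader's observed track and `plan_path` toward the predicted cell; the team-formation step (association-rule mining over the attribute data set) is omitted from this sketch.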
Source: Acta Electronica Sinica (《电子学报》), 2011, Issue 3, pp. 567-574 (8 pages); indexed in EI, CAS, and CSCD, and listed in the Peking University Core Journal index.
Funding: National 863 Program of China (No. 2006AA04Z259); National Natural Science Foundation of China (No. 60573108)
Keywords: multi-robot; pursuit game; target search; data mining; association rule; path planning



