
An Action Developmental Model Based on Novelty
Abstract  A robot's actions are the basic units of all its activity; for a soccer robot in particular, well-designed actions are an essential foundation for implementing strategy. Traditional reinforcement-learning models use a constant learning rate throughout the learning process, which leads to slow convergence and poor adaptability in unknown environments. To address these problems, a new action developmental model is proposed: an action developmental model based on novelty. During learning, the model uses a state-dependent learning rate derived from an amnesic average, which better matches the real process of human development. The model adopts an intrinsic value system composed of three parts: reward, punishment, and a novelty measure. Ball-interception experiments in robot soccer show that, in a constantly changing environment, the model can carry out the appropriate interception actions efficiently and accurately.
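The abstract's key mechanism, replacing the constant learning rate of standard Q-learning with a per-state rate derived from an amnesic (forgetting) average, can be sketched as follows. This is a minimal illustration, not the paper's implementation: the three-phase form of the amnesic function `mu(n)` and the parameter values (`t1`, `t2`, `c`, `r`) are assumptions chosen for demonstration.

```python
def mu(n, t1=20, t2=500, c=2.0, r=10000.0):
    """Amnesic parameter: zero for the first t1 visits, then grows with
    the visit count n, so recent samples keep more weight than a plain
    running mean would give them (the 'forgetting' effect)."""
    if n <= t1:
        return 0.0
    if n <= t2:
        return c * (n - t1) / (t2 - t1)
    return c + (n - t2) / r


class AmnesicQ:
    """Tabular Q-learning whose learning rate alpha = (1 + mu(n)) / n
    depends on how often the state-action pair has been visited,
    instead of being a global constant."""

    def __init__(self, n_states, n_actions, gamma=0.9):
        self.q = [[0.0] * n_actions for _ in range(n_states)]
        self.visits = [[0] * n_actions for _ in range(n_states)]
        self.gamma = gamma

    def update(self, s, a, reward, s_next):
        self.visits[s][a] += 1
        n = self.visits[s][a]
        alpha = (1.0 + mu(n)) / n          # state-dependent amnesic rate
        target = reward + self.gamma * max(self.q[s_next])
        self.q[s][a] += alpha * (target - self.q[s][a])
        return alpha
```

For small `n` the rate equals `1/n` (an exact running mean); once `n` exceeds `t1`, `mu(n) > 0` keeps `alpha` above `1/n`, so newer observations are weighted more heavily, which is what lets the learner track a changing environment.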
Author  崔瑞丽
Source  Science Technology and Engineering, 2011, No. 5, pp. 975-978 (4 pages)
Keywords  action developmental model based on novelty; reinforcement learning; amnesic average; intrinsic value system
