Abstract
The cybernetic graphic model (CGM), a new model for representing and reproducing behaviors learned from non-contact observation, is proposed for robot imitation learning. A human-robot relationship suitable for imitation learning is established, and it is shown that a precondition of imitation learning is that differential motions of the system's end-effector serve as the behavioral primitives. The architecture of the CGM and a learning method based on visual observation sequences are presented, including a sequence-segmentation method based on accumulated and instantaneous correlation functions for generating the graph structure of the CGM, and a method for learning behavioral-primitive targets based on RBF (radial basis function) networks. Brush-drawing and object-grasping imitation experiments are performed with robots of different structures and degrees of freedom. The results show that, using only visual observation, the proposed CGM can represent and reproduce behaviors of different levels and types, and that it offers good generalization ability, generality, and practicality.
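The abstract does not give the paper's exact RBF formulation, so the following is only a minimal sketch of the general technique it names: a Gaussian RBF network whose output weights are fit by regularized least squares, mapping observed states to behavioral-primitive targets. All function names, the Gaussian basis choice, and the fitting procedure here are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def rbf_features(x, centers, width):
    """Gaussian RBF activations of inputs x (n, d) against centers (k, d)."""
    # Squared Euclidean distance between every input point and every center
    d2 = ((x[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
    return np.exp(-d2 / (2.0 * width ** 2))

def fit_rbf(x, y, centers, width, reg=1e-6):
    """Fit output weights by regularized least squares (normal equations)."""
    phi = rbf_features(x, centers, width)
    return np.linalg.solve(phi.T @ phi + reg * np.eye(len(centers)), phi.T @ y)

def predict_rbf(x, centers, width, w):
    """Predicted targets for new observations x."""
    return rbf_features(x, centers, width) @ w
```

As a usage sketch, fitting the network to samples of a smooth 1-D trajectory and evaluating it at the same points recovers the trajectory closely, which is the property that makes RBF networks a common choice for learning continuous motion targets.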
Source
《机器人》 (Robot)
EI
CSCD
Peking University Core Journals (北大核心)
2014, No. 3, pp. 309-315 (7 pages)
Funding
Supported by the National Natural Science Foundation of China (51075281)
Specialized Research Fund for the Doctoral Program of Higher Education of China (20112102110002)
Natural Science Foundation of Liaoning Province (201102163)
Keywords
robot behavior
imitation learning
cybernetic graphic model
non-contact observation
visual observation