Abstract
Because the distributed-vision composite navigation system of an autonomous mobile robot relies on global vision to perform absolute localization of the robot, and localization accuracy degrades markedly when the robot is far from a camera, a partitioned multi-camera parameter calibration method suitable for large scenes was proposed. Using the concept of sets, the relation between the robot's entire working region and the effective region of each camera was described. A camera parameter calibration model was built on the perspective transformation matrix that, under the pinhole model, maps a spatial plane to the camera's image plane. A calibration experiment with four cameras covering a 9.6 m × 6.4 m region, together with error analysis, showed that the overall mean error of the method is only 7.96 mm.
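The calibration model described above amounts to fitting, for each camera, a plane-to-image perspective transformation (a 3×3 homography) from known ground-plane control points, and the partition idea amounts to selecting the camera whose effective region contains the robot. A minimal sketch of both steps follows; the function names, the direct linear transform (DLT) fitting approach, and the rectangular effective regions are illustrative assumptions, not details taken from the paper:

```python
import numpy as np

def fit_homography(world_pts, image_pts):
    """Estimate the 3x3 perspective transformation mapping ground-plane
    points (x, y) to image points (u, v) via the DLT method.
    Requires at least 4 non-collinear correspondences."""
    A = []
    for (x, y), (u, v) in zip(world_pts, image_pts):
        A.append([x, y, 1, 0, 0, 0, -u * x, -u * y, -u])
        A.append([0, 0, 0, x, y, 1, -v * x, -v * y, -v])
    # The homography is the right singular vector of A with the
    # smallest singular value (the least-squares null vector).
    _, _, Vt = np.linalg.svd(np.asarray(A, dtype=float))
    H = Vt[-1].reshape(3, 3)
    return H / H[2, 2]

def project(H, x, y):
    """Map a ground-plane point through H; returns pixel (u, v)."""
    p = H @ np.array([x, y, 1.0])
    return p[0] / p[2], p[1] / p[2]

def pick_camera(regions, x, y):
    """Return the index of the first camera whose rectangular effective
    region (x_min, y_min, x_max, y_max) contains the ground point,
    or None if the point lies outside every region."""
    for i, (x0, y0, x1, y1) in enumerate(regions):
        if x0 <= x <= x1 and y0 <= y <= y1:
            return i
    return None
```

In a partitioned setup, each camera would get its own fitted homography from control points inside its region, and `pick_camera` would decide which homography localizes the robot at any moment.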
Source
Journal of Jilin University (Engineering and Technology Edition)
EI
CAS
CSCD
Peking University Core
2006, No. 3, pp. 387-392 (6 pages)
Funding
Shandong Provincial Education Commission project (J00g54)
Jilin University scientific research startup fund for introduced excellent talents
Keywords
automatic control technology
mobile robot
navigation
camera calibration