Abstract
The scale invariant feature transform (SIFT) matching algorithm suffers from high computational cost and frequent mismatches. To address this, a prior-information-constrained SIFT matching algorithm is proposed for visual simultaneous localization and mapping (vSLAM). First, the change of scale space is predicted from the change in relative distance between the robot and the feature point; then, the image position of the feature point is predicted from the current states of the robot and the feature; finally, SIFT matching is performed within the predicted image region. Experimental results show that the proposed algorithm significantly improves both the computational efficiency and the accuracy of SIFT matching.
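The record does not include the authors' implementation; the following is a minimal Python/OpenCV sketch of the idea the abstract describes, namely predicting the scale change from the distance ratio, restricting SIFT detection to a window around the predicted image position, and matching only within that region. The function names, parameters (`half_window`, `scale_tol`), and the use of `cv2.SIFT_create` are illustrative assumptions, not the paper's code.

```python
import numpy as np
import cv2  # requires opencv-python >= 4.4 for cv2.SIFT_create


def predicted_search_mask(img_shape, u_pred, v_pred, half_window):
    """Binary mask limiting SIFT detection to a window around the
    predicted image position (u_pred, v_pred) of the feature."""
    mask = np.zeros(img_shape[:2], dtype=np.uint8)
    u0, v0 = int(round(u_pred)), int(round(v_pred))
    mask[max(0, v0 - half_window):v0 + half_window + 1,
         max(0, u0 - half_window):u0 + half_window + 1] = 255
    return mask


def match_with_prior(prev_img, curr_img, u_pred, v_pred,
                     d_prev, d_curr, half_window=40,
                     scale_tol=0.5, ratio_thresh=0.75):
    """Sketch of prior-constrained SIFT matching (illustrative assumptions):
    1) predict the scale change from the distance ratio d_prev / d_curr,
    2) detect and describe only inside the predicted window of the current image,
    3) match with Lowe's ratio test and reject scale-inconsistent pairs."""
    sift = cv2.SIFT_create()

    # Reference descriptors from the previous view (full image here for brevity).
    kp_prev, des_prev = sift.detectAndCompute(prev_img, None)

    # Step 2: restrict detection to the predicted image region.
    mask = predicted_search_mask(curr_img.shape, u_pred, v_pred, half_window)
    kp_curr, des_curr = sift.detectAndCompute(curr_img, mask)
    if des_prev is None or des_curr is None:
        return []

    # Step 3: descriptor matching with the ratio test.
    matcher = cv2.BFMatcher(cv2.NORM_L2)
    knn = matcher.knnMatch(des_prev, des_curr, k=2)

    # Step 1: expected scale ratio (the feature appears larger as the robot gets closer).
    expected_ratio = d_prev / d_curr

    good = []
    for pair in knn:
        if len(pair) < 2:
            continue
        m, n = pair
        if m.distance > ratio_thresh * n.distance:
            continue
        # Keep the match only if the keypoint scale change agrees with the prediction.
        observed_ratio = kp_curr[m.trainIdx].size / kp_prev[m.queryIdx].size
        if abs(observed_ratio - expected_ratio) <= scale_tol * expected_ratio:
            good.append(m)
    return good
```

In a vSLAM setting, `u_pred`, `v_pred`, `d_prev`, and `d_curr` would come from projecting the extended Kalman filter's estimates of the robot pose and the feature's 3D position into the camera; here they are simply passed in as given values.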
Source
Control and Decision (《控制与决策》)
EI
CSCD
PKU Core Journal
2011, No. 6, pp. 911-915 (5 pages)
Funding
Key Project of the Major Science and Technology Special Program under the Zhejiang Provincial Science and Technology Plan (2006C11200)
Keywords
visual simultaneous localization and mapping
feature matching
scale space
extended Kalman filter