Abstract
Scene-matching visual navigation typically relies on hardware to measure the camera's distance and attitude. This paper proposes a position and attitude estimation method based on combination matching of feature points within a calibration area. The method selects the optimal set of Scale-Invariant Feature Transform (SIFT) matching points in the calibration area of the real-time image, computes the local ground coordinates of those SIFT feature matching points by linear interpolation inside a triangle, and then obtains the camera position and attitude of the real-time image by space resection. This avoids the drawbacks of measuring the camera's distance and attitude with hardware and extends the range of applications of scene-matching visual navigation. Experimental results show that the camera position and attitude of the real-time image computed by this method are close to the ground truth.
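The triangle internal linear interpolation step mentioned in the abstract can be sketched with barycentric coordinates: given three matched SIFT points whose local ground coordinates are known, the ground coordinate of any image point inside their triangle is the weight-averaged combination of the three vertices. This is only an illustrative sketch under that reading of the method, not the paper's implementation; the function names and sample coordinates are hypothetical.

```python
def barycentric_weights(p, a, b, c):
    """Barycentric coordinates of 2D point p with respect to triangle (a, b, c)."""
    (px, py), (ax, ay), (bx, by), (cx, cy) = p, a, b, c
    det = (by - cy) * (ax - cx) + (cx - bx) * (ay - cy)
    w_a = ((by - cy) * (px - cx) + (cx - bx) * (py - cy)) / det
    w_b = ((cy - ay) * (px - cx) + (ax - cx) * (py - cy)) / det
    return w_a, w_b, 1.0 - w_a - w_b


def interpolate_ground(p_img, tri_img, tri_ground):
    """Linearly interpolate the local ground coordinate of an image point
    from three matched feature points with known ground coordinates."""
    wa, wb, wc = barycentric_weights(p_img, *tri_img)
    # Apply the same weights to each ground-coordinate component (x, y, z).
    return tuple(wa * ga + wb * gb + wc * gc
                 for ga, gb, gc in zip(*tri_ground))


# Hypothetical example: image triangle and its matched ground coordinates.
ground = interpolate_ground(
    (1, 1),                                   # image point inside the triangle
    ((0, 0), (4, 0), (0, 4)),                 # image coordinates of the 3 matches
    ((100, 200, 50), (104, 200, 50), (100, 204, 50)),  # their ground coordinates
)
# → (101.0, 201.0, 50.0)
```

Once the ground coordinates of enough matched points are known, the camera position and attitude follow from space resection (a perspective-n-point solve) using those 3D–2D correspondences.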
Authors
Cai Peng, Shen Chaoping, Li Hongyan (Aeronautical Engineering Institute, Jiangsu Aviation Technical College, Zhenjiang 212134, China; Zhenjiang Key Laboratory of UAV Application Technology, Jiangsu Aviation Technical College, Zhenjiang 212134, China)
Source
Journal of System Simulation (《系统仿真学报》)
Indexed in: CAS, CSCD, PKU Core (北大核心)
2021, Issue 7, pp. 1638-1646 (9 pages)
Funding
Zhenjiang Science and Technology Program (GY2018029)
College-level Key Projects (JATC19010107, JATC20020101, JATC20010104)
Keywords
position and attitude estimation
Scale-Invariant Feature Transform (SIFT)
feature matching
calibration area
combination matching of feature points
space resection