Planar Array Lidar and Camera Calibration Method Based on Tetrahedral Features (Cited by: 1)
Abstract: To improve the calibration accuracy between a planar array Lidar and a camera, a calibration method based on a tetrahedral structure is proposed. A "point-three-line" correspondence is established between the back-projection rays of corner points in the image and the line features in the point cloud, which strengthens the constraints between the two data sources. Constraint equations are built from the distances between skew lines and from the angles between projected lines, and the transformation vector between the camera and Lidar coordinate systems is solved by nonlinear optimization to obtain the final transformation matrix. A tetrahedral structure composed of triangular boards and the ground is built for experimental verification. The results show that the average projection error of the proposed method is within 0.6 pixels, better than that of two other calibration methods, providing a research foundation for the subsequent fusion of Lidar data and image data.

Compared with traditional mechanical multi-line Lidar, planar array Lidar can achieve greater field-of-view coverage through non-repetitive scanning and has many applications in industrial production and robotics. A single sensor often has limitations, whereas the fusion of multi-sensor data offers higher precision and accuracy. Point cloud data contains accurate depth information about the environment, while image data contains rich color and texture information. Fusing point clouds and images increases the dimensionality of the sensor's perception of the environment and enables robots and automation equipment to accomplish more complex tasks and reconstruct colored three-dimensional environments. The prerequisite for fusing point cloud data and image data is to calibrate the extrinsic parameters of the Lidar and the camera; the purpose of extrinsic calibration is to find the accurate positional transformation between the two sensor coordinate systems.

To improve the calibration accuracy of a planar array Lidar and a monocular camera, this paper proposes a calibration method based on a tetrahedral structure. The tetrahedron formed by two isosceles right-triangle calibration plates and the ground is used as the calibration object. The random sample consensus (RANSAC) algorithm is used to extract three plane features from the point cloud data; the parametric line equations of the edges of the tetrahedral structure are obtained from the pairwise intersections of the three planes, and the common point of the three planes is the vertex of the tetrahedron. A line segment detection algorithm is used to extract the line features at the intersections of the three planes in the image data, and the vertex coordinates of the tetrahedral structure are obtained from the intersection of these lines. Obtaining point and line features indirectly through plane extraction is more accurate than extracting them directly.

The back-projection rays of the corner points in the image and the line features in the point cloud form multiple sets of skew lines. The distances between the skew lines are used to construct the residual equation for the rotation and translation, and the angular error between the projection of each point-cloud line onto the image and the corresponding detected image line is used to establish the residual equation for the rotation. This "point-three-line" correspondence between point cloud data and image data strengthens the constraints between corresponding features. Based on the Rodrigues formula, a rotation vector is used to represent the transformation between the Lidar and camera coordinate systems; the transformation vector between the camera and Lidar coordinate systems is solved by nonlinear optimization, and the transformation matrix is finally obtained.

The distance from the projected point of an edge in the point cloud to the corresponding edge in the image is used as the projection error from point cloud data to image data. The proposed method is experimentally compared with the open-source calibration method from Livox and the targetless calibration method from the University of Hong Kong, using both the tetrahedral-structure data collected in this paper and the open-source algorithm data. The calibration results on images with a resolution of 1920×1080 show that the average projection error of the proposed method is within 0.6 pixels, superior to the other two calibration methods. Moreover, even without a dedicated tetrahedral calibration object, the proposed method can use the tetrahedral structure of a wall corner to complete the calibration, which reflects its flexibility. The proposed high-precision Lidar-camera calibration method provides a research foundation for subsequent research on the fusion of Lidar data and image data.
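The point-cloud side of the pipeline described above (fit three planes with RANSAC, then intersect them to recover the tetrahedron's edges and vertex) can be illustrated with the following minimal sketch. It assumes the plane coefficients have already been estimated in the form n·x + d = 0; the function and variable names are illustrative, not taken from the paper.

```python
import numpy as np

def tetrahedron_from_planes(planes):
    """Recover the tetrahedron vertex and edge directions from three planes.

    `planes` is a list of three (n, d) pairs describing planes n . x + d = 0,
    e.g. the two calibration boards and the ground fitted by RANSAC.
    """
    normals = np.array([n for n, _ in planes], dtype=float)   # 3 x 3
    offsets = np.array([d for _, d in planes], dtype=float)   # length 3
    # The common point of the three planes solves N x = -d
    vertex = np.linalg.solve(normals, -offsets)
    # Each edge is the intersection of two planes; its direction is n_i x n_j
    edge_dirs = []
    for i, j in [(0, 1), (1, 2), (0, 2)]:
        d = np.cross(normals[i], normals[j])
        edge_dirs.append(d / np.linalg.norm(d))
    return vertex, edge_dirs

# Usage (hypothetical plane coefficients from a RANSAC fit):
# vertex, edge_dirs = tetrahedron_from_planes([(n1, d1), (n2, d2), (n3, d3)])
```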
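The "point-three-line" constraints and the optimization over a Rodrigues rotation vector could be prototyped roughly as below. This is a sketch under assumptions not stated in the abstract (pinhole intrinsics `K`, equal weighting of the distance and angle residuals, and illustrative function and parameter names); the paper's actual residual weighting and solver settings may differ.

```python
import numpy as np
from scipy.optimize import least_squares
from scipy.spatial.transform import Rotation

def skew_line_distance(p1, u1, p2, u2):
    """Shortest distance between the lines p1 + s*u1 and p2 + t*u2."""
    n = np.cross(u1, u2)
    n_norm = np.linalg.norm(n)
    if n_norm < 1e-9:  # (near-)parallel: fall back to point-to-line distance
        return np.linalg.norm(np.cross(p2 - p1, u1)) / np.linalg.norm(u1)
    return abs(np.dot(p2 - p1, n)) / n_norm

def project_direction(K, p_c, u_c):
    """2D direction of a 3D line (camera frame) after pinhole projection."""
    q1, q2 = K @ p_c, K @ (p_c + u_c)
    d = q2[:2] / q2[2] - q1[:2] / q1[2]
    return d / np.linalg.norm(d)

def residuals(params, K, vertex_px, image_dirs, edge_points, edge_dirs):
    """Point-three-line residuals for one tetrahedron vertex.

    params      : [rx, ry, rz, tx, ty, tz], rotation vector (Rodrigues) and
                  translation mapping Lidar coordinates into the camera frame.
    vertex_px   : (u, v) pixel of the vertex detected in the image.
    image_dirs  : 2D unit directions of the three detected image edges.
    edge_points, edge_dirs : a point on / direction of each of the three
                  tetrahedron edges extracted from the point cloud.
    """
    R = Rotation.from_rotvec(params[:3]).as_matrix()
    t = params[3:]
    # Back-projection ray of the vertex pixel through the camera centre
    ray = np.linalg.inv(K) @ np.array([vertex_px[0], vertex_px[1], 1.0])
    ray /= np.linalg.norm(ray)
    res = []
    for p_l, u_l, d_img in zip(edge_points, edge_dirs, image_dirs):
        p_c, u_c = R @ p_l + t, R @ u_l            # edge in camera frame
        # (1) distance between the back-projection ray and the edge line
        res.append(skew_line_distance(np.zeros(3), ray, p_c, u_c))
        # (2) angle between the projected edge and the detected image line
        d_proj = project_direction(K, p_c, u_c)
        res.append(np.arccos(np.clip(abs(np.dot(d_proj, d_img)), 0.0, 1.0)))
    return np.asarray(res)

# Nonlinear least squares over the 6-DoF extrinsics; a rough initial guess
# helps convergence, np.zeros(6) is used here only as a placeholder.
# sol = least_squares(residuals, np.zeros(6),
#                     args=(K, vertex_px, image_dirs, edge_points, edge_dirs))
```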
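Finally, the evaluation metric, the pixel distance from a projected point-cloud edge point to the detected image edge, amounts to a 2D point-to-line distance; a small illustrative helper (names hypothetical) is sketched below.

```python
import numpy as np

def projection_error_px(K, R, t, edge_pts_lidar, line_pt_px, line_dir_px):
    """Mean pixel distance from projected point-cloud edge points to the
    detected 2D edge line, given as a point on the line and its direction."""
    d = line_dir_px / np.linalg.norm(line_dir_px)
    errs = []
    for p in edge_pts_lidar:
        q = K @ (R @ p + t)                 # pinhole projection of the point
        uv = q[:2] / q[2]
        v = uv - line_pt_px
        errs.append(abs(v[0] * d[1] - v[1] * d[0]))  # 2D point-to-line distance
    return float(np.mean(errs))
```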
Authors: XU Xiaobin, CAO Chenfei, ZHANG Lei, HU Jinchao, RAN Yingying, TAN Zhiying, XU Linsen, LUO Minzhou (College of Mechanical and Electrical Engineering, Hohai University, Changzhou 213022, China)
Source: Acta Photonica Sinica (《光子学报》), 2024, No. 7, pp. 166-180 (15 pages); indexed in EI, CAS, CSCD, Peking University Core
Funding: National Key Research and Development Program of China (No. 2022YFB4201000); Jiangsu Provincial Key Research and Development Program (No. BE2020082-1); Fundamental Research Funds for the Central Universities (No. B220202023)
Keywords: Planar array Lidar; Camera; Calibration; Tetrahedron; Line feature