
Research on the targetless automatic calibration method for mining LiDAR and camera
Abstract: Autonomous driving of mining vehicles relies on accurate environmental perception, and combining LiDAR with a camera provides richer and more accurate perception information. Effective fusion of LiDAR and camera data requires extrinsic (external parameter) calibration. Most intrinsically safe vehicle-mounted LiDARs currently used in mines are 16-line units, which produce relatively sparse point clouds. To address this problem, a targetless automatic calibration method for mining LiDAR and camera is proposed. Multi-frame point cloud fusion is used to obtain a fused-frame point cloud, increasing point density and enriching the point cloud information. Vehicles and traffic signs in the scene are then extracted as effective targets by panoptic segmentation, and coarse calibration is completed by establishing correspondences between the centroids of the 2D and 3D effective targets. In the fine calibration stage, the effective-target point clouds are projected, using the coarsely calibrated extrinsic parameters, onto the segmentation mask after an inverse distance transform; an objective function measuring the matching degree of the effective targets' panoptic information is constructed, and the optimal extrinsic parameters are obtained by maximizing this objective function with a particle swarm optimization algorithm. The effectiveness of the method was verified by quantitative, qualitative, and ablation experiments. (1) In the quantitative experiments, the translation error was 0.055 m and the rotation error was 0.394°; compared with a semantic-segmentation-based method, the translation error was reduced by 43.88% and the rotation error by 48.63%. (2) The qualitative results showed that the projections in the garage and mining-area scenes agreed closely with the ground-truth extrinsic parameters, demonstrating the stability of the method. (3) The ablation experiments showed that multi-frame point cloud fusion and the weight coefficients of the objective function significantly improve calibration accuracy: compared with single-frame point clouds, using the fused-frame point cloud as input reduced the translation error by 50.89% and the rotation error by 53.76%, and incorporating the weight coefficients reduced the translation error by 36.05% and the rotation error by 37.87%.
Authors: YANG Jiajia; ZHANG Chuanwei; ZHOU Libing; QIN Peilin; ZHAO Ruiqi (College of Mechanical Engineering, Xi'an University of Science and Technology, Xi'an 710054, China; Shaanxi College of Communications Technology, Xi'an 710018, China; Tiandi (Changzhou) Automation Co., Ltd., Changzhou 213015, China; CCTEG Changzhou Research Institute, Changzhou 213015, China; College of Mechanical & Electrical Engineering, Nanjing University of Aeronautics and Astronautics, Nanjing 210016, China)
Source: Journal of Mine Automation (《工矿自动化》), CSCD, Peking University Core Journal, 2024, Issue 10, pp. 53-61, 89 (10 pages)
Fund: Shaanxi Province Innovative Talent Promotion Program - Science and Technology Innovation Team (2021TD-27).
Keywords: mining vehicles; autonomous vehicles; LiDAR; camera; multi-frame point cloud fusion; panoptic segmentation; external parameter calibration; targetless calibration
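The fine-calibration objective described in the abstract (projecting effective-target points through a candidate extrinsic onto an inverse-distance-transformed segmentation mask and maximizing a class-weighted matching score with a particle swarm search) can be illustrated with a minimal Python/NumPy/OpenCV sketch. Everything below is an assumption-laden illustration rather than the authors' implementation: the function names, the exponential form of the inverse distance map, the weighting scheme, and the PSO hyperparameters are all hypothetical, and the coarse extrinsic being refined is assumed to come from a prior 2D-3D centroid correspondence step (e.g., a PnP solve such as cv2.solvePnP).

```python
# Hypothetical sketch (not the paper's code): assumed function names, an assumed
# exponential inverse-distance map, and assumed PSO hyperparameters.
import numpy as np
import cv2


def inverse_distance_map(mask, alpha=0.05):
    """Turn a binary target mask (H x W, values in {0, 1}) into a score map that
    is 1.0 on the target and decays with pixel distance away from it."""
    # cv2.distanceTransform measures distance to the nearest zero pixel, so the
    # mask is inverted to obtain each pixel's distance to the target region.
    dist = cv2.distanceTransform((mask == 0).astype(np.uint8), cv2.DIST_L2, 5)
    return np.exp(-alpha * dist)


def matching_score(T_lidar_to_cam, points_by_class, score_maps, K, class_weights):
    """Objective: project each class's 3D target points with the candidate 4x4
    extrinsic and accumulate the class-weighted score-map values under them."""
    total = 0.0
    for cls, pts in points_by_class.items():
        pts_h = np.hstack([pts, np.ones((len(pts), 1))])      # N x 4 homogeneous
        cam = (T_lidar_to_cam @ pts_h.T).T[:, :3]             # LiDAR -> camera frame
        cam = cam[cam[:, 2] > 0.1]                            # keep points in front
        uv = (K @ cam.T).T
        uv = (uv[:, :2] / uv[:, 2:3]).astype(int)             # pinhole projection
        h, w = score_maps[cls].shape
        ok = (uv[:, 0] >= 0) & (uv[:, 0] < w) & (uv[:, 1] >= 0) & (uv[:, 1] < h)
        uv = uv[ok]
        total += class_weights[cls] * score_maps[cls][uv[:, 1], uv[:, 0]].sum()
    return total


def pso_refine(T_coarse, points_by_class, score_maps, K, class_weights,
               n_particles=40, iters=100, t_bound=0.1, r_bound=np.radians(5.0)):
    """Minimal particle swarm search over a 6-DoF perturbation (3 translations,
    3 axis-angle rotation components) applied on top of the coarse extrinsic."""
    lo = np.array([-t_bound] * 3 + [-r_bound] * 3)
    hi = -lo
    rng = np.random.default_rng(0)

    def to_T(d):
        dT = np.eye(4)
        dT[:3, :3] = cv2.Rodrigues(d[3:])[0]   # axis-angle -> rotation matrix
        dT[:3, 3] = d[:3]
        return dT @ T_coarse

    def score(d):
        return matching_score(to_T(d), points_by_class, score_maps, K, class_weights)

    x = rng.uniform(lo, hi, size=(n_particles, 6))
    v = np.zeros_like(x)
    f = np.array([score(p) for p in x])
    pbest, pbest_f = x.copy(), f.copy()
    gbest = pbest[np.argmax(pbest_f)].copy()
    for _ in range(iters):
        r1, r2 = rng.random((2, n_particles, 6))
        v = 0.7 * v + 1.5 * r1 * (pbest - x) + 1.5 * r2 * (gbest - x)
        x = np.clip(x + v, lo, hi)
        f = np.array([score(p) for p in x])
        improved = f > pbest_f
        pbest[improved], pbest_f[improved] = x[improved], f[improved]
        gbest = pbest[np.argmax(pbest_f)].copy()
    return to_T(gbest)
```

Under these assumptions, the class weights could favor rarer but more localizable targets (e.g., traffic signs over vehicles), and the search bounds would be kept to a small neighborhood that the centroid-based coarse calibration is trusted to reach.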