Abstract
With the rapid development of autonomous driving, vehicles must fuse data from multiple sensors to perceive their surroundings, and accurate extrinsic calibration between LiDAR and camera is essential for this data fusion. Classical neural networks often extract image features incompletely or inaccurately, which limits the accuracy of LiDAR-camera extrinsic calibration. To address this problem, we propose an extrinsic calibration method for LiDAR and camera based on multidimensional dynamic convolution. The data are first preprocessed with random transformations and then fed into a feature extraction network built on multidimensional dynamic convolution; after feature aggregation, the network outputs rotation and translation vectors. Geometric supervision and transformation supervision are used to guide the learning process. Experimental results show that the proposed method strengthens the network's ability to extract feature information and further improves extrinsic calibration accuracy. Compared with the best result among the comparison methods, the average translation prediction error is reduced by 0.7 cm, verifying the effectiveness of the proposed calibration method.
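To make the pipeline described above more concrete, the following is a minimal, hypothetical PyTorch sketch of the main ingredients: an input-conditioned multi-kernel dynamic convolution layer, a regression head that aggregates features into rotation and translation, and a geometric supervision term that compares point clouds transformed by the predicted and ground-truth extrinsics. All class and function names (DynamicConv2d, CalibHead, geometry_loss) are illustrative assumptions, not the authors' implementation, and the dynamic-convolution design shown is a generic one rather than the exact layer used in the paper.

```python
# Minimal illustrative sketch (assumed PyTorch implementation); not the paper's code.
import torch
import torch.nn as nn
import torch.nn.functional as F


class DynamicConv2d(nn.Module):
    """Convolution whose kernel is an input-conditioned mixture of K expert kernels."""

    def __init__(self, in_ch, out_ch, k=3, num_experts=4):
        super().__init__()
        self.num_experts = num_experts
        # K parallel kernels that are blended per sample.
        self.weight = nn.Parameter(
            torch.randn(num_experts, out_ch, in_ch, k, k) * 0.02)
        # Lightweight attention branch: global pooling -> per-expert weights.
        self.attn = nn.Sequential(
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(in_ch, num_experts))
        self.padding = k // 2

    def forward(self, x):
        b, c, h, w = x.shape
        alpha = torch.softmax(self.attn(x), dim=1)            # (B, K)
        # Blend expert kernels per sample, then run as a grouped convolution.
        w_mix = torch.einsum('bk,koihw->boihw', alpha, self.weight)
        w_mix = w_mix.reshape(-1, c, *self.weight.shape[-2:])
        out = F.conv2d(x.reshape(1, b * c, h, w), w_mix,
                       padding=self.padding, groups=b)
        return out.reshape(b, -1, h, w)


class CalibHead(nn.Module):
    """Aggregates fused features and regresses rotation (quaternion) and translation."""

    def __init__(self, in_ch=64):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool2d(1)
        self.fc_rot = nn.Linear(in_ch, 4)    # unit quaternion
        self.fc_trans = nn.Linear(in_ch, 3)  # translation vector

    def forward(self, feat):
        v = self.pool(feat).flatten(1)
        quat = F.normalize(self.fc_rot(v), dim=1)
        trans = self.fc_trans(v)
        return quat, trans


def geometry_loss(points, T_pred, T_gt):
    """Geometric supervision: mean distance between the point cloud transformed
    by the predicted 4x4 extrinsics and by the ground-truth extrinsics."""
    pts_h = F.pad(points, (0, 1), value=1.0)   # (N, 4) homogeneous coordinates
    diff = pts_h @ (T_pred - T_gt).T           # (N, 4)
    return diff[:, :3].norm(dim=1).mean()
```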
Authors
Zhang Saisai (张赛赛); Yu Hongfei (于红绯)
School of Artificial Intelligence and Software, Liaoning Petrochemical University, Fushun 113000, Liaoning, China
Source
《激光与光电子学进展》 (Laser & Optoelectronics Progress)
CSCD
Peking University Core Journal (北大核心)
2024, No. 12, pp. 203-210 (8 pages)
Funding
National Natural Science Foundation of China (61702247)
Basic Scientific Research Project for Universities of the Liaoning Provincial Department of Education (LJKMZ20220723)
Keywords
machine vision
LiDAR
extrinsic parameter calibration
deep learning