Abstract
To improve the accuracy of vehicle micro-trajectories obtained from traffic surveillance equipment while reducing equipment cost, an automatic vehicle micro-trajectory acquisition method based on multi-sensor fusion perception is proposed. First, the pixel trajectories of moving vehicles in the video images are obtained with a YOLOv5+DeepSORT vehicle detection and tracking model. Second, the camera intrinsic parameters are obtained by the checkerboard calibration method, which gives the transformation between the pixel and camera coordinate systems. Then, the camera extrinsic parameters are calibrated by matching the feature points of a diffuse-reflection calibration board observed by both the LiDAR and the camera (the LiDAR and the calibration board can be removed after calibration), yielding the pose relationship between the LiDAR and camera coordinate systems. Next, by combining the intrinsic and extrinsic parameters, a transformation from the pixel coordinate system to the LiDAR coordinate system is constructed, converting pixel trajectories into real-world trajectories. Finally, kernel regression with a Gaussian kernel is applied to reduce the errors caused by camera vibration and coordinate transformation. To verify the effectiveness of the proposed method, experiments are conducted in three scenarios (a straight road section, a curved section, and an intersection), and the results are compared with vehicle trajectories extracted from LiDAR data. The results show that, after calibration, the camera-extracted trajectories have a mean absolute error (MAE) of 0.19 m and a speed mean absolute percentage error (MAPE) of 3.27% on the straight section; an MAE of 0.17 m and a speed MAPE of 4.38% on the curved section; and an MAE of 0.16 m and a speed MAPE of 3.38% at the intersection. Overall, the average trajectory error is less than 0.2 m and the average speed error is less than 2 km/h. The method effectively improves the accuracy of vehicle trajectory extraction from traffic surveillance video.
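For readers who want to reproduce the calibration steps, the following is a minimal sketch using OpenCV. The checkerboard size, image file names, and the matched LiDAR/image feature-point files are placeholders, not values from the paper; the paper's own calibration procedure may differ in detail.

```python
import cv2
import numpy as np

# --- Intrinsic calibration with a checkerboard (placeholder 9x6 inner corners) ---
pattern = (9, 6)
objp = np.zeros((pattern[0] * pattern[1], 3), np.float32)
objp[:, :2] = np.mgrid[0:pattern[0], 0:pattern[1]].T.reshape(-1, 2)  # board-square units

obj_pts, img_pts = [], []
for fname in ["board_01.png", "board_02.png"]:          # hypothetical image files
    gray = cv2.cvtColor(cv2.imread(fname), cv2.COLOR_BGR2GRAY)
    found, corners = cv2.findChessboardCorners(gray, pattern)
    if found:
        obj_pts.append(objp)
        img_pts.append(corners)

# K is the intrinsic matrix, dist the lens-distortion coefficients
_, K, dist, _, _ = cv2.calibrateCamera(obj_pts, img_pts, gray.shape[::-1], None, None)

# --- Extrinsic calibration from matched calibration-board feature points ---
# lidar_pts: Nx3 board corners in the LiDAR frame; pixel_pts: the same corners in the image
lidar_pts = np.load("board_corners_lidar.npy")           # hypothetical files
pixel_pts = np.load("board_corners_pixels.npy")
_, rvec, tvec = cv2.solvePnP(lidar_pts, pixel_pts, K, dist)
R, _ = cv2.Rodrigues(rvec)   # LiDAR -> camera rotation; tvec is the translation
```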
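The pixel-to-LiDAR conversion described in the abstract can be written with the standard pinhole model; the notation below (K for the intrinsic matrix, R and t for the LiDAR-to-camera extrinsics, s for the projective scale) is illustrative, and the paper's exact formulation may differ.

```latex
% Projection of a LiDAR-frame point P_L = (X_L, Y_L, Z_L)^T to pixel coordinates (u, v)
s \begin{bmatrix} u \\ v \\ 1 \end{bmatrix} = K \left( R\,P_L + t \right),
\qquad
K = \begin{bmatrix} f_x & 0 & c_x \\ 0 & f_y & c_y \\ 0 & 0 & 1 \end{bmatrix}.

% Inverting the projection (with the scale s fixed, e.g. by constraining the point
% to the road plane in the LiDAR frame) recovers the real-world trajectory point:
P_L = R^{-1} \left( s\,K^{-1} \begin{bmatrix} u \\ v \\ 1 \end{bmatrix} - t \right).
```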
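Finally, a minimal sketch of the Gaussian-kernel (Nadaraya-Watson) regression used for trajectory smoothing, together with the MAE and speed-MAPE metrics reported in the evaluation. The bandwidth value and array names are illustrative assumptions, not parameters taken from the paper.

```python
import numpy as np

def gaussian_kernel_smooth(t, xy, bandwidth=0.5):
    """Nadaraya-Watson regression with a Gaussian kernel.
    t: (N,) timestamps in seconds; xy: (N, 2) raw trajectory points in metres.
    Returns the smoothed (N, 2) trajectory."""
    w = np.exp(-0.5 * ((t[:, None] - t[None, :]) / bandwidth) ** 2)  # (N, N) weights
    w /= w.sum(axis=1, keepdims=True)                                # normalise rows
    return w @ xy

def mae(est, ref):
    """Mean absolute Euclidean position error between two aligned trajectories."""
    return np.mean(np.linalg.norm(est - ref, axis=1))

def speed_mape(est, ref, t):
    """Mean absolute percentage error of speed along two aligned trajectories."""
    v_est = np.linalg.norm(np.diff(est, axis=0), axis=1) / np.diff(t)
    v_ref = np.linalg.norm(np.diff(ref, axis=0), axis=1) / np.diff(t)
    return 100.0 * np.mean(np.abs(v_est - v_ref) / v_ref)
```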
Authors
WANG Yi, WANG Xiang, ZHENG Jian-ying, ZAN Yu-yao, WANG Xi (School of Rail Transportation, Soochow University, Suzhou, Jiangsu 215000, China)
Source
Journal of Highway and Transportation Research and Development (《公路交通科技》)
2023, No. 4, pp. 160-169 (10 pages)
Indexed in: CAS, CSCD, Peking University Core Journals (北大核心)
Funding
National Natural Science Foundation of China (52002262).
Keywords
traffic engineering
trajectory acquisition
YOLOv5 algorithm
machine vision
camera intrinsic and extrinsic parameter calibration
LiDAR