New approaches for testing autonomous driving functions use Virtual Reality (VR) to analyze the behavior of automated vehicles in various scenarios. The real-time simulation of the environment sensors is still a challenge. In this paper, the conception, development, and validation of an automotive radar raw-data sensor model are presented. For the implementation, the Unreal engine developed by Epic Games is used. The model consists of a transmitting-antenna model, a propagation model, and a receiving-antenna model. The microwave field propagation is simulated with a ray-tracing approach that uses the method of shooting and bouncing rays to cover the field. A diffuse scattering model is implemented to simulate the influence of rough structures on the reflection of rays. Simple reflectors are used to parameterize the model. The validation is done by comparing measured radar patterns of pedestrians and cyclists with simulated values. The outcome is that the developed model produces valid results, even though it still has deficits in terms of performance: the diffusely scattered field can only be bounced once, which causes inaccuracies in some scenarios. In summary, the paper shows a high potential for real-time simulation of radar sensors using ray tracing in virtual reality.
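The propagation step described above can be illustrated with a minimal shooting-and-bouncing-rays sketch. This is a hypothetical, simplified Python example and not the paper's Unreal implementation: a single planar reflector, a co-located transmitter/receiver, one bounce per ray, and an assumed roughness parameter that blends a specular lobe with a Lambertian-style diffuse lobe.

```python
import numpy as np

# Hypothetical illustration: one planar reflector in the z = 0 plane,
# a co-located TX/RX antenna, and a single specular/diffuse bounce per ray.
TX = np.array([0.0, 0.0, 1.0])        # transmitter / receiver position (m)
PLANE_N = np.array([0.0, 0.0, 1.0])   # normal of the ground plane z = 0
ROUGHNESS = 0.2                       # assumed: 0 = mirror, 1 = fully diffuse
WAVELENGTH = 3.9e-3                   # 77 GHz automotive radar

def shoot_rays(n_theta=64, n_phi=64):
    """Uniformly sample ray directions over the lower hemisphere (the 'shooting' step)."""
    theta = np.linspace(0.01, np.pi / 2, n_theta)          # elevation below the horizon
    phi = np.linspace(0.0, 2.0 * np.pi, n_phi, endpoint=False)
    t, p = np.meshgrid(theta, phi, indexing="ij")
    d = np.stack([np.sin(t) * np.cos(p),
                  np.sin(t) * np.sin(p),
                  -np.cos(t)], axis=-1)                     # pointing downwards
    return d.reshape(-1, 3)

def bounce_power(directions):
    """One bounce per ray: specular + diffuse return toward the receiver."""
    total = 0.0
    for d in directions:
        t_hit = -TX[2] / d[2]                 # ray-plane intersection with z = 0
        if t_hit <= 0:
            continue
        hit = TX + t_hit * d
        r = d - 2.0 * np.dot(d, PLANE_N) * PLANE_N          # specular reflection direction
        back = TX - hit
        dist = np.linalg.norm(back)
        back /= dist
        specular = max(np.dot(r, back), 0.0) ** 32          # narrow specular lobe toward RX
        diffuse = max(np.dot(back, PLANE_N), 0.0)           # Lambertian-style diffuse lobe
        gain = (1.0 - ROUGHNESS) * specular + ROUGHNESS * diffuse
        # Simplified two-way free-space path loss (radar-range-equation shape)
        total += gain * (WAVELENGTH / (4.0 * np.pi * (t_hit + dist))) ** 2
    return total

if __name__ == "__main__":
    rays = shoot_rays()
    print(f"received relative power from {len(rays)} rays: {bounce_power(rays):.3e}")
```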
In the task of 3D object detection for autonomous driving, exploring millimeter-wave radar data as a complement to RGB image input is becoming an emerging trend in multi-modal fusion. However, existing radar-camera fusion methods depend heavily on the camera's first-stage detection results, which limits overall performance. This paper presents BEV-radar, a bidirectional fusion method in bird's-eye view (BEV) that does not rely on camera detection results. For the features of the two modalities, which come from different domains, BEV-radar designs a bidirectional attention-based fusion strategy. Specifically, building on a BEV-based 3D object detection method, our approach embeds information from both modalities with bidirectional transformer blocks and enforces local spatial relations through subsequent convolutional blocks. After feature embedding, the BEV features are decoded in the 3D object prediction head. We evaluate our method on the nuScenes dataset, achieving 48.2 mAP and 57.6 NDS. Compared with the camera-only baseline, the results show gains not only in accuracy but, in particular, a considerable improvement in the velocity prediction error. The code is available at https://github.com/Etah0409/BEV-Radar.
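As a rough illustration of the bidirectional attention-based fusion idea, the following is a minimal PyTorch sketch, not the released BEV-Radar code: each modality's BEV feature map queries the other via cross-attention, and a convolutional block afterwards enforces local spatial relations. The channel width, head count, and fusion-by-concatenation choice are assumptions.

```python
import torch
import torch.nn as nn

class BidirectionalBEVFusion(nn.Module):
    """Hypothetical sketch: camera and radar BEV features attend to each other,
    then a convolutional block restores local spatial structure."""
    def __init__(self, channels=256, heads=8):
        super().__init__()
        self.cam_from_radar = nn.MultiheadAttention(channels, heads, batch_first=True)
        self.radar_from_cam = nn.MultiheadAttention(channels, heads, batch_first=True)
        self.local = nn.Sequential(
            nn.Conv2d(2 * channels, channels, 3, padding=1),
            nn.BatchNorm2d(channels),
            nn.ReLU(inplace=True),
        )

    def forward(self, cam_bev, radar_bev):
        # cam_bev, radar_bev: (B, C, H, W) features in a shared BEV grid
        b, c, h, w = cam_bev.shape
        cam_seq = cam_bev.flatten(2).transpose(1, 2)     # (B, H*W, C)
        radar_seq = radar_bev.flatten(2).transpose(1, 2)
        # Each modality queries the other (bidirectional cross-attention)
        cam_upd, _ = self.cam_from_radar(cam_seq, radar_seq, radar_seq)
        radar_upd, _ = self.radar_from_cam(radar_seq, cam_seq, cam_seq)
        cam_upd = cam_upd.transpose(1, 2).reshape(b, c, h, w)
        radar_upd = radar_upd.transpose(1, 2).reshape(b, c, h, w)
        # Convolutional block enforces local spatial relations before the 3D head
        return self.local(torch.cat([cam_upd, radar_upd], dim=1))

if __name__ == "__main__":
    fuse = BidirectionalBEVFusion(channels=64)
    cam = torch.randn(1, 64, 32, 32)
    radar = torch.randn(1, 64, 32, 32)
    print(fuse(cam, radar).shape)   # torch.Size([1, 64, 32, 32])
```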
Video-based object detection performs poorly in adverse weather, so the shortcomings of video input must be compensated and the robustness of the detection framework improved. To address this, this paper designs an object detection framework based on radar-video fusion: a YOLOv5 (You Only Look Once version 5) network extracts image feature maps and image detection boxes, density-based clustering produces radar detection boxes, and the radar data are encoded to obtain radar-based detection results. Finally, the two sets of detection boxes are overlaid to form new ROIs (Regions of Interest), and a classification vector fused with the radar information is obtained, improving detection accuracy in extreme weather. Experimental results show that the framework reaches an mAP (mean Average Precision) of 60.07% with only 7.64×10^6 parameters, indicating that it is lightweight, fast, and robust, and can be widely deployed on embedded and mobile platforms.
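The radar branch and the ROI overlay described above can be sketched as follows, assuming the radar detections have already been projected into the same coordinate frame as the camera boxes. The DBSCAN parameters, the IoU threshold, and the enclosing-box merge rule are illustrative assumptions, not values from the paper.

```python
import numpy as np
from sklearn.cluster import DBSCAN

def radar_boxes(points_xy, eps=1.5, min_samples=3):
    """Cluster radar detections with density-based clustering (DBSCAN) and
    return one axis-aligned box per cluster; label -1 marks noise."""
    labels = DBSCAN(eps=eps, min_samples=min_samples).fit_predict(points_xy)
    boxes = []
    for k in set(labels) - {-1}:
        pts = points_xy[labels == k]
        x0, y0 = pts.min(axis=0)
        x1, y1 = pts.max(axis=0)
        boxes.append([x0, y0, x1, y1])
    return np.array(boxes)

def merge_rois(cam_boxes, rad_boxes, iou_thr=0.3):
    """Hypothetical ROI fusion: enlarge a camera box to enclose any radar box
    that overlaps it sufficiently, giving a combined ROI."""
    def iou(a, b):
        ix0, iy0 = max(a[0], b[0]), max(a[1], b[1])
        ix1, iy1 = min(a[2], b[2]), min(a[3], b[3])
        inter = max(0.0, ix1 - ix0) * max(0.0, iy1 - iy0)
        union = ((a[2] - a[0]) * (a[3] - a[1]) +
                 (b[2] - b[0]) * (b[3] - b[1]) - inter)
        return inter / union if union > 0 else 0.0
    fused = []
    for c in cam_boxes:
        roi = c.copy()
        for r in rad_boxes:
            if iou(c, r) >= iou_thr:
                roi = [min(roi[0], r[0]), min(roi[1], r[1]),
                       max(roi[2], r[2]), max(roi[3], r[3])]
        fused.append(roi)
    return np.array(fused)

if __name__ == "__main__":
    pts = np.array([[10.0, 2.0], [10.5, 2.2], [10.2, 1.9], [10.8, 2.5], [30.0, 5.0]])
    cam = np.array([[9.9, 1.8, 10.6, 2.3]])   # one camera box in the same (assumed) frame
    print(merge_rois(cam, radar_boxes(pts)))  # -> [[ 9.9  1.8 10.8  2.5]]
```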
To address missed and false detections of on-road objects in autonomous driving, a 3D object detection model based on an improved CenterFusion is proposed. The model fuses camera information with radar features into multi-channel feature inputs, strengthening the robustness of the detection network and reducing missed detections. To obtain more accurate and richer 3D detection information, an improved attention mechanism is introduced to enhance the fusion of radar point clouds and visual information within the frustum grid, and an improved loss function is used to optimize the accuracy of bounding-box regression. The model is validated and compared on the nuScenes dataset; experimental results show that, relative to the original CenterFusion model, the proposed model improves mean Average Precision (mAP) by 1.3% and the nuScenes Detection Score (NDS) by 1.2%.
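The abstract does not detail the improved attention mechanism, so the following is only a hedged sketch of one plausible reading: a squeeze-and-excitation-style channel attention applied to the concatenated image features and pillar-expanded radar frustum channels before the 3D regression heads. The channel sizes follow the CenterFusion convention only loosely and are assumptions.

```python
import torch
import torch.nn as nn

class FrustumChannelAttention(nn.Module):
    """Hypothetical sketch of the 'improved attention' idea: re-weight the
    channels of concatenated image and frustum-radar feature maps
    (squeeze-and-excitation style) before the 3D regression heads."""
    def __init__(self, img_ch=64, radar_ch=3, reduction=4):
        super().__init__()
        ch = img_ch + radar_ch
        self.gate = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),          # squeeze: global spatial average per channel
            nn.Conv2d(ch, ch // reduction, 1),
            nn.ReLU(inplace=True),
            nn.Conv2d(ch // reduction, ch, 1),
            nn.Sigmoid(),                     # excitation: per-channel weights in (0, 1)
        )

    def forward(self, img_feat, radar_feat):
        # img_feat: (B, 64, H, W) image features; radar_feat: (B, 3, H, W)
        # assumed pillar-expanded radar depth/velocity channels in the frustum grid
        x = torch.cat([img_feat, radar_feat], dim=1)
        return x * self.gate(x)               # channel-wise re-weighting

if __name__ == "__main__":
    att = FrustumChannelAttention()
    out = att(torch.randn(2, 64, 112, 200), torch.randn(2, 3, 112, 200))
    print(out.shape)  # torch.Size([2, 67, 112, 200])
```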