Funding: Supported in part by the Major Project for New Generation of AI (2018AAA0100400), the National Natural Science Foundation of China (61836014, U21B2042, 62072457, 62006231), and the InnoHK Program.
Abstract: Monocular 3D object detection is challenging due to the lack of accurate depth information. Some methods estimate pixel-wise depth maps with off-the-shelf depth estimators and then use them as an additional input to augment the RGB images. Such depth-based methods either convert the estimated depth maps to pseudo-LiDAR and apply LiDAR-based object detectors, or focus on image-depth fusion learning. However, they show limited performance and efficiency because of depth inaccuracy and complex convolutional fusion schemes. Unlike these approaches, our proposed depth-guided vision transformer with normalizing flows (NF-DVT) network uses normalizing flows to build priors over depth maps and obtain more accurate depth information. We then develop a novel Swin-Transformer-based backbone with a fusion module that processes RGB image patches and depth map patches in two separate branches and fuses them with cross-attention so that the two branches exchange information. Furthermore, with the help of pixel-wise relative depth values in the depth maps, we design new relative position embeddings for the cross-attention mechanism to capture more accurate sequence ordering of the input tokens. Our method is the first Swin-Transformer-based backbone architecture for monocular 3D object detection. Experimental results on the KITTI and the challenging Waymo Open datasets show the effectiveness of our method and its superior performance over previous counterparts.
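To make the fusion step concrete, the sketch below shows one plausible way to implement cross-attention between RGB-branch tokens and depth-branch tokens, with a relative position bias computed from per-patch depth values. It is a minimal illustration under our own assumptions, not the authors' NF-DVT code; the module name `DepthGuidedCrossAttention` and the small MLP used to produce the depth bias are hypothetical.

```python
# Minimal sketch (not the authors' code) of cross-attention fusion between an RGB
# token branch and a depth-map token branch, with a relative position bias derived
# from pairwise differences of per-patch depth values.
import torch
import torch.nn as nn


class DepthGuidedCrossAttention(nn.Module):
    def __init__(self, dim: int, num_heads: int = 4):
        super().__init__()
        self.num_heads = num_heads
        self.scale = (dim // num_heads) ** -0.5
        self.q = nn.Linear(dim, dim)       # queries from the RGB branch
        self.kv = nn.Linear(dim, dim * 2)  # keys/values from the depth branch
        self.proj = nn.Linear(dim, dim)
        # maps a scalar relative-depth difference to one bias value per head
        self.depth_bias = nn.Sequential(nn.Linear(1, 32), nn.ReLU(), nn.Linear(32, num_heads))

    def forward(self, rgb_tokens, depth_tokens, token_depths):
        # rgb_tokens, depth_tokens: (B, N, C); token_depths: (B, N) mean depth per patch
        B, N, C = rgb_tokens.shape
        H = self.num_heads
        q = self.q(rgb_tokens).reshape(B, N, H, C // H).transpose(1, 2)
        k, v = self.kv(depth_tokens).reshape(B, N, 2, H, C // H).permute(2, 0, 3, 1, 4)
        attn = (q @ k.transpose(-2, -1)) * self.scale               # (B, H, N, N)
        # relative position bias from pairwise depth differences
        rel = (token_depths[:, :, None] - token_depths[:, None, :]).unsqueeze(-1)
        attn = attn + self.depth_bias(rel).permute(0, 3, 1, 2)      # (B, H, N, N)
        out = (attn.softmax(dim=-1) @ v).transpose(1, 2).reshape(B, N, C)
        return self.proj(out)
```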
Funding: Supported by the National Key Research and Development Program of China (2020YFB1807500), the National Natural Science Foundation of China (62072360, 62001357, 62172438, 61901367), the Key Research and Development Plan of Shaanxi Province (2021ZDLGY02-09, 2023-GHZD-44, 2023-ZDLGY-54), the Natural Science Foundation of Guangdong Province of China (2022A1515010988), the Key Project on Artificial Intelligence of Xi'an Science and Technology Plan (2022JH-RGZN-0003, 2022JH-RGZN-0103, 2022JH-CLCJ-0053), the Xi'an Science and Technology Plan (20RGZN0005), and the Proof-of-concept Fund from Hangzhou Research Institute of Xidian University (GNYZ2023QC0201).
Abstract: The high bandwidth and low latency of 6G network technology enable the successful application of monocular 3D object detection on vehicle platforms. Pseudo-LiDAR based on monocular 3D object detection is a low-cost, low-power alternative to LiDAR solutions in autonomous driving. However, this technique has two problems: (1) the poor quality of the generated Pseudo-LiDAR point clouds, caused by the nonlinear error distribution of monocular depth estimation, and (2) the weak representation capability of point cloud features, because existing LiDAR-based 3D detection networks neglect the global geometric structure of point clouds. We therefore propose a Pseudo-LiDAR confidence sampling strategy and a hierarchical geometric feature extraction module for monocular 3D object detection. We first design a point cloud confidence sampling strategy based on a 3D Gaussian distribution that assigns low confidence to points with large depth-estimation error and filters them out according to their confidence. We then present a hierarchical geometric feature extraction module that aggregates local neighborhood features and uses a dual transformer to capture global geometric features of the point cloud. Finally, our detection framework builds on Point-Voxel-RCNN (PV-RCNN), taking the high-quality Pseudo-LiDAR and enriched geometric features as input. Experimental results show that our method achieves satisfactory performance in monocular 3D object detection.
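As a concrete illustration of the confidence-sampling idea, the sketch below assigns each Pseudo-LiDAR point a confidence from an isotropic 3D Gaussian centered on a coarse object center and discards low-confidence points. This is a minimal sketch under assumed parameters (center, sigma, threshold), not the paper's actual 3D Gaussian formulation.

```python
# Minimal sketch (assumptions, not the paper's implementation) of confidence-based
# filtering of Pseudo-LiDAR points: each point receives a confidence from an
# isotropic 3D Gaussian around a coarse object center, and low-confidence points
# (likely produced by depth-estimation error) are dropped.
import numpy as np


def gaussian_confidence_filter(points: np.ndarray,
                               center: np.ndarray,
                               sigma: float = 1.0,
                               threshold: float = 0.1) -> np.ndarray:
    """points: (N, 3) Pseudo-LiDAR xyz; center: (3,) assumed coarse object center."""
    # squared Mahalanobis distance under an isotropic covariance sigma^2 * I
    d2 = np.sum((points - center) ** 2, axis=1) / (sigma ** 2)
    confidence = np.exp(-0.5 * d2)            # unnormalized 3D Gaussian density
    return points[confidence > threshold]     # keep only sufficiently confident points


if __name__ == "__main__":
    pts = np.random.randn(1000, 3) * 2.0      # toy noisy Pseudo-LiDAR points
    kept = gaussian_confidence_filter(pts, center=np.zeros(3), sigma=1.5, threshold=0.2)
    print(f"kept {kept.shape[0]} / {pts.shape[0]} points")
```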
Abstract: To address the poor performance of monocular 3D object detection under viewpoint-induced changes in object scale and under object occlusion, a new monocular 3D object detection method that fuses depth information with instance segmentation masks is proposed. First, a Depth-Mask Attention Fusion (DMAF) module combines depth information with instance segmentation masks to provide more accurate object boundaries. Second, dynamic convolution is introduced, and the fused features produced by the DMAF module guide the generation of the dynamic convolution kernels, so that objects of different scales can be handled. Third, a 2D-3D bounding-box consistency loss is added to the loss function, constraining the predicted 3D box to agree closely with its corresponding 2D detection box, which improves both instance segmentation and 3D detection. Finally, ablation studies verify the effectiveness of the method, and it is evaluated on the KITTI test set. Experimental results show that, compared with a method using only estimated depth maps and instance segmentation masks, the average precision for the car category at the moderate difficulty level improves by 6.36 percentage points, and the method outperforms D4LCN (Depth-guided Dynamic-Depthwise-Dilated Local Convolutional Network), M3D-RPN (Monocular 3D Region Proposal Network), and other baselines on both 3D object detection and bird's-eye-view detection.
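The 2D-3D consistency idea can be sketched as follows: project the eight corners of the predicted 3D box with the camera intrinsics, take the tight 2D envelope of the projections, and penalize its deviation from the predicted 2D box. This is a hypothetical minimal formulation for illustration, not the paper's exact loss.

```python
# Minimal sketch (hypothetical, not the paper's exact formulation) of a 2D-3D
# box consistency loss: the eight corners of a predicted 3D box are projected
# with the camera intrinsics K, and the tight axis-aligned envelope of the
# projections is compared against the predicted 2D box with an L1 penalty.
import torch


def box_consistency_loss(corners_3d: torch.Tensor,   # (B, 8, 3) camera-frame corners
                         box_2d: torch.Tensor,       # (B, 4) predicted [x1, y1, x2, y2]
                         K: torch.Tensor) -> torch.Tensor:  # (3, 3) camera intrinsics
    proj = corners_3d @ K.T                           # (B, 8, 3) homogeneous pixel coords
    uv = proj[..., :2] / proj[..., 2:].clamp(min=1e-6)
    x1y1 = uv.min(dim=1).values                       # tight envelope of projected corners
    x2y2 = uv.max(dim=1).values
    projected_box = torch.cat([x1y1, x2y2], dim=1)    # (B, 4)
    return torch.nn.functional.l1_loss(projected_box, box_2d)
```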
Abstract: A real-time, robust 3D dynamic object perception system is key to autonomous driving. A 3D object detection pipeline that fuses a monocular camera and LiDAR is proposed. First, a convolutional neural network performs 2D object detection on the image; a frustum region of interest (ROI) is generated from the geometric projection relationship, the point cloud inside the ROI is clustered, and a 3D bounding box is fitted. Then, 3D objects are matched across frames using appearance features and the Hungarian algorithm, and a tracker management model based on a four-state finite state machine is proposed. Finally, a trajectory prediction model that exploits lane information is designed to predict vehicle trajectories. Experimental results show that the detection stage achieves a precision of 92.5% and a recall of 86.7%. The trajectory prediction algorithm is tested on a simulated dataset; compared with existing algorithms, it yields smaller root-mean-square errors on straight, circular, and transition-curve lanes, and its average runtime is about 25 ms, meeting real-time requirements. The proposed approach is robust and effective, producing good results under different lane models.
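The frustum ROI step can be illustrated with a short sketch: project the LiDAR points into the image with the camera intrinsics and keep only the points whose projections fall inside a 2D detection box and that lie in front of the camera; these points then go on to clustering and 3D box fitting. This is an assumed minimal implementation, not the paper's code.

```python
# Minimal sketch (assumptions, not the paper's code) of frustum ROI extraction:
# LiDAR points are projected into the image plane, and only points whose
# projections fall inside a 2D detection box are kept for clustering.
import numpy as np


def frustum_roi_points(points: np.ndarray,      # (N, 3) LiDAR xyz in the camera frame
                       box_2d: np.ndarray,      # (4,) [x1, y1, x2, y2] from the 2D detector
                       K: np.ndarray) -> np.ndarray:  # (3, 3) camera intrinsics
    proj = points @ K.T                           # project to homogeneous pixel coordinates
    uv = proj[:, :2] / np.clip(proj[:, 2:3], 1e-6, None)
    x1, y1, x2, y2 = box_2d
    in_box = (uv[:, 0] >= x1) & (uv[:, 0] <= x2) & (uv[:, 1] >= y1) & (uv[:, 1] <= y2)
    in_front = points[:, 2] > 0                   # keep only points in front of the camera
    return points[in_box & in_front]              # candidates for clustering / 3D box fitting
```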
Abstract: Emerging 3D object detection technology plays a key role in autonomous driving: by providing environment perception and obstacle detection, it forms the basis for decision-making and control in autonomous driving systems. Many researchers have comprehensively examined the outstanding methodologies and results in this field. However, because the technology advances rapidly and is constantly updated, continuously tracking the latest progress and keeping up with the knowledge frontier is both a crucial task for the academic community and a foundation for tackling emerging challenges. This paper reviews results from the past two years and systematically presents the frontier theories in this direction. First, the background of 3D object detection is briefly introduced and related surveys are reviewed. Then, popular datasets such as KITTI (Karlsruhe Institute of Technology and Toyota Technological Institute at Chicago) are summarized in terms of data scale and diversity, and the evaluation principles of the corresponding benchmarks are introduced. Next, according to sensor type and number, dozens of recent detection methods are divided into five categories: monocular-based, stereo-based, multi-view-based, LiDAR-based, and multi-modal; each category is further subdivided according to model architecture or data preprocessing. Within each category, representative algorithms are briefly reviewed, the most recent state-of-the-art methods are surveyed in detail, and the category's potential prospects and current challenges are analyzed in depth. Finally, future research directions for 3D object detection are discussed.