In recent years many researchers have worked on vehicle detection, which improves Intelligent Transport Systems (ITS) and reduces road accidents. The major obstacles to the automatic detection of tiny vehicles are occlusion, environmental conditions, illumination, viewing angles, and variation in object size. This research centers on the detection and identification of tiny and partially occluded vehicles in challenging scenes, specifically in crowded areas. In this paper we present a comprehensive methodology for tiny vehicle detection using a deep neural network (DNN), namely CenterNet. DNNs typically disregard objects as small as 5 pixels, and more false positives are likely to occur in crowded areas. Deep learning detection models fall primarily into two categories: single-step and two-step. A single-step model performs detection in one forward pass, directly over a dense sampling of possible locations, whereas two-step models generate region proposals followed by object detection. In this research we scrutinize the recently proposed single-step state-of-the-art (SOTA) model CenterNet with three different feature extractors, ResNet-50, HourGlass-104, and ResNet-101, one by one. We train our model on the challenging KITTI dataset, where it outperforms the SOTA single-step techniques MSSD300* and SMOKE by 20.2% mAP and 13.2% mAP respectively. The effectiveness of CenterNet is justified by this large performance improvement. The performance of our model is evaluated on the KITTI (Karlsruhe Institute of Technology and Toyota Technological Institute) benchmark dataset with different backbones: ResNet-50 gives 62.3% mAP, ResNet-101 gives 82.5% mAP, and, last but not least, HourGlass-104 outperforms both with 98.2% mAP, so CenterNet-HourGlass-104 achieves the highest mAP among the feature extractors mentioned above. We also compare our model with other SOTA techniques.
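The anchor-free decoding step that distinguishes CenterNet from dense single-step detectors such as SSD can be sketched as follows. This is a minimal NumPy illustration of center-heatmap peak picking, not the paper's implementation; the heatmap values and threshold are made up for the example.

```python
import numpy as np

def decode_centers(heatmap, threshold=0.5):
    """Extract object centers from a CenterNet-style heatmap.

    A pixel is kept when its score exceeds the threshold and it is the
    maximum of its 3x3 neighbourhood -- the peak-picking step CenterNet
    uses in place of anchor boxes and non-maximum suppression.
    """
    h, w = heatmap.shape
    padded = np.pad(heatmap, 1, mode="constant", constant_values=-np.inf)
    centers = []
    for y in range(h):
        for x in range(w):
            score = heatmap[y, x]
            window = padded[y:y + 3, x:x + 3]  # 3x3 neighbourhood of (y, x)
            if score >= threshold and score == window.max():
                centers.append((y, x, float(score)))
    return centers

# Tiny synthetic heatmap with two true peaks and one weak response.
hm = np.zeros((8, 8))
hm[2, 2] = 0.9   # strong peak -> detected
hm[5, 6] = 0.7   # second peak -> detected
hm[5, 5] = 0.3   # suppressed: below threshold and not a local maximum
centers = decode_centers(hm)
print(centers)   # [(2, 2, 0.9), (5, 6, 0.7)]
```

In the full model, the heatmap is one output head of the backbone (ResNet-50, ResNet-101, or HourGlass-104), and separate heads regress the box size and center offset at each detected peak.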
3D object detection is an essential technology for autonomous driving, but the segmentation algorithms used by many 3D detection methods do not extract local features well, leading to unsatisfactory detection accuracy. To remedy this loss of local features, a 3D object recognition algorithm based on edge convolution is proposed. The algorithm takes LiDAR point clouds and RGB (red, green, blue) images as input, and filters the LiDAR points against the pixels of 2D candidate regions to generate frustum point clouds, which speeds up detection. In the segmentation stage, the Euclidean distance between a target point and each neighbouring point is computed on the local feature map of the point cloud and assigned to both points as an edge feature. Furthermore, as the convolutional neural network extracts features, the Euclidean distances between 3D points are recomputed on the new local feature map after every convolution, constructing new edge features for the points. This lets the edge features spread across the entire point cloud as the network computes, improving local feature extraction. The algorithm is validated on the KITTI (Karlsruhe Institute of Technology and Toyota Technological Institute) 3D point cloud dataset: segmentation accuracy reaches 92.82%, 2.30 percentage points higher than F-PointNet, and detection accuracy also improves, reaching 85.77%, 76.09%, and 53.08% for vehicles, bicycles, and pedestrians respectively. The experimental results demonstrate the feasibility of the algorithm, which can be applied to autonomous vehicles for localizing and detecting vehicles, pedestrians, and bicycles.
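The edge-feature construction described above can be sketched as follows. This is a minimal NumPy illustration of one step, assuming Euclidean k-nearest-neighbour edges; the real algorithm applies it to learned feature maps after every convolution, and the point coordinates here are invented for the example.

```python
import numpy as np

def edge_features(points, k=3):
    """Attach Euclidean-distance edge features to each point.

    For every point, find its k nearest neighbours and use the distances
    to them as edge features. Calling this again on the feature map
    produced by each convolution re-derives the edges, which is how the
    edge features spread through the point cloud layer by layer.
    """
    # Pairwise squared distances via the expansion ||a - b||^2.
    sq = np.sum(points ** 2, axis=1)
    d2 = sq[:, None] + sq[None, :] - 2.0 * points @ points.T
    d2 = np.maximum(d2, 0.0)             # guard against negative rounding error
    np.fill_diagonal(d2, np.inf)         # a point is not its own neighbour
    idx = np.argsort(d2, axis=1)[:, :k]  # k nearest neighbours per point
    dist = np.sqrt(np.take_along_axis(d2, idx, axis=1))
    return idx, dist

pts = np.array([[0.0, 0.0, 0.0],
                [1.0, 0.0, 0.0],
                [0.0, 1.0, 0.0],
                [5.0, 5.0, 5.0]])
idx, dist = edge_features(pts, k=2)
print(idx[0], dist[0])  # neighbours and edge features of point 0
```

In a frustum pipeline this would run only on the points surviving the 2D-candidate-region filter, which is why filtering first speeds up detection.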
A loop-closure detection algorithm, named SegGraph, is proposed for autonomous mobile robots equipped with 3D LiDAR performing simultaneous localization and mapping (SLAM) in outdoor scenes. As a key module of SLAM, loop-closure detection must decide whether the robot's current position is near a previously visited one. SegGraph consists of three steps: 1) for two point clouds acquired at different times, remove the ground plane from each and segment the remainder into point-cloud clusters by region growing; 2) build a weighted complete graph for each cloud, with the clusters as vertices and the distances between cluster centroids as edge weights; 3) decide whether the two complete graphs share a sufficiently large common subgraph. SegGraph's main novelty is using the edge weights (i.e., the inter-cluster distances) as the primary matching criterion when searching for the common subgraph. The reason is that noise in the point-cloud data causes clouds captured at nearby places to be segmented into very different cluster sets, so corresponding clusters across clouds cannot be matched directly; the distances between corresponding clusters, however, are largely unaffected by segmentation. The main contributions include an efficient approximate algorithm for deciding whether two cluster graphs share a sufficiently large common subgraph, a complete implementation of SegGraph, and an evaluation of its accuracy and runtime efficiency on the widely used public KITTI (Karlsruhe Institute of Technology and Toyota Technological Institute) dataset. Experimental results show that SegGraph achieves good accuracy and runtime efficiency.
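Steps 2 and 3 above can be sketched as follows. This is a minimal NumPy illustration, not the paper's approximate common-subgraph algorithm: it builds the weighted complete graph over cluster centroids and then greedily counts edges whose lengths agree within a tolerance, and the centroids, tolerance, and translation are invented for the example.

```python
import numpy as np
from itertools import combinations

def centroid_graph(centroids):
    """Weighted complete graph over cluster centroids (step 2).

    Maps each vertex pair (i, j) to the Euclidean distance between the
    two centroids -- the edge weight SegGraph matches on.
    """
    return {(i, j): float(np.linalg.norm(centroids[i] - centroids[j]))
            for i, j in combinations(range(len(centroids)), 2)}

def edge_overlap(g1, g2, tol=0.1):
    """Greedy stand-in for the common-subgraph test (step 3).

    Each edge of g1 may consume at most one unmatched edge of g2 of
    similar length; a large count suggests the two scans overlap.
    """
    remaining = list(g2.values())
    matched = 0
    for w in g1.values():
        for k, v in enumerate(remaining):
            if abs(w - v) <= tol:
                matched += 1
                del remaining[k]
                break
    return matched

a = np.array([[0.0, 0.0], [4.0, 0.0], [0.0, 3.0]])   # cluster centroids, scan 1
b = a + np.array([10.0, -2.0])                        # same scene, another pose
n_matched = edge_overlap(centroid_graph(a), centroid_graph(b))
print(n_matched)  # all 3 edge lengths agree despite the pose change
```

Because edge weights are invariant to rigid motion of the robot, the translated copy matches perfectly here even though no vertex correspondence was given, which is the intuition behind matching on inter-cluster distances.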