The airport apron scene contains rich contextual information about spatial position relationships. Traditional object detectors consider only visual appearance and ignore this contextual information. In addition, the detection accuracy for some categories in the apron dataset is low. Therefore, an improved object detection method using spatial-aware features in apron scenes, called SA-FRCNN, is presented. The method uses graph convolutional networks to capture the relative spatial relationships between objects in the apron scene and incorporates this spatial context into feature learning. Moreover, an attention mechanism is introduced into the feature extraction process to focus on spatial positions and key features, and the distance-IoU loss is used to achieve more accurate bounding-box regression. The experimental results show that the mean average precision of apron object detection based on SA-FRCNN reaches 95.75%, and the detection of some hard-to-detect categories is significantly improved. The proposed method effectively improves detection accuracy on the apron dataset and holds a clear advantage over other methods.
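The distance-IoU (DIoU) loss mentioned in the abstract augments the standard IoU term with a normalized penalty on the distance between the predicted and ground-truth box centers. The sketch below only illustrates that general formula, L = 1 - IoU + d²/c², for axis-aligned boxes; the box layout and function name are assumptions for illustration, not the paper's implementation.

```python
def diou_loss(pred, target):
    """Distance-IoU loss for boxes given as (x1, y1, x2, y2).

    Illustrative sketch: L_DIoU = 1 - IoU + rho^2(b, b_gt) / c^2, where rho is
    the distance between box centers and c is the diagonal of the smallest box
    enclosing both boxes.
    """
    px1, py1, px2, py2 = pred
    tx1, ty1, tx2, ty2 = target

    # Intersection and union for the IoU term
    ix1, iy1 = max(px1, tx1), max(py1, ty1)
    ix2, iy2 = min(px2, tx2), min(py2, ty2)
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_p = (px2 - px1) * (py2 - py1)
    area_t = (tx2 - tx1) * (ty2 - ty1)
    iou = inter / (area_p + area_t - inter + 1e-9)

    # Squared distance between box centers
    pcx, pcy = (px1 + px2) / 2, (py1 + py2) / 2
    tcx, tcy = (tx1 + tx2) / 2, (ty1 + ty2) / 2
    center_dist2 = (pcx - tcx) ** 2 + (pcy - tcy) ** 2

    # Squared diagonal of the smallest enclosing box
    cx1, cy1 = min(px1, tx1), min(py1, ty1)
    cx2, cy2 = max(px2, tx2), max(py2, ty2)
    diag2 = (cx2 - cx1) ** 2 + (cy2 - cy1) ** 2 + 1e-9

    return 1.0 - iou + center_dist2 / diag2


# Example: a predicted box slightly offset from its ground truth
print(diou_loss((10, 10, 50, 50), (12, 14, 52, 54)))
```

Unlike the plain IoU loss, this penalty still provides a gradient when the two boxes do not overlap, which is what makes the regression more accurate.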
The autonomous and safe operation of airport special-purpose vehicles is essential to the safety of the airfield area. At present, such vehicles are operated mainly by drivers under the visual guidance of airport controllers, which relies heavily on human effort and offers little autonomy. To improve safety and autonomy, this paper proposes an object recognition method for airport special-vehicle operations based on 3D point cloud segmentation. First, a point cloud dataset of the airfield environment, 3A-PCD (Airfield Area of Airport Point Cloud Data), is constructed using simulation. Second, a semantic segmentation network for large-scale point clouds, 3A-Net (Semantic Segmentation Network of Airfield Area of Airport), is designed on the basis of PointNet++, combining a sampling-point spatial encoding (SPSE) module and an attention pooling (AP) module to address the limited segmentation accuracy of traditional segmentation networks and their insufficient preservation of fine object details. Finally, experiments are conducted on the 3A-PCD dataset. The ablation results show that adding the SPSE module raises the segmentation accuracy (MIoU) by 6 percentage points, and adding the AP module raises the MIoU by 3.9 percentage points; compared with the baseline PointNet++, 3A-Net improves the MIoU by 6.7 percentage points. Compared with six existing state-of-the-art semantic segmentation models, the proposed model performs better to varying degrees and is more suitable for object recognition in large outdoor scenes.
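The attention pooling idea referred to above replaces simple max pooling over a sampled point's neighbourhood with a learned, score-weighted sum of neighbour features. The following sketch only illustrates that general mechanism; the feature dimensions, scoring projection, and function name are assumptions for illustration, not the 3A-Net implementation.

```python
import numpy as np

def attention_pool(neighbor_feats, w_score):
    """Aggregate K neighbor features of shape (K, C) into one (C,) vector.

    Each neighbor gets a learned score; a softmax over the scores yields
    attention weights, and the output is the weighted sum of the features.
    The scoring projection w_score of shape (C, 1) stands in for the module's
    learned parameters.
    """
    scores = neighbor_feats @ w_score                 # (K, 1) raw scores
    scores = scores - scores.max()                    # numerical stability
    weights = np.exp(scores) / np.exp(scores).sum()   # (K, 1) softmax weights
    return (weights * neighbor_feats).sum(axis=0)     # (C,) pooled feature


# Example: pool 16 neighbor features of dimension 32 around one sampled point
rng = np.random.default_rng(0)
feats = rng.normal(size=(16, 32)).astype(np.float32)
w = rng.normal(size=(32, 1)).astype(np.float32)
print(attention_pool(feats, w).shape)  # (32,)
```

Weighting neighbours this way lets the network keep contributions from several informative points instead of discarding everything but the maximum, which is what helps preserve fine object details.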
Funding: supported by the Fundamental Research Funds for Central Universities of the Civil Aviation University of China (No. 3122021088).