Abstract
In response to the challenge of inefficient recognition of typical structures in large-scale urban road point cloud environments, this paper proposes a self-projection attention based point cloud recognition model, U-RandLA. The model uses a point cloud projection algorithm to obtain a self-projection map of the road point cloud and employs a two-dimensional image convolutional network branch, U-Proj, to extract features from this self-projection map. It then generates attention distribution maps that strengthen the model's recognition of typical structures and improve the focal-area perception of existing point cloud recognition algorithms. By fusing the original point cloud information with features from the attention distribution map, which has a large receptive field, the model expands its initial receptive field, addressing the narrow receptive field of existing algorithms and enhancing information extraction for large-scale typical structures. Experimental results show that the U-RandLA model achieves an average recognition accuracy of 97.7% for typical structures and a mean intersection over union of 64.4%. The model has improved production efficiency in practical projects and has been successfully applied to the intelligent extraction of urban road components in Zhejiang Province, Shanghai, Shandong Province, and Chongqing.
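For illustration only, the sketch below shows one plausible reading of the fusion idea described in the abstract: points are rasterized into a 2D self-projection image, a small 2D convolutional branch (standing in for U-Proj) predicts a per-pixel attention map, and the attention value at each point's pixel is concatenated with the raw point features before a point-wise fusion layer. The bird's-eye-view projection, layer sizes, and the module name `SelfProjectionAttention` are assumptions for demonstration, not the paper's actual implementation.

```python
# Minimal sketch (not the authors' released code) of a self-projection attention
# fusion: project points to a 2D image, run a small 2D CNN branch to get an
# attention map, sample the attention back per point, and fuse it with the raw
# point features. All design choices here are illustrative assumptions.
import torch
import torch.nn as nn


class SelfProjectionAttention(nn.Module):
    def __init__(self, grid_size=64, point_feat_dim=6, out_dim=32):
        super().__init__()
        self.grid_size = grid_size
        # Tiny 2D CNN branch standing in for the U-Proj image branch.
        self.proj_branch = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 1, 3, padding=1), nn.Sigmoid(),  # attention in [0, 1]
        )
        # Point-wise MLP that fuses raw point features with the sampled attention.
        self.fuse = nn.Sequential(nn.Linear(point_feat_dim + 1, out_dim), nn.ReLU())

    def forward(self, xyz, feats):
        # xyz: (B, N, 3) point coordinates; feats: (B, N, C) raw point features.
        B, N, _ = xyz.shape
        # Bird's-eye-view occupancy image as the "self-projection" of the cloud.
        xy = xyz[..., :2]
        mins = xy.min(dim=1, keepdim=True).values
        maxs = xy.max(dim=1, keepdim=True).values
        idx = ((xy - mins) / (maxs - mins + 1e-6) * (self.grid_size - 1)).long()
        flat = idx[..., 1] * self.grid_size + idx[..., 0]        # (B, N) pixel ids
        img = torch.zeros(B, 1, self.grid_size, self.grid_size, device=xyz.device)
        img.view(B, -1).scatter_(1, flat, 1.0)                   # mark occupied pixels
        # Per-pixel attention from the 2D branch, sampled back to each point.
        attn = self.proj_branch(img)                             # (B, 1, H, W)
        attn_per_point = attn.view(B, -1).gather(1, flat)        # (B, N)
        # Fuse the attention weight with the raw point features.
        return self.fuse(torch.cat([feats, attn_per_point.unsqueeze(-1)], dim=-1))


if __name__ == "__main__":
    model = SelfProjectionAttention()
    xyz = torch.rand(2, 1024, 3)
    feats = torch.rand(2, 1024, 6)
    print(model(xyz, feats).shape)  # torch.Size([2, 1024, 32])
```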
Authors
YANG Ying
ZOU Wenming
HUANG Kaixiang
WANG Jin
CHEN Yuzhen
JIN Zhao
YANG Ying; ZOU Wenming; HUANG Kaixiang; WANG Jin; CHEN Yuzhen; JIN Zhao (Zhejiang Institute of Surveying and Mapping Science and Technology, Hangzhou 310000, China; State Key Laboratory of Fluid Power & Mechatronic Systems, Zhejiang University, Hangzhou 310027, China)
Source
Science of Surveying and Mapping (《测绘科学》)
CSCD
Peking University Core Journal (北大核心)
2024, No. 3, pp. 67-76 (10 pages)
Funding
Zhejiang Provincial Natural Science Foundation (LTGG23D010001).