Abstract
The performance of robots in complex environments is closely related to their interaction with the environment, yet traditional geometric mapping cannot adequately capture detailed environmental information. Moreover, existing environment models for mobile robots are typically binary laser maps or small-scale semantic maps with poor real-time performance, and a lightweight environment model capable of carrying diverse terrain information is still lacking. To address this problem, this study proposes a lightweight terrain map building approach that fuses laser and vision (LTMB-LV). Based on temporal and spatial synchronization, the method extracts semantic terrain information with a lightweight network based on an improved CSPResnet and fuses it with point clouds to generate semantic point clouds carrying terrain information, thereby building a terrain map with terrain descriptions. Meanwhile, to improve real-time mapping performance through a parallel strategy, an improved ICP algorithm is adopted to optimize point cloud registration, and a local submap stitching method is used to build terrain maps of large-scale scenes. Experimental results in real environments demonstrate that the proposed approach can effectively recognize multiple typical terrain types and construct lightweight terrain maps under limited onboard computational power.
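The fusion step summarized in the abstract, i.e., attaching per-pixel terrain labels from the segmentation network to synchronized LiDAR points, can be illustrated with the minimal sketch below. This is not the authors' implementation; it assumes a hypothetical label image produced by the segmentation head, known camera intrinsics K, and a LiDAR-to-camera extrinsic T_cam_lidar, and simply projects each point into the image to pick up a terrain class, yielding a semantic point cloud.

```python
# Minimal sketch (not the paper's code): label LiDAR points with terrain
# classes by projecting them into a synchronized segmentation image.
# T_cam_lidar, K and label_image are illustrative assumptions.
import numpy as np


def label_point_cloud(points_lidar: np.ndarray,
                      label_image: np.ndarray,
                      K: np.ndarray,
                      T_cam_lidar: np.ndarray) -> np.ndarray:
    """Return (M, 4) rows [x, y, z, terrain_label] for points that project
    inside the image; label_image holds per-pixel class ids from the
    segmentation network (e.g., an improved-CSPResnet head)."""
    n = points_lidar.shape[0]
    # Homogeneous coordinates, then transform LiDAR points into the camera frame.
    pts_h = np.hstack([points_lidar, np.ones((n, 1))])       # (N, 4)
    pts_cam = (T_cam_lidar @ pts_h.T).T[:, :3]                # (N, 3)

    # Keep only points in front of the camera.
    in_front = pts_cam[:, 2] > 0.1
    pts_cam = pts_cam[in_front]
    kept_xyz = points_lidar[in_front]

    # Pinhole projection with intrinsics K.
    uv = (K @ pts_cam.T).T
    uv = uv[:, :2] / uv[:, 2:3]
    u = np.round(uv[:, 0]).astype(int)
    v = np.round(uv[:, 1]).astype(int)

    # Discard points that fall outside the image bounds.
    h, w = label_image.shape[:2]
    inside = (u >= 0) & (u < w) & (v >= 0) & (v < h)

    labels = label_image[v[inside], u[inside]].astype(np.float64)
    return np.hstack([kept_xyz[inside], labels[:, None]])


if __name__ == "__main__":
    # Toy example: 5 random points in front of the camera, a 480x640 label
    # image with 4 terrain classes, identity extrinsics.
    rng = np.random.default_rng(0)
    pts = rng.uniform(-1.0, 1.0, size=(5, 3)) + np.array([0.0, 0.0, 3.0])
    seg = rng.integers(0, 4, size=(480, 640))
    K = np.array([[525.0, 0.0, 320.0],
                  [0.0, 525.0, 240.0],
                  [0.0, 0.0, 1.0]])
    T = np.eye(4)
    print(label_point_cloud(pts, seg, K, T))
```

In the paper's pipeline such labeled points would then be registered (via the improved ICP) and accumulated into local submaps before stitching; the sketch covers only the labeling step.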
Authors
李雅雯
张波涛
仲朝亮
吕强
LI Yawen; ZHANG Botao; ZHONG Chaoliang; LYU Qiang (School of Automation, Hangzhou Dianzi University, Hangzhou 310018, China; International Joint Research Laboratory for Autonomous Robotic Systems, Hangzhou Dianzi University, Hangzhou 310018, China)
Source
《计算机科学》
CSCD
Peking University Core Journals (北大核心)
2024, No. S02, pp. 399-407 (9 pages)
Computer Science
Funding
National Natural Science Foundation of China (62073108)
Natural Science Foundation of Zhejiang Province (LZ23F030004)
Key Project of the Fundamental Research Funds for the Provincial Universities of Zhejiang (GK229909299001-004).