Abstract
Accurate pose estimation of space non-cooperative targets with a monocular camera is crucial to space debris removal, autonomous rendezvous, and other on-orbit services. However, monocular pose estimation methods lack depth information, resulting in a scale uncertainty issue that significantly reduces their accuracy and real-time performance. We first propose a multi-scale attention block (MAB) to extract complex high-dimensional semantic features from the input image. Second, based on the MAB module, we propose a dense multi-scale attention network (DMANet) for estimating the 6-degree-of-freedom (DoF) pose of space non-cooperative targets, which consists of planar position estimation, depth position estimation, and attitude estimation branches. By introducing an Euler angle-based soft classification method, we formulate the pose regression problem as a classical classification problem. Besides, we design a space non-cooperative object model and construct a pose estimation dataset using CoppeliaSim. Finally, we thoroughly evaluate the proposed method on the SPEED+ and URSO datasets and our dataset, in comparison with other state-of-the-art methods. Experimental results demonstrate that the DMANet achieves excellent pose estimation accuracy.
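The Euler angle-based soft classification idea can be sketched as follows: each Euler angle is discretized into bins, the ground-truth angle is encoded as a soft (Gaussian-weighted) distribution over neighboring bins rather than a one-hot label, and the predicted distribution is decoded back to a continuous angle. The bin count, Gaussian width, and circular-mean decoding below are illustrative assumptions, not the paper's exact settings.

```python
import numpy as np

def soft_label(angle_deg, n_bins=360, sigma=3.0):
    """Gaussian soft label over discretized angle bins.

    n_bins and sigma are hypothetical hyperparameters; bins span
    [0, 360) degrees and distances wrap around the circle.
    """
    centers = np.arange(n_bins) * (360.0 / n_bins)
    d = np.abs(centers - (angle_deg % 360.0))
    d = np.minimum(d, 360.0 - d)           # circular distance to each bin
    w = np.exp(-0.5 * (d / sigma) ** 2)    # Gaussian weighting
    return w / w.sum()                     # normalize to a distribution

def decode(probs, n_bins=360):
    """Decode a bin distribution to a continuous angle via the
    circular mean direction, avoiding the 0/360 wrap-around problem."""
    centers = np.deg2rad(np.arange(n_bins) * (360.0 / n_bins))
    s = np.sum(probs * np.sin(centers))
    c = np.sum(probs * np.cos(centers))
    return np.rad2deg(np.arctan2(s, c)) % 360.0
```

A network trained with a cross-entropy loss against such soft labels still solves a classification problem, yet the Gaussian smoothing preserves the ordinal relationship between neighboring bins, which plain one-hot labels discard.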
Authors
ZHANG Zhao; HU Yuhui; ZHOU Dong; WU Ligang; YAO Weiran; LI Peng (School of Astronautics, Harbin Institute of Technology, Harbin 150001, P.R. China)
Keywords
6-degree-of-freedom (DoF) pose estimation
space non-cooperative object
multi-scale attention
deep learning
neural network