In an asteroid sample-return mission, accurate position estimation of the spacecraft relative to the asteroid is essential for landing at the target point. During the Hayabusa and Hayabusa2 missions, the main part of the visual position estimation procedure was performed by human operators on Earth, based on a sequence of asteroid images acquired and sent by the spacecraft. Although this approach is still adopted in critical space missions, there is increasing demand for automated visual position estimation to reduce the time and cost of human intervention. In this paper, we propose a method for estimating the relative position of the spacecraft and the asteroid during the descent phase for touchdown from an image sequence, using state-of-the-art techniques for image processing, feature extraction, and structure from motion. We apply this method to real images of Ryugu taken by Hayabusa2 at altitudes from 20 km down to 500 m. We demonstrate that the method has practical relevance at altitudes in the range of 5 km to 1 km. This result indicates that our method could improve the efficiency of ground operations during global mapping and navigation in the touchdown sequence, whereas full automation and autonomous on-board estimation are beyond the scope of this study. Furthermore, we discuss the challenges of developing a completely automatic position estimation framework.
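The abstract mentions structure from motion as the core tool for relative position estimation from an image sequence. As a minimal illustration of one step in such a pipeline (not the authors' actual method), the sketch below estimates the essential matrix between two views from matched, intrinsics-normalized feature points via the classical eight-point algorithm, and recovers the baseline (translation) direction up to scale and sign. All point data here are synthetic assumptions for demonstration; a real pipeline would use detected and matched image features and robust estimation (e.g., RANSAC).

```python
import numpy as np

def eight_point_essential(x1, x2):
    """Estimate the essential matrix E from >= 8 normalized correspondences.

    x1, x2 : (N, 2) arrays of K^-1-normalized image coordinates in the first
    and second view, satisfying the epipolar constraint x2_h^T E x1_h = 0.
    """
    ones = np.ones(len(x1))
    # Each correspondence contributes one row of the linear system A e = 0.
    A = np.column_stack([
        x2[:, 0] * x1[:, 0], x2[:, 0] * x1[:, 1], x2[:, 0],
        x2[:, 1] * x1[:, 0], x2[:, 1] * x1[:, 1], x2[:, 1],
        x1[:, 0],            x1[:, 1],            ones,
    ])
    _, _, Vt = np.linalg.svd(A)
    E = Vt[-1].reshape(3, 3)          # null vector of A, reshaped to 3x3
    # Project onto the essential-matrix manifold: singular values (1, 1, 0).
    U, _, Vt = np.linalg.svd(E)
    return U @ np.diag([1.0, 1.0, 0.0]) @ Vt

def translation_direction(E):
    """Baseline direction: the left null vector of E (defined up to sign)."""
    U, _, _ = np.linalg.svd(E)
    return U[:, 2]

# --- Synthetic two-view check (hypothetical data): pure translation ------
rng = np.random.default_rng(0)
P1 = np.column_stack([rng.uniform(-1, 1, 20),
                      rng.uniform(-1, 1, 20),
                      rng.uniform(4, 8, 20)])   # 3D points, camera-1 frame
t_true = np.array([0.2, 0.0, -0.1])            # camera motion (R = I here)
P2 = P1 + t_true                                # same points, camera-2 frame
x1 = P1[:, :2] / P1[:, 2:]                      # normalized projections
x2 = P2[:, :2] / P2[:, 2:]

E = eight_point_essential(x1, x2)
t_est = translation_direction(E)                # unit vector, up to sign
```

In noise-free synthetic conditions, `t_est` aligns with `t_true` up to sign and scale; monocular structure from motion can never recover the absolute baseline length, which is one reason the spacecraft's altimeter data (or known asteroid scale) must be combined with visual estimates in practice.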
Funding: This work was partially supported by JSPS KAKENHI Grant No. 18H01628.