Funding: Supported by the National Natural Science Foundation of China (Nos. 61822101 and 62061130221), the Beijing Municipal Key Research and Development Program (No. Z181100004618006), the Beijing Municipal Natural Science Foundation (No. L191001), the Zhuoyue Program of Beihang University (Postdoctoral Fellowship) (No. 262716), and the China Postdoctoral Science Foundation (No. 2020M680299).
Abstract: Self-driving vehicles require extensive testing to prevent fatal accidents and to ensure their appropriate operation in the physical world. However, conducting vehicle tests on the road is difficult because such tests are expensive and labor intensive. In this study, we used an autonomous-driving simulator and investigated the three-dimensional environmental perception problem of the simulated system. Using the open-source CARLA simulator, we generated CarlaSim, a dataset built from virtual traffic scenarios and comprising 15,000 camera-LiDAR (Light Detection and Ranging) samples with annotations and calibration files. We then developed a Multi-Sensor Fusion Perception (MSFP) model that consumes the two-modal data and detects objects in the scenes. Furthermore, we conducted experiments on the KITTI and CarlaSim datasets; the results demonstrate the effectiveness of the proposed methods in terms of perception accuracy, inference efficiency, and generalization performance. The results of this study will facilitate the future development of simulated tests for autonomous driving.
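The calibration files mentioned above are what tie the two modalities together: camera-LiDAR fusion typically starts by projecting each LiDAR point into the camera image plane via the extrinsic transform and the camera intrinsics. The sketch below illustrates that standard projection step only; the function name, matrices, and sample values are illustrative assumptions, not the paper's MSFP code or the CarlaSim calibration format.

```python
import numpy as np

def project_lidar_to_image(points_lidar, T_cam_from_lidar, K):
    """Project 3-D LiDAR points into the camera image plane.

    points_lidar:     (N, 3) points in the LiDAR frame.
    T_cam_from_lidar: (4, 4) extrinsic transform (from a calibration file).
    K:                (3, 3) camera intrinsic matrix.
    Returns (N, 2) pixel coordinates and a mask of points in front of the camera.
    """
    n = points_lidar.shape[0]
    pts_h = np.hstack([points_lidar, np.ones((n, 1))])   # homogeneous coordinates
    pts_cam = (T_cam_from_lidar @ pts_h.T).T[:, :3]      # LiDAR frame -> camera frame
    in_front = pts_cam[:, 2] > 0                         # keep points with positive depth
    uvw = (K @ pts_cam.T).T
    uv = uvw[:, :2] / uvw[:, 2:3]                        # perspective divide
    return uv, in_front

# Hypothetical calibration: identity extrinsics and a simple pinhole camera
# with focal length 500 px and principal point (320, 240).
K = np.array([[500.0,   0.0, 320.0],
              [  0.0, 500.0, 240.0],
              [  0.0,   0.0,   1.0]])
T = np.eye(4)
pts = np.array([[0.0, 0.0, 10.0],    # on the optical axis -> image center
                [1.0, 0.0, 10.0]])   # 1 m to the side, 10 m ahead
uv, mask = project_lidar_to_image(pts, T, K)
print(uv)  # → [[320. 240.] [370. 240.]]
```

Once each point has pixel coordinates, image features (or semantic labels) at those pixels can be attached to the point cloud, which is the usual entry point for two-modal detection pipelines.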