Road lanes and markings form the basis of environment perception for autonomous driving. In this paper, we propose an end-to-end multi-task network, the Road All Information Extractor (RAIENet), which aims to extract the full information of the road surface, including road lanes, road markings, and their correspondences. Based on prior knowledge of pavement information, we explore and exploit the deep progressive relationship between lane segmentation and pavement marking detection, and adapt different attention mechanisms to the different tasks. RAIENet achieves a lane detection F1-score of 0.807 and a road marking mean average precision of 0.971 at an intersection-over-union (IoU) threshold of 0.5 on the newly labeled See More on Roads plus (CeyMo+) dataset. We also validate it on two well-known datasets, Berkeley DeepDrive 100K (BDD100K) and CULane. In addition, we propose a post-processing method that generates bird's-eye-view lanes (BEVLane) from lidar point cloud information, for use in high-definition map construction and subsequent decision-making and planning. The code and data are available at https://github.com/mayberpf/RAIEnet.
Funding: This work was supported by the Key R&D Program of Shandong Province, China (No. 2020CXGC010118) and by the Advanced Technology Research Institute, Beijing Institute of Technology (BITAI).
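The abstract names the BEVLane post-processing step but does not describe it. As a rough illustration only, the Python sketch below shows one common way such a lidar-based lift could work: project calibrated lidar points into the image, keep the returns that land on lane-mask pixels, and read off their ground-plane coordinates as BEV lane points. The function lanes_to_bev, the calibration inputs T_lidar_to_cam and K, and the binary lane_mask format are assumptions made for illustration, not the paper's actual method.

import numpy as np

def lanes_to_bev(lane_mask, points_lidar, T_lidar_to_cam, K):
    """Hypothetical sketch: lift image-space lane pixels to a BEV point set.

    lane_mask      : (H, W) binary lane segmentation from the camera branch
    points_lidar   : (N, 3) lidar points in the lidar frame
    T_lidar_to_cam : (4, 4) extrinsic transform, lidar frame -> camera frame
    K              : (3, 3) camera intrinsic matrix
    """
    # Homogeneous lidar points -> camera frame
    pts_h = np.hstack([points_lidar, np.ones((len(points_lidar), 1))])
    pts_cam = (T_lidar_to_cam @ pts_h.T).T[:, :3]
    pts_cam = pts_cam[pts_cam[:, 2] > 0.1]      # keep points in front of the camera

    # Pinhole projection to pixel coordinates
    uv = (K @ pts_cam.T).T
    uv = uv[:, :2] / uv[:, 2:3]
    u, v = uv[:, 0].astype(int), uv[:, 1].astype(int)

    # Keep only returns that fall on lane-mask pixels
    h, w = lane_mask.shape
    valid = (u >= 0) & (u < w) & (v >= 0) & (v < h)
    on_lane = np.zeros(len(u), dtype=bool)
    on_lane[valid] = lane_mask[v[valid], u[valid]] > 0

    # BEV coordinates: (forward, lateral) = (z, x) in the camera frame
    return pts_cam[on_lane][:, [2, 0]]

The returned (forward, lateral) pairs would then typically be clustered per lane and fitted with a polynomial or spline before being written into a high-definition map layer.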