Lane detection is an important aspect of autonomous driving, aiming to ensure that vehicles accurately understand road structures and to improve their ability to drive in complex traffic environments. In recent years, lane detection tasks based on deep learning methods have made significant progress in detection accuracy. In this paper, we provide a comprehensive review of deep learning-based lane detection tasks in recent years. First, we introduce the background of the lane detection task, including lane detection itself, the lane datasets, and the factors affecting lane detection. Second, we review the traditional and deep learning methods for lane detection, and analyze their features in detail while classifying the different methods. In the deep learning methods classification section, we explore five main categories: segmentation-based, object detection-based, parametric curve, end-to-end, and keypoint-based methods. Then, some typical models are briefly compared and analyzed. Finally, based on a comprehensive consideration of current lane detection methods, we point out the problems still faced, such as model generalization and computational cost. At the same time, possible future research directions are given for extreme scenarios, model generalization, and other issues.
Lane detection is a fundamental aspect of most current advanced driver assistance systems (ADASs). A large number of existing results focus on the study of vision-based lane detection methods due to the extensive knowledge background and the low cost of camera devices. In this paper, previous vision-based lane detection studies are reviewed in terms of three aspects: lane detection algorithms, integration, and evaluation methods. Next, considering the inevitable limitations of camera-based lane detection systems, the system integration methodologies for constructing more robust detection systems are reviewed and analyzed. The integration methods are further divided into three levels, namely algorithm, system, and sensor. The algorithm level combines different lane detection algorithms, while the system level integrates other object detection systems to comprehensively detect lane positions. The sensor level uses multi-modal sensors to build a robust lane recognition system. In view of the complexity of evaluating detection systems, and the lack of a common evaluation procedure and uniform metrics in past studies, the existing evaluation methods and metrics are analyzed and classified to propose a better evaluation of lane detection systems. Next, a comparison of representative studies is performed. Finally, the limitations of current lane detection systems are discussed, and a future development trend toward an Artificial Society, Computational experiment-based parallel lane detection framework is proposed.
This paper proposes a novel method of lane detection, which adopts VGG16 as the basis of a convolutional neural network and extracts lane line features with dilated (atrous) convolution, where the lane lines are divided into dashed lines and solid lines. Dilated convolution expands the receptive field, so the fully connected layers of the network are discarded, the last max-pooling layer of VGG16 is removed, and the last three convolution layers are replaced by dilated convolutions. At the same time, the CNN adopts an encoder-decoder structure and, in the decoder, uses the indices of the max-pooling layers to upsample the encoder features by unpooling, realizing semantic segmentation. This is combined with instance segmentation, and lane lines are finally detected through curve fitting. In addition, the currently available lane line datasets are relatively small and do not distinguish between solid and dashed lane lines. To this end, our work builds a lane line dataset for distinguishing dashed from solid markings, and the proposed algorithm is effectively verified on this dataset with improved segmentation. The final tests show that the proposed method achieves a good balance between lane detection speed and accuracy and has good robustness. (Funding: the National Natural Science Foundation of China (61772386); Joint Fund project (NSFC-Guangdong Big Data Science Center Project, U1611262); Hubei University of Science and Technology Master of Engineering special construction project (2018-19GZ01); Hubei University of Science and Technology Teaching Reform Project (2018-XB-023); S201910927028.)
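The encoder-decoder idea above can be sketched briefly. The PyTorch snippet below is a minimal illustration only, assuming a small VGG-style encoder, dilated convolutions in place of further pooling, and SegNet-style unpooling with stored max-pool indices; the layer sizes and the three-class output (background/dashed/solid) are assumptions, not the paper's exact configuration.

```python
import torch
import torch.nn as nn

class DilatedEncoderDecoder(nn.Module):
    """Toy encoder-decoder: dilated convs instead of extra pooling, and
    SegNet-style unpooling with the stored max-pooling indices."""
    def __init__(self, num_classes=3):  # background / dashed / solid (assumed)
        super().__init__()
        self.enc = nn.Sequential(
            nn.Conv2d(3, 64, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(64, 64, 3, padding=1), nn.ReLU(inplace=True),
        )
        self.pool = nn.MaxPool2d(2, stride=2, return_indices=True)
        # Dilated convolutions enlarge the receptive field without further pooling.
        self.dilated = nn.Sequential(
            nn.Conv2d(64, 64, 3, padding=2, dilation=2), nn.ReLU(inplace=True),
            nn.Conv2d(64, 64, 3, padding=4, dilation=4), nn.ReLU(inplace=True),
        )
        self.unpool = nn.MaxUnpool2d(2, stride=2)
        self.dec = nn.Conv2d(64, num_classes, 3, padding=1)

    def forward(self, x):
        f = self.enc(x)
        p, idx = self.pool(f)
        p = self.dilated(p)
        p = self.unpool(p, idx)   # unpooling ("counter-pooling") with stored indices
        return self.dec(p)        # per-pixel class scores

scores = DilatedEncoderDecoder()(torch.randn(1, 3, 256, 512))
print(scores.shape)  # torch.Size([1, 3, 256, 512])
```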
Lane detection is a fundamental and necessary task for autonomous driving. Conventional methods mainly treat lane detection as a pixel-wise segmentation problem, which suffers from uncontrollable driving road environments and needs post-processing to abstract the lane parameters. In this work, a series of lines is used to represent traffic lanes, and a novel line deformation network (LDNet) is proposed to directly predict the coordinates of lane line points. Inspired by the dynamic behavior of classic snake algorithms, LDNet uses a neural network to iteratively deform an initial lane line to match the lane markings. To capture the long and discontinuous structures of lane lines, 1D convolution in LDNet is used for structured feature learning along the lane lines. Based on LDNet, a two-stage pipeline is developed for lane marking detection: (1) initial lane line proposal, to predict a list of lane line candidates, and (2) lane line deformation, to obtain the coordinates of lane line points. Experiments show that the proposed approach achieves competitive performance on the TuSimple dataset while being efficient for real-time applications on a GTX 1650 GPU. In particular, the accuracy of LDNet with the annotated starting and ending points is up to 99.45%, which indicates that the improved initial lane line proposal method can further enhance the performance of LDNet. (Funding: supported by the Science and Technology Research Project of Hubei Provincial Department of Education (No. Q20202604).)
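A rough sketch of the deformation step may help. The PyTorch code below is an illustrative approximation, not LDNet itself: per-point features sampled along a candidate line are processed with 1D convolutions to predict coordinate offsets, and the line is refined over a few iterations; the feature sampling, dimensions, and iteration count are assumptions.

```python
import torch
import torch.nn as nn

class DeformStep(nn.Module):
    def __init__(self, feat_dim=66):  # 64 sampled features + 2 normalized coords (assumed)
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv1d(feat_dim, 128, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv1d(128, 64, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv1d(64, 2, 1),      # per-point (dx, dy) offsets
        )

    def forward(self, point_feats):   # (B, feat_dim, N) features at N line points
        return self.net(point_feats)  # (B, 2, N)

def deform(points, feat_map, step, iters=3):
    """Iteratively move an initial polyline toward the lane marking."""
    for _ in range(iters):
        # Sample features at the current points (grid_sample expects coords in [-1, 1]).
        grid = points.unsqueeze(2)                           # (B, N, 1, 2)
        sampled = torch.nn.functional.grid_sample(
            feat_map, grid, align_corners=True).squeeze(3)   # (B, C, N)
        feats = torch.cat([sampled, points.permute(0, 2, 1)], dim=1)
        points = points + step(feats).permute(0, 2, 1)       # apply predicted offsets
    return points

pts = torch.rand(1, 20, 2) * 2 - 1   # 20 initial points in normalized coordinates
refined = deform(pts, torch.randn(1, 64, 72, 128), DeformStep())
print(refined.shape)                 # torch.Size([1, 20, 2])
```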
This paper presents an in-vehicle stereo vision system as a solution to accidents involving large goods vehicles due to blind spots, using Nigeria as a case study. A stereo-vision system was attached to the front of Large Goods Vehicles (LGVs) with a view to presenting live feeds of vehicles close to the LGV and their distance away. The captured road images were optimized for effectiveness and optimal vehicle maneuvering using a modified metaheuristic algorithm called the simulated annealing Ant Colony Optimization (saACO) algorithm. Simulated annealing is used as a strategy to automatically select the control parameters of the ACO algorithm, which helps to stabilize the performance of ACO irrespective of the quality of the lane images captured by the in-vehicle vision system. The system is capable of notifying drivers of blind spots through lane detection techniques, enabling the driver to be more aware of the vehicle's surroundings and to make decisions early. To test the system, the stereo-vision device was mounted on a large goods vehicle, driven in Zaria (a city in Kaduna State, Nigeria), and data were recorded. Out of 180 events, 42.22% of potential accident events were caused by passenger cars, while 27.22%, 18.33%, and 12.22% were caused by two-wheelers, large goods vehicles, and other road users, respectively. In the same vein, the in-vehicle lane detection system shows good performance of the saACO-based lane detection and performs better than the standard ACO method.
Despite recent advances in lane detection methods, scenarios with limited or no visual clue of lanes due to factors such as lighting conditions and occlusion remain challenging and crucial for automated driving. Moreover, current lane representations require complex post-processing and struggle with specific instances. Inspired by the DETR architecture, we propose LDTR, a transformer-based model to address these issues. Lanes are modeled with a novel anchor-chain, regarding a lane as a whole from the beginning, which enables LDTR to handle special lanes inherently. To enhance lane instance perception, LDTR incorporates a novel multi-referenced deformable attention module to distribute attention around the object. Additionally, LDTR incorporates two line IoU algorithms to improve convergence efficiency and employs a Gaussian heatmap auxiliary branch to enhance model representation capability during training. To evaluate lane detection models, we rely on the Fréchet distance, the parameterized F1-score, and additional synthetic metrics. Experimental results demonstrate that LDTR achieves state-of-the-art performance on well-known datasets. (Funding: supported by the National Natural Science Foundation of China (No. U23A6007).)
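For readers unfamiliar with line IoU, a minimal sketch follows. It uses a common formulation in which each lane point is widened to a horizontal segment of radius w and IoU is the ratio of summed overlaps to summed unions; LDTR's two line-IoU algorithms may differ in detail, and the width w is an assumption.

```python
import numpy as np

def line_iou(xs_pred, xs_gt, w=7.5):
    """xs_*: x-coordinates (pixels) of two lanes sampled at the same image rows."""
    inter_l = np.maximum(xs_pred - w, xs_gt - w)      # left end of each row's overlap
    inter_r = np.minimum(xs_pred + w, xs_gt + w)      # right end of each row's overlap
    inter = np.clip(inter_r - inter_l, 0, None)
    union = np.maximum(xs_pred + w, xs_gt + w) - np.minimum(xs_pred - w, xs_gt - w)
    return inter.sum() / max(union.sum(), 1e-9)

print(line_iou(np.array([100.0, 110.0, 120.0]), np.array([102.0, 112.0, 125.0])))
```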
The advancement of autonomous driving heavily relies on the ability to detect lane lines accurately. As deep learning and computer vision technologies evolve, a variety of deep learning-based methods for lane line detection have been proposed by researchers in the field. However, owing to the simple appearance of lane lines and the lack of distinctive features, other objects with similar local appearances easily interfere with the process of detecting lane lines. The precision of lane line detection is also limited by the unpredictable quantity and diversity of lane lines. To address these challenges, we propose a novel deep learning approach for lane line detection that combines the Swin Transformer with LaneNet (called ST-LaneNet). The experimental results show that the true positive detection rate can reach 97.53% for easy lanes and 96.83% for difficult lanes (such as scenes with severe occlusion and extreme lighting conditions), which better accomplishes the objective of detecting lane lines. In 1000 detection samples, the average detection accuracy reaches 97.83%, the average inference time per image is 17.8 ms, and the average frame rate reaches 64.8 Hz. The programming scripts and associated models for this project are openly available at the following GitHub repository: https://github.com/Duane711/Lane-line-detection-ST-LaneNet. (Funding: supported by the National Natural Science Foundation of China (Grant Nos. 51605003, 51575001), the Natural Science Foundation of Anhui Higher Education Institutions of China (Grant No. KJ2020A0358), and the Young and Middle-Aged Top Talents Training Program of Anhui Polytechnic University of China.)
Autonomous vehicles are currently regarded as an interesting topic in the AI field. For such vehicles, the lane where they are traveling should be detected. Most lane detection methods identify the whole road area with all the lanes built on it. In addition to having a low accuracy rate and slow processing time, these methods require costly hardware and training datasets, and they fail under critical conditions. In this study, a novel detection algorithm for the lane where a car is currently traveling is proposed by combining simple traditional image processing with lightweight machine learning (ML) methods. First, a preparation phase removes all unwanted information to preserve the topographical representations of virtual edges within a one-pixel width around expected lanes. Then, a simple feature extraction phase obtains only the intersection point position and angle of each candidate edge. Subsequently, a proposed scheme that comprises consecutive lightweight ML models is applied to detect the correct lane using the extracted features. This scheme is based on density-based spatial clustering of applications with noise (DBSCAN), random forest trees, a neural network, and rule-based methods. To increase accuracy and reduce processing time, each model supports the next one during detection: when a model detects a lane, the subsequent models are skipped. The models are trained on the Karlsruhe Institute of Technology and Toyota Technological Institute (KITTI) datasets. Results show that the proposed method is faster and achieves higher accuracy than state-of-the-art methods. The method is simple, can handle degraded conditions, and requires low-cost hardware and training datasets. (Funding: funded by the Deanship of Scientific Research at Umm Al-Qura University, Grant Number 22UQU4361009DSR04.)
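The cascade with stage skipping can be illustrated in a few lines. The sketch below is a hedged approximation with scikit-learn: candidate-edge features (intersection point and angle) are first grouped by DBSCAN, and a random forest is only consulted when clustering is inconclusive; the thresholds, feature layout, toy training data, and the omitted neural-network and rule stages are all assumptions.

```python
import numpy as np
from sklearn.cluster import DBSCAN
from sklearn.ensemble import RandomForestClassifier

def stage_dbscan(features, eps=20.0):
    """Group edge candidates; accept if one dense cluster clearly dominates."""
    labels = DBSCAN(eps=eps, min_samples=3).fit_predict(features)
    best, count = None, 0
    for lab in set(labels) - {-1}:
        n = int(np.sum(labels == lab))
        if n > count:
            best, count = lab, n
    if best is not None and count >= 0.6 * len(features):
        return features[labels == best]
    return None

def stage_forest(features, clf):
    """Fall back to a trained random forest over the same features."""
    keep = clf.predict(features).astype(bool)
    return features[keep] if keep.any() else None

def detect_lane(features, clf):
    for stage in (stage_dbscan, lambda f: stage_forest(f, clf)):
        lane = stage(features)
        if lane is not None:       # later stages are skipped on success
            return lane
    return None                    # a final rule-based stage would go here

# Toy usage with random candidate-edge features: (x, y, angle_deg).
rng = np.random.default_rng(0)
lane_feats = rng.normal([320, 400, 45], [10, 10, 5], size=(20, 3))    # ego-lane edges
noise_feats = rng.uniform([0, 0, -90], [640, 480, 90], size=(20, 3))  # distractor edges
X, y = np.vstack([lane_feats, noise_feats]), np.r_[np.ones(20), np.zeros(20)]
clf = RandomForestClassifier(n_estimators=25, random_state=0).fit(X, y)
cand = np.vstack([lane_feats + rng.normal(0, 2, lane_feats.shape), noise_feats[:5]])
print(detect_lane(cand, clf).shape)   # the selected ego-lane candidates
```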
Accurate perception of lane line information is one of the basic requirements of unmanned driving technology; it is related to the localization of the vehicle and the determination of the forward direction. In this paper, multi-level constraints are added to the lane line detection model PINet to improve the perception of lane lines. Lane lines predicted by the network are assigned dashed/solid attributes, which enhances the perception of features around the lane lines and imposes pixel-level constraints; images are converted to bird's-eye views, where the parallelism between lane lines is reconstructed, imposing lane-line-level constraints on the predicted lane lines; and vanishing points are used to attend to the image hierarchy, imposing image-level constraints on the lane lines. The model proposed in this paper meets both accuracy (96.44%) and real-time (30+ FPS) requirements, has been tested on real highways, and has performed stably.
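The lane-line-level (parallelism) constraint can be pictured with a small OpenCV sketch: detected lane points are warped to a bird's-eye view by an inverse perspective mapping and the fitted slopes are compared. The four source points defining the road trapezoid and the example coordinates are illustrative assumptions for a 1280x720 camera, not PINet's actual calibration.

```python
import cv2
import numpy as np

src = np.float32([[540, 450], [740, 450], [1180, 700], [100, 700]])  # road trapezoid (assumed)
dst = np.float32([[300, 0], [980, 0], [980, 720], [300, 720]])       # rectangle in the top view
M = cv2.getPerspectiveTransform(src, dst)

def to_birdseye(points):
    """points: (N, 2) image coordinates -> (N, 2) bird's-eye coordinates."""
    pts = points.reshape(-1, 1, 2).astype(np.float32)
    return cv2.perspectiveTransform(pts, M).reshape(-1, 2)

def parallelism_error(lane_a, lane_b):
    """Difference of fitted slopes in the top view; near zero for parallel lanes."""
    ka = np.polyfit(to_birdseye(lane_a)[:, 1], to_birdseye(lane_a)[:, 0], 1)[0]
    kb = np.polyfit(to_birdseye(lane_b)[:, 1], to_birdseye(lane_b)[:, 0], 1)[0]
    return abs(ka - kb)

left = np.array([[600, 460], [560, 550], [510, 650]], dtype=np.float32)
right = np.array([[690, 460], [760, 550], [850, 650]], dtype=np.float32)
print(parallelism_error(left, right))
```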
The formation control of multiple unmanned aerial vehicles (multi-UAVs) has always been a research hotspot. Based on straight-line trajectories, a multi-UAV target point assignment algorithm based on assignment probability is proposed to achieve the shortest overall formation path with low complexity and reduced energy consumption. To avoid collisions between UAVs during the formation process, the concept of a safety ball is introduced, and collision detection based on the continuous motion over two time slots, together with lane-occupation detection after the motion, is proposed. Based on ideas from game theory, a method for setting UAV motion forms based on interest maximization is proposed, covering both self-interest maximization and formation-interest maximization, so that multi-UAVs can complete the formation task quickly and reasonably along the linear trajectories assigned in advance. Finally, simulations verify that the proposed assignment-probability-based target assignment algorithm effectively reduces the total path length, and that the interest-maximization-based motion selection method effectively completes the formation task. (Funding: supported by the Basic Scientific Research Business Expenses of Central Universities (3072022QBZ0806).)
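The safety-ball collision check can be written compactly. The sketch below assumes both UAVs move in straight lines during one time slot and flags a collision if the minimum relative distance ever drops below twice the safety radius; the radius, slot length, and trajectories are illustrative.

```python
import numpy as np

def slot_collision(p1, v1, p2, v2, r=1.0, dt=1.0):
    """p*, v*: start positions and constant velocities over the slot [0, dt]."""
    dp, dv = p1 - p2, v1 - v2
    # Minimize |dp + t*dv| over t in [0, dt]; the squared distance is quadratic in t.
    t = 0.0 if np.dot(dv, dv) < 1e-12 else np.clip(-np.dot(dp, dv) / np.dot(dv, dv), 0.0, dt)
    min_dist = np.linalg.norm(dp + t * dv)
    return min_dist < 2 * r, min_dist

hit, d = slot_collision(np.array([0., 0., 10.]), np.array([5., 0., 0.]),
                        np.array([5., -1., 10.]), np.array([-5., 0., 0.]))
print(hit, round(float(d), 3))  # True: the two straight-line paths cross within the slot
```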
Lane detection is essential for many aspects of autonomous driving, such as lane-based navigation and high-definition (HD) map modeling. Although lane detection is challenging, especially under complex road conditions, considerable progress has been witnessed in this area in the past several years. In this survey, we review recent vision-based lane detection datasets and methods. For datasets, we categorize them by annotations, provide detailed descriptions for each category, and show comparisons among them. For methods, we focus on those based on deep learning and organize them in terms of their detection targets. Moreover, we introduce a new dataset with more detailed annotations for HD map modeling, a new direction for lane detection that is applicable to autonomous driving in complex road conditions, and a deep neural network, LineNet, for lane detection, and show its application to HD map modeling. (Funding: supported by the National Natural Science Foundation of China under Grant Nos. 61902210 and 61521002, a research grant from the Beijing Higher Institution Engineering Research Center, and the Tsinghua-Tencent Joint Laboratory for Internet Innovation Technology.)
Purpose – The purpose of this paper is to develop a lane detection analysis algorithm based on the Hough transform and histogram shapes, which can effectively detect lane markers under various road conditions in a driving system for drivers. Design/methodology/approach – Step 1, receiving the image: the developed system acquires images from video files. Step 2, splitting the image: the system analyzes the splitting process of the video file. Step 3, cropping the image: the area of interest is specified using a crop tool. Step 4, image enhancement: each frame is converted from an RGB color image into a grayscale image. Step 5: the grayscale image is converted to a binary image. Step 6, segmenting and removing objects: opening morphological operations are used. Step 7: the analyzed area within the image is defined using the Hough transform. Step 8, computing the Hough line transform: the system operates on the defined segment to analyze the Hough lines. Findings – This paper presents a useful solution for lane detection by analyzing histogram shapes and Hough transform algorithms through digital image processing. The method was tested on video sequences filmed with a webcam that recorded the road as a video file in AVI format. The experimental results combine the two algorithms to compare their similarities and differences for better lane detection; the performance of the Hough transform is better than that of the histogram shapes. Originality/value – This paper compares the similarities and differences between histogram shapes and the Hough transform algorithm, provides a lane detection process, and identifies the algorithm with the better lane detection results. (Funding: this research was financially supported by the National Research Council of Thailand (NRCT), Contract No. KMUTNB-GOV-58-46.)
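The eight steps above map naturally onto a short OpenCV pipeline. The sketch below is an approximation of that process (crop a region of interest, grayscale, binarize, morphological opening, probabilistic Hough transform); the thresholds, the ROI, and the file name road.avi are assumptions.

```python
import cv2
import numpy as np

def detect_lane_lines(frame):
    h, w = frame.shape[:2]
    roi = frame[int(0.55 * h):, :]                          # keep the road region only
    gray = cv2.cvtColor(roi, cv2.COLOR_BGR2GRAY)
    _, binary = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    opened = cv2.morphologyEx(binary, cv2.MORPH_OPEN, np.ones((3, 3), np.uint8))
    lines = cv2.HoughLinesP(opened, rho=1, theta=np.pi / 180, threshold=50,
                            minLineLength=40, maxLineGap=20)
    return [] if lines is None else lines.reshape(-1, 4)    # each row: x1, y1, x2, y2

# Usage on a video file, frame by frame:
cap = cv2.VideoCapture("road.avi")                          # hypothetical input file
ok, frame = cap.read()
if ok:
    print(len(detect_lane_lines(frame)), "candidate line segments")
cap.release()
```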
Objective: To determine the positions of lane markings in the presence of distracting shadows, highlights, pavement cracks, etc. Methods: The RGB color space is transformed into the I1I2I3 color space, and the I2 component is used to form a new image that is less affected by clutter. Using an improved edge detection operator, an edge strength map is produced and binarized with adaptive thresholds. The binary image is labeled, and the circularity of all connected components is calculated. Self-Organizing Mapping is adopted to extract regions that imply potential markings. Finally, the position of the marking is obtained by curve fitting. Results: Color information is fully utilized, all thresholds are set adaptively, and lane markings can be detected in challenging images with shadows, highlights, or other cars. Conclusion: The method based on the circularity of connected components shows outstanding robustness for lane marking detection and has a wide variety of applications in vehicle autonomous navigation and driver assistance systems.
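Two pieces of this method are easy to sketch: the RGB-to-I1I2I3 (Ohta) transform, of which only the I2 component is kept, and the circularity measure for connected components. The code below follows the commonly cited Ohta definition (I1 = (R+G+B)/3, I2 = (R-B)/2, I3 = (2G-R-B)/4) and circularity 4*pi*A/P^2; the adaptive thresholding and SOM stages are omitted, and the threshold values would be set adaptively in the full method.

```python
import cv2
import numpy as np

def i2_component(bgr):
    b, g, r = cv2.split(bgr.astype(np.float32))
    # Ohta color space: I1=(R+G+B)/3, I2=(R-B)/2, I3=(2G-R-B)/4; keep I2 only.
    return (r - b) / 2.0

def circularities(binary):
    contours, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    out = []
    for c in contours:
        area, perim = cv2.contourArea(c), cv2.arcLength(c, closed=True)
        if perim > 0:
            out.append(4.0 * np.pi * area / (perim * perim))  # about 1.0 for a circle
    return out

mask = np.zeros((60, 60), np.uint8)
cv2.circle(mask, (30, 30), 15, 255, -1)
print([round(c, 2) for c in circularities(mask)])  # close to 1.0 for the disc
```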
To enhance the efficiency and accuracy of environmental perception for autonomous vehicles, we propose GDMNet, a unified multi-task perception network for autonomous driving capable of performing drivable area segmentation, lane detection, and traffic object detection. First, in the encoding stage, features are extracted, and the Generalized Efficient Layer Aggregation Network (GELAN) is utilized to enhance feature extraction and gradient flow. Second, in the decoding stage, specialized detection heads are designed: the drivable area segmentation head employs DySample to expand feature maps, while the lane detection head merges early-stage features and processes the output through the Focal Modulation Network (FMN). Lastly, the Minimum Point Distance IoU (MPDIoU) loss function is employed to compute the matching degree between traffic object detection boxes and predicted boxes, facilitating model training adjustments. Experimental results on the BDD100K dataset demonstrate that the proposed network achieves a drivable area segmentation mean intersection over union (mIoU) of 92.2%, lane detection accuracy and intersection over union (IoU) of 75.3% and 26.4%, respectively, and traffic object detection recall and mAP of 89.7% and 78.2%, respectively. The detection performance surpasses that of other single-task and multi-task models.
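The MPDIoU term can be sketched as follows. The numpy code reflects the published MPDIoU formulation as I understand it (standard IoU penalized by the squared distances between corresponding top-left and bottom-right corners, normalized by the squared image diagonal); the example boxes and image size are assumptions, and GDMNet's training code is not reproduced here.

```python
import numpy as np

def mpd_iou(box_p, box_g, img_w, img_h):
    """Boxes are (x1, y1, x2, y2); returns the MPDIoU similarity."""
    ix1, iy1 = max(box_p[0], box_g[0]), max(box_p[1], box_g[1])
    ix2, iy2 = min(box_p[2], box_g[2]), min(box_p[3], box_g[3])
    inter = max(ix2 - ix1, 0) * max(iy2 - iy1, 0)
    area_p = (box_p[2] - box_p[0]) * (box_p[3] - box_p[1])
    area_g = (box_g[2] - box_g[0]) * (box_g[3] - box_g[1])
    iou = inter / (area_p + area_g - inter + 1e-9)
    diag2 = img_w ** 2 + img_h ** 2                       # squared image diagonal
    d_tl = (box_p[0] - box_g[0]) ** 2 + (box_p[1] - box_g[1]) ** 2
    d_br = (box_p[2] - box_g[2]) ** 2 + (box_p[3] - box_g[3]) ** 2
    return iou - d_tl / diag2 - d_br / diag2

print(1 - mpd_iou((100, 100, 200, 180), (110, 105, 210, 190), 1280, 720))  # loss term
```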
A new vision-based long-distance lane perception and front vehicle location method was developed for decision making of fully autonomous vehicles on highway roads. First, a real-time long-distance lane detection approach was presented based on a linear-cubic road model for two-lane highways. By using a novel robust lane marking feature that combines constraints on intensity, edge, and width, the lane markings in far regions were extracted accurately and efficiently. Next, the detected lane lines were selected and tracked by estimating the lateral offset and heading angle of the ego vehicle with a Kalman filter. Finally, front vehicles were located on the correct lanes using the tracked lane lines. Experimental results show that the proposed lane perception approach achieves an average correct detection rate of 94.37% with an average false positive detection rate of 0.35%. The proposed approaches for long-distance lane perception and front vehicle location were validated in a 286 km fully autonomous drive experiment under real traffic conditions; this successful experiment shows that the approaches are effective and robust enough for fully autonomous vehicles on highway roads. (Funding: Project 90820302 supported by the National Natural Science Foundation of China.)
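The Kalman-filter tracking step can be illustrated with a two-state filter over the lateral offset and heading angle. The sketch below is a generic filter under assumed noise levels and a simple kinematic prediction; it is not the paper's exact model.

```python
import numpy as np

class LaneKF:
    def __init__(self, dt=0.05, speed=20.0):
        # State x = [lateral offset (m), heading angle (rad)] relative to the lane.
        self.x = np.zeros(2)
        self.P = np.eye(2)
        self.F = np.array([[1.0, speed * dt],   # offset grows with heading * speed
                           [0.0, 1.0]])
        self.H = np.eye(2)                      # both quantities measured from the image
        self.Q = np.diag([0.01, 0.001])         # process noise (assumed)
        self.R = np.diag([0.05, 0.01])          # measurement noise (assumed)

    def step(self, z):
        # Predict.
        self.x = self.F @ self.x
        self.P = self.F @ self.P @ self.F.T + self.Q
        # Update with the measurement extracted from the detected lane lines.
        S = self.H @ self.P @ self.H.T + self.R
        K = self.P @ self.H.T @ np.linalg.inv(S)
        self.x = self.x + K @ (np.asarray(z) - self.H @ self.x)
        self.P = (np.eye(2) - K @ self.H) @ self.P
        return self.x

kf = LaneKF()
for z in ([0.22, 0.010], [0.25, 0.012], [0.31, 0.015]):
    print(kf.step(z).round(3))
```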
A technology for unintended lane departure warning was proposed. As crucial information, lane boundaries were detected based on principal component analysis of the grayscale distribution in a given number of search bars, and each search bar was then tracked between frames using a Kalman filter. The lane detection performance was evaluated and demonstrated in terms of receiver operating characteristic, dice similarity coefficient, and real-time performance. For lane departure detection, a lane departure risk evaluation model based on lasting time and frequency was executed efficiently on an ARM-based platform. Experimental results indicate that the algorithm generates satisfactory lane detection results under different traffic and lighting conditions, and the proposed warning mechanism sends effective warning signals, avoiding most false warnings. (Funding: Project 51175159 supported by the National Natural Science Foundation of China; Project 2013WK3024 supported by the Science and Technology Planning Program of Hunan Province, China; Project CX2013B146 supported by the Hunan Provincial Innovation Foundation for Postgraduates, China.)
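The lasting-time-and-frequency risk model can be sketched as a small state machine. The code below is an illustrative rule, not the paper's exact model: a warning fires when the offset stays beyond a threshold for a minimum lasting time, or when such departures recur too often within a sliding window; all thresholds are assumptions.

```python
from collections import deque

class DepartureMonitor:
    """Warn on a long-lasting departure or on too-frequent departures."""
    def __init__(self, offset_thresh=0.4, min_last=0.5, max_events=3, window=30.0):
        self.offset_thresh, self.min_last = offset_thresh, min_last
        self.max_events, self.window = max_events, window
        self.crossing_since = None
        self.events = deque()      # timestamps of completed departure events

    def update(self, lateral_offset, now):
        warn = False
        if abs(lateral_offset) > self.offset_thresh:
            if self.crossing_since is None:
                self.crossing_since = now
            elif now - self.crossing_since >= self.min_last:
                warn = True                                  # lasting-time criterion
        else:
            if self.crossing_since is not None and now - self.crossing_since >= self.min_last:
                self.events.append(now)                      # one completed departure
            self.crossing_since = None
        while self.events and now - self.events[0] > self.window:
            self.events.popleft()
        return warn or len(self.events) >= self.max_events   # frequency criterion

mon = DepartureMonitor()
for t, off in enumerate([0.1, 0.45, 0.50, 0.55, 0.1]):
    print(t, mon.update(off, now=float(t)))   # warns at t=2 and t=3
```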
A robust lane detection and tracking system based on monocular vision is presented in this paper. First, the lane detection algorithm transforms raw images into top-view images by inverse perspective mapping (IPM) and accurately detects both inner sides of the lane from the top-view images. The system then turns to lane tracking procedures to extract the lane according to the information from the last frame; if it fails to track the lane, lane detection is triggered again until the true lane is found. In this system, a θ-oriented Hough transform is applied to extract candidate lane markers, and a geometrical analysis of the lane candidates is proposed to remove outliers. Additionally, vanishing point estimation and dynamic region of interest (ROI) planning are used to enhance accuracy and efficiency. The system was tested under various road conditions, and the results turned out to be robust and reliable. (Funding: supported by the National Natural Science Foundation of China (51005019).)
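The θ-oriented Hough step can be approximated with OpenCV by keeping only lines whose orientation is close to the expected (near-vertical) lane direction in the top view. The angle tolerance and the synthetic test image below are assumptions.

```python
import cv2
import numpy as np

def oriented_hough(top_view_edges, expected_theta=0.0, tol=np.deg2rad(15)):
    lines = cv2.HoughLines(top_view_edges, rho=1, theta=np.pi / 180, threshold=60)
    if lines is None:
        return []
    keep = []
    for rho, theta in lines[:, 0]:
        # OpenCV's theta is the angle of the line's normal; near 0 or pi means a vertical line.
        d = min(abs(theta - expected_theta),
                abs(theta - expected_theta - np.pi),
                abs(theta - expected_theta + np.pi))
        if d <= tol:
            keep.append((float(rho), float(theta)))
    return keep

edges = np.zeros((400, 300), np.uint8)
cv2.line(edges, (100, 0), (110, 399), 255, 1)   # a nearly vertical lane marker
cv2.line(edges, (0, 200), (299, 210), 255, 1)   # a nearly horizontal distractor
print(oriented_hough(edges))                    # only near-vertical candidates survive
```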
This paper presents an approach to model-oriented road detection based on the trapezoidal model proposed by H. Jeong et al. and a fuzzy Support Vector Machine (SVM). First, the frames extracted from the video are preprocessed by a Pulse Coupled Neural Network (PCNN) and then handled by Kalman filter and Expectation Maximization (EM) algorithms. Next, according to the road's different features, a fuzzy algorithm chooses a corresponding SVM for further lane detection, and morphological filters then produce the final detection result. For different types of roads, this method uses the fuzzy algorithm to choose different SVMs. Furthermore, the PCNN preprocessing removes shadows on the road to reduce the effect of illumination variations. Experimental results show that our method obtains better lane detection results than the trapezoidal model and the BP network proposed by H. Jeong et al. (Funding: supported by the National Natural Science Foundation of China (No. 60671062) and the National Basic Research Program of China (2005CB724303).)
Lane and lane-bifurcation detection is a vital and active research topic in low-cost camera-based autonomous driving and advanced driver assistance systems (ADAS). The common lane detection pipeline usually predicts a lane segmentation mask first and then performs line fitting by parabola or spline post-processing. However, if lane and bifurcation detection is fast and robust enough, we argue that curve fitting is not a necessary step. The goal of this work is to obtain accurate lane segmentation, identification of every lane, adaptability to different lane numbers, and the right combination of lane bifurcations. In this work, we relabeled lanes and their bifurcations with solid lines for images of the TuSimple dataset that contain both. In the training process, we apply a data balance strategy for the heavily biased lane and non-lane data. In this way, we develop a competitive cascaded instance lane detection model and propose a novel bifurcation pixel embedding nested fusion method based on full binary segmentation pixel embedding with self-grouping clustering, called LaneDraw. Our method discards the curve fitting process, which reduces the complexity of post-processing and increases the detection speed to 35 fps. Moreover, the proposed method yields better performance and high accuracy on the relabeled TuSimple dataset. To the best of our knowledge, this is the first attempt at 2D lane and bifurcation detection, a situation that frequently occurs in practice. (Funding: supported by the National Key Research and Development Project (Grant No. 2019YFC1511003), the National Natural Science Foundation of China (Grant No. 61803004), and the Aeronautical Science Foundation of China (Grant No. 20161375002).)
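The instance-grouping step can be sketched with per-pixel embeddings clustered into lanes. The code below substitutes scikit-learn's MeanShift for the paper's self-grouping cluster, purely as a stand-in; the embedding dimension, bandwidth, and toy data are assumptions.

```python
import numpy as np
from sklearn.cluster import MeanShift

def group_lane_pixels(binary_mask, embeddings, bandwidth=1.5):
    """binary_mask: (H, W) bool lane/non-lane; embeddings: (H, W, D) per-pixel vectors."""
    ys, xs = np.nonzero(binary_mask)
    if len(ys) == 0:
        return {}
    labels = MeanShift(bandwidth=bandwidth, bin_seeding=True).fit_predict(embeddings[ys, xs])
    return {int(k): np.stack([xs[labels == k], ys[labels == k]], axis=1)
            for k in np.unique(labels)}

# Toy example: two lanes whose embeddings form two well-separated blobs.
H, W, D = 32, 64, 4
mask = np.zeros((H, W), bool)
mask[:, 10] = mask[:, 50] = True
emb = np.zeros((H, W, D))
emb[:, 50] = [5, 5, 5, 5]
emb += 0.1 * np.random.default_rng(0).normal(size=emb.shape)
lanes = group_lane_pixels(mask, emb)
print(len(lanes), "lane instances")   # expected: 2
```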
Abnormal driving behavior identification (ADBI) has become a research hotspot because of its significance in driver assistance systems. However, current methods still have some limitations in terms of accuracy and reliability in severe traffic scenes. This paper proposes a new ADBI method based on direction and position offsets, where a two-factor identification strategy is proposed to improve the accuracy and reliability of ADBI. Self-adaptive edge detection based on the Sobel operator is used to extract edge information of lanes. To enhance the efficiency and reliability of lane detection, an improved lane detection algorithm is proposed in which a Hough transform based on a local search scope is employed to quickly detect the lane, and a validation scheme based on prior information further verifies the detected lane. Experimental results under various complex road conditions demonstrate the validity of the proposed ADBI method. (Funding: supported by the National Natural Science Foundation of China (Nos. 61304205, 61502240), the Natural Science Foundation of Jiangsu Province (BK20141002), the Innovation and Entrepreneurship Training Project of College Students (Nos. 201710300051, 201710300050), and the Foundation for Excellent Undergraduate Dissertation (Design) of Nanjing University of Information Science & Technology.)
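The local-search-scope Hough idea can be sketched as restricting the transform to a band around the lane found in the previous frame and validating the result against that prior. The band width, thresholds, Sobel-based adaptive edge map, and synthetic frame below are assumptions, not the paper's exact scheme.

```python
import cv2
import numpy as np

def local_hough(edge_map, prev_line, band=40, max_jump=25):
    """prev_line: (x1, y1, x2, y2) lane segment from the previous frame."""
    x1, y1, x2, y2 = prev_line
    mask = np.zeros_like(edge_map)
    cv2.line(mask, (x1, y1), (x2, y2), 255, thickness=2 * band)   # band around the prior
    lines = cv2.HoughLinesP(cv2.bitwise_and(edge_map, mask), 1, np.pi / 180,
                            threshold=30, minLineLength=30, maxLineGap=10)
    if lines is None:
        return None
    # Validation against prior information: keep the candidate closest to the old line.
    def dist(l):
        a = abs(l[0] - x1) + abs(l[2] - x2)
        b = abs(l[0] - x2) + abs(l[2] - x1)
        return min(a, b)          # endpoint order from HoughLinesP is not guaranteed
    best = min(lines.reshape(-1, 4), key=dist)
    return tuple(int(v) for v in best) if dist(best) <= 2 * max_jump else None

# Synthetic test frame: a bright, slightly tilted lane marking on dark asphalt.
gray = np.full((720, 1280), 40, np.uint8)
cv2.line(gray, (300, 700), (520, 420), 255, 5)
gx = cv2.Sobel(gray, cv2.CV_64F, 1, 0, ksize=3)
edges = (np.abs(gx) > np.abs(gx).mean() + 2 * np.abs(gx).std()).astype(np.uint8) * 255
print(local_hough(edges, prev_line=(305, 700, 515, 425)))
```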