Journal Articles
34 articles found
A Real-Time Small Target Vehicle Detection Algorithm with an Improved YOLOv5m Network Model
1
Authors: Yaoyao Du, Xiangkui Jiang. 《Computers, Materials & Continua》 SCIE EI, 2024, No. 1, pp. 303-327 (25 pages)
To address the challenges of high complexity, poor real-time performance, and low detection rates for small target vehicles in existing vehicle object detection algorithms, this paper proposes a real-time lightweight architecture based on You Only Look Once (YOLO) v5m. Firstly, a lightweight upsampling operator called Content-Aware Reassembly of Features (CARAFE) is introduced in the feature fusion layer of the network to maximize the extraction of deep-level features for small target vehicles, reducing the missed detection rate and false detection rate. Secondly, a new prediction layer for tiny targets is added, and the feature fusion network is redesigned to enhance the detection capability for small targets. Finally, this paper applies L1 regularization to train the improved network, followed by pruning and fine-tuning operations to remove redundant channels, reducing computational and parameter complexity and enhancing the detection efficiency of the network. Training is conducted on the VisDrone2019-DET dataset. The experimental results show that the proposed algorithm reduces parameters and computation by 63.8% and 65.8%, respectively. The average detection accuracy improves by 5.15%, and the detection speed reaches 47 images per second, satisfying real-time requirements. Compared with existing approaches, including YOLOv5m and classical vehicle detection algorithms, our method achieves higher accuracy and faster speed for real-time detection of small target vehicles in edge computing.
Keywords: vehicle detection, YOLOv5m, small target, channel pruning, CARAFE
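The L1-regularized training and channel pruning described in this abstract are not detailed further; below is a minimal sketch of the common "network slimming" recipe such pipelines usually resemble, assuming PyTorch and a model built from Conv/BatchNorm blocks. The penalty coefficient `l1_lambda` and the pruning ratio are illustrative placeholders, not values from the paper.

```python
# Hypothetical sketch of L1-regularized channel pruning (network-slimming style),
# assuming PyTorch; hyperparameters are illustrative, not taken from the paper.
import torch
import torch.nn as nn

def bn_l1_penalty(model: nn.Module, l1_lambda: float = 1e-4):
    """Sum of |gamma| over all BatchNorm layers; added to the detection loss during training."""
    penalty = 0.0
    for m in model.modules():
        if isinstance(m, nn.BatchNorm2d):
            penalty = penalty + m.weight.abs().sum()
    return l1_lambda * penalty

def select_channels_to_keep(model: nn.Module, prune_ratio: float = 0.5):
    """After sparsity training, keep channels whose BN scale exceeds a global threshold."""
    gammas = torch.cat([m.weight.detach().abs().flatten()
                        for m in model.modules() if isinstance(m, nn.BatchNorm2d)])
    threshold = torch.quantile(gammas, prune_ratio)
    keep_masks = {name: (m.weight.detach().abs() > threshold)
                  for name, m in model.named_modules() if isinstance(m, nn.BatchNorm2d)}
    return keep_masks  # used to rebuild a slimmer network, which is then fine-tuned

# Usage during training (det_loss is the ordinary detection loss):
# total_loss = det_loss + bn_l1_penalty(model)
# total_loss.backward()
```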
A New Vehicle Detection Framework Based on Feature-Guided in the Road Scene
2
Authors: Tianmin Deng, Xiyue Zhang, Xinxin Cheng. 《Computers, Materials & Continua》 SCIE EI, 2024, No. 1, pp. 533-549 (17 pages)
Vehicle detection plays a crucial role in the field of autonomous driving technology. However, directly applying deep learning-based object detection algorithms to complex road scene images often leads to subpar performance and slow inference speeds in vehicle detection. Achieving a balance between accuracy and detection speed is crucial for real-time object detection in real-world road scenes. This paper proposes a high-precision and fast vehicle detector called the feature-guided bidirectional pyramid network (FBPN). Firstly, to tackle challenges like vehicle occlusion and significant background interference, the efficient feature filtering module (EFFM) is introduced into the deep network, which amplifies the disparities between the features of the vehicle and the background. Secondly, the proposed global attention localization module (GALM) in the model neck effectively perceives the detailed position information of the target, improving both the accuracy and inference speed of the model. Finally, the detection accuracy of small-scale vehicles is further enhanced through the utilization of a four-layer feature pyramid structure. Experimental results show that FBPN achieves an average precision of 60.8% and 97.8% on the BDD100K and KITTI datasets, respectively, with inference speeds reaching 344.83 frames/s and 357.14 frames/s. FBPN demonstrates its effectiveness and superiority by striking a balance between detection accuracy and inference speed, outperforming several state-of-the-art methods.
Keywords: driverless car, vehicle detection, channel attention mechanism, deep learning
Improved YOLOv8s-Based Night Vehicle Detection
3
Authors: WAN Xin-ei, SI Zhan-jun. 《印刷与数字媒体技术研究》 CAS, Peking University Core, 2024, No. 4, pp. 76-85 (10 pages)
With the gradual development of automatic driving technology, people's attention is no longer limited to daytime automatic driving target detection. To address the difficulty of achieving fast and accurate detection of visual targets in complex night-time automatic driving scenes, a detection algorithm based on an improved YOLOv8s was proposed. Firstly, by adding a Triplet Attention module to the downsampling layers of the original model, the model can effectively retain and enhance feature information related to target detection on lower-resolution feature maps. This enhancement improved the robustness of the target detection network and reduced instances of missed detections. Secondly, the Soft-NMS algorithm was introduced to address the challenges of dense targets, overlapping objects, and complex scenes. This algorithm effectively reduced false positives and missed detections, thereby improving overall detection performance when faced with highly overlapping detection results. Finally, the MPDIoU loss function was adopted. Experimental results showed that, compared with the original model, the improved method, in which mAP and accuracy increased by 2.9% and 2.8% respectively, achieves better detection accuracy and speed in night vehicle detection and effectively improves target detection in night scenes.
Keywords: vehicle detection, YOLOv8, attention mechanism
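Soft-NMS, used above for dense and overlapping targets, decays the scores of overlapping boxes instead of discarding them outright. A minimal NumPy sketch of the Gaussian-decay variant follows; it illustrates the general technique, not the authors' implementation, and the thresholds are placeholders.

```python
# Minimal Soft-NMS sketch (Gaussian decay variant), assuming NumPy.
# Boxes are [x1, y1, x2, y2]; scores are confidences in [0, 1]. Illustrative only.
import numpy as np

def box_iou(a: np.ndarray, b: np.ndarray) -> float:
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter + 1e-9)

def soft_nms(boxes: np.ndarray, scores: np.ndarray,
             sigma: float = 0.5, score_thresh: float = 0.001) -> list:
    scores = scores.astype(float).copy()
    idxs = list(range(len(scores)))
    keep = []
    while idxs:
        best = max(idxs, key=lambda i: scores[i])   # highest-scoring remaining box
        keep.append(best)
        idxs.remove(best)
        for i in idxs:
            iou = box_iou(boxes[best], boxes[i])
            scores[i] *= np.exp(-(iou ** 2) / sigma)  # decay overlapping boxes instead of deleting
        idxs = [i for i in idxs if scores[i] >= score_thresh]
    return keep

# Example: two heavily overlapping detections both survive, with the second down-weighted.
b = np.array([[10, 10, 60, 60], [12, 12, 62, 62], [200, 200, 260, 250]], dtype=float)
print(soft_nms(b, np.array([0.9, 0.8, 0.7])))
```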
3D Vehicle Detection Algorithm Based on Multimodal Decision-Level Fusion
4
Authors: Peicheng Shi, Heng Qi, Zhiqiang Liu, Aixi Yang. 《Computer Modeling in Engineering & Sciences》 SCIE EI, 2023, No. 6, pp. 2007-2023 (17 pages)
3D vehicle detection based on LiDAR-camera fusion is becoming an emerging research topic in autonomous driving. The algorithm based on the Camera-LiDAR object candidate fusion method (CLOCs) is currently considered a more effective decision-level fusion algorithm, but it does not fully utilize the extracted 3D and 2D features. Therefore, we propose a 3D vehicle detection algorithm based on multimodal decision-level fusion. First, the anchor point of the 3D detection bounding box is projected into the 2D image, the distance between the 2D and 3D anchor points is calculated, and this distance is used as a new fusion feature to enhance the feature redundancy of the network. Subsequently, an attention module, squeeze-and-excitation networks, is added to weight each feature channel, enhancing the important features of the network and suppressing useless ones. The experimental results show that the mean average precision of the algorithm on the KITTI dataset is 82.96%, which outperforms previous state-of-the-art multimodal fusion-based methods, and the average accuracy on the Easy, Moderate and Hard evaluation indicators reaches 88.96%, 82.60%, and 77.31%, respectively, higher than the original CLOCs model by 1.02%, 2.29%, and 0.41%, respectively. Compared with the original CLOCs algorithm, our algorithm has higher accuracy and better performance in 3D vehicle detection.
Keywords: 3D vehicle detection, multimodal fusion, CLOCs, network structure optimization, attention module
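The squeeze-and-excitation module mentioned above re-weights feature channels using weights learned from globally pooled statistics. A minimal PyTorch sketch of a standard SE block is shown below; the reduction ratio and channel counts are illustrative and not taken from the paper.

```python
# Minimal squeeze-and-excitation block, assuming PyTorch; sizes are illustrative.
import torch
import torch.nn as nn

class SEBlock(nn.Module):
    def __init__(self, channels: int, reduction: int = 16):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool2d(1)        # squeeze: global spatial average per channel
        self.fc = nn.Sequential(                   # excitation: learn a per-channel gate
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
            nn.Sigmoid(),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, c, _, _ = x.shape
        w = self.fc(self.pool(x).view(b, c)).view(b, c, 1, 1)
        return x * w                               # emphasize useful channels, suppress the rest

# Example: gate a 256-channel fused feature map.
se = SEBlock(256)
out = se(torch.randn(2, 256, 38, 38))
```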
Optimal Deep Convolutional Neural Network for Vehicle Detection in Remote Sensing Images
5
Authors: Saeed Masoud Alshahrani, Saud S. Alotaibi, Shaha Al-Otaibi, Mohamed Mousa, Anwer Mustafa Hilal, Amgad Atta Abdelmageed, Abdelwahed Motwakel, Mohamed I. Eldesouki. 《Computers, Materials & Continua》 SCIE EI, 2023, No. 2, pp. 3117-3131 (15 pages)
Object detection (OD) in remote sensing images (RSI) acts as a vital part in numerous civilian and military application areas, like urban planning, geographic information systems (GIS), and search and rescue functions. Vehicle recognition from RSIs remains a challenging process because of the difficulty of background data and the redundancy of recognition regions. The latest advancements in deep learning (DL) approaches permit the design of effectual OD approaches. This study develops an Artificial Ecosystem Optimizer with Deep Convolutional Neural Network for Vehicle Detection (AEODCNN-VD) model on remote sensing images. The proposed AEODCNN-VD model focuses on identifying vehicles accurately and rapidly. To detect vehicles, the presented AEODCNN-VD model employs a single shot detector (SSD) with an Inception network as a baseline model. In addition, a Multiway Feature Pyramid Network (MFPN) is used for handling objects of varying sizes in RSIs. The features from the Inception model are passed into the MFPN for multiway and multiscale feature fusion. Finally, the fused features are passed into bounding box and class prediction networks. For enhancing the detection efficiency of the AEODCNN-VD approach, an AEO-based hyperparameter optimizer is used, which is stimulated by energy transfer strategies such as production, consumption, and decomposition in an ecosystem. The performance validation of the presented method on benchmark datasets showed promising performance over recent DL models.
Keywords: object detection, remote sensing, vehicle detection, artificial ecosystem optimizer, convolutional neural network
Pedestrian and Vehicle Detection Based on Pruning YOLOv4 with Cloud-Edge Collaboration
6
Authors: Huabin Wang, Ruichao Mo, Yuping Chen, Weiwei Lin, Minxian Xu, Bo Liu. 《Computer Modeling in Engineering & Sciences》 SCIE EI, 2023, No. 11, pp. 2025-2047 (23 pages)
Nowadays, the rapid development of edge computing has driven an increasing number of deep learning applications deployed at the edge of the network, such as pedestrian and vehicle detection, to provide efficient intelligent services to mobile users. However, as accuracy requirements continue to increase, the components of deep learning models for pedestrian and vehicle detection, such as YOLOv4, become more sophisticated and the computing resources required for model training increase dramatically, which in turn leads to significant challenges in achieving effective deployment on resource-constrained edge devices while ensuring high accuracy. To address this challenge, a cloud-edge collaboration-based pedestrian and vehicle detection framework is proposed in this paper, which enables sufficient training of models by utilizing the abundant computing resources in the cloud, and then deploys the well-trained models on edge devices, thus reducing the computing resource requirements for model training on edge devices. Furthermore, to reduce the size of the model deployed on edge devices, an automatic pruning method that combines the convolution layer and BN layer is proposed to compress the pedestrian and vehicle detection model. Experimental results show that the framework proposed in this paper is able to deploy the pruned model on a real edge device, Jetson TX2, with 6.72 times higher FPS. Meanwhile, the channel pruning reduces the volume and the number of parameters of the model to 96.77%, and the computing amount is reduced to 81.37%.
Keywords: pedestrian and vehicle detection, YOLOv4, channel pruning, cloud-edge collaboration
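The pruning method above combines the convolution layer and the BN layer; a common ingredient of such schemes is folding a BatchNorm layer into the preceding convolution so the pair can be pruned and deployed as a single layer. Below is a hedged PyTorch sketch of the standard folding arithmetic; it illustrates the general transform only, not the authors' exact automatic pruning pipeline.

```python
# Folding BatchNorm into a preceding Conv2d, assuming PyTorch. General transform only;
# the paper's automatic pruning pipeline is more involved than this sketch.
import torch
import torch.nn as nn

@torch.no_grad()
def fuse_conv_bn(conv: nn.Conv2d, bn: nn.BatchNorm2d) -> nn.Conv2d:
    fused = nn.Conv2d(conv.in_channels, conv.out_channels, conv.kernel_size,
                      stride=conv.stride, padding=conv.padding,
                      dilation=conv.dilation, groups=conv.groups, bias=True)
    scale = bn.weight / torch.sqrt(bn.running_var + bn.eps)        # gamma / sqrt(var + eps)
    fused.weight.copy_(conv.weight * scale.reshape(-1, 1, 1, 1))    # rescale each output channel
    conv_bias = conv.bias if conv.bias is not None else torch.zeros_like(bn.running_mean)
    fused.bias.copy_((conv_bias - bn.running_mean) * scale + bn.bias)
    return fused

# Sanity check on random data: the fused layer matches conv -> bn in eval mode.
conv, bn = nn.Conv2d(16, 32, 3, padding=1), nn.BatchNorm2d(32)
bn.eval()
x = torch.randn(1, 16, 64, 64)
assert torch.allclose(bn(conv(x)), fuse_conv_bn(conv, bn)(x), atol=1e-5)
```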
A Light-weight Deep Neural Network for Vehicle Detection in Complex Tunnel Environments
7
Authors: ZHENG Lie, REN Dandan. 《Instrumentation》, 2023, No. 1, pp. 32-44 (13 pages)
With the rapid development of the social economy, transportation has become faster and more efficient. As an important part of goods transportation, the safe maintenance of tunnel highways has become particularly important. The maintenance of tunnel roads has become more difficult due to problems such as sealing, narrowness and lack of light. Currently, target detection methods are advantageous in detecting tunnel vehicles in a timely manner through monitoring. Therefore, in order to prevent vehicle false detection and missed detection in this complex environment, we propose a YOLOv5-Vehicle model based on the YOLOv5 network. This model is improved in three ways. Firstly, the backbone network of YOLOv5 is replaced by the lightweight MobileNetV3 network to extract features, which reduces the number of model parameters. Next, all convolutions in the neck module are replaced with depth-wise separable convolutions to further reduce the number of model parameters and computations, and to improve the detection speed of the model. Finally, to ensure the accuracy of the model, the CBAM attention mechanism is introduced to improve the detection accuracy and precision of the model. Experimental results demonstrate that the YOLOv5-Vehicle model can improve the accuracy.
Keywords: CBAM, depth-wise separable convolution, MobileNetV3, vehicle detection, YOLOv5
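Depth-wise separable convolution, used above to shrink the neck of the network, factorizes a standard convolution into a per-channel spatial convolution followed by a 1x1 point-wise convolution. A minimal PyTorch sketch follows; channel counts and the activation choice are illustrative, not taken from the paper.

```python
# Depth-wise separable convolution block, assuming PyTorch; sizes are illustrative.
import torch
import torch.nn as nn

class DepthwiseSeparableConv(nn.Module):
    def __init__(self, in_ch: int, out_ch: int, stride: int = 1):
        super().__init__()
        # Depth-wise: one 3x3 filter per input channel (groups = in_ch).
        self.depthwise = nn.Conv2d(in_ch, in_ch, 3, stride=stride,
                                   padding=1, groups=in_ch, bias=False)
        # Point-wise: 1x1 convolution mixes channels and sets the output width.
        self.pointwise = nn.Conv2d(in_ch, out_ch, 1, bias=False)
        self.bn = nn.BatchNorm2d(out_ch)
        self.act = nn.SiLU(inplace=True)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.act(self.bn(self.pointwise(self.depthwise(x))))

# Parameter count for 256 -> 256 channels drops from 589,824 convolution weights
# (plain 3x3 conv) to 67,840 (plus BatchNorm parameters).
block = DepthwiseSeparableConv(256, 256)
print(sum(p.numel() for p in block.parameters() if p.requires_grad))  # -> 68,352 incl. BN
```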
Design of a road vehicle detection system based on monocular vision (Cited: 5)
8
Authors: 王海, 张为公, 蔡英凤. 《Journal of Southeast University (English Edition)》 EI CAS, 2011, No. 2, pp. 169-173 (5 pages)
In order to decrease vehicle crashes, a new rear-view vehicle detection system based on monocular vision is designed. First, a small and flexible hardware platform based on a DM642 digital signal processor (DSP) micro-controller is built. Then, a two-step vehicle detection algorithm is proposed. In the first step, a fast vehicle edge and symmetry fusion algorithm is used and a low threshold is set so that all the possible vehicles have a nearly 100% detection rate (TP) and the non-vehicles have a high false detection rate (FP), i.e., all the possible vehicles can be obtained. In the second step, a classifier using a probabilistic neural network (PNN), based on multiple scales and an orientation Gabor feature, is trained to classify the possible vehicles and eliminate the falsely detected vehicles from the candidates generated in the first step. Experimental results demonstrate that the proposed system maintains a high detection rate and a low false detection rate under different road, weather and lighting conditions.
Keywords: vehicle detection, monocular vision, edge and symmetry fusion, Gabor feature, PNN network
Vehicle detection method for expressway by MPEG compressed domain
9
Authors: 何铁军, 张宁, 高朝晖, 黄卫. 《Journal of Southeast University (English Edition)》 EI CAS, 2008, No. 4, pp. 522-527 (6 pages)
A method which extracts traffic information from an MPEG-2 compressed video is proposed. According to the features of vehicle motion, the motion vector of a macro-block is used to detect moving vehicles in daytime, and a filter algorithm for removing noises of motion vectors is given. As the brightness of the headlights is higher than that of the background in night images, the discrete cosine transform (DCT) coefficient of the image block is used to detect headlights of vehicles at night, and an algorithm for calculating the DCT coefficients of P-frames is introduced. In order to prevent moving objects outside the expressway and video shot changes from disturbing the detection, a driveway location method and a video-shot-change detection algorithm are suggested. The detection rate is 97.4% in daytime and 95.4% in nighttime by this method. The results prove that this vehicle detection method is effective.
Keywords: vehicle detection, compressed domain, discrete cosine transform (DCT) coefficient, motion vector
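The daytime branch described above classifies macro-blocks as moving vehicles from their MPEG motion vectors and then filters out noisy vectors. The NumPy sketch below illustrates that idea on an already-extracted motion-vector field (a per-macro-block array of (dx, dy) values); extracting the field from the MPEG-2 stream, the driveway mask, and the shot-change test are outside this sketch, and the thresholds are invented for illustration.

```python
# Illustrative daytime detection on a macro-block motion-vector field, assuming NumPy.
# mv_field has shape (rows, cols, 2): per-macro-block (dx, dy) in pixels.
import numpy as np

def moving_macroblocks(mv_field: np.ndarray,
                       mag_thresh: float = 2.0,
                       min_neighbors: int = 3) -> np.ndarray:
    """Return a boolean mask of macro-blocks judged to belong to moving vehicles."""
    magnitude = np.hypot(mv_field[..., 0], mv_field[..., 1])
    moving = magnitude > mag_thresh                      # raw motion mask

    # Simple noise filter: keep a block only if enough 8-neighbours also move.
    padded = np.pad(moving, 1, mode="constant")
    neighbor_count = sum(np.roll(np.roll(padded, dr, 0), dc, 1)
                         for dr in (-1, 0, 1) for dc in (-1, 0, 1)
                         if (dr, dc) != (0, 0))[1:-1, 1:-1]
    return moving & (neighbor_count >= min_neighbors)

# Example: a 30x45 field with one 4x6 "vehicle" region moving right at 5 px per frame.
field = np.zeros((30, 45, 2))
field[10:14, 20:26, 0] = 5.0
print(moving_macroblocks(field).sum())   # -> 24; isolated noisy vectors would be rejected
```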
Vehicle detection based on information fusion of vehicle symmetrical contour and license plate position (Cited: 1)
10
Authors: 连捷, 赵池航, 张百灵, 何杰, 党倩. 《Journal of Southeast University (English Edition)》 EI CAS, 2012, No. 2, pp. 240-244 (5 pages)
An efficient vehicle detection approach is proposed for traffic surveillance images, based on information fusion of the vehicle's symmetrical contour and the license plate position. The vertical symmetry axis of the vehicle contour in an image is first detected, and then the vertical and horizontal symmetry axes of the license plate are detected using the symmetry axis of the vehicle contour as a reference. The vehicle location in an image is determined using the license plate symmetry axes and the vertical and horizontal projection maps of the vehicle edge image. A dataset consisting of 450 images (15 classes of vehicles) is used to test the proposed method. The experimental results indicate that, compared with the vehicle contour-based, license plate location-based, vehicle texture-based and Gabor feature-based methods, the proposed method is the best, with a detection accuracy of 90.7% and an elapsed time of 125 ms.
Keywords: vehicle detection, symmetrical contour, license plate position, information fusion
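The first step above, detecting the vertical symmetry axis of the vehicle contour, can be illustrated by scoring each candidate column of an edge map by how well its left and right halves mirror each other. The NumPy sketch below shows that idea; it is a simplified stand-in for the paper's contour and license-plate fusion procedure, and the parameters are illustrative.

```python
# Illustrative vertical-symmetry-axis search on a binary edge map, assuming NumPy.
import numpy as np

def vertical_symmetry_axis(edges: np.ndarray, min_half_width: int = 10) -> int:
    """Return the column whose left/right mirrored halves of the edge map agree best."""
    h, w = edges.shape
    best_col, best_score = -1, -1.0
    for col in range(min_half_width, w - min_half_width):
        half = min(col, w - 1 - col)                        # widest strip centred on this column
        left = edges[:, col - half:col]
        right = edges[:, col + 1:col + 1 + half][:, ::-1]   # mirror the right half about the column
        score = np.logical_and(left, right).sum() / (half * h)  # matched-edge density
        if score > best_score:
            best_col, best_score = col, score
    return best_col

# Example: two vertical edges symmetric about column 50.
edge_map = np.zeros((60, 100), dtype=bool)
edge_map[:, 30] = edge_map[:, 70] = True
print(vertical_symmetry_axis(edge_map))   # -> 50
```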
Vehicle detection algorithm based on codebook and local binary patterns algorithms (Cited: 1)
11
Authors: 许雪梅, 周立超, 墨芹, 郭巧云. 《Journal of Central South University》 SCIE EI CAS CSCD, 2015, No. 2, pp. 593-600 (8 pages)
Detecting moving vehicles in jittering traffic scenes is a very difficult problem because of the complex environment. Neither the color features of pixels alone nor the texture features of the image alone can establish a suitable background model for the moving vehicles. In order to solve this problem, a Gaussian pyramid layered algorithm is proposed, combining the advantages of the Codebook algorithm and the Local binary patterns (LBP) algorithm. Firstly, the image pyramid is established to eliminate the noises generated by camera shake. Then, a codebook model and an LBP model are constructed on the low-resolution level and the high-resolution level of the Gaussian pyramid, respectively. At last, the final detection results are obtained through a set of operations according to the spatial relations of pixels. The experimental results show that this algorithm can not only eliminate the noises effectively, but also save calculating time while maintaining high detection sensitivity and high detection accuracy.
Keywords: background modeling, Gaussian pyramid, codebook, local binary patterns (LBP), moving vehicle detection
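The LBP model used on the high-resolution pyramid level encodes each pixel by comparing it with its eight neighbours. A small NumPy sketch of the basic 3x3 LBP code follows, as a generic illustration rather than the paper's exact texture-based background model.

```python
# Basic 8-neighbour local binary pattern (LBP) codes for a grayscale image, assuming NumPy.
import numpy as np

def lbp_codes(gray: np.ndarray) -> np.ndarray:
    """Return an LBP code (0..255) for every interior pixel of a 2-D grayscale image."""
    g = gray.astype(np.int32)
    center = g[1:-1, 1:-1]
    # Eight neighbours in a fixed clockwise order, each contributing one bit.
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1), (1, 1), (1, 0), (1, -1), (0, -1)]
    codes = np.zeros_like(center)
    for bit, (dr, dc) in enumerate(offsets):
        neighbour = g[1 + dr:g.shape[0] - 1 + dr, 1 + dc:g.shape[1] - 1 + dc]
        codes |= (neighbour >= center).astype(np.int32) << bit
    return codes

# A background model can then compare LBP histograms over small blocks between the
# current frame and the learned background, flagging blocks whose histograms differ.
frame = (np.random.rand(120, 160) * 255).astype(np.uint8)
hist, _ = np.histogram(lbp_codes(frame), bins=256, range=(0, 256))
```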
Vehicle Detection Based on Visual Saliency and Deep Sparse Convolution Hierarchical Model (Cited: 4)
12
Authors: CAI Yingfeng, WANG Hai, CHEN Xiaobo, GAO Li, CHEN Long. 《Chinese Journal of Mechanical Engineering》 SCIE EI CAS CSCD, 2016, No. 4, pp. 765-772 (8 pages)
Traditional vehicle detection algorithms use traverse-search-based vehicle candidate generation and hand-crafted-feature-based classifier training for vehicle candidate verification. These types of methods generally have high processing times and low vehicle detection performance. To address this issue, a vehicle detection algorithm based on visual saliency and a deep sparse convolution hierarchical model is proposed. A visual saliency calculation is first used to generate a small vehicle candidate area. The vehicle candidate sub-images are then loaded into a sparse deep convolution hierarchical model with an SVM-based classifier to perform the final detection. The experimental results demonstrate that the proposed method achieves a 94.81% correct rate and a 0.78% false detection rate on existing datasets and real road pictures captured by our group, which outperforms the existing state-of-the-art algorithms. More importantly, highly discriminative multi-scale features are generated by the deep sparse convolution network, which has broad application prospects in target recognition in the field of intelligent vehicles.
Keywords: vehicle detection, visual saliency, deep model, convolution neural network
Image and Feature Space Based Domain Adaptation for Vehicle Detection (Cited: 1)
13
Authors: Ying Tian, Libing Wang, Hexin Gu, Lin Fan. 《Computers, Materials & Continua》 SCIE EI, 2020, No. 12, pp. 2397-2412 (16 pages)
The application of deep learning in the field of object detection has experienced much progress. However, due to the domain shift problem, applying an off-the-shelf detector to another domain leads to a significant performance drop. A large number of ground truth labels are required when using another domain to train models, demanding a large amount of human and financial resources. In order to avoid excessive resource requirements and the performance drop caused by domain shift, this paper proposes a new domain adaptive approach to cross-domain vehicle detection. Our approach improves the cross-domain vehicle detection model from image space and feature space. We employ objectives of the generative adversarial network and cycle consistency loss for image style transfer in image space. For feature space, we align feature distributions between the source domain and the target domain to improve the detection accuracy. Experiments are carried out using the method with two different datasets, proving that this technique effectively improves the accuracy of vehicle detection in the target domain.
Keywords: deep learning, cross-domain vehicle detection
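The image-space part of the approach above pairs adversarial objectives with a cycle consistency loss for style transfer between domains. Below is a hedged PyTorch sketch of the cycle consistency term only; `G` (source to target) and `F` (target to source) stand in for generator networks that such a setup would train jointly with discriminators, which are omitted here, and the loss weight is illustrative.

```python
# Cycle-consistency term for image-space domain adaptation, assuming PyTorch.
# G maps source-domain images to the target style, F maps them back; both are
# placeholders for the generators a CycleGAN-style setup would train.
import torch
import torch.nn as nn

def cycle_consistency_loss(G: nn.Module, F: nn.Module,
                           src: torch.Tensor, tgt: torch.Tensor,
                           weight: float = 10.0) -> torch.Tensor:
    """L1 penalty that a round trip through both generators reproduces the input image."""
    l1 = nn.L1Loss()
    forward_cycle = l1(F(G(src)), src)    # source -> fake target -> reconstructed source
    backward_cycle = l1(G(F(tgt)), tgt)   # target -> fake source -> reconstructed target
    return weight * (forward_cycle + backward_cycle)

# Toy usage with identity "generators"; real training adds the adversarial losses and the
# feature-space alignment described in the abstract.
G = F = nn.Identity()
loss = cycle_consistency_loss(G, F, torch.randn(2, 3, 64, 64), torch.randn(2, 3, 64, 64))
```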
Vehicle Detection in Still Images by Using Boosted Local Feature Detector (Cited: 1)
14
Authors: Young-joon HAN, Hern-soo HAHN. 《Journal of Measurement Science and Instrumentation》 CAS, 2010, No. 1, pp. 41-45 (5 pages)
Vehicle detection in still images is a comparatively difficult task. This paper presents a method for this task using a boosted local pattern detector constructed from two local features, namely Haar-like and oriented gradient features. The whole process is composed of three stages. In the first stage, local appearance features of vehicles and non-vehicle objects are extracted; Haar-like and oriented gradient features are extracted separately in this stage as local features. In the second stage, the AdaBoost algorithm is used to select the most discriminative features as weak detectors from the two local feature sets, and a strong local pattern detector is built by the weighted combination of these selected weak detectors. Finally, vehicle detection can be performed in still images by using the boosted strong local feature detector. Experimental results show that the local pattern detector constructed in this way combines the advantages of Haar-like and oriented gradient features, and can achieve better detection results than a detector using single Haar-like features.
Keywords: vehicle detection, still image, AdaBoost, local features
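Haar-like features, one of the two local feature types boosted above, are differences of pixel sums over adjacent rectangles, usually computed from an integral image so that each feature costs only a few table lookups. A NumPy sketch of a two-rectangle (left/right) Haar-like feature follows; it shows the mechanism only, not the exact feature pool used in the paper.

```python
# Two-rectangle Haar-like feature computed from an integral image, assuming NumPy.
import numpy as np

def integral_image(gray: np.ndarray) -> np.ndarray:
    """Summed-area table with a zero first row/column for easy rectangle sums."""
    ii = np.zeros((gray.shape[0] + 1, gray.shape[1] + 1), dtype=np.int64)
    ii[1:, 1:] = gray.astype(np.int64).cumsum(axis=0).cumsum(axis=1)
    return ii

def rect_sum(ii: np.ndarray, r: int, c: int, h: int, w: int) -> int:
    """Sum of pixels in the h x w rectangle whose top-left corner is (r, c)."""
    return int(ii[r + h, c + w] - ii[r, c + w] - ii[r + h, c] + ii[r, c])

def haar_left_right(ii: np.ndarray, r: int, c: int, h: int, w: int) -> int:
    """Left half minus right half: responds to vertical edges such as a vehicle's side."""
    return rect_sum(ii, r, c, h, w // 2) - rect_sum(ii, r, c + w // 2, h, w // 2)

# AdaBoost would scan many (position, size) combinations of such features and keep
# the most discriminative ones as weak detectors.
patch = np.zeros((24, 24), dtype=np.uint8)
patch[:, :12] = 200                                           # bright left half, dark right half
print(haar_left_right(integral_image(patch), 0, 0, 24, 24))   # large positive response
```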
Real-Time Front Vehicle Detection Algorithm Based on Local Feature Tracking Method (Cited: 1)
15
Authors: Jae-hyoung YU, Young-joon HAN, Hern-soo HAHN. 《Journal of Measurement Science and Instrumentation》 CAS, 2011, No. 3, pp. 244-246 (3 pages)
This paper proposes an algorithm that extracts features of the back side of a vehicle and detects the front vehicle in real time by local feature tracking of the vehicle in continuous images. The features on the back side of the vehicle are vertical and horizontal edges, shadow and symmetry. By comparing local features using a fixed window size, the features in the continuous images are tracked. A robust and fast Haar-like mask is used for detecting vertical and horizontal edges, shadow is extracted by histogram equalization, and the sliding window method is used to compare both side templates of the detected candidates for extracting symmetry. The features used for tracking are vertical edges, and a histogram is used to compare the location of the peak and the magnitude of the edges. The method using local feature tracking in continuous images is more robust for detecting vehicles than methods using a single image. The proposed algorithm is evaluated on continuous images obtained on the expressway and downtown, and it can run in real time when applied to an embedded system.
Keywords: vehicle detection, object tracking, real-time algorithm, Haar-like, edge detection
Night Vehicle Detection Using Variable Haar-Like Feature (Cited: 1)
16
Authors: Jae-do KIM, Sang-hee KIM, Young-joon HAN, Hern-soo HAHN. 《Journal of Measurement Science and Instrumentation》 CAS, 2011, No. 4, pp. 337-340 (4 pages)
This paper proposes a night-time vehicle detection method using a variable Haar-like feature. The specific features of the front vehicle cannot be obtained in road images at night because of light reflection and ambient light, and it is also difficult to define the optimal brightness and color of the rear lamps according to road conditions. In comparison, the difference between the vehicle region and the road surface is more robust to the road illumination environment. Thus, we select vehicle candidates by analysing this difference, and verify the candidates using their brightness and complexity to detect vehicles correctly. The brightness-difference feature is detected using a variable horizontal Haar-like mask whose size depends on the vehicle size at each image location, and regions exhibiting rapid change are selected as candidates. The proposed method is evaluated by testing on various real road conditions.
Keywords: vehicle detection, variable Haar-like feature, brightness distribution analysis
Road boundary estimation to improve vehicle detection and tracking in UAV video (Cited: 1)
17
Authors: 张立业, 彭仲仁, 李立, 王华. 《Journal of Central South University》 SCIE EI CAS, 2014, No. 12, pp. 4732-4741 (10 pages)
Video processing is one challenge in collecting vehicle trajectories from an unmanned aerial vehicle (UAV), and road boundary estimation is one way to improve the video processing algorithms. However, current methods do not work well for low-volume roads, which are not well marked and contain noise such as vehicle tracks. A fusion-based method termed Dempster-Shafer-based road detection (DSRD) is proposed to address this issue. This method detects road boundaries by combining multiple information sources using Dempster-Shafer theory (DST). In order to test the performance of the proposed method, two field experiments were conducted, one on a highway partially covered by snow and another on a dense-traffic highway. The results show that DSRD is robust and accurate, with detection rates of 100% and 99.8% compared with manual detection results. Then, DSRD is adopted to improve the UAV video processing algorithm, and the vehicle detection and tracking rates are improved by 2.7% and 5.5%, respectively. The computation time also decreased by 5% and 8.3% for the two experiments, respectively.
Keywords: road boundary detection, vehicle detection and tracking, airborne video, unmanned aerial vehicle, Dempster-Shafer theory
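DSRD combines multiple road-boundary cues with Dempster-Shafer theory; the core operation is Dempster's rule of combination over mass functions. A small plain-Python sketch for two sources over the frame {road, not_road} is given below; the mass values are made-up illustrative numbers, not measurements from the paper.

```python
# Dempster's rule of combination for two mass functions, plain Python.
# Frame of discernment: {"road", "not_road"}; the full frame represents "uncertain".
# Mass values below are illustrative only.
from itertools import product

FRAME = frozenset({"road", "not_road"})

def combine(m1: dict, m2: dict) -> dict:
    combined = {}
    conflict = 0.0
    for (a, wa), (b, wb) in product(m1.items(), m2.items()):
        inter = a & b
        if inter:
            combined[inter] = combined.get(inter, 0.0) + wa * wb
        else:
            conflict += wa * wb                       # mass assigned to contradictory evidence
    return {s: w / (1.0 - conflict) for s, w in combined.items()}  # renormalize the rest

# Two cues about one image cell, e.g. a colour cue and a texture cue.
color_cue = {frozenset({"road"}): 0.6, frozenset({"not_road"}): 0.1, FRAME: 0.3}
texture_cue = {frozenset({"road"}): 0.5, frozenset({"not_road"}): 0.2, FRAME: 0.3}
print(combine(color_cue, texture_cue))   # belief in "road" strengthens after fusion (~0.76)
```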
Vision-Based On-Road Nighttime Vehicle Detection and Tracking Using Taillight and Headlight Features
18
Authors: Shahnaj Parvin, Liton Jude Rozario, Md. Ezharul Islam. 《Journal of Computer and Communications》, 2021, No. 3, pp. 29-53 (25 pages)
An important and challenging aspect of developing an intelligent transportation system is the identification of nighttime vehicles. Most accidents occur at night owing to the absence of night lighting conditions. Vehicle detection has become a vital subject of research to ensure safety and avoid accidents. A new vision-based on-road nighttime vehicle detection and tracking system is proposed in this paper using taillight and headlight features. Using computer vision and some image processing techniques, the proposed system can identify vehicles based on taillight and headlight features. For vehicle tracking, a centroid tracking algorithm is used. The Euclidean distance method is used to measure the distances between two neighboring objects and track the nearest neighbor. In the proposed system, two flexible fixed Regions of Interest (ROI) are used, a headlight ROI and a taillight ROI, which can adapt to different resolutions of the images and videos. The achievement of this research work is that the two proposed ROIs can work simultaneously in a frame to identify oncoming and preceding vehicles at night. Segmentation techniques and a double thresholding method are used to extract the red and white components from the scene to identify the vehicle headlights and taillights. To evaluate the capability of the proposed process, two types of datasets are used. Experimental findings indicate that the performance of the proposed technique is reliable and effective in distinct nighttime environments for detection and tracking of vehicles. The proposed method is able to detect and track double lights as well as single lights such as motorcycle lights, achieving an average detection accuracy of about 97.22% and an average processing time of about 0.01 s per frame.
Keywords: vehicle detection, double threshold, nighttime, headlight, taillight, vehicle tracking
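The centroid tracking step described above matches each detected light pair between frames by nearest Euclidean distance. A minimal sketch of that matching rule follows, assuming NumPy; ID management details such as disappearance counters and re-entry handling are simplified, and the distance gate is an invented example value.

```python
# Minimal centroid tracker using nearest-neighbour Euclidean matching, assuming NumPy.
# ID bookkeeping (disappearance counters, re-entry) is simplified for illustration.
import numpy as np

class CentroidTracker:
    def __init__(self, max_distance: float = 50.0):
        self.next_id = 0
        self.objects = {}                            # object id -> last known centroid
        self.max_distance = max_distance

    def update(self, centroids: np.ndarray) -> dict:
        unmatched = list(range(len(centroids)))
        for obj_id, prev in list(self.objects.items()):
            if not unmatched:
                break
            dists = [np.linalg.norm(centroids[i] - prev) for i in unmatched]
            j = int(np.argmin(dists))
            if dists[j] <= self.max_distance:        # track the nearest neighbour
                self.objects[obj_id] = centroids[unmatched.pop(j)]
        for i in unmatched:                          # newly appeared vehicles get fresh IDs
            self.objects[self.next_id] = centroids[i]
            self.next_id += 1
        return self.objects

# Frame 1: two light pairs; frame 2: both moved slightly, so their IDs are preserved.
tracker = CentroidTracker()
tracker.update(np.array([[100.0, 220.0], [400.0, 230.0]]))
print(tracker.update(np.array([[104.0, 221.0], [396.0, 229.0]])))
```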
A review of vehicle detection methods based on computer vision
19
Authors: Changxi Ma, Fansong Xue. 《Journal of Intelligent and Connected Vehicles》 EI, 2024, No. 1, pp. 1-18 (18 pages)
With the increasing number of vehicles, there has been unprecedented pressure on the operation and maintenance of intelligent transportation systems and transportation infrastructure. In order to achieve faster and more accurate identification of traffic vehicles, computer vision and deep learning technology play a vital role and have made significant advancements. This study summarizes the current research status, latest findings, and future development trends of traditional detection algorithms and deep learning-based detection algorithms. Among the detection algorithms based on deep learning, this study focuses on the representative convolutional neural network models. Specifically, it examines the two-stage and one-stage detection algorithms, which have been extensively utilized in the field of intelligent transportation systems. Compared to traditional detection algorithms, deep learning-based detection algorithms can achieve higher accuracy and efficiency. The single-stage detection algorithm is more efficient for real-time detection, while the two-stage detection algorithm is more accurate than the single-stage one. In follow-up research, it is important to consider the balance between detection efficiency and detection accuracy. Additionally, vehicle missed detection and false detection in complex scenes, such as bad weather and vehicle overlap, should be taken into account. This will ensure better application of the research findings in engineering practice.
Keywords: intelligent transportation system, computer vision, deep learning, vehicle detection, object detection algorithm
Efficient and Cost-Effective Vehicle Detection in Foggy Weather for Edge/Fog-Enabled Traffic Surveillance and Collision Avoidance Systems
20
Authors: Naeem Raza, Muhammad Asif Habib, Mudassar Ahmad, Qaisar Abbas, Mutlaq B. Aldajani, Muhammad Ahsan Latif. 《Computers, Materials & Continua》 SCIE EI, 2024, No. 10, pp. 911-931 (21 pages)
Vision-based vehicle detection in adverse weather conditions such as fog, haze, and mist is a challenging research area in the fields of autonomous vehicles, collision avoidance, and Internet of Things (IoT)-enabled edge/fog computing traffic surveillance and monitoring systems. Efficient and cost-effective vehicle detection at high accuracy and speed in foggy weather is essential to avoiding road traffic collisions in real time. To evaluate vision-based vehicle detection performance in foggy weather conditions, the state-of-the-art Vehicle Detection in Adverse Weather Nature (DAWN) and Foggy Driving (FD) datasets are self-annotated using the YOLO LABEL tool and customized to four vehicle detection classes: cars, buses, motorcycles, and trucks. The state-of-the-art single-stage deep learning algorithms YOLO-V5 and YOLO-V8 are considered for the task of vehicle detection. Furthermore, YOLO-V5s is enhanced by introducing the attention modules Convolutional Block Attention Module (CBAM), Normalization-based Attention Module (NAM), and Simple Attention Module (SimAM) after the SPPF module, as well as YOLO-V5l with BiFPN. Their vehicle detection accuracy and running speed are validated on cloud (Google Colab) and edge (local) systems. The mAP50 score of YOLO-V5n is 72.60%, YOLO-V5s is 75.20%, YOLO-V5m is 73.40%, and YOLO-V5l is 77.30%; and YOLO-V8n is 60.20%, YOLO-V8s is 73.50%, YOLO-V8m is 73.80%, and YOLO-V8l is 72.60% on the DAWN dataset. The mAP50 score of YOLO-V5n is 43.90%, YOLO-V5s is 40.10%, YOLO-V5m is 49.70%, and YOLO-V5l is 57.30%; and YOLO-V8n is 41.60%, YOLO-V8s is 46.90%, YOLO-V8m is 42.90%, and YOLO-V8l is 44.80% on the FD dataset. The vehicle detection speed of YOLO-V5n is 59 frames per second (FPS), YOLO-V5s is 47 FPS, YOLO-V5m is 38 FPS, and YOLO-V5l is 30 FPS; and YOLO-V8n is 185 FPS, YOLO-V8s is 109 FPS, YOLO-V8m is 72 FPS, and YOLO-V8l is 63 FPS on the DAWN dataset. The vehicle detection speed of YOLO-V5n is 26 FPS, YOLO-V5s is 24 FPS, YOLO-V5m is 22 FPS, and YOLO-V5l is 17 FPS; and YOLO-V8n is 313 FPS, YOLO-V8s is 182 FPS, YOLO-V8m is 99 FPS, and YOLO-V8l is 60 FPS on the FD dataset. YOLO-V5s, the YOLO-V5s variants, YOLO-V5l_BiFPN, and the YOLO-V8 algorithms are an efficient and cost-effective solution for real-time vision-based vehicle detection in foggy weather.
Keywords: vehicle detection, YOLO-V5, YOLO-V5s variants, YOLO-V8, DAWN dataset, Foggy Driving dataset, IoT, cloud/edge/fog computing
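The abstract above reports mAP50 and FPS for many YOLO-V5/V8 variants on custom-annotated DAWN and FD splits. As a hedged illustration of how such numbers are typically obtained with the Ultralytics tooling (the weight file name, dataset YAML path, and image size below are assumptions, not details from the paper):

```python
# Hedged sketch of evaluating a YOLOv8 variant on a custom-annotated foggy dataset
# with the Ultralytics API; weight file, dataset YAML and image size are assumptions.
from ultralytics import YOLO

model = YOLO("yolov8s.pt")                       # pretrained small variant as a starting point

# Validate on a YOLO-format dataset description (e.g. self-annotated DAWN with the four
# classes cars, buses, motorcycles, trucks); reports mAP50 among other metrics.
metrics = model.val(data="dawn_vehicles.yaml", imgsz=640)
print(metrics.box.map50)

# Rough speed check: per-image timing in milliseconds is reported in results[0].speed.
results = model.predict(source="foggy_example.jpg", imgsz=640)
print(results[0].speed)
```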