Journal Articles
23 articles found
1. A Real-Time Small Target Vehicle Detection Algorithm with an Improved YOLOv5m Network Model
Authors: Yaoyao Du, Xiangkui Jiang. 《Computers, Materials & Continua》 (SCIE/EI), 2024, Issue 1, pp. 303-327 (25 pages)
To address the challenges of high complexity, poor real-time performance, and low detection rates for small target vehicles in existing vehicle object detection algorithms, this paper proposes a real-time lightweight architecture based on You Only Look Once (YOLO) v5m. Firstly, a lightweight upsampling operator called Content-Aware Reassembly of Features (CARAFE) is introduced in the feature fusion layer of the network to maximize the extraction of deep-level features for small target vehicles, reducing the missed detection rate and false detection rate. Secondly, a new prediction layer for tiny targets is added, and the feature fusion network is redesigned to enhance the detection capability for small targets. Finally, this paper applies L1 regularization to train the improved network, followed by pruning and fine-tuning operations to remove redundant channels, reducing computational and parameter complexity and enhancing the detection efficiency of the network. Training is conducted on the VisDrone2019-DET dataset. The experimental results show that the proposed algorithm reduces parameters and computation by 63.8% and 65.8%, respectively. The average detection accuracy improves by 5.15%, and the detection speed reaches 47 images per second, satisfying real-time requirements. Compared with existing approaches, including YOLOv5m and classical vehicle detection algorithms, our method achieves higher accuracy and faster speed for real-time detection of small target vehicles in edge computing.
Keywords: vehicle detection; YOLOv5m; small target; channel pruning; CARAFE
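The abstract does not give implementation details for the L1 regularization and channel pruning step; a common recipe for this ("network slimming") puts an L1 penalty on the BatchNorm scale factors during training and then prunes channels whose factors fall below a global threshold. A minimal PyTorch sketch of that recipe follows; the penalty weight `l1_lambda` and the pruning ratio are illustrative assumptions, not values from the paper.

```python
# Minimal sketch of L1-sparsity training on BatchNorm scale factors,
# the usual "network slimming" recipe for channel pruning.
# l1_lambda and prune_ratio are illustrative assumptions, not the paper's values.
import torch
import torch.nn as nn

def add_bn_l1_grad(model: nn.Module, l1_lambda: float = 1e-4) -> None:
    """Add the subgradient of l1_lambda * |gamma| to every BN weight.

    Call this after loss.backward() and before optimizer.step().
    """
    for m in model.modules():
        if isinstance(m, nn.BatchNorm2d):
            m.weight.grad.add_(l1_lambda * torch.sign(m.weight.data))

def select_channels(model: nn.Module, prune_ratio: float = 0.5):
    """Return a global gamma threshold and a per-BN keep mask."""
    gammas = torch.cat([m.weight.data.abs().flatten()
                        for m in model.modules()
                        if isinstance(m, nn.BatchNorm2d)])
    threshold = gammas.sort().values[int(len(gammas) * prune_ratio)]
    masks = {name: (m.weight.data.abs() > threshold)
             for name, m in model.named_modules()
             if isinstance(m, nn.BatchNorm2d)}
    return threshold, masks
```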
2. A New Vehicle Detection Framework Based on Feature-Guided in the Road Scene
Authors: Tianmin Deng, Xiyue Zhang, Xinxin Cheng. 《Computers, Materials & Continua》 (SCIE/EI), 2024, Issue 1, pp. 533-549 (17 pages)
Vehicle detection plays a crucial role in the field of autonomous driving technology. However, directly applying deep learning-based object detection algorithms to complex road scene images often leads to subpar performance and slow inference speeds in vehicle detection. Achieving a balance between accuracy and detection speed is crucial for real-time object detection in real-world road scenes. This paper proposes a high-precision and fast vehicle detector called the feature-guided bidirectional pyramid network (FBPN). Firstly, to tackle challenges like vehicle occlusion and significant background interference, the efficient feature filtering module (EFFM) is introduced into the deep network, which amplifies the disparities between the features of the vehicle and the background. Secondly, the proposed global attention localization module (GALM) in the model neck effectively perceives the detailed position information of the target, improving both the accuracy and inference speed of the model. Finally, the detection accuracy of small-scale vehicles is further enhanced through the utilization of a four-layer feature pyramid structure. Experimental results show that FBPN achieves an average precision of 60.8% and 97.8% on the BDD100K and KITTI datasets, respectively, with inference speeds reaching 344.83 frames/s and 357.14 frames/s. FBPN demonstrates its effectiveness and superiority by striking a balance between detection accuracy and inference speed, outperforming several state-of-the-art methods.
Keywords: driverless car; vehicle detection; channel attention mechanism; deep learning
3. 3D Vehicle Detection Algorithm Based on Multimodal Decision-Level Fusion
Authors: Peicheng Shi, Heng Qi, Zhiqiang Liu, Aixi Yang. 《Computer Modeling in Engineering & Sciences》 (SCIE/EI), 2023, Issue 6, pp. 2007-2023 (17 pages)
3D vehicle detection based on LiDAR-camera fusion is becoming an emerging research topic in autonomous driving. The algorithm based on the Camera-LiDAR object candidate fusion method (CLOCs) is currently considered to be a more effective decision-level fusion algorithm, but it does not fully utilize the extracted 3D and 2D features. Therefore, we propose a 3D vehicle detection algorithm based on multimodal decision-level fusion. First, the anchor point of the 3D detection bounding box is projected into the 2D image, the distance between the 2D and 3D anchor points is calculated, and this distance is used as a new fusion feature to enhance the feature redundancy of the network. Subsequently, an attention module, squeeze-and-excitation networks, is added to weight each feature channel, enhancing the important features of the network and suppressing useless features. The experimental results show that the mean average precision of the algorithm on the KITTI dataset is 82.96%, which outperforms previous state-of-the-art multimodal fusion-based methods, and the average accuracy on the Easy, Moderate and Hard evaluation indicators reaches 88.96%, 82.60%, and 77.31%, respectively, which are higher than the original CLOCs model by 1.02%, 2.29%, and 0.41%, respectively. Compared with the original CLOCs algorithm, our algorithm has higher accuracy and better performance in 3D vehicle detection.
Keywords: 3D vehicle detection; multimodal fusion; CLOCs; network structure optimization; attention module
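As a rough illustration of the distance feature described above, the sketch below projects a 3D box anchor into the image with a pinhole/KITTI-style 3x4 projection matrix and measures its pixel distance to a 2D box center. The matrix values and box coordinates are placeholders, not data from the paper.

```python
# Sketch of the 2D/3D anchor-distance feature: project a 3D detection's
# anchor (e.g., box center) into the image and measure its pixel distance
# to a 2D detection's anchor. P is a placeholder 3x4 camera projection
# matrix in the usual KITTI convention.
import numpy as np

def project_to_image(point_3d: np.ndarray, P: np.ndarray) -> np.ndarray:
    """Project one 3D point (x, y, z) in camera coordinates to pixel (u, v)."""
    homo = np.append(point_3d, 1.0)          # homogeneous coordinates
    uvw = P @ homo
    return uvw[:2] / uvw[2]

def anchor_distance(box3d_center, box2d, P) -> float:
    """Pixel distance between the projected 3D center and the 2D box center."""
    u, v = project_to_image(np.asarray(box3d_center, dtype=float), P)
    x1, y1, x2, y2 = box2d
    cx, cy = (x1 + x2) / 2.0, (y1 + y2) / 2.0
    return float(np.hypot(u - cx, v - cy))

# Example with placeholder calibration and boxes
P = np.array([[721.5, 0.0, 609.6, 44.9],
              [0.0, 721.5, 172.9, 0.2],
              [0.0, 0.0, 1.0, 0.003]])
print(anchor_distance((1.8, 1.5, 20.0), (600, 150, 700, 220), P))
```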
4. Optimal Deep Convolutional Neural Network for Vehicle Detection in Remote Sensing Images
Authors: Saeed Masoud Alshahrani, Saud S. Alotaibi, Shaha Al-Otaibi, Mohamed Mousa, Anwer Mustafa Hilal, Amgad Atta Abdelmageed, Abdelwahed Motwakel, Mohamed I. Eldesouki. 《Computers, Materials & Continua》 (SCIE/EI), 2023, Issue 2, pp. 3117-3131 (15 pages)
Object detection (OD) in remote sensing images (RSI) acts as a vital part in numerous civilian and military application areas, like urban planning, geographic information systems (GIS), and search and rescue functions. Vehicle recognition from RSIs remains a challenging process because of the difficulty of background data and the redundancy of recognition regions. The latest advancements in deep learning (DL) approaches permit the design of effectual OD approaches. This study develops an Artificial Ecosystem Optimizer with Deep Convolutional Neural Network for Vehicle Detection (AEODCNN-VD) model on remote sensing images. The proposed AEODCNN-VD model focuses on the identification of vehicles accurately and rapidly. To detect vehicles, the presented AEODCNN-VD model employs a single shot detector (SSD) with an Inception network as a baseline model. In addition, a Multiway Feature Pyramid Network (MFPN) is used for handling objects of varying sizes in RSIs. The features from the Inception model are passed into the MFPN for multiway and multiscale feature fusion. Finally, the fused features are passed into bounding box and class prediction networks. For enhancing the detection efficiency of the AEODCNN-VD approach, an AEO-based hyperparameter optimizer is used, which is stimulated by energy transfer strategies such as production, consumption, and decomposition in an ecosystem. The performance validation of the presented method on benchmark datasets showed promising performance over recent DL models.
Keywords: object detection; remote sensing; vehicle detection; artificial ecosystem optimizer; convolutional neural network
5. Pedestrian and Vehicle Detection Based on Pruning YOLOv4 with Cloud-Edge Collaboration
Authors: Huabin Wang, Ruichao Mo, Yuping Chen, Weiwei Lin, Minxian Xu, Bo Liu. 《Computer Modeling in Engineering & Sciences》 (SCIE/EI), 2023, Issue 11, pp. 2025-2047 (23 pages)
Nowadays, the rapid development of edge computing has driven an increasing number of deep learning applications deployed at the edge of the network, such as pedestrian and vehicle detection, to provide efficient intelligent services to mobile users. However, as accuracy requirements continue to increase, the components of deep learning models for pedestrian and vehicle detection, such as YOLOv4, become more sophisticated and the computing resources required for model training increase dramatically, which in turn leads to significant challenges in achieving effective deployment on resource-constrained edge devices while ensuring high accuracy. To address this challenge, a cloud-edge collaboration-based pedestrian and vehicle detection framework is proposed in this paper, which enables sufficient training of models by utilizing the abundant computing resources in the cloud and then deploying the well-trained models on edge devices, thus reducing the computing resource requirements for model training on edge devices. Furthermore, to reduce the size of the model deployed on edge devices, an automatic pruning method that combines the convolution layer and the BN layer is proposed to compress the pedestrian and vehicle detection model. Experimental results show that the proposed framework is able to deploy the pruned model on a real edge device, Jetson TX2, with 6.72 times higher FPS. Meanwhile, channel pruning reduces the model volume and the number of parameters to 96.77%, and the amount of computation to 81.37%.
Keywords: pedestrian and vehicle detection; YOLOv4; channel pruning; cloud-edge collaboration
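The abstract states that the pruning criterion combines the convolution layer with its BN layer but does not spell out the formula; one plausible combined importance score, shown below as a hedged sketch, scales the per-channel L1 norm of the convolution kernel by the absolute BN scale factor and prunes the lowest-scoring channels.

```python
# Hedged sketch of a conv+BN combined channel-importance score.
# The exact criterion used in the paper is not given in the abstract;
# this is one plausible choice, with an illustrative pruning ratio.
import torch
import torch.nn as nn

def combined_channel_score(conv: nn.Conv2d, bn: nn.BatchNorm2d) -> torch.Tensor:
    """One importance value per output channel of `conv`."""
    kernel_l1 = conv.weight.data.abs().sum(dim=(1, 2, 3))   # (out_channels,)
    gamma = bn.weight.data.abs()                             # (out_channels,)
    return kernel_l1 * gamma

def keep_mask(conv: nn.Conv2d, bn: nn.BatchNorm2d, prune_ratio: float = 0.3):
    score = combined_channel_score(conv, bn)
    threshold = torch.quantile(score, prune_ratio)
    return score > threshold   # True = keep this channel

# Toy usage
conv = nn.Conv2d(16, 32, 3, padding=1)
bn = nn.BatchNorm2d(32)
print(keep_mask(conv, bn).sum().item(), "of 32 channels kept")
```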
6. A Light-weight Deep Neural Network for Vehicle Detection in Complex Tunnel Environments
Authors: ZHENG Lie, REN Dandan. 《Instrumentation》, 2023, Issue 1, pp. 32-44 (13 pages)
With the rapid development of the social economy, transportation has become faster and more efficient. As an important part of goods transportation, the safe maintenance of tunnel highways has become particularly important. The maintenance of tunnel roads has become more difficult due to problems such as enclosure, narrowness and lack of light. Currently, target detection methods are advantageous in detecting tunnel vehicles in a timely manner through monitoring. Therefore, in order to prevent vehicle misdetection and missed detection in this complex environment, we propose a YOLOv5-Vehicle model based on the YOLOv5 network. This model is improved in three ways. Firstly, the backbone network of YOLOv5 is replaced by the lightweight MobileNetV3 network to extract features, which reduces the number of model parameters; next, all convolutions in the neck module are replaced with depth-wise separable convolutions to further reduce the number of model parameters and computation and to improve the detection speed of the model; finally, to ensure the accuracy of the model, the CBAM attention mechanism is introduced to improve the detection accuracy and precision of the model. Experimental results demonstrate that the YOLOv5-Vehicle model can improve the accuracy.
Keywords: CBAM; depth-wise separable convolution; MobileNetV3; vehicle detection; YOLOv5
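For reference, a minimal PyTorch sketch of the depth-wise separable convolution block mentioned above (a per-channel 3x3 convolution followed by a 1x1 point-wise convolution); the channel sizes and activation are illustrative, not the paper's exact configuration.

```python
# Minimal depth-wise separable convolution block: depth-wise 3x3 + point-wise 1x1.
# Channel sizes and the SiLU activation are illustrative choices.
import torch
import torch.nn as nn

class DepthwiseSeparableConv(nn.Module):
    def __init__(self, in_ch: int, out_ch: int, stride: int = 1):
        super().__init__()
        self.depthwise = nn.Conv2d(in_ch, in_ch, kernel_size=3, stride=stride,
                                   padding=1, groups=in_ch, bias=False)
        self.pointwise = nn.Conv2d(in_ch, out_ch, kernel_size=1, bias=False)
        self.bn = nn.BatchNorm2d(out_ch)
        self.act = nn.SiLU()

    def forward(self, x):
        return self.act(self.bn(self.pointwise(self.depthwise(x))))

# A 256-channel feature map: parameter count drops versus a plain 3x3 conv.
x = torch.randn(1, 256, 40, 40)
print(DepthwiseSeparableConv(256, 256)(x).shape)
```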
7. Vehicle Detection Based on Visual Saliency and Deep Sparse Convolution Hierarchical Model (Cited: 4)
Authors: CAI Yingfeng, WANG Hai, CHEN Xiaobo, GAO Li, CHEN Long. 《Chinese Journal of Mechanical Engineering》 (SCIE/EI/CAS/CSCD), 2016, Issue 4, pp. 765-772 (8 pages)
Traditional vehicle detection algorithms use traverse-search-based vehicle candidate generation and hand-crafted classifier training for vehicle candidate verification. These types of methods generally have high processing times and low vehicle detection performance. To address this issue, a vehicle detection algorithm based on visual saliency and a deep sparse convolution hierarchical model is proposed. A visual saliency calculation is first used to generate a small vehicle candidate area. The vehicle candidate sub-images are then loaded into a sparse deep convolution hierarchical model with an SVM-based classifier to perform the final detection. The experimental results demonstrate that the proposed method achieves a 94.81% correct rate and a 0.78% false detection rate on the existing datasets and the real road pictures captured by our group, which outperforms the existing state-of-the-art algorithms. More importantly, highly discriminative multi-scale features are generated by the deep sparse convolution network, which has broad application prospects for target recognition in the field of intelligent vehicles.
Keywords: vehicle detection; visual saliency; deep model; convolutional neural network
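The verification stage described above classifies features of candidate sub-images with an SVM; the sketch below shows that step with scikit-learn on synthetic feature vectors standing in for the deep sparse convolution features.

```python
# Sketch of the SVM-based verification stage: classify per-candidate feature
# vectors as vehicle vs. background. Synthetic data stands in for the deep
# sparse convolution features extracted from candidate sub-images.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.svm import LinearSVC

# 1000 candidates, 512-D descriptors, label 1 = vehicle, 0 = background
X, y = make_classification(n_samples=1000, n_features=512,
                           n_informative=40, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)

clf = LinearSVC(C=1.0, max_iter=5000).fit(X_tr, y_tr)
print("verification accuracy on held-out candidates:", clf.score(X_te, y_te))
```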
8. Image and Feature Space Based Domain Adaptation for Vehicle Detection (Cited: 1)
Authors: Ying Tian, Libing Wang, Hexin Gu, Lin Fan. 《Computers, Materials & Continua》 (SCIE/EI), 2020, Issue 12, pp. 2397-2412 (16 pages)
The application of deep learning in the field of object detection has experienced much progress. However, due to the domain shift problem, applying an off-the-shelf detector to another domain leads to a significant performance drop. A large number of ground truth labels are required when using another domain to train models, demanding a large amount of human and financial resources. In order to avoid excessive resource requirements and the performance drop caused by domain shift, this paper proposes a new domain adaptive approach to cross-domain vehicle detection. Our approach improves the cross-domain vehicle detection model in both image space and feature space. We employ the objectives of the generative adversarial network and a cycle consistency loss for image style transfer in image space. For feature space, we align feature distributions between the source domain and the target domain to improve the detection accuracy. Experiments are carried out with two different datasets, proving that this technique effectively improves the accuracy of vehicle detection in the target domain.
Keywords: deep learning; cross-domain vehicle detection
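For the image-space part described above, a minimal sketch of a cycle-consistency objective is shown below: generators G (source to target) and F (target to source) should reconstruct their inputs after a round trip. The toy generators and the weight `lambda_cyc` are assumptions for illustration only.

```python
# Sketch of the cycle-consistency objective used for image-space style
# transfer between domains: both round trips should reconstruct the input.
import torch
import torch.nn as nn

def cycle_consistency_loss(G: nn.Module, F: nn.Module,
                           x_src: torch.Tensor, y_tgt: torch.Tensor,
                           lambda_cyc: float = 10.0) -> torch.Tensor:
    l1 = nn.L1Loss()
    forward_cycle = l1(F(G(x_src)), x_src)   # src -> tgt -> src
    backward_cycle = l1(G(F(y_tgt)), y_tgt)  # tgt -> src -> tgt
    return lambda_cyc * (forward_cycle + backward_cycle)

# Toy 1x1-conv "generators" just to show the call signature
G = nn.Conv2d(3, 3, 1)
F = nn.Conv2d(3, 3, 1)
x = torch.randn(2, 3, 64, 64)
y = torch.randn(2, 3, 64, 64)
print(cycle_consistency_loss(G, F, x, y).item())
```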
9. Vision-Based On-Road Nighttime Vehicle Detection and Tracking Using Taillight and Headlight Features
Authors: Shahnaj Parvin, Liton Jude Rozario, Md. Ezharul Islam. 《Journal of Computer and Communications》, 2021, Issue 3, pp. 29-53 (25 pages)
An important and challenging aspect of developing an intelligent transportation system is the identification of nighttime vehicles. Most accidents occur at night owing to the absence of night lighting conditions. Vehicle detection has become a vital subject for research to ensure safety and avoid accidents. A new vision-based on-road nighttime vehicle detection and tracking system is suggested in this paper using taillight and headlight features. Using computer vision and some image processing techniques, the proposed system can identify vehicles based on taillight and headlight features. For vehicle tracking, a centroid tracking algorithm has been used. The Euclidean distance method has been used for measuring the distances between two neighboring objects and tracking the nearest neighbor. In the proposed system, two flexible fixed Regions of Interest (ROIs) have been used, one being the headlight ROI and the other the taillight ROI, which can adapt to different resolutions of images and videos. The achievement of this research work is that the proposed two ROIs can work simultaneously in a frame to identify oncoming and preceding vehicles at night. Segmentation techniques and a double thresholding method have been used to extract the red and white components from the scene to identify the vehicle headlights and taillights. To evaluate the capability of the proposed process, two types of datasets have been used. Experimental findings indicate that the performance of the proposed technique is reliable and effective in distinct nighttime environments for the detection and tracking of vehicles. The proposed method is able to detect and track double lights as well as single lights, such as motorcycle lights, and achieved an average accuracy and average processing time for vehicle detection of about 97.22% and 0.01 s per frame, respectively.
Keywords: vehicle detection; double threshold; nighttime; headlight; taillight; vehicle tracking
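A rough sketch of the two ingredients named above, double thresholding to isolate red taillight pixels and nearest-centroid association by Euclidean distance, is given below using OpenCV; the HSV ranges, minimum area, and matching gate are illustrative values rather than the paper's settings.

```python
# Sketch: two-range (double) HSV thresholding for red taillight pixels,
# connected-component centroids, and greedy nearest-centroid association.
# All numeric thresholds are illustrative.
import cv2
import numpy as np

def taillight_mask(frame_bgr: np.ndarray) -> np.ndarray:
    hsv = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2HSV)
    # Red wraps around the hue axis, hence the double threshold.
    low = cv2.inRange(hsv, (0, 120, 120), (10, 255, 255))
    high = cv2.inRange(hsv, (170, 120, 120), (180, 255, 255))
    return cv2.bitwise_or(low, high)

def centroids(mask: np.ndarray, min_area: int = 30) -> list:
    n, _, stats, cents = cv2.connectedComponentsWithStats(mask)
    return [tuple(cents[i]) for i in range(1, n)
            if stats[i, cv2.CC_STAT_AREA] >= min_area]

def match(prev: list, curr: list, gate: float = 40.0) -> dict:
    """Greedy nearest-neighbour association between two centroid lists."""
    pairs = {}
    for i, p in enumerate(prev):
        dists = [np.hypot(p[0] - c[0], p[1] - c[1]) for c in curr]
        if dists and min(dists) < gate:
            pairs[i] = int(np.argmin(dists))
    return pairs
```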
10. Improved YOLOv8s-Based Night Vehicle Detection
Authors: 万欣蕾, 司占军. 《印刷与数字媒体技术研究》 (CAS), 2024, Issue 4, pp. 76-85 (10 pages)
With the gradual development of automatic driving technology, people's attention is no longer limited to everyday autonomous driving target detection. In response to the difficulty of achieving fast and accurate detection of visual targets in complex nighttime autonomous driving scenes, a detection algorithm based on an improved YOLOv8s is proposed. Firstly, by adding a Triplet Attention module to the downsampling layers of the original model, the model can effectively retain and enhance feature information related to target detection on the lower-resolution feature maps. This enhancement improves the robustness of the target detection network and reduces instances of missed detection. Secondly, the Soft-NMS algorithm is introduced to address dense targets, overlapping objects, and complex scenes. This algorithm effectively reduces false and missed detections, thereby improving overall detection performance when faced with highly overlapping detection results. Finally, the MPDIoU loss function is adopted; experimental results show that, compared with the original model, the improved method increases mAP and accuracy by 2.9% and 2.8%, respectively, and achieves better detection accuracy and speed in night vehicle detection, effectively improving target detection in night scenes.
Keywords: vehicle detection; YOLOv8; attention mechanism
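A minimal NumPy sketch of linear Soft-NMS, the score-decay idea referenced above, follows; the IoU and score thresholds are illustrative.

```python
# Linear Soft-NMS sketch: instead of discarding boxes that overlap a
# higher-scoring box, their scores are decayed by (1 - IoU).
import numpy as np

def iou(box, boxes):
    x1 = np.maximum(box[0], boxes[:, 0]); y1 = np.maximum(box[1], boxes[:, 1])
    x2 = np.minimum(box[2], boxes[:, 2]); y2 = np.minimum(box[3], boxes[:, 3])
    inter = np.clip(x2 - x1, 0, None) * np.clip(y2 - y1, 0, None)
    area_a = (box[2] - box[0]) * (box[3] - box[1])
    area_b = (boxes[:, 2] - boxes[:, 0]) * (boxes[:, 3] - boxes[:, 1])
    return inter / (area_a + area_b - inter + 1e-9)

def soft_nms(boxes: np.ndarray, scores: np.ndarray,
             iou_thresh: float = 0.3, score_thresh: float = 0.001):
    boxes, scores = boxes.astype(float).copy(), scores.astype(float).copy()
    keep = []
    while len(boxes):
        best = int(np.argmax(scores))
        keep.append(boxes[best])
        ious = iou(boxes[best], boxes)
        decay = np.where(ious > iou_thresh, 1.0 - ious, 1.0)  # linear decay
        scores = scores * decay
        mask = np.ones(len(boxes), dtype=bool)
        mask[best] = False                   # the kept box leaves the pool
        mask &= scores > score_thresh        # drop boxes decayed to ~zero
        boxes, scores = boxes[mask], scores[mask]
    return np.array(keep)
```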
11. Using deep learning in an embedded system for real-time target detection based on images from an unmanned aerial vehicle: vehicle detection as a case study (Cited: 1)
Authors: Fang Huang, Shengyi Chen, Qi Wang, Yingjie Chen, Dandan Zhang. 《International Journal of Digital Earth》 (SCIE/EI), 2023, Issue 1, pp. 910-936 (27 pages)
For a majority of remote sensing applications of unmanned aerial vehicles (UAVs), the data need to be downloaded to ground devices for processing, but this procedure cannot satisfy the demands of real-time target detection. Our objective in this study is to develop a real-time system based on embedded technology for image acquisition, target detection, transmission and display of the results, and user interaction, while providing support for interactions between multiple UAVs and users. This work is divided into three parts: (1) We design the technical procedure and the framework for the implementation of a real-time target detection system according to application requirements. (2) We develop an efficient and reliable data transmission module to realize real-time cross-platform communication between airborne embedded devices and ground-side servers. (3) We optimize the YOLOv4 algorithm by using the K-Means algorithm and TensorRT inference to improve the accuracy and speed on the NVIDIA Jetson TX2. In experiments involving static detection, the system had an overall confidence of 89.6% and a missed detection rate of 3.8%; in experiments involving dynamic detection, it had an overall confidence and a missed detection rate of 88.2% and 4.6%, respectively.
Keywords: unmanned aerial vehicle (UAV); embedded system; deep learning; YOLOv4 algorithm; data transmission; vehicle detection
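The K-Means step mentioned above is typically used to re-estimate anchor boxes under a 1 - IoU distance; a hedged NumPy sketch of that procedure is shown below with synthetic box sizes (k = 9 mirrors YOLOv4's anchor count).

```python
# Sketch of anchor-box clustering with K-Means under a 1 - IoU distance.
# Box inputs are (width, height) pairs; the data here are synthetic.
import numpy as np

def wh_iou(boxes: np.ndarray, anchors: np.ndarray) -> np.ndarray:
    """IoU between width/height pairs, as if all boxes shared one corner."""
    inter = np.minimum(boxes[:, None, 0], anchors[None, :, 0]) * \
            np.minimum(boxes[:, None, 1], anchors[None, :, 1])
    union = boxes[:, 0:1] * boxes[:, 1:2] + \
            anchors[None, :, 0] * anchors[None, :, 1] - inter
    return inter / (union + 1e-9)

def kmeans_anchors(boxes: np.ndarray, k: int = 9, iters: int = 100,
                   seed: int = 0) -> np.ndarray:
    rng = np.random.default_rng(seed)
    anchors = boxes[rng.choice(len(boxes), k, replace=False)].astype(float)
    for _ in range(iters):
        assign = np.argmax(wh_iou(boxes, anchors), axis=1)   # nearest anchor
        for j in range(k):
            members = boxes[assign == j]
            if len(members):
                anchors[j] = members.mean(axis=0)            # median is also common
    return anchors[np.argsort(anchors.prod(axis=1))]

boxes = np.abs(np.random.default_rng(1).normal(60, 25, size=(500, 2)))
print(kmeans_anchors(boxes))
```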
12. Semantic Segmentation and YOLO Detector over Aerial Vehicle Images
Authors: Asifa Mehmood Qureshi, Abdul Haleem Butt, Abdulwahab Alazeb, Naif Al Mudawi, Mohammad Alonazi, Nouf Abdullah Almujally, Ahmad Jalal, Hui Liu. 《Computers, Materials & Continua》 (SCIE/EI), 2024, Issue 8, pp. 3315-3332 (18 pages)
Intelligent vehicle tracking and detection are crucial tasks in the realm of highway management. However, vehicles come in a range of sizes, which are challenging to detect, affecting the traffic monitoring system's overall accuracy. Deep learning is considered to be an efficient method for object detection in vision-based systems. In this paper, we propose a vision-based vehicle detection and tracking system based on a You Only Look Once version 5 (YOLOv5) detector combined with a segmentation technique. The model consists of six steps. In the first step, all the extracted traffic sequence images are subjected to pre-processing to remove noise and enhance the contrast level of the images. These pre-processed images are segmented by labelling each pixel to extract the uniform regions that aid the detection phase. The single-stage detector YOLOv5 is used to detect and locate vehicles in images. Each detection is subjected to Speeded-Up Robust Features (SURF) extraction to track multiple vehicles. Based on this, a unique number is assigned to each vehicle so that it can easily be located in succeeding image frames using feature matching. Further, we implement a Kalman filter to track multiple vehicles. In the end, the vehicle path is estimated by using the centroid points of the rectangular bounding box predicted by the tracking algorithm. The experimental results and comparison reveal that our proposed vehicle detection and tracking system outperforms other state-of-the-art systems. The proposed system provides 94.1% detection precision on the Roundabout dataset and 96.1% detection precision on the Vehicle Aerial Imaging from Drone (VAID) dataset, respectively.
Keywords: semantic segmentation; YOLOv5; vehicle detection and tracking; Kalman filter; SURF
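The Kalman filter step above is commonly realized as a constant-velocity filter over each vehicle's centroid; the sketch below is such a filter with illustrative noise settings, not the paper's tuned values.

```python
# Constant-velocity Kalman filter over a vehicle centroid.
# State = (x, y, vx, vy), measurement = (x, y); noise magnitudes are illustrative.
import numpy as np

class CentroidKalman:
    def __init__(self, x: float, y: float, dt: float = 1.0):
        self.state = np.array([x, y, 0.0, 0.0])
        self.P = np.eye(4) * 10.0                       # state covariance
        self.F = np.array([[1, 0, dt, 0], [0, 1, 0, dt],
                           [0, 0, 1, 0], [0, 0, 0, 1]], float)
        self.H = np.array([[1, 0, 0, 0], [0, 1, 0, 0]], float)
        self.Q = np.eye(4) * 0.01                       # process noise
        self.R = np.eye(2) * 1.0                        # measurement noise

    def predict(self) -> np.ndarray:
        self.state = self.F @ self.state
        self.P = self.F @ self.P @ self.F.T + self.Q
        return self.state[:2]

    def update(self, zx: float, zy: float) -> None:
        z = np.array([zx, zy])
        y = z - self.H @ self.state                     # innovation
        S = self.H @ self.P @ self.H.T + self.R
        K = self.P @ self.H.T @ np.linalg.inv(S)        # Kalman gain
        self.state = self.state + K @ y
        self.P = (np.eye(4) - K @ self.H) @ self.P

kf = CentroidKalman(320, 240)
kf.predict(); kf.update(324, 243)
print(kf.state[:2])
```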
13. 3D Vehicle Detection Based on LiDAR and Camera Fusion (Cited: 2)
Authors: Yingfeng Cai, Tiantian Zhang, Hai Wang, Yicheng Li, Qingchao Liu, Xiaobo Chen. 《Automotive Innovation》 (EI/CSCD), 2019, Issue 4, pp. 276-283 (8 pages)
Nowadays, deep learning for object detection has become more popular and is widely adopted in many fields. This paper focuses on LiDAR and camera sensor fusion technology for vehicle detection to ensure extremely high detection accuracy. The proposed network architecture takes full advantage of the deep information of both the LiDAR point cloud and the RGB image in object detection. First, the LiDAR point cloud and RGB image are fed into the system. Then a high-resolution feature map is used to generate reliable 3D object proposals for both the LiDAR point cloud and the RGB image. Finally, 3D box regression is performed to predict the extent and orientation of vehicles in 3D space. Experiments on the challenging KITTI benchmark show that the proposed approach obtains ideal detection results, and the detection time of each frame is about 0.12 s. This approach could establish a basis for further research on autonomous vehicles.
Keywords: vehicle detection; LiDAR point cloud; RGB image; fusion
14. A framework for cloned vehicle detection (Cited: 1)
Authors: Minxi Li, Jiali Mao, Xiaodong Qi, Cheqing Jin. 《Frontiers of Computer Science》 (SCIE/EI/CSCD), 2020, Issue 5, pp. 181-198 (18 pages)
Rampant cloned vehicle offenses have caused great damage to transportation management as well as public safety and even the world economy. This necessitates an efficient detection mechanism to identify vehicles with fake license plates accurately and to further explore the motives by discerning the behaviors of cloned vehicles. The ubiquitous inspection spots deployed in cities have been collecting the moving information of passing vehicles, which opens up a new opportunity for cloned vehicle detection. Existing detection methods cannot detect cloned vehicles effectively because they use a fixed speed threshold. In this paper, we propose a two-phase framework, called CVDF, to detect cloned vehicles and discriminate the behavior patterns of vehicles that use the same plate number. In the detection phase, cloned vehicles are identified based on speed thresholds extracted from historical trajectories and behavior abnormality analysis within the local neighborhood. In the behavior analysis phase, considering that the traces of vehicles using the same license plate are mixed together, we aim to differentiate the trajectories through matching-degree-based clustering and then extract frequent temporal behavior patterns. The experimental results on real-world data show that the CVDF framework has high detection precision and can reveal cloned vehicles' behavior effectively. Our proposal provides a scientific basis for traffic management authorities to address the crime of vehicle cloning.
Keywords: cloned vehicle detection; object identification; behavior pattern mining
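A minimal sketch of the basic speed-based check behind cloned-vehicle detection follows: two sightings of the same plate whose implied travel speed is physically implausible are flagged. The haversine distance and the fixed threshold here are generic stand-ins for the paper's trajectory-derived, per-road thresholds.

```python
# Flag a plate as possibly cloned when consecutive sightings at different
# inspection spots imply an unrealistic travel speed. The fixed 150 km/h
# threshold is a placeholder for thresholds learned from historical trajectories.
from math import radians, sin, cos, asin, sqrt

def haversine_km(lat1, lon1, lat2, lon2) -> float:
    dlat, dlon = radians(lat2 - lat1), radians(lon2 - lon1)
    a = sin(dlat / 2) ** 2 + cos(radians(lat1)) * cos(radians(lat2)) * sin(dlon / 2) ** 2
    return 2 * 6371.0 * asin(sqrt(a))

def flag_cloned(records, speed_threshold_kmh: float = 150.0):
    """records: list of (timestamp_seconds, lat, lon) for one plate, time-sorted."""
    suspicious = []
    for (t1, la1, lo1), (t2, la2, lo2) in zip(records, records[1:]):
        hours = max((t2 - t1) / 3600.0, 1e-6)
        speed = haversine_km(la1, lo1, la2, lo2) / hours
        if speed > speed_threshold_kmh:
            suspicious.append((t1, t2, round(speed, 1)))
    return suspicious

records = [(0, 23.13, 113.26), (600, 23.35, 113.60)]   # two spots, 10 minutes apart
print(flag_cloned(records))                            # implied speed ~ 250 km/h
```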
15. FIR-YOLACT: Fusion of ICIoU and Res2Net for YOLACT on Real-Time Vehicle Instance Segmentation
Authors: Wen Dong, Ziyan Liu, Mo Yang, Ying Wu. 《Computers, Materials & Continua》 (SCIE/EI), 2023, Issue 12, pp. 3551-3572 (22 pages)
Autonomous driving technology has made many outstanding achievements with deep learning, and the vehicle detection and classification algorithm has become one of the critical technologies of autonomous driving systems. Vehicle instance segmentation can perform instance-level semantic parsing of vehicle information, which is more accurate and reliable than object detection. However, existing instance segmentation algorithms still suffer from poor mask prediction accuracy and low detection speed. Therefore, this paper proposes an advanced real-time instance segmentation model named FIR-YOLACT, which fuses ICIoU (Improved Complete Intersection over Union) and Res2Net into the YOLACT algorithm. Specifically, the ICIoU function can effectively solve the degradation problem of the original CIoU loss function and improve the training convergence speed and detection accuracy. The Res2Net module, fused with the ECA (Efficient Channel Attention) Net, is added to the model's backbone network, which improves the multi-scale detection capability and mask prediction accuracy. Furthermore, the Cluster NMS (Non-Maximum Suppression) algorithm is introduced in the model's bounding box regression to enhance the performance of detecting similarly occluded objects. The experimental results demonstrate the superiority of FIR-YOLACT over the baseline methods and the effectiveness of all components. The processing speed reaches 28 FPS, which meets the demands of real-time vehicle instance segmentation.
Keywords: instance segmentation; real-time vehicle detection; YOLACT; Res2Net; ICIoU
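ICIoU is the paper's improved variant and its exact formulation is not given in the abstract; as background, the sketch below implements the standard CIoU loss it builds on (IoU minus a center-distance penalty and an aspect-ratio consistency term).

```python
# Standard CIoU loss sketch (the baseline that ICIoU modifies):
# 1 - IoU + center-distance penalty + aspect-ratio term.
import math
import torch

def ciou_loss(pred: torch.Tensor, target: torch.Tensor) -> torch.Tensor:
    """pred, target: (N, 4) boxes as (x1, y1, x2, y2). Returns mean 1 - CIoU."""
    px1, py1, px2, py2 = pred.unbind(-1)
    tx1, ty1, tx2, ty2 = target.unbind(-1)

    inter = (torch.min(px2, tx2) - torch.max(px1, tx1)).clamp(0) * \
            (torch.min(py2, ty2) - torch.max(py1, ty1)).clamp(0)
    union = (px2 - px1) * (py2 - py1) + (tx2 - tx1) * (ty2 - ty1) - inter
    iou = inter / (union + 1e-9)

    # squared distance between box centers
    rho2 = ((px1 + px2 - tx1 - tx2) ** 2 + (py1 + py2 - ty1 - ty2) ** 2) / 4
    # squared diagonal of the smallest enclosing box
    c2 = (torch.max(px2, tx2) - torch.min(px1, tx1)) ** 2 + \
         (torch.max(py2, ty2) - torch.min(py1, ty1)) ** 2 + 1e-9

    v = (4 / math.pi ** 2) * (torch.atan((tx2 - tx1) / (ty2 - ty1 + 1e-9)) -
                              torch.atan((px2 - px1) / (py2 - py1 + 1e-9))) ** 2
    alpha = v / (1 - iou + v + 1e-9)
    return (1 - (iou - rho2 / c2 - alpha * v)).mean()

print(ciou_loss(torch.tensor([[10., 10., 50., 60.]]),
                torch.tensor([[12., 14., 48., 58.]])))
```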
16. Patch-based vehicle logo detection with patch intensity and weight matrix (Cited: 3)
Authors: 刘海明, 黄樟灿, Ahmed Mahgoub Ahmed Talab. 《Journal of Central South University》 (SCIE/EI/CAS/CSCD), 2015, Issue 12, pp. 4679-4686 (8 pages)
A patch-based method for detecting vehicle logos using prior knowledge is proposed. By representing the coarse region of the logo with a weight matrix of patch intensity and position, the proposed method is robust to bad and complex environmental conditions. The bounding box of the logo is extracted by a thresholding approach. Experimental results show that 93.58% location accuracy is achieved on 1100 images under various environmental conditions, indicating that the proposed method is effective and suitable for locating vehicle logos in practical applications.
Keywords: vehicle logo detection; prior knowledge; gradient extraction; patch intensity; weight matrix; background removal
17. GDMNet: A Unified Multi-Task Network for Panoptic Driving Perception
Authors: Yunxiang Liu, Haili Ma, Jianlin Zhu, Qiangbo Zhang. 《Computers, Materials & Continua》 (SCIE/EI), 2024, Issue 8, pp. 2963-2978 (16 pages)
To enhance the efficiency and accuracy of environmental perception for autonomous vehicles, we propose GDMNet, a unified multi-task perception network for autonomous driving capable of performing drivable area segmentation, lane detection, and traffic object detection. Firstly, in the encoding stage, features are extracted, and the Generalized Efficient Layer Aggregation Network (GELAN) is utilized to enhance feature extraction and gradient flow. Secondly, in the decoding stage, specialized detection heads are designed: the drivable area segmentation head employs DySample to expand feature maps, and the lane detection head merges early-stage features and processes the output through the Focal Modulation Network (FMN). Lastly, the Minimum Point Distance IoU (MPDIoU) loss function is employed to compute the matching degree between traffic object detection boxes and predicted boxes, facilitating model training adjustments. Experimental results on the BDD100K dataset demonstrate that the proposed network achieves a drivable area segmentation mean intersection over union (mIoU) of 92.2%, lane detection accuracy and intersection over union (IoU) of 75.3% and 26.4%, respectively, and traffic object detection recall and mAP of 89.7% and 78.2%, respectively. The detection performance surpasses that of other single-task or multi-task algorithm models.
Keywords: autonomous driving; multitask learning; drivable area segmentation; lane detection; vehicle detection
18. A Real-Time Multi-Vehicle Tracking Framework in Intelligent Vehicular Networks (Cited: 1)
Authors: Huiyuan Fu, Jun Guan, Feng Jing, Chuanming Wang, Huadong Ma. 《China Communications》 (SCIE/CSCD), 2021, Issue 6, pp. 89-99 (11 pages)
In this paper, we provide a new approach for intelligent traffic transportation in intelligent vehicular networks, which aims at collecting vehicles' locations, trajectories, and other key driving parameters for the time-critical requirements of autonomous driving. The key of our method is a multi-vehicle tracking framework for the traffic monitoring scenario. The proposed framework is composed of three modules: multi-vehicle detection, multi-vehicle association, and miss-detected vehicle tracking. For the first module, we integrate a self-attention mechanism into a detector based on key point estimation for a better detection effect. For the second module, we apply multi-dimensional information for robustness, including vehicle re-identification (Re-ID) features, historical trajectory information, and spatial position information. For the third module, we re-track the miss-detected vehicles with occlusions from the first detection module. Besides, we utilize asymmetric convolution and depth-wise separable convolution to reduce the model's parameters for speed-up. Extensive experimental results show the effectiveness of our proposed multi-vehicle tracking framework.
Keywords: multiple object tracking; vehicle detection; vehicle re-identification; single object tracking; machine learning
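For the association module described above, a common realization builds a cost matrix that mixes appearance (Re-ID cosine distance) and spatial (IoU) cues and solves it with the Hungarian algorithm; the sketch below follows that pattern with illustrative weights and gating, not the paper's settings.

```python
# Sketch of track-detection association: combined appearance + IoU cost
# matrix solved with the Hungarian algorithm. The 0.7 / 0.3 weights and the
# gating value are illustrative assumptions.
import numpy as np
from scipy.optimize import linear_sum_assignment

def cosine_distance(a: np.ndarray, b: np.ndarray) -> np.ndarray:
    a = a / (np.linalg.norm(a, axis=1, keepdims=True) + 1e-9)
    b = b / (np.linalg.norm(b, axis=1, keepdims=True) + 1e-9)
    return 1.0 - a @ b.T                                  # (num_tracks, num_dets)

def iou_matrix(tracks: np.ndarray, dets: np.ndarray) -> np.ndarray:
    x1 = np.maximum(tracks[:, None, 0], dets[None, :, 0])
    y1 = np.maximum(tracks[:, None, 1], dets[None, :, 1])
    x2 = np.minimum(tracks[:, None, 2], dets[None, :, 2])
    y2 = np.minimum(tracks[:, None, 3], dets[None, :, 3])
    inter = np.clip(x2 - x1, 0, None) * np.clip(y2 - y1, 0, None)
    area_t = (tracks[:, 2] - tracks[:, 0]) * (tracks[:, 3] - tracks[:, 1])
    area_d = (dets[:, 2] - dets[:, 0]) * (dets[:, 3] - dets[:, 1])
    return inter / (area_t[:, None] + area_d[None, :] - inter + 1e-9)

def associate(track_feats, det_feats, track_boxes, det_boxes, gate=0.8):
    cost = 0.7 * cosine_distance(track_feats, det_feats) + \
           0.3 * (1.0 - iou_matrix(track_boxes, det_boxes))
    rows, cols = linear_sum_assignment(cost)
    return [(r, c) for r, c in zip(rows, cols) if cost[r, c] < gate]
```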
19. A review of vehicle detection methods based on computer vision
Authors: Changxi Ma, Fansong Xue. 《Journal of Intelligent and Connected Vehicles》 (EI), 2024, Issue 1, pp. 1-18 (18 pages)
With the increasing number of vehicles, there has been unprecedented pressure on the operation and maintenance of intelligent transportation systems and transportation infrastructure. In order to achieve faster and more accurate identification of traffic vehicles, computer vision and deep learning technology play a vital role and have made significant advancements. This study summarizes the current research status, latest findings, and future development trends of traditional detection algorithms and deep learning-based detection algorithms. Among the detection algorithms based on deep learning, this study focuses on representative convolutional neural network models. Specifically, it examines the two-stage and one-stage detection algorithms, which have been extensively utilized in the field of intelligent transportation systems. Compared to traditional detection algorithms, deep learning-based detection algorithms can achieve higher accuracy and efficiency. The single-stage detection algorithm is more efficient for real-time detection, while the two-stage detection algorithm is more accurate. In follow-up research, it is important to consider the balance between detection efficiency and detection accuracy. Additionally, vehicle missed detection and false detection in complex scenes, such as bad weather and vehicle overlap, should be taken into account. This will ensure better application of the research findings in engineering practice.
Keywords: intelligent transportation system; computer vision; deep learning; vehicle detection; object detection algorithm
20. Temporal-spatial dynamic characteristics of vehicle emissions on intercity roads in Guangdong Province based on vehicle identity detection data
Authors: Hui Ding, Yongming Zhao, Shenhua Miao, Tong Chen, Yonghong Liu. 《Journal of Environmental Sciences》 (SCIE/EI/CAS/CSCD), 2023, Issue 8, pp. 126-138 (13 pages)
Estimating intercity vehicle emissions precisely would benefit collaborative control across multiple cities. Considering the variability of emissions caused by vehicles, roads, and traffic, the 24-hour change characteristics of air pollutants (CO, HC, NOx, PM2.5) on the intercity road network of Guangdong Province were revealed by vehicle category and road link, based on vehicle identity detection data from real-life traffic for each hour in July 2018. The results showed that the spatial diversity of emissions caused by the unbalanced economy was obvious. Vehicle emissions in the Pearl River Delta region (PRD), with its higher economic level, were approximately 1-2 times those in the non-Pearl River Delta region (non-PRD). Provincial roads with high loads became potential sources of high emissions. Therefore, emission control policies must emphasize the PRD and key roads through travel guidance to achieve greater reductions. Gasoline passenger cars, accounting for a large proportion of traffic, dominated the morning and evening peaks in the 24-hour period and were the dominant contributors to CO and HC emissions, contributing more than 50% in the daytime (7:00-23:00) and more than 26% at night (0:00-6:00). Diesel trucks made up 10% of traffic but were the dominant player at night, contributing 50%-90% of NOx and PM2.5 emissions, with a marked 24-hour pattern of more than 80% at night (23:00-5:00) and less than 60% during the daytime. Therefore, targeted control measures by time period should be set up for collaborative control. These findings provide time-varying decision support for variable vehicle emission control on a large scale.
Keywords: intercity roads; dynamic vehicle emissions; vehicle identity detection data; diesel trucks
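As a rough illustration of the kind of aggregation the study implies (per-link, per-hour, per-category emissions from identity detection counts), the pandas sketch below uses hypothetical column names and placeholder emission factors; none of the numbers come from the paper.

```python
# Hedged sketch: combine hourly traffic counts from identity detection with
# link length and per-category emission factors to get per-link NOx emissions.
# Column names and factor values (g/km) are hypothetical placeholders.
import pandas as pd

EF_NOX = {"gasoline_car": 0.06, "diesel_truck": 2.5}   # g/km per vehicle, placeholders

records = pd.DataFrame({
    "link_id":  ["S101", "S101", "G205"],
    "hour":     [8, 8, 23],
    "category": ["gasoline_car", "diesel_truck", "diesel_truck"],
    "count":    [1200, 80, 150],       # vehicles detected in that hour
    "link_km":  [12.4, 12.4, 30.2],    # length of the road link
})

records["nox_g"] = (records["count"] * records["link_km"]
                    * records["category"].map(EF_NOX))
hourly = records.groupby(["link_id", "hour", "category"])["nox_g"].sum()
print(hourly)
```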