The high performance of IoT technology in transportation networks has led to the increasing adoption of Internet of Vehicles (IoV) technology. The functional advantages of IoV include online communication services, accident prevention, cost reduction, and enhanced traffic regularity. Despite these benefits, IoV technology is susceptible to cyber-attacks, which can exploit vulnerabilities in the vehicle network, leading to perturbations, disturbances, non-recognition of traffic signs, accidents, and vehicle immobilization. This paper reviews the state-of-the-art achievements and developments in applying Deep Transfer Learning (DTL) models for Intrusion Detection Systems in the Internet of Vehicles (IDS-IoV) based on anomaly detection. IDS-IoV leverages anomaly detection through machine learning and DTL techniques to mitigate the risks posed by cyber-attacks. These systems can autonomously create specific models based on network data to differentiate between regular traffic and cyber-attacks. Among these techniques, transfer learning models are particularly promising due to their efficacy with tagged data, reduced training time, lower memory usage, and decreased computational complexity. We evaluate DTL models against criteria including the ability to transfer knowledge, detection rate, accurate analysis of complex data, and stability. This review highlights the significant progress made in the field, showcasing how DTL models enhance the performance and reliability of IDS-IoV systems. By examining recent advancements, we provide insights into how DTL can effectively address cyber-attack challenges in IoV environments, ensuring safer and more efficient transportation networks.
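As a purely illustrative sketch of the transfer-learning idea surveyed above (not a model from any of the reviewed papers), the snippet below pretrains a small flow classifier's feature extractor on a source domain and then fine-tunes only its classification head on a target IoV dataset; the layer sizes, feature dimension, and data loader are assumptions.

```python
# Hypothetical sketch of deep transfer learning for anomaly-based IDS:
# reuse a pretrained feature extractor, fine-tune only the head on target data.
import torch
import torch.nn as nn

class FlowClassifier(nn.Module):
    def __init__(self, n_features=40, n_classes=2):
        super().__init__()
        self.features = nn.Sequential(          # reusable feature extractor
            nn.Linear(n_features, 128), nn.ReLU(),
            nn.Linear(128, 64), nn.ReLU())
        self.head = nn.Linear(64, n_classes)    # task-specific classifier

    def forward(self, x):
        return self.head(self.features(x))

def fine_tune(model, target_loader, epochs=5, lr=1e-3):
    """Freeze the pretrained feature extractor; train only the head."""
    for p in model.features.parameters():
        p.requires_grad = False
    opt = torch.optim.Adam(model.head.parameters(), lr=lr)
    loss_fn = nn.CrossEntropyLoss()
    for _ in range(epochs):
        for x, y in target_loader:
            opt.zero_grad()
            loss = loss_fn(model(x), y)
            loss.backward()
            opt.step()
    return model

model = FlowClassifier()
# After loading pretrained source-domain weights, call:
# fine_tune(model, target_loader)
```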
To address the challenges of high complexity, poor real-time performance, and low detection rates for small target vehicles in existing vehicle object detection algorithms, this paper proposes a real-time lightweight architecture based on You Only Look Once (YOLO) v5m. Firstly, a lightweight upsampling operator called Content-Aware Reassembly of Features (CARAFE) is introduced in the feature fusion layer of the network to maximize the extraction of deep-level features for small target vehicles, reducing the missed detection rate and false detection rate. Secondly, a new prediction layer for tiny targets is added, and the feature fusion network is redesigned to enhance the detection capability for small targets. Finally, this paper applies L1 regularization to train the improved network, followed by pruning and fine-tuning operations to remove redundant channels, reducing computational and parameter complexity and enhancing the detection efficiency of the network. Training is conducted on the VisDrone2019-DET dataset. The experimental results show that the proposed algorithm reduces parameters and computation by 63.8% and 65.8%, respectively. The average detection accuracy improves by 5.15%, and the detection speed reaches 47 images per second, satisfying real-time requirements. Compared with existing approaches, including YOLOv5m and classical vehicle detection algorithms, our method achieves higher accuracy and faster speed for real-time detection of small target vehicles in edge computing.
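The L1-regularized training and channel pruning described above is in the spirit of the common "network slimming" recipe; the sketch below illustrates that recipe only (it is not the authors' code), adding an L1 penalty on BatchNorm scale factors so that low-magnitude channels can later be pruned. The detection loss function and penalty weight are placeholders.

```python
# Sketch: sparsity-inducing L1 penalty on BatchNorm scale factors (gamma),
# the usual basis for channel pruning; weights and loss are placeholders.
import torch.nn as nn

def bn_l1_penalty(model):
    """Sum of |gamma| over all 2-D BatchNorm layers (sparsity term)."""
    penalty = 0.0
    for m in model.modules():
        if isinstance(m, nn.BatchNorm2d):
            penalty = penalty + m.weight.abs().sum()
    return penalty

def training_step(model, images, targets, detection_loss_fn, optimizer, l1_weight=1e-4):
    """One training step with the added L1 sparsity penalty on BN scales."""
    optimizer.zero_grad()
    loss = detection_loss_fn(model(images), targets) + l1_weight * bn_l1_penalty(model)
    loss.backward()
    optimizer.step()
    return loss.item()

# After training, channels whose gamma magnitude falls below a chosen global
# threshold are pruned away, and the slimmed network is fine-tuned.
```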
Vehicle detection plays a crucial role in the field of autonomous driving technology. However, directly applying deep learning-based object detection algorithms to complex road scene images often leads to subpar performance and slow inference speeds in vehicle detection. Achieving a balance between accuracy and detection speed is crucial for real-time object detection in real-world road scenes. This paper proposes a high-precision and fast vehicle detector called the feature-guided bidirectional pyramid network (FBPN). Firstly, to tackle challenges like vehicle occlusion and significant background interference, the efficient feature filtering module (EFFM) is introduced into the deep network, which amplifies the disparities between the features of the vehicle and the background. Secondly, the proposed global attention localization module (GALM) in the model neck effectively perceives the detailed position information of the target, improving both the accuracy and inference speed of the model. Finally, the detection accuracy of small-scale vehicles is further enhanced through the utilization of a four-layer feature pyramid structure. Experimental results show that FBPN achieves an average precision of 60.8% and 97.8% on the BDD100K and KITTI datasets, respectively, with inference speeds reaching 344.83 frames/s and 357.14 frames/s. FBPN demonstrates its effectiveness and superiority by striking a balance between detection accuracy and inference speed, outperforming several state-of-the-art methods.
Accurate and reliable fault detection is essential for the safe operation of electric vehicles. Support vector data description (SVDD) has been widely used in the field of fault detection. However, constructing the hypersphere boundary only describes the distribution of unlabeled samples; the distribution of faulty samples cannot be effectively described, and faulty data are easily missed because of the imbalance of the sample distribution. Meanwhile, parameter selection is critical to detection performance, and empirical parameterization is generally time-consuming and laborious and may not find the optimal parameters. Therefore, this paper proposes a semi-supervised data-driven method that improves the SVDD algorithm and achieves excellent fault detection performance. By incorporating faulty samples into the underlying SVDD model, training better handles the missed detection of faulty samples caused by the imbalanced distribution of abnormal samples, and the hypersphere boundary is modified to classify the samples more accurately. A Bayesian Optimization NSVDD (BO-NSVDD) model is constructed to quickly and accurately optimize hyperparameter combinations. In the experiments, electric vehicle operation data with four common fault types are used to evaluate the performance against five other models, and the results show that the BO-NSVDD model presents superior detection performance for each type of fault data, with especially clear advantages on imperceptible early and minor faults. Finally, the strong robustness of the proposed method is verified by adding noise of different intensities to the dataset.
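As an illustration of Bayesian hyperparameter optimization for a one-class boundary model, the sketch below uses scikit-learn's OneClassSVM (a close relative of SVDD with an RBF kernel) together with Optuna as the optimizer; the data, search ranges, and scoring are placeholders, not the paper's BO-NSVDD implementation.

```python
# Sketch: Bayesian-style hyperparameter search for a one-class boundary model.
# OneClassSVM with an RBF kernel stands in for SVDD; all data are synthetic.
import numpy as np
import optuna
from sklearn.svm import OneClassSVM
from sklearn.metrics import f1_score

rng = np.random.default_rng(0)
X_train = rng.normal(size=(500, 8))                 # placeholder normal samples
X_val = np.vstack([rng.normal(size=(80, 8)),        # validation: normal + faulty
                   rng.normal(loc=4.0, size=(20, 8))])
y_val = np.array([1] * 80 + [-1] * 20)              # 1 = normal, -1 = fault

def objective(trial):
    nu = trial.suggest_float("nu", 0.01, 0.3)
    gamma = trial.suggest_float("gamma", 1e-3, 1.0, log=True)
    model = OneClassSVM(kernel="rbf", nu=nu, gamma=gamma).fit(X_train)
    return f1_score(y_val, model.predict(X_val), pos_label=-1)

study = optuna.create_study(direction="maximize")
study.optimize(objective, n_trials=30)
print(study.best_params)
```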
With the gradual development of automatic driving technology, people's attention is no longer limited to everyday automatic driving target detection. To address the difficulty of achieving fast and accurate detection of visual targets in complex night-time automatic driving scenes, a detection algorithm based on an improved YOLOv8s was proposed. Firstly, by adding a Triplet Attention module to the downsampling layers of the original model, the model can effectively retain and enhance feature information related to target detection on the lower-resolution feature maps. This enhancement improved the robustness of the target detection network and reduced instances of missed detections. Secondly, the Soft-NMS algorithm was introduced to address the challenges of dense targets, overlapping objects, and complex scenes. This algorithm effectively reduced false positives and missed detections, thereby improving overall detection performance when faced with highly overlapping detection results. Finally, with the MPDIoU loss function, experimental results showed that, compared with the original model, the improved method increases mAP and accuracy by 2.9% and 2.8%, respectively, achieving better detection accuracy and speed for night-time vehicle detection and effectively improving target detection in night scenes.
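Soft-NMS itself is a standard algorithm; a minimal NumPy sketch of its Gaussian-decay variant is given below for reference. The sigma and score threshold are assumptions, not values from the paper.

```python
# Sketch of Gaussian Soft-NMS: instead of discarding overlapping boxes,
# decay their scores by overlap; sigma and score threshold are assumptions.
import numpy as np

def iou(box, boxes):
    x1 = np.maximum(box[0], boxes[:, 0]); y1 = np.maximum(box[1], boxes[:, 1])
    x2 = np.minimum(box[2], boxes[:, 2]); y2 = np.minimum(box[3], boxes[:, 3])
    inter = np.clip(x2 - x1, 0, None) * np.clip(y2 - y1, 0, None)
    area = (box[2] - box[0]) * (box[3] - box[1])
    areas = (boxes[:, 2] - boxes[:, 0]) * (boxes[:, 3] - boxes[:, 1])
    return inter / (area + areas - inter + 1e-9)

def soft_nms(boxes, scores, sigma=0.5, score_thresh=0.001):
    boxes, scores = boxes.copy().astype(float), scores.copy().astype(float)
    keep = []
    while len(boxes) > 0:
        i = int(np.argmax(scores))
        keep.append((boxes[i], scores[i]))
        box = boxes[i]
        boxes = np.delete(boxes, i, axis=0)
        scores = np.delete(scores, i)
        if len(boxes) == 0:
            break
        scores *= np.exp(-iou(box, boxes) ** 2 / sigma)   # Gaussian decay
        mask = scores > score_thresh
        boxes, scores = boxes[mask], scores[mask]
    return keep

b = np.array([[10, 10, 50, 50], [12, 12, 52, 52], [100, 100, 140, 140]])
s = np.array([0.9, 0.8, 0.7])
print(soft_nms(b, s))
```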
Vision-based vehicle detection in adverse weather conditions such as fog, haze, and mist is a challenging research area in the fields of autonomous vehicles, collision avoidance, and Internet of Things (IoT)-enabled edge/fog computing traffic surveillance and monitoring systems. Efficient and cost-effective vehicle detection at high accuracy and speed in foggy weather is essential to avoiding road traffic collisions in real time. To evaluate vision-based vehicle detection performance in foggy weather conditions, the state-of-the-art Vehicle Detection in Adverse Weather Nature (DAWN) and Foggy Driving (FD) datasets are self-annotated using the YOLO LABEL tool and customized to four vehicle detection classes: cars, buses, motorcycles, and trucks. The state-of-the-art single-stage deep learning algorithms YOLO-V5 and YOLO-V8 are considered for the task of vehicle detection. Furthermore, YOLO-V5s is enhanced by introducing the attention modules Convolutional Block Attention Module (CBAM), Normalization-based Attention Module (NAM), and Simple Attention Module (SimAM) after the SPPF module, as well as YOLO-V5l with BiFPN. Their vehicle detection accuracy and running speed are validated on cloud (Google Colab) and edge (local) systems. On the DAWN dataset, the mAP50 score of YOLO-V5n is 72.60%, YOLO-V5s is 75.20%, YOLO-V5m is 73.40%, and YOLO-V5l is 77.30%; and YOLO-V8n is 60.20%, YOLO-V8s is 73.50%, YOLO-V8m is 73.80%, and YOLO-V8l is 72.60%. On the FD dataset, the mAP50 score of YOLO-V5n is 43.90%, YOLO-V5s is 40.10%, YOLO-V5m is 49.70%, and YOLO-V5l is 57.30%; and YOLO-V8n is 41.60%, YOLO-V8s is 46.90%, YOLO-V8m is 42.90%, and YOLO-V8l is 44.80%. On the DAWN dataset, the vehicle detection speed of YOLO-V5n is 59 frames per second (FPS), YOLO-V5s is 47 FPS, YOLO-V5m is 38 FPS, and YOLO-V5l is 30 FPS; and YOLO-V8n is 185 FPS, YOLO-V8s is 109 FPS, YOLO-V8m is 72 FPS, and YOLO-V8l is 63 FPS. On the FD dataset, the vehicle detection speed of YOLO-V5n is 26 FPS, YOLO-V5s is 24 FPS, YOLO-V5m is 22 FPS, and YOLO-V5l is 17 FPS; and YOLO-V8n is 313 FPS, YOLO-V8s is 182 FPS, YOLO-V8m is 99 FPS, and YOLO-V8l is 60 FPS. The YOLO-V5s, YOLO-V5s variants, YOLO-V5l_BiFPN, and YOLO-V8 algorithms are efficient and cost-effective solutions for real-time vision-based vehicle detection in foggy weather.
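Of the attention modules listed, SimAM is parameter-free and compact enough to sketch; the module below follows its published energy-based formulation and is only an illustration of how such a block can be dropped in after SPPF, not the authors' repository code.

```python
# Sketch of the parameter-free SimAM attention module (energy-based weighting),
# following its published formulation; e_lambda is the commonly used default.
import torch
import torch.nn as nn

class SimAM(nn.Module):
    def __init__(self, e_lambda: float = 1e-4):
        super().__init__()
        self.e_lambda = e_lambda

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, c, h, w = x.shape
        n = h * w - 1
        d = (x - x.mean(dim=(2, 3), keepdim=True)).pow(2)         # (x - mu)^2
        v = d.sum(dim=(2, 3), keepdim=True) / n                   # per-channel variance
        e_inv = d / (4 * (v + self.e_lambda)) + 0.5               # inverse energy
        return x * torch.sigmoid(e_inv)                           # reweight features

feat = torch.randn(1, 64, 40, 40)
print(SimAM()(feat).shape)   # torch.Size([1, 64, 40, 40])
```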
Intelligent vehicle tracking and detection are crucial tasks in the realm of highway management. However, vehicles come in a range of sizes, which are challenging to detect, affecting the overall accuracy of traffic monitoring systems. Deep learning is considered an efficient method for object detection in vision-based systems. In this paper, we propose a vision-based vehicle detection and tracking system based on a You Only Look Once version 5 (YOLOv5) detector combined with a segmentation technique. The model consists of six steps. In the first step, all the extracted traffic sequence images are subjected to pre-processing to remove noise and enhance the contrast level of the images. These pre-processed images are segmented by labelling each pixel to extract the uniform regions that aid the detection phase. The single-stage detector YOLOv5 is used to detect and locate vehicles in images. Each detection is subjected to Speeded-Up Robust Features (SURF) extraction to track multiple vehicles. Based on this, a unique number is assigned to each vehicle so it can easily be located in succeeding image frames through feature matching. Further, we implement a Kalman filter to track multiple vehicles. In the end, the vehicle path is estimated by using the centroid points of the rectangular bounding box predicted by the tracking algorithm. The experimental results and comparison reveal that our proposed vehicle detection and tracking system outperforms other state-of-the-art systems. The proposed system provides 94.1% detection precision for the Roundabout and 96.1% detection precision for the Vehicle Aerial Imaging from Drone (VAID) datasets, respectively.
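The Kalman filtering step can be illustrated with a small constant-velocity filter over bounding-box centroids; the sketch below is a generic implementation with assumed noise settings, not the paper's tracker.

```python
# Sketch: constant-velocity Kalman filter for tracking one vehicle centroid
# (state = [x, y, vx, vy], measurement = [x, y]); noise levels are assumptions.
import numpy as np

class CentroidKalman:
    def __init__(self, dt=1.0, q=1e-2, r=1.0):
        self.F = np.array([[1, 0, dt, 0], [0, 1, 0, dt],
                           [0, 0, 1, 0], [0, 0, 0, 1]], dtype=float)
        self.H = np.array([[1, 0, 0, 0], [0, 1, 0, 0]], dtype=float)
        self.Q = q * np.eye(4)      # process noise
        self.R = r * np.eye(2)      # measurement noise
        self.x = np.zeros(4)        # state estimate
        self.P = np.eye(4)          # state covariance

    def predict(self):
        self.x = self.F @ self.x
        self.P = self.F @ self.P @ self.F.T + self.Q
        return self.x[:2]

    def update(self, z):
        z = np.asarray(z, dtype=float)
        S = self.H @ self.P @ self.H.T + self.R
        K = self.P @ self.H.T @ np.linalg.inv(S)
        self.x = self.x + K @ (z - self.H @ self.x)
        self.P = (np.eye(4) - K @ self.H) @ self.P
        return self.x[:2]

kf = CentroidKalman()
for z in [(100, 200), (104, 203), (109, 207)]:   # bounding-box centroids
    kf.predict()
    print(kf.update(z))
```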
While vehicle detection on highways has been reported before, to the best of our knowledge, intelligent monitoring systems that aim at detecting hydraulic excavators and dump trucks on state-owned land have not been explored thoroughly yet. In this paper, we present an automatic, video-based algorithm for detecting hydraulic excavators and dump trucks. Drawing on lessons learned from video processing, we propose methods for foreground detection based on an improved frame difference algorithm and then detect hydraulic excavators and dump trucks in their respective regions of interest. Based on our analysis, we propose methods that exploit the inverse valley feature of the mechanical arm and spatial-temporal reasoning for hydraulic excavator detection. In addition, we explore dump truck detection strategies that combine structured component projection with spatial relationships. Experiments on real monitoring sites demonstrate the promising performance of our system.
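A basic frame-difference foreground detector of the kind this pipeline improves upon can be sketched with OpenCV as follows; the video path, threshold, and minimum region area are placeholders.

```python
# Sketch: plain frame-difference foreground detection; the paper's improved
# variant adds further steps, this shows only the basic idea.
import cv2

cap = cv2.VideoCapture("site_camera.mp4")      # hypothetical monitoring video
ok, prev = cap.read()
prev_gray = cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY) if ok else None

while ok:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    diff = cv2.absdiff(gray, prev_gray)                         # frame difference
    _, mask = cv2.threshold(diff, 25, 255, cv2.THRESH_BINARY)   # moving pixels
    mask = cv2.dilate(mask, None, iterations=2)                 # close small gaps
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    # Bounding boxes of sufficiently large moving regions (candidate machines).
    regions = [cv2.boundingRect(c) for c in contours if cv2.contourArea(c) > 500]
    prev_gray = gray
cap.release()
```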
Object detection (OD) in remote sensing images (RSI) plays a vital part in numerous civilian and military application areas, like urban planning, geographic information systems (GIS), and search and rescue functions. Vehicle recognition from RSIs remains a challenging process because of the difficulty of background data and the redundancy of recognition regions. The latest advancements in deep learning (DL) approaches permit the design of effectual OD approaches. This study develops an Artificial Ecosystem Optimizer with Deep Convolutional Neural Network for Vehicle Detection (AEODCNN-VD) model on remote sensing images. The proposed AEODCNN-VD model focuses on identifying vehicles accurately and rapidly. To detect vehicles, the presented AEODCNN-VD model employs a single shot detector (SSD) with an Inception network as the baseline model. In addition, a Multiway Feature Pyramid Network (MFPN) is used for handling objects of varying sizes in RSIs. The features from the Inception model are passed into the MFPN for multiway and multiscale feature fusion. Finally, the fused features are passed into bounding box and class prediction networks. To enhance the detection efficiency of the AEODCNN-VD approach, an AEO-based hyperparameter optimizer is used, which is inspired by energy transfer strategies such as production, consumption, and decomposition in an ecosystem. The performance validation of the presented method on benchmark datasets showed promising performance over recent DL models.
Traffic signs are used by all countries globally for healthier traffic flow and to protect drivers and pedestrians. Consequently, traffic signs have been of great importance for every civilized country, which has led researchers to focus on the automatic detection of traffic signs. Detecting these traffic signs is challenging when they are in the dark, far away, partially occluded, or affected by lighting or the presence of similar objects. An innovative traffic sign detection method for red and blue signs in color images is proposed to resolve these issues. This technique aims to devise an efficient, robust, and accurate approach. To attain this, the approach first presents a new formula, inspired by existing work, to enhance the image using the red and green channels instead of blue, which is segmented using a threshold calculated from the correlational property of the image. Next, a new set of features is proposed, motivated by existing features. Texture and color features are fused after being extracted on the Red, Green, and Blue (RGB); Hue, Saturation, and Value (HSV); and YCbCr color models of the images. Later, the set of features is fed to different classification frameworks, from which the quadratic support vector machine (SVM) outperforms the others with an accuracy of 98.5%. The proposed method is tested on German Traffic Sign Detection Benchmark (GTSDB) images. The results are satisfactory when compared to preceding work.
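A simplified sketch of the classification stage is shown below: per-channel color statistics from the RGB, HSV, and YCbCr spaces are fused and fed to a quadratic-kernel SVM. The specific features (channel means and standard deviations on random placeholder patches) are assumptions, and the paper's texture features are omitted.

```python
# Sketch: fused color-space statistics plus a quadratic-kernel SVM; the
# descriptor and the synthetic training data are illustrative assumptions.
import cv2
import numpy as np
from sklearn.svm import SVC

def color_features(bgr_patch):
    hsv = cv2.cvtColor(bgr_patch, cv2.COLOR_BGR2HSV)
    ycc = cv2.cvtColor(bgr_patch, cv2.COLOR_BGR2YCrCb)
    feats = []
    for img in (bgr_patch, hsv, ycc):
        for ch in cv2.split(img):
            feats += [ch.mean(), ch.std()]
    return np.array(feats)           # 18-dimensional descriptor

# X: stacked descriptors of candidate regions, y: sign / non-sign labels
X = np.vstack([color_features(np.random.randint(0, 255, (32, 32, 3), np.uint8))
               for _ in range(40)])
y = np.array([0, 1] * 20)
clf = SVC(kernel="poly", degree=2, C=1.0)    # "quadratic SVM"
clf.fit(X, y)
print(clf.predict(X[:3]))
```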
Nowadays, the rapid development of edge computing has driven an increasing number of deep learning applications deployed at the edge of the network, such as pedestrian and vehicle detection, to provide efficient intelligent services to mobile users. However, as accuracy requirements continue to increase, the components of deep learning models for pedestrian and vehicle detection, such as YOLOv4, become more sophisticated, and the computing resources required for model training increase dramatically, which in turn leads to significant challenges in achieving effective deployment on resource-constrained edge devices while ensuring high accuracy. To address this challenge, a cloud-edge collaboration-based pedestrian and vehicle detection framework is proposed in this paper, which enables sufficient training of models by utilizing the abundant computing resources in the cloud and then deploys the well-trained models on edge devices, thus reducing the computing resource requirements for model training on edge devices. Furthermore, to reduce the size of the model deployed on edge devices, an automatic pruning method that combines the convolution layer and the BN layer is proposed to compress the pedestrian and vehicle detection model. Experimental results show that the proposed framework is able to deploy the pruned model on a real edge device, Jetson TX2, with 6.72 times higher FPS. Meanwhile, channel pruning reduces the model volume and the number of parameters to 96.77%, and the computation is reduced to 81.37%.
3D vehicle detection based on LiDAR-camera fusion is becoming an emerging research topic in autonomous driving. The algorithm based on the Camera-LiDAR object candidate fusion method (CLOCs) is currently considered a more effective decision-level fusion algorithm, but it does not fully utilize the extracted 3D and 2D features. Therefore, we propose a 3D vehicle detection algorithm based on multimodal decision-level fusion. First, the anchor point of the 3D detection bounding box is projected into the 2D image, the distance between the 2D and 3D anchor points is calculated, and this distance is used as a new fusion feature to enhance the feature redundancy of the network. Subsequently, an attention module, squeeze-and-excitation networks, is added to weight each feature channel, enhancing the important features of the network and suppressing useless features. The experimental results show that the mean average precision of the algorithm on the KITTI dataset is 82.96%, which outperforms previous state-of-the-art multimodal fusion-based methods, and the average accuracy under the Easy, Moderate, and Hard evaluation settings reaches 88.96%, 82.60%, and 77.31%, respectively, which is higher than the original CLOCs model by 1.02%, 2.29%, and 0.41%, respectively. Compared with the original CLOCs algorithm, our algorithm has higher accuracy and better performance in 3D vehicle detection.
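The 2D-3D distance feature can be illustrated by projecting a 3D anchor point into the image with a pinhole camera model and measuring its offset from the 2D detection anchor; the intrinsic matrix and coordinates below are assumptions, not KITTI calibration values.

```python
# Sketch: project a 3-D box anchor into the image and compute its distance to
# a 2-D detection anchor, the kind of fusion feature described above.
import numpy as np

K = np.array([[721.5, 0.0, 609.6],        # hypothetical intrinsics
              [0.0, 721.5, 172.9],
              [0.0, 0.0, 1.0]])

def project_to_image(point_cam, K):
    """Project a 3-D point in camera coordinates to pixel coordinates."""
    uvw = K @ point_cam
    return uvw[:2] / uvw[2]

anchor_3d = np.array([2.1, 1.4, 18.0])         # 3-D box anchor in camera frame (m)
anchor_2d = np.array([702.0, 190.0])           # 2-D detection anchor (pixels)

projected = project_to_image(anchor_3d, K)
fusion_feature = np.linalg.norm(projected - anchor_2d)   # 2D-3D distance feature
print(projected, fusion_feature)
```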
Autonomous vehicles are currently regarded as an interesting topic in the AI field. For such vehicles, the lane where they are traveling should be detected. Most lane detection methods identify the whole road area with all the lanes built on it. In addition to having a low accuracy rate and slow processing time, these methods require costly hardware and training datasets, and they fail under critical conditions. In this study, a novel detection algorithm for a lane where a car is currently traveling is proposed by combining simple traditional image processing with lightweight machine learning (ML) methods. First, a preparation phase removes all unwanted information to preserve the topographical representations of virtual edges within a one-pixel width around expected lanes. Then, a simple feature extraction phase obtains only the intersection point position and angle degree of each candidate edge. Subsequently, a proposed scheme that comprises consecutive lightweight ML models is applied to detect the correct lane by using the extracted features. This scheme is based on density-based spatial clustering of applications with noise, random forest trees, a neural network, and rule-based methods. To increase accuracy and reduce processing time, each model supports the next one during detection. When a model detects a lane, the subsequent models are skipped. The models are trained on the Karlsruhe Institute of Technology and Toyota Technological Institute datasets. Results show that the proposed method is faster and achieves higher accuracy than state-of-the-art methods. This method is simple, can handle degradation conditions, and requires low-cost hardware and training datasets.
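The first model in the cascade, density-based clustering of candidate-edge features, can be sketched as follows with scikit-learn's DBSCAN; the feature values, eps, and min_samples are assumptions rather than the paper's settings.

```python
# Sketch: grouping candidate edges by their extracted features (intersection
# x-position and angle) with DBSCAN; values are illustrative placeholders.
import numpy as np
from sklearn.cluster import DBSCAN

# Each row: [x position of the edge's intersection with a reference row, angle in degrees]
candidate_edges = np.array([
    [310, 62], [318, 60], [305, 64],      # left-lane edge candidates
    [640, -58], [652, -61], [645, -60],   # right-lane edge candidates
    [120, 15],                            # spurious edge
], dtype=float)

labels = DBSCAN(eps=20.0, min_samples=2).fit_predict(candidate_edges)
for lbl in set(labels):
    members = candidate_edges[labels == lbl]
    tag = "noise" if lbl == -1 else f"cluster {lbl}"
    print(tag, members.mean(axis=0))
```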
Establishing a system for measuring plant health and bacterial infection is critical in agriculture. Previously, farmers observed the plants with their own eyes and relied on their experience for analysis, which could have been incorrect. Using a drone, plant inspection can determine how much green and near-infrared light plants reflect, using both visible and infrared imaging. The goal of this study was to create algorithms for assessing bacterial infections in rice using images from unmanned aerial vehicles (UAVs) with an ensemble classification technique. Convolutional neural networks applied to unmanned aerial vehicle images were used. To this end, the health and bacterial infection of the rice in each photo were detected. The project entailed using pictures to identify bacterial illnesses in rice. The shape and distinct characteristics of each infection were observed. Rice symptoms were defined using machine learning and image processing techniques. A two-step convolutional neural network based on UAV images was used in this study to determine whether an area will be affected by bacteria. The proposed algorithms can be utilized to classify the types of rice diseases with an accuracy rate of 89.84 percent.
With the rapid development of the social economy, transportation has become faster and more efficient. As an important part of goods transportation, the safe maintenance of tunnel highways has become particularly important. The maintenance of tunnel roads has become more difficult due to problems such as enclosure, narrowness, and lack of light. Currently, target detection methods are advantageous for detecting tunnel vehicles in a timely manner through monitoring. Therefore, in order to prevent vehicle misdetection and missed detection in this complex environment, we propose a YOLOv5-Vehicle model based on the YOLOv5 network. This model is improved in three ways. Firstly, the backbone network of YOLOv5 is replaced by the lightweight MobileNetV3 network to extract features, which reduces the number of model parameters. Next, all convolutions in the neck module are replaced with depth-wise separable convolutions to further reduce the number of model parameters and the computation, and to improve the detection speed of the model. Finally, to ensure the accuracy of the model, the CBAM attention mechanism is introduced to improve the detection accuracy and precision of the model. Experimental results demonstrate that the YOLOv5-Vehicle model can improve the accuracy.
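A depth-wise separable convolution block of the kind used to lighten the neck can be sketched as follows; the channel counts and activation are placeholders rather than the exact YOLOv5-Vehicle configuration.

```python
# Sketch: depth-wise separable convolution = per-channel 3x3 convolution
# followed by a 1x1 point-wise convolution; shapes are placeholders.
import torch
import torch.nn as nn

class DepthwiseSeparableConv(nn.Module):
    def __init__(self, in_ch, out_ch, stride=1):
        super().__init__()
        self.depthwise = nn.Conv2d(in_ch, in_ch, kernel_size=3, stride=stride,
                                   padding=1, groups=in_ch, bias=False)
        self.pointwise = nn.Conv2d(in_ch, out_ch, kernel_size=1, bias=False)
        self.bn = nn.BatchNorm2d(out_ch)
        self.act = nn.SiLU()

    def forward(self, x):
        return self.act(self.bn(self.pointwise(self.depthwise(x))))

x = torch.randn(1, 128, 40, 40)
block = DepthwiseSeparableConv(128, 256)
print(block(x).shape)                      # torch.Size([1, 256, 40, 40])
# Parameter count is roughly in_ch*9 + in_ch*out_ch, versus in_ch*out_ch*9
# for a standard 3x3 convolution.
```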
In order to decrease vehicle crashes, a new rear-view vehicle detection system based on monocular vision is designed. First, a small and flexible hardware platform based on a DM642 digital signal processor (DSP) micro-controller is built. Then, a two-step vehicle detection algorithm is proposed. In the first step, a fast vehicle edge and symmetry fusion algorithm is used and a low threshold is set so that all the possible vehicles have a nearly 100% detection rate (TP) and the non-vehicles have a high false detection rate (FP), i.e., all the possible vehicles can be obtained. In the second step, a classifier using a probabilistic neural network (PNN), based on multi-scale and multi-orientation Gabor features, is trained to classify the possible vehicles and eliminate the falsely detected vehicles from the candidate vehicles generated in the first step. Experimental results demonstrate that the proposed system maintains a high detection rate and a low false detection rate under different road, weather and lighting conditions.
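A multi-scale, multi-orientation Gabor feature extractor of the kind that feeds the PNN classifier can be sketched with OpenCV as follows; the kernel size, scales, orientations, and response statistics are assumptions.

```python
# Sketch: Gabor filter bank over a candidate region of interest; parameters
# here are illustrative, not the paper's configuration.
import cv2
import numpy as np

def gabor_features(gray_patch, scales=(4, 8), n_orientations=4):
    feats = []
    for sigma in scales:
        for k in range(n_orientations):
            theta = k * np.pi / n_orientations
            kernel = cv2.getGaborKernel(ksize=(21, 21), sigma=sigma, theta=theta,
                                        lambd=10.0, gamma=0.5, psi=0)
            response = cv2.filter2D(gray_patch, cv2.CV_32F, kernel)
            feats += [response.mean(), response.std()]
    return np.array(feats)     # 2 scales x 4 orientations x 2 stats = 16 values

patch = np.random.randint(0, 255, (64, 64), dtype=np.uint8)  # candidate ROI
print(gabor_features(patch).shape)
```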
Lifespan models of commercial 18650-type lithium-ion batteries (nominal capacity of 1150 mA·h) were presented, and the lifespan was extrapolated based on this model. The results indicate that the relationship between capacity retention and cycle number can be expressed by a Gaussian function. The selected function and its precision were verified through actual matched detection and a range of alternating-current impedance tests. The cycle life model with high precision (>99%) is beneficial for shortening the prediction time and cutting the prediction cost.
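Fitting capacity retention against cycle number with a Gaussian function, and extrapolating the cycle life from the fit, can be illustrated with SciPy as below; the data points, initial guesses, and the 80% end-of-life criterion are assumptions, not the paper's measurements.

```python
# Sketch: Gaussian fit of capacity retention vs. cycle number, then
# extrapolation to an assumed 80 % end-of-life retention.
import numpy as np
from scipy.optimize import curve_fit

def gaussian(n, a, b, c):
    return a * np.exp(-((n - b) ** 2) / (2 * c ** 2))

cycles = np.array([0, 100, 200, 300, 400, 500], dtype=float)
retention = np.array([1.00, 0.97, 0.93, 0.88, 0.82, 0.75])   # synthetic fractions

params, _ = curve_fit(gaussian, cycles, retention, p0=(1.0, 0.0, 600.0))
a, b, c = params
print("fitted parameters:", params)

# Extrapolate: estimate the cycle number at which retention drops to 80 %.
target = 0.80
n_eol = b + c * np.sqrt(2 * np.log(a / target))
print("predicted cycles to 80 % retention:", n_eol)
```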
To support regulated driving of intelligent vehicles at intersections, a method is presented to detect and recognize traffic lights. First, the stabling siding at intersections is detected by applying the Hough transformation. Then, the colors of the traffic lights are detected with a color space transformation. Finally, self-associative memory is used to recognize the countdown characters of the traffic lights. Test results at 20 real intersections show that the ratio of correct stabling siding recognition reaches up to 90%, and the recognition ratios for traffic lights and divided characters are 85% and 97%, respectively. The research proves that the method is efficient for the detection of the stabling siding and is robust enough to recognize characters from images with noise and broken edges.
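The line-detection and lamp-color steps can be illustrated with a probabilistic Hough transform and HSV thresholding in OpenCV; all thresholds and the synthetic frame below are assumptions, not the calibrated values used in the paper.

```python
# Sketch: Hough-based detection of a painted stop line plus HSV color masks
# for the red/green lamps, on a synthetic stand-in frame.
import cv2
import numpy as np

frame = np.zeros((480, 640, 3), dtype=np.uint8)
cv2.line(frame, (100, 400), (540, 400), (255, 255, 255), 5)   # painted line
cv2.circle(frame, (320, 80), 12, (0, 0, 255), -1)             # red lamp (BGR)

gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
edges = cv2.Canny(gray, 50, 150)
lines = cv2.HoughLinesP(edges, rho=1, theta=np.pi / 180, threshold=80,
                        minLineLength=120, maxLineGap=10)
# Keep near-horizontal segments as candidates for the line on the road.
candidates = [l[0] for l in (lines if lines is not None else [])
              if abs(l[0][3] - l[0][1]) < 0.1 * (abs(l[0][2] - l[0][0]) + 1)]

hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
red_mask = cv2.inRange(hsv, (0, 120, 120), (10, 255, 255))      # red lamp pixels
green_mask = cv2.inRange(hsv, (45, 120, 120), (90, 255, 255))   # green lamp pixels
print(len(candidates), "line candidates,",
      "red" if red_mask.sum() > green_mask.sum() else "green/none", "dominant")
```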
This paper aims to develop an automatic miscalibration detection and correction framework to maintain accurate calibration between LiDAR and camera for autonomous vehicles after sensor drift. First, a monitoring algorithm that can continuously detect miscalibration in each frame is designed, leveraging the rotational motion each individual sensor observes. Then, as sensor drift occurs, the projection constraints between visual feature points and LiDAR 3-D points are used to compute the scaled camera motion, which is further utilized to align the drifted LiDAR scan with the camera image. Finally, the proposed method is thoroughly compared with two representative approaches in online experiments with varying levels of random drift; the method is then further extended to an offline calibration experiment and demonstrated by a comparison with two existing benchmark methods.
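A much-simplified illustration of the rotation-consistency idea behind the monitoring step is given below: under a valid extrinsic calibration, the rotation magnitude each sensor observes between two frames should agree. The sample rotations and the alarm threshold are assumptions, and the real method operates on streams of per-frame motion rather than a single pair.

```python
# Sketch: compare the per-frame rotation angles observed by camera and LiDAR;
# a persistent gap hints at miscalibration or sensor drift.
import numpy as np
from scipy.spatial.transform import Rotation as R

def rotation_angle_deg(rot_matrix):
    return np.degrees(R.from_matrix(rot_matrix).magnitude())

# Incremental rotations estimated independently by each sensor (assumed values).
R_cam = R.from_euler("xyz", [0.2, 1.5, 0.1], degrees=True).as_matrix()
R_lidar = R.from_euler("xyz", [0.2, 2.4, 0.1], degrees=True).as_matrix()

angle_gap = abs(rotation_angle_deg(R_cam) - rotation_angle_deg(R_lidar))
THRESHOLD_DEG = 0.5     # assumed alarm threshold
if angle_gap > THRESHOLD_DEG:
    print(f"possible miscalibration / drift: angle gap = {angle_gap:.2f} deg")
```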
The blockchain-empowered Internet of Vehicles (IoV) enables various services and achieves data security and privacy, significantly advancing modern vehicle systems. However, the increased frequency of data transmission and complex network connections among nodes also make them more susceptible to adversarial attacks. As a result, an efficient intrusion detection system (IDS) becomes crucial for securing the IoV environment. Existing IDSs based on convolutional neural networks (CNN) often suffer from high training time and storage requirements. In this paper, we propose a lightweight IDS solution to protect IoV against both intra-vehicle and external threats. Our approach achieves superior performance, as demonstrated by key metrics such as accuracy and precision. Specifically, our method achieves accuracy rates ranging from 99.08% to 100% on the Car-Hacking dataset, with a remarkably short training time.
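A generic example of what a lightweight CNN-based IDS for in-vehicle traffic might look like is sketched below; the input length, channel widths, and class count are assumptions, not the paper's architecture.

```python
# Sketch: small 1-D CNN classifier over in-vehicle network records (e.g. a CAN
# ID plus data bytes arranged as a short sequence); all sizes are placeholders.
import torch
import torch.nn as nn

class LightweightIDS(nn.Module):
    def __init__(self, seq_len=29, n_classes=5):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv1d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool1d(2),
            nn.Conv1d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),
            nn.Flatten(),
            nn.Linear(32, n_classes))

    def forward(self, x):          # x: (batch, 1, seq_len)
        return self.net(x)

model = LightweightIDS()
print(sum(p.numel() for p in model.parameters()), "parameters")
print(model(torch.randn(8, 1, 29)).shape)       # torch.Size([8, 5])
```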