Journal Articles
19 articles found
Depth-Guided Vision Transformer With Normalizing Flows for Monocular 3D Object Detection
1
Authors: Cong Pan, Junran Peng, Zhaoxiang Zhang. 《IEEE/CAA Journal of Automatica Sinica》 SCIE EI CSCD, 2024, No. 3, pp. 673-689 (17 pages)
Monocular 3D object detection is challenging due to the lack of accurate depth information. Some methods estimate pixel-wise depth maps with off-the-shelf depth estimators and then use them as an additional input to augment the RGB images. Depth-based methods attempt to convert estimated depth maps to pseudo-LiDAR and then use LiDAR-based object detectors, or focus on image and depth fusion learning. However, they show limited performance and efficiency as a result of depth inaccuracy and complex convolutional fusion modes. Different from these approaches, our proposed depth-guided vision transformer with normalizing flows (NF-DVT) network uses normalizing flows to build priors in depth maps to achieve more accurate depth information. We then develop a novel Swin-Transformer-based backbone with a fusion module that processes RGB image patches and depth map patches in two separate branches and fuses them using cross-attention to exchange information. Furthermore, with the help of pixel-wise relative depth values in depth maps, we develop new relative position embeddings in the cross-attention mechanism to capture a more accurate sequence ordering of input tokens. Our method is the first Swin-Transformer-based backbone architecture for monocular 3D object detection. Experimental results on the KITTI and the challenging Waymo Open datasets show the effectiveness of our proposed method and its superior performance over previous counterparts. (An illustrative sketch of the cross-attention fusion step follows below.)
Keywords: Monocular 3D object detection, normalizing flows, Swin Transformer
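The cross-attention exchange between the RGB and depth branches described in this abstract can be illustrated with a minimal PyTorch sketch. This is not the authors' NF-DVT code; the token shapes, the use of `nn.MultiheadAttention`, and the residual/LayerNorm arrangement are illustrative assumptions.

```python
import torch
import torch.nn as nn

class CrossModalFusion(nn.Module):
    """Toy two-branch fusion: RGB tokens attend to depth tokens and vice versa."""
    def __init__(self, dim=96, heads=4):
        super().__init__()
        self.rgb_from_depth = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.depth_from_rgb = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.norm_rgb = nn.LayerNorm(dim)
        self.norm_depth = nn.LayerNorm(dim)

    def forward(self, rgb_tokens, depth_tokens):
        # Query one modality with the other and add the result residually.
        r, _ = self.rgb_from_depth(rgb_tokens, depth_tokens, depth_tokens)
        d, _ = self.depth_from_rgb(depth_tokens, rgb_tokens, rgb_tokens)
        return self.norm_rgb(rgb_tokens + r), self.norm_depth(depth_tokens + d)

if __name__ == "__main__":
    rgb = torch.randn(2, 196, 96)    # (batch, patches, channels) for image patches
    depth = torch.randn(2, 196, 96)  # matching depth-map patches
    fused_rgb, fused_depth = CrossModalFusion()(rgb, depth)
    print(fused_rgb.shape, fused_depth.shape)  # torch.Size([2, 196, 96]) twice
```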
3D Vehicle Detection Algorithm Based on Multimodal Decision-Level Fusion
2
Authors: Peicheng Shi, Heng Qi, Zhiqiang Liu, Aixi Yang. 《Computer Modeling in Engineering & Sciences》 SCIE EI, 2023, No. 6, pp. 2007-2023 (17 pages)
3D vehicle detection based on LiDAR-camera fusion is becoming an emerging research topic in autonomous driving. The algorithm based on the camera-LiDAR object candidate fusion method (CLOCs) is currently considered an effective decision-level fusion algorithm, but it does not fully utilize the extracted 3D and 2D features. Therefore, we propose a 3D vehicle detection algorithm based on multimodal decision-level fusion. First, the anchor point of the 3D detection bounding box is projected into the 2D image, the distance between the 2D and 3D anchor points is calculated, and this distance is used as a new fusion feature to enhance the feature redundancy of the network. Subsequently, an attention module, squeeze-and-excitation networks, is added to weight each feature channel, enhancing important features and suppressing useless ones. The experimental results show that the mean average precision of the algorithm on the KITTI dataset is 82.96%, which outperforms previous state-of-the-art multimodal fusion-based methods, and the average accuracy under the Easy, Moderate, and Hard evaluation indicators reaches 88.96%, 82.60%, and 77.31%, respectively, higher than the original CLOCs model by 1.02%, 2.29%, and 0.41%. Compared with the original CLOCs algorithm, our algorithm has higher accuracy and better performance in 3D vehicle detection. (An illustrative sketch of the anchor-point projection and distance feature follows below.)
Keywords: 3D vehicle detection, multimodal fusion, CLOCs, network structure optimization, attention module
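The 2D-3D anchor distance feature described above can be sketched in a few lines. This is not the paper's implementation; the pinhole projection of the 3D box center and the KITTI-like intrinsics are illustrative assumptions.

```python
import numpy as np

def anchor_distance_feature(center_3d, center_2d, K):
    """Project a 3D box center (camera coordinates, metres) with intrinsics K,
    then return the pixel distance to the 2D detection's anchor point."""
    x, y, z = center_3d
    u = K[0, 0] * x / z + K[0, 2]
    v = K[1, 1] * y / z + K[1, 2]
    return float(np.hypot(u - center_2d[0], v - center_2d[1]))

if __name__ == "__main__":
    K = np.array([[721.5, 0.0, 609.6],   # KITTI-like intrinsics (illustrative values)
                  [0.0, 721.5, 172.9],
                  [0.0, 0.0, 1.0]])
    d = anchor_distance_feature(center_3d=(2.0, 1.5, 20.0), center_2d=(690.0, 230.0), K=K)
    print(f"2D-3D anchor distance: {d:.1f} px")  # small distance -> likely the same object
```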
MFF-Net: Multimodal Feature Fusion Network for 3D Object Detection
3
Authors: Peicheng Shi, Zhiqiang Liu, Heng Qi, Aixi Yang. 《Computers, Materials & Continua》 SCIE EI, 2023, No. 6, pp. 5615-5637 (23 pages)
In complex traffic environments, it is very important for autonomous vehicles to accurately perceive the dynamic information of surrounding vehicles in advance. The accuracy of 3D object detection is affected by problems such as illumination changes, object occlusion, and detection distance. To face these challenges, we propose a multimodal feature fusion network for 3D object detection (MFF-Net). In this research, a spatial transformation projection algorithm is first used to map image features into the point cloud feature space, so that image and point cloud features share the same spatial dimension when fused. Then, feature channels are weighted with an adaptive expression augmentation fusion network to enhance important features, suppress useless ones, and increase the directionality of the network toward informative features. Finally, the one-dimensional threshold in the non-maximum suppression algorithm is raised to reduce false and missed detections. Together these components form a complete 3D object detection network based on multimodal feature fusion. The experimental results show that the proposed network achieves an average accuracy of 82.60% on the Karlsruhe Institute of Technology and Toyota Technological Institute (KITTI) dataset, outperforming previous state-of-the-art multimodal fusion networks. Under the Easy, Moderate, and Hard evaluation indicators, the accuracy reaches 90.96%, 81.46%, and 75.39%, respectively. This shows that MFF-Net performs well in 3D object detection. (An illustrative sketch of the channel-weighting step follows below.)
Keywords: 3D object detection, multimodal fusion, neural network, autonomous driving, attention mechanism
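The adaptive channel-weighting step resembles a squeeze-and-excitation gate; a minimal sketch follows, assuming a standard SE bottleneck rather than the paper's exact adaptive expression augmentation fusion network.

```python
import torch
import torch.nn as nn

class ChannelReweight(nn.Module):
    """Squeeze-and-excitation style gate: globally pool each channel,
    pass it through a small bottleneck MLP, and rescale the feature map."""
    def __init__(self, channels=64, reduction=16):
        super().__init__()
        self.fc = nn.Sequential(
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
            nn.Sigmoid(),
        )

    def forward(self, x):                       # x: (B, C, H, W)
        w = x.mean(dim=(2, 3))                  # squeeze: (B, C)
        w = self.fc(w).unsqueeze(-1).unsqueeze(-1)
        return x * w                            # excite: emphasise useful channels

if __name__ == "__main__":
    feat = torch.randn(2, 64, 50, 50)           # fused feature map (illustrative shape)
    print(ChannelReweight(64)(feat).shape)      # torch.Size([2, 64, 50, 50])
```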
Monocular 3D object detection with Pseudo-LiDAR confidence sampling and hierarchical geometric feature extraction in 6G network
4
Authors: Jianlong Zhang, Guangzu Fang, Bin Wang, Xiaobo Zhou, Qingqi Pei, Chen Chen. 《Digital Communications and Networks》 SCIE CSCD, 2023, No. 4, pp. 827-835 (9 pages)
The high bandwidth and low latency of 6G network technology enable the successful application of monocular 3D object detection on vehicle platforms. Pseudo-LiDAR-based monocular 3D object detection is a low-cost, low-power solution compared to LiDAR solutions in the field of autonomous driving. However, this technique has two problems: (1) the poor quality of generated Pseudo-LiDAR point clouds resulting from the nonlinear error distribution of monocular depth estimation, and (2) the weak representation capability of point cloud features, because LiDAR-based 3D detection networks neglect the global geometric structure of point clouds. Therefore, we propose a Pseudo-LiDAR confidence sampling strategy and a hierarchical geometric feature extraction module for monocular 3D object detection. We first design a point cloud confidence sampling strategy based on a 3D Gaussian distribution to assign small confidence to points with large depth estimation error and filter them out according to the confidence. Then, we present a hierarchical geometric feature extraction module that aggregates local neighborhood features and uses a dual transformer to capture the global geometric features of the point cloud. Finally, our detection framework is based on Point-Voxel-RCNN (PV-RCNN) with high-quality Pseudo-LiDAR and enriched geometric features as input. The experimental results show that our method achieves satisfactory results in monocular 3D object detection. (An illustrative sketch of confidence-based point filtering follows below.)
Keywords: Monocular 3D object detection, Pseudo-LiDAR, confidence sampling, hierarchical geometric feature extraction
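A minimal sketch of confidence-based filtering of pseudo-LiDAR points, assuming a per-point scalar depth-error estimate and a 1D Gaussian confidence model; the paper uses a 3D Gaussian distribution, and the residual input here is a stand-in.

```python
import numpy as np

def confidence_filter(points, depth_residual, sigma=0.5, keep_thresh=0.3):
    """Assign each pseudo-LiDAR point a confidence that decays with its (estimated)
    depth error under a Gaussian model, then drop low-confidence points."""
    conf = np.exp(-0.5 * (depth_residual / sigma) ** 2)   # 1 for zero error, -> 0 for large error
    return points[conf >= keep_thresh], conf

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    pts = rng.uniform(-20, 20, size=(1000, 3))            # fake pseudo-LiDAR points
    residual = rng.normal(0.0, 1.0, size=1000)            # stand-in for a per-point depth error estimate
    kept, conf = confidence_filter(pts, residual)
    print(f"kept {len(kept)} of {len(pts)} points")
```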
3D Object Detection with Attention: Shell-Based Modeling
5
Authors: Xiaorui Zhang, Ziquan Zhao, Wei Sun, Qi Cui. 《Computer Systems Science & Engineering》 SCIE EI, 2023, No. 7, pp. 537-550 (14 pages)
LiDAR point cloud-based 3D object detection aims to sense the surrounding environment by anchoring objects with bounding boxes (BBox). However, in the three-dimensional space of autonomous driving scenes, previous object detection methods, because they pre-process the original LiDAR point cloud into voxels or pillars, lose the coordinate information of the original point cloud, suffer from slow detection speed, and produce inaccurate bounding box positioning. To address these issues, this study proposes a new two-stage network structure that extracts point cloud features directly with PointNet++, which effectively preserves the original point cloud coordinate information. To improve detection accuracy, a shell-based modeling method is proposed: it first roughly determines which spherical shell a coordinate belongs to, and then refines the result toward the ground truth, thereby narrowing the localization range and improving detection accuracy. To improve the recall of 3D object detection with bounding boxes, this paper designs a self-attention module with a skip connection structure, which highlights some features by weighting them along the feature dimensions. After training, feature weights that favor object detection become larger, so the extracted features are better adapted to the detection task. Extensive comparison and ablation experiments on the KITTI dataset verify the effectiveness of the proposed method in improving recall and precision. (An illustrative sketch of the spherical-shell assignment follows below.)
Keywords: 3D object detection, autonomous driving, point cloud, shell-based modeling, self-attention mechanism
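A minimal sketch of the shell-assignment idea: bin each point into a concentric spherical shell around a proposal center. Shell width, shell count, and the synthetic point layout are illustrative assumptions, not the paper's settings.

```python
import numpy as np

def shell_index(points, center, shell_width=0.5, num_shells=8):
    """Assign each point to a concentric spherical shell around a proposal center.
    The shell index is a coarse radial bin that can later be refined toward the
    ground-truth box, as the abstract describes."""
    radii = np.linalg.norm(points - center, axis=1)
    return np.clip((radii / shell_width).astype(int), 0, num_shells - 1)

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    pts = rng.normal(0.0, 1.5, size=(2000, 3))            # points around a candidate object
    idx = shell_index(pts, center=np.zeros(3))
    print(np.bincount(idx, minlength=8))                   # how many points fall in each shell
```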
Point Cloud Processing Methods for 3D Point Cloud Detection Tasks
6
Authors: WANG Chongchong, LI Yao, WANG Beibei, CAO Hong, ZHANG Yanyong. 《ZTE Communications》, 2023, No. 4, pp. 38-46 (9 pages)
Light detection and ranging (LiDAR) sensors play a vital role in acquiring 3D point cloud data and extracting valuable information about objects for tasks such as autonomous driving, robotics, and virtual reality (VR). However, the sparse and disordered nature of the 3D point cloud poses significant challenges to feature extraction, and overcoming these limitations is critical for 3D point cloud processing. 3D point cloud object detection is a challenging and crucial task, in which point cloud processing and feature extraction methods play a central role and have a significant impact on downstream detection performance. In this overview of outstanding work on object detection from 3D point clouds, we specifically focus on summarizing the methods employed for 3D point cloud processing. We introduce the way point clouds are processed in classical 3D object detection algorithms and the improvements made to solve problems in point cloud processing. Different voxelization methods and point cloud sampling strategies influence the extracted features and thereby the final detection performance. (An illustrative sketch of simple voxelization follows below.)
Keywords: point cloud processing, 3D object detection, point cloud voxelization, bird's eye view, deep learning
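A minimal sketch of the voxelization step common to the classical detectors surveyed here, assuming a simple hash-map grouping with a per-voxel point cap; real pipelines typically do this in optimized CUDA/C++.

```python
import numpy as np

def voxelize(points, voxel_size=(0.2, 0.2, 0.2), max_points_per_voxel=32):
    """Group raw LiDAR points into regular voxels (VoxelNet-style preprocessing).
    Returns a dict mapping integer voxel coordinates to the points inside."""
    coords = np.floor(points[:, :3] / np.asarray(voxel_size)).astype(np.int64)
    voxels = {}
    for c, p in zip(map(tuple, coords), points):
        bucket = voxels.setdefault(c, [])
        if len(bucket) < max_points_per_voxel:      # simple per-voxel point cap
            bucket.append(p)
    return voxels

if __name__ == "__main__":
    rng = np.random.default_rng(2)
    cloud = rng.uniform(-10, 10, size=(5000, 4))     # x, y, z, intensity
    vox = voxelize(cloud)
    print(f"{len(vox)} occupied voxels")
```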
3D obstacle detection of indoor mobile robots by floor detection and rejection (Cited by: 1)
7
Authors: Donggeun Cha, Woojin Chung. 《Journal of Measurement Science and Instrumentation》 CAS, 2013, No. 4, pp. 381-384 (4 pages)
Obstacle detection is essential for mobile robots to avoid collisions with obstacles. Mobile robots usually operate in indoor environments, where they encounter various kinds of obstacles; however, a 2D range sensor can sense obstacles only in a 2D plane. In contrast, a 3D range sensor makes it possible to detect ground and aerial obstacles that a 2D range sensor cannot sense. In this paper, we present a 3D obstacle detection method that helps overcome the limitations of 2D range sensors with regard to obstacle detection. The indoor environment typically consists of a flat floor, whose position can be determined by estimating a plane using the least squares method. Once the position of the floor is determined, obstacle points can be identified by rejecting the points of the floor. In the experimental section, we show the results of this approach using a Kinect sensor. (An illustrative sketch of plane fitting and floor rejection follows below.)
Keywords: 3D obstacle detection, mobile robot, Kinect sensor
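A minimal NumPy sketch of the floor-rejection idea: fit a plane z = ax + by + c to floor points by least squares and keep only points sufficiently above it. The height threshold and the synthetic floor/obstacle data are illustrative assumptions.

```python
import numpy as np

def fit_floor_plane(points):
    """Least-squares plane z = a*x + b*y + c fitted to (assumed) floor points."""
    A = np.column_stack([points[:, 0], points[:, 1], np.ones(len(points))])
    coeffs, *_ = np.linalg.lstsq(A, points[:, 2], rcond=None)
    return coeffs                                     # (a, b, c)

def reject_floor(points, coeffs, height_thresh=0.05):
    """Keep only points clearly above the fitted plane: these are obstacle points."""
    a, b, c = coeffs
    expected_z = a * points[:, 0] + b * points[:, 1] + c
    return points[points[:, 2] - expected_z > height_thresh]

if __name__ == "__main__":
    rng = np.random.default_rng(3)
    floor = np.column_stack([rng.uniform(-2, 2, 3000), rng.uniform(0, 4, 3000),
                             rng.normal(0.0, 0.01, 3000)])              # flat floor with sensor noise
    box = rng.uniform([0.5, 1.0, 0.0], [1.0, 1.5, 0.6], size=(500, 3))  # an obstacle on the floor
    cloud = np.vstack([floor, box])
    obstacles = reject_floor(cloud, fit_floor_plane(floor))
    print(f"{len(obstacles)} obstacle points out of {len(cloud)}")
```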
Traffic Accident Detection Based on Deformable Frustum Proposal and Adaptive Space Segmentation
8
Authors: Peng Chen, Weiwei Zhang, Ziyao Xiao, Yongxiang Tian. 《Computer Modeling in Engineering & Sciences》 SCIE EI, 2022, No. 1, pp. 97-109 (13 pages)
Road accident detection plays an important role in abnormal scene reconstruction for intelligent transportation systems and abnormal event warning for autonomous driving. This paper presents a novel 3D object detector and adaptive space partitioning algorithm to infer traffic accidents quantitatively. Using 2D region proposals in an RGB image, the method generates deformable frustums from the point cloud for each 2D region proposal and then extracts features frustum-wise with a farthest point sampling network (FPS-Net) and a feature extraction network (FE-Net). Subsequently, an encoder-decoder network (ED-Net) performs 3D oriented bounding box (OBB) regression. Meanwhile, an adaptive least square regression (ALSR) method is proposed to split the 3D OBB. Finally, a reduced OBB intersection test detects traffic accidents via the separating surface theorem (SST). In experiments on the KITTI benchmark, our proposed 3D object detector outperforms other state-of-the-art methods. Meanwhile, the collision detection algorithm achieves a satisfactory accuracy of 91.8% on our SHTA dataset. (An illustrative sketch of an oriented-box separating-axis test follows below.)
Keywords: Traffic accident detection, 3D object detection, deformable frustum proposal, adaptive space segmentation
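The reduced OBB intersection test is sketched below for the bird's-eye-view case using a separating-axis check on 2D oriented boxes; this is a simplified stand-in for the paper's separating surface theorem test on full 3D OBBs.

```python
import numpy as np

def obb_corners_bev(cx, cy, length, width, yaw):
    """Corners of an oriented box in the ground plane (bird's-eye view)."""
    c, s = np.cos(yaw), np.sin(yaw)
    local = np.array([[ length / 2,  width / 2], [ length / 2, -width / 2],
                      [-length / 2, -width / 2], [-length / 2,  width / 2]])
    R = np.array([[c, -s], [s, c]])
    return local @ R.T + np.array([cx, cy])

def obbs_intersect(box_a, box_b):
    """Separating-axis test for two BEV oriented boxes (corner arrays, shape (4, 2)).
    If any edge normal separates the projections, the boxes do not overlap."""
    for box in (box_a, box_b):
        for i in range(4):
            edge = box[(i + 1) % 4] - box[i]
            axis = np.array([-edge[1], edge[0]])            # edge normal
            pa, pb = box_a @ axis, box_b @ axis
            if pa.max() < pb.min() or pb.max() < pa.min():
                return False                                 # found a separating axis
    return True

if __name__ == "__main__":
    a = obb_corners_bev(0.0, 0.0, 4.5, 1.8, 0.0)            # two car-sized boxes
    b = obb_corners_bev(3.0, 0.5, 4.5, 1.8, np.pi / 6)
    print("collision" if obbs_intersect(a, b) else "no collision")
```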
ERROR ANALYSIS OF 3D DETECTING SYSTEM BASED ON WHOLE-FIELD PARALLEL CONFOCAL MICROSCOPE
9
Authors: Wang Yonghong, Yu Xiaofen. 《Chinese Journal of Mechanical Engineering》 SCIE EI CAS CSCD, 2005, No. 4, pp. 623-626 (4 pages)
Compared with traditional scanning confocal microscopy, the effects of various factors on the characteristics of the multi-beam parallel confocal system are discussed and the error factors in the system are analyzed. The construction and working principle of the non-scanning 3D detecting system are introduced, and experimental results confirm the effect of these factors on the detecting system.
Keywords: 3D profile, parallel detecting, confocal, microlens array
Spatial-Temporal Correlation 3D Vehicle Detection and Tracking System with Multiple Surveillance Cameras
10
Authors: 薛炜彭, 吴明虎, 王琳. 《Journal of Shanghai Jiao Tong University (Science)》 EI, 2023, No. 1, pp. 52-60 (9 pages)
Compared to 3D object detection using a single camera, multiple cameras can overcome limitations in field of view, occlusion, and low detection confidence. This study employs multiple surveillance cameras and develops a cooperative 3D object detection and tracking framework by incorporating temporal and spatial information. The framework consists of a 3D vehicle detection model, a cooperative spatial-temporal relation scheme, and a heuristic camera constellation method. Specifically, the proposed cross-camera association scheme combines the geometric relationship between multiple cameras and the objects in the corresponding detections. The spatial-temporal method associates vehicles between different points of view at a single timestamp and tracks vehicles over time. The proposed framework is evaluated on a synthetic cooperative dataset and shows high reliability: cooperative perception can recall more than 66% of a trajectory, compared with 11% for single-point sensing. This could contribute to full-range surveillance for intelligent transportation systems. (An illustrative sketch of cross-camera association follows below.)
Keywords: multi-object tracking, 3D detection, multiple sensors, cooperative perception, spatial-temporal correlation, intelligent transportation system
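A minimal sketch of cross-camera association, assuming detections from both cameras have already been transformed into a common world frame and are matched by center distance with the Hungarian algorithm; the paper's geometric association scheme is richer than this.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def associate_detections(centers_cam1, centers_cam2, max_dist=2.0):
    """Match 3D detections from two cameras by minimising the summed centre
    distance, then reject pairs that are still far apart."""
    cost = np.linalg.norm(centers_cam1[:, None, :] - centers_cam2[None, :, :], axis=-1)
    rows, cols = linear_sum_assignment(cost)
    return [(r, c) for r, c in zip(rows, cols) if cost[r, c] <= max_dist]

if __name__ == "__main__":
    cam1 = np.array([[10.0, 2.0, 0.0], [25.0, -1.0, 0.0]])               # vehicle centres seen by camera 1
    cam2 = np.array([[25.4, -0.8, 0.0], [10.3, 2.1, 0.0], [40.0, 5.0, 0.0]])
    print(associate_detections(cam1, cam2))                               # [(0, 1), (1, 0)]
```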
V2I-Based Environment Perception for Autonomous Vehicles at Intersections (Cited by: 2)
11
Authors: Xuting Duan, Hang Jiang, Daxin Tian, Tianyuan Zou, Jianshan Zhou, Yue Cao. 《China Communications》 SCIE CSCD, 2021, No. 7, pp. 1-12 (12 pages)
In recent years, autonomous driving technology has made good progress, but non-cooperative vehicle intelligence still faces many technical bottlenecks when dealing with urban road autonomous driving challenges. V2I (vehicle-to-infrastructure) communication is a potential solution to enable cooperative intelligence between vehicles and roads. In this paper, RGB-PVRCNN, an environment perception framework, is proposed to improve the environmental awareness of autonomous vehicles at intersections by leveraging V2I communication technology. The framework integrates vision features based on PVRCNN. The normal distributions transform (NDT) point cloud registration algorithm is deployed both onboard and at the roadside to obtain the position of the autonomous vehicles and to build the local map; objects detected by the roadside multi-sensor system are sent back to the autonomous vehicles to enhance their perception ability, benefiting path planning and traffic efficiency at the intersection. Field-test results show that our method can effectively extend the environmental perception ability and range of autonomous vehicles at the intersection and outperforms the PointPillar algorithm and the VoxelRCNN algorithm in detection accuracy.
Keywords: V2I, environmental perception, autonomous vehicles, 3D object detection
ARM3D: Attention-based relation module for indoor 3D object detection (Cited by: 2)
12
Authors: Yuqing Lan, Yao Duan, Chenyi Liu, Chenyang Zhu, Yueshan Xiong, Hui Huang, Kai Xu. 《Computational Visual Media》 SCIE EI CSCD, 2022, No. 3, pp. 395-414 (20 pages)
Relation contexts have been proved useful for many challenging vision tasks. In the field of 3D object detection, previous methods have taken advantage of context encoding, graph embedding, or explicit relation reasoning to extract relation contexts. However, there are inevitably redundant relation contexts due to noisy or low-quality proposals. In fact, invalid relation contexts usually indicate underlying scene misunderstanding and ambiguity, which may, on the contrary, reduce performance in complex scenes. Inspired by recent attention mechanisms like the Transformer, we propose a novel 3D attention-based relation module (ARM3D). It encompasses object-aware relation reasoning to extract pair-wise relation contexts among qualified proposals and an attention module to distribute attention weights toward different relation contexts. In this way, ARM3D can take full advantage of useful relation contexts and filter out less relevant or even confusing contexts, which mitigates ambiguity in detection. We have evaluated the effectiveness of ARM3D by plugging it into several state-of-the-art 3D object detectors, showing more accurate and robust detection results. Extensive experiments show the capability and generalization of ARM3D on 3D object detection. Our source code is available at https://github.com/lanlan96/ARM3D. (An illustrative sketch of pair-wise relation attention follows below.)
Keywords: attention mechanism, scene understanding, relational reasoning, 3D indoor object detection
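A minimal PyTorch sketch of pair-wise relation attention in the spirit of ARM3D: build relation features for every proposal pair, score them, and aggregate with softmax weights. The feature dimension, MLP sizes, and residual update are illustrative assumptions, not the released ARM3D code.

```python
import torch
import torch.nn as nn

class RelationAttention(nn.Module):
    """Toy relation module: pair-wise relation features between proposal features,
    scored by a small MLP and aggregated with softmax weights so that
    uninformative relation contexts receive little attention."""
    def __init__(self, dim=128):
        super().__init__()
        self.rel = nn.Sequential(nn.Linear(2 * dim, dim), nn.ReLU())
        self.score = nn.Linear(dim, 1)

    def forward(self, proposals):                       # (N, dim) proposal features
        n, d = proposals.shape
        pairs = torch.cat([proposals.unsqueeze(1).expand(n, n, d),
                           proposals.unsqueeze(0).expand(n, n, d)], dim=-1)
        rel = self.rel(pairs)                           # (N, N, dim) relation contexts
        attn = torch.softmax(self.score(rel).squeeze(-1), dim=-1)   # (N, N) weights
        context = torch.einsum("ij,ijd->id", attn, rel)
        return proposals + context                      # relation-enhanced features

if __name__ == "__main__":
    feats = torch.randn(16, 128)                        # 16 object proposals
    print(RelationAttention()(feats).shape)             # torch.Size([16, 128])
```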
RGB Image- and Lidar-Based 3D Object Detection Under Multiple Lighting Scenarios
13
Authors: Wentao Chen, Wei Tian, Xiang Xie, Wilhelm Stork. 《Automotive Innovation》 EI CSCD, 2022, No. 3, pp. 251-259 (9 pages)
In recent years, camera- and lidar-based 3D object detection has achieved great progress. However, the related research mainly focuses on normal illumination conditions; the performance of these 3D detection algorithms decreases under low-lighting scenarios such as at night. This work attempts to improve fusion strategies for 3D vehicle detection accuracy under multiple lighting conditions. First, distance and uncertainty information is incorporated to guide the "painting" of semantic information onto the point cloud during data preprocessing. Moreover, a multitask framework is designed that incorporates uncertainty learning to improve detection accuracy under low-illumination scenarios. In validation on the KITTI and Dark-KITTI benchmarks, the proposed method increases vehicle detection accuracy on the KITTI benchmark by 1.35%, and the generality of the model is validated on the proposed Dark-KITTI dataset with a gain of 0.64% for vehicle detection. (An illustrative sketch of point painting follows below.)
Keywords: 3D object detection, multi-sensor fusion, uncertainty estimation, semantic segmentation, PointPainting
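The "painting" of semantic information onto the point cloud can be sketched as follows, assuming points already expressed in the camera frame, a pinhole projection, and an (H, W, C) class-score map; the paper additionally weights the painting with distance and uncertainty, which is omitted here.

```python
import numpy as np

def paint_points(points, semantic_scores, K):
    """Append per-pixel semantic scores to LiDAR points that project inside the
    image ('painting'). points are in the camera frame; semantic_scores is an
    (H, W, C) map of class probabilities from an image segmentation network."""
    h, w, c = semantic_scores.shape
    z = points[:, 2]
    u = (K[0, 0] * points[:, 0] / z + K[0, 2]).astype(int)
    v = (K[1, 1] * points[:, 1] / z + K[1, 2]).astype(int)
    valid = (z > 0) & (u >= 0) & (u < w) & (v >= 0) & (v < h)
    painted = np.zeros((len(points), 3 + c), dtype=np.float32)
    painted[:, :3] = points[:, :3]
    painted[valid, 3:] = semantic_scores[v[valid], u[valid]]
    return painted

if __name__ == "__main__":
    rng = np.random.default_rng(4)
    pts = rng.uniform([-10, -2, 1], [10, 2, 40], size=(2000, 3))              # points in front of the camera
    scores = rng.dirichlet(np.ones(4), size=(375, 1242)).astype(np.float32)   # fake (H, W, 4) segmentation
    K = np.array([[721.5, 0, 609.6], [0, 721.5, 172.9], [0, 0, 1.0]])
    print(paint_points(pts, scores, K).shape)                                  # (2000, 7)
```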
PointGAT: Graph attention networks for 3D object detection
14
Authors: Haoran Zhou, Wei Wang, Gang Liu, Qingguo Zhou. 《Intelligent and Converged Networks》 EI, 2022, No. 2, pp. 204-216 (13 pages)
3D object detection is a critical technology in many applications, and among the various detection methods, point-cloud-based methods have been the most popular research topic in recent years. Since graph neural networks (GNNs) are considered effective for dealing with point clouds, in this work we combine them with the attention mechanism and propose a 3D object detection method named PointGAT. The proposed PointGAT outperforms previous approaches on the KITTI test dataset. Experiments in real campus scenarios also demonstrate the potential of our method for further applications.
Keywords: 3D object detection, point cloud, graph neural network, attention mechanism
LWD-3D: Lightweight Detector Based on Self-Attention for 3D Object Detection
15
Authors: Shuo Yang, Huimin Lu, Tohru Kamiya, Yoshihisa Nakatoh, Seiichi Serikawa. 《CAAI Artificial Intelligence Research》, 2022, No. 2, pp. 137-143 (7 pages)
Lightweight modules play a key role in 3D object detection tasks for autonomous driving and are necessary for the practical deployment of 3D object detectors. At present, research still focuses on constructing complex models and calculations that improve detection precision at the expense of running speed. However, building a lightweight model that learns global features from point cloud data for 3D object detection remains a significant problem. In this paper, we focus on combining convolutional neural networks with self-attention-based vision transformers to realize lightweight and high-speed computing for 3D object detection. We propose lightweight detection 3D (LWD-3D), a point cloud conversion and lightweight vision transformer for autonomous driving. LWD-3D utilizes a one-shot regression framework in 2D space and generates 3D object bounding boxes from point cloud data, providing a new feature representation method based on a vision transformer for 3D detection applications. Results on the KITTI 3D dataset show that LWD-3D achieves real-time detection (time per image < 20 ms). LWD-3D obtains a mean average precision (mAP) 75% higher than that of another 3D real-time detector, with half the number of parameters. Our research extends the application of vision transformers to 3D object detection tasks.
Keywords: 3D object detection, point clouds, vision transformer, one-shot regression, real-time
Fire and Gas System Detector Placement Design and Evaluation Technology Based on an Occlusion-Rate Optimization Algorithm
16
Authors: 王闻博, 巫前进, 马云鹂. 《仪器仪表用户》, 2021, No. 9, pp. 1-5 (5 pages)
This paper outlines the problem of inaccurate flame detection in fire and gas system (FGS) detector placement design and evaluation software and discusses how the detection effectiveness of flame detectors differs under different occlusion modes. By studying the relationship between infrared radiation energy, occlusion mode, and occlusion rate, a flame occlusion model optimization algorithm is introduced and applied to Detect3D, a professional software package for fire and gas system detector placement design and evaluation. The software uses a quantitative method to calculate the coverage of flame, combustible gas, and toxic gas detectors, and supports placement design, evaluation, and optimization of fire and gas detectors to improve coverage effectiveness. It is an effective technical means of mitigating risk and the severity of the consequences of hazardous accidents.
Keywords: fire and gas system, detector placement, flame detector, occlusion rate, Detect3D
A volumetric change detection framework using UAV oblique photogrammetry – a case study of ultra-high-resolution monitoring of progressive building collapse
17
Authors: Ningli Xu, Debao Huang, Shuang Song, Xiao Ling, Chris Strasbaugh, Alper Yilmaz, Halil Sezen, Rongjun Qin. 《International Journal of Digital Earth》 SCIE, 2021, No. 11, pp. 1705-1720 (16 pages)
In this paper, we present a case study in which an unmanned aerial vehicle (UAV) performs fine-scale 3D change detection and monitoring of the progressive collapse of a building during a demolition event. Multi-temporal oblique photogrammetry images are collected and 3D point clouds are generated at different stages of the demolition. The geometric accuracy of the generated point clouds has been evaluated against both airborne and terrestrial LiDAR point clouds, achieving average distances of 12 cm and 16 cm for the roof and façade, respectively. We propose a hierarchical volumetric change detection framework that unifies multi-temporal UAV images for pose estimation (free of ground control points), reconstruction, and a coarse-to-fine 3D density change analysis. This work provides a solution capable of addressing change detection on full 3D time-series datasets in which dramatic scene content changes appear progressively. Our change detection results on the building demolition event have been evaluated against manually marked ground-truth changes and achieve an F-1 score varying from 0.78 to 0.92, with consistently high precision (0.92-0.99). Volumetric changes through the demolition are derived from the change detection and are shown to reflect the qualitative and quantitative progression of the building demolition. (An illustrative sketch of voxel-based change detection follows below.)
Keywords: 3D change detection, multitemporal data registration, oblique photogrammetry
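A minimal sketch of coarse volumetric change detection: voxelize two epochs of point clouds and diff their occupancy sets. Voxel size and the synthetic before/after clouds are illustrative assumptions; the paper's hierarchical coarse-to-fine density analysis is more elaborate.

```python
import numpy as np

def occupancy_change(cloud_before, cloud_after, voxel_size=0.5):
    """Coarse volumetric change detection: voxelise both epochs and report voxels
    that are occupied in one epoch but not in the other."""
    def occupied(cloud):
        return set(map(tuple, np.floor(cloud / voxel_size).astype(np.int64)))
    before, after = occupied(cloud_before), occupied(cloud_after)
    removed = before - after          # e.g. demolished volume
    added = after - before            # e.g. debris piles
    return removed, added

if __name__ == "__main__":
    rng = np.random.default_rng(5)
    building = rng.uniform([0, 0, 0], [20, 10, 15], size=(20000, 3))   # epoch 1: intact building
    rubble = rng.uniform([0, 0, 0], [20, 10, 3], size=(20000, 3))      # epoch 2: collapsed to 3 m
    removed, added = occupancy_change(building, rubble)
    print(f"{len(removed)} voxels removed, {len(added)} voxels added")
```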
Rapid development methodology of agricultural robot navigation system working in GNSS-denied environment
18
Authors: Run-Mao Zhao, Zheng Zhu, Jian-Neng Chen, Tao-Jie Yu, Jun-Jie Ma, Guo-Shuai Fan, Min Wu, Pei-Chen Huang. 《Advances in Manufacturing》 SCIE EI CAS CSCD, 2023, No. 4, pp. 601-617 (17 pages)
Robotic autonomous operating systems in global navigation satellite system (GNSS)-denied agricultural environments (greenhouses, feeding farms, and under canopy) have recently become a research hotspot. 3D light detection and ranging (LiDAR) localizes the robot from the environment and has become a popular perception sensor for navigating agricultural robots. A rapid development methodology for a 3D LiDAR-based navigation system for agricultural robots is proposed in this study, which includes: (i) an individual plant clustering and location estimation method (an improved Euclidean clustering algorithm); (ii) a robot path planning and tracking control method (the Lyapunov direct method); (iii) construction of a unified robot-LiDAR-plant virtual simulation environment (combined use of Gazebo and SolidWorks); and (iv) evaluation of the accuracy of the navigation system (triple evaluation: virtual simulation test, physical simulation test, and field test). Applying the proposed methodology, a navigation system for a grape field operation robot has been developed. The virtual simulation test, the physical simulation test with GNSS as ground truth, and the field test with a path tracer showed that the robot could travel along the planned path quickly and smoothly. The maximum and mean absolute errors of path tracking are 2.72 cm and 1.02 cm, and 3.12 cm and 1.31 cm, respectively, which meet the accuracy requirements of field operations and establish the effectiveness of the proposed methodology. The methodology has good scalability, can be implemented in a wide variety of field robots, and is promising for shortening the development cycle of agricultural robot navigation systems working in GNSS-denied environments. (An illustrative sketch of Euclidean clustering follows below.)
Keywords: agricultural robot, global navigation satellite system (GNSS)-denied environment, navigation system, 3D light detection and ranging (LiDAR), rapid development methodology
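A minimal sketch of Euclidean clustering for individual plant separation, using a k-d tree flood fill; the radius, minimum cluster size, and synthetic plant rows are illustrative assumptions rather than the paper's improved algorithm.

```python
import numpy as np
from scipy.spatial import cKDTree

def euclidean_cluster(points, radius=0.3, min_points=10):
    """Plain Euclidean clustering: grow clusters by repeatedly adding every point
    within `radius` of the current cluster (flood fill on a k-d tree)."""
    tree = cKDTree(points)
    labels = np.full(len(points), -1, dtype=int)   # -1: unvisited, -2: noise
    cluster_id = 0
    for seed in range(len(points)):
        if labels[seed] != -1:
            continue
        queue, members = [seed], []
        labels[seed] = cluster_id
        while queue:
            idx = queue.pop()
            members.append(idx)
            for nb in tree.query_ball_point(points[idx], radius):
                if labels[nb] == -1:
                    labels[nb] = cluster_id
                    queue.append(nb)
        if len(members) < min_points:                # too small to be a plant: mark as noise
            labels[np.array(members)] = -2
        else:
            cluster_id += 1
    return labels

if __name__ == "__main__":
    rng = np.random.default_rng(6)
    plants = [rng.normal(loc=(i * 2.0, 0.0, 0.5), scale=0.15, size=(300, 3)) for i in range(5)]
    labels = euclidean_cluster(np.vstack(plants))
    print(f"{labels.max() + 1} plant clusters found")  # expected: 5
```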
Fabrication of the ZnO/NiO p–n junction foam for the enhanced sensing performance
19
Authors: Jing-Jing Liang, Ming-Gang Zhao, Long-Jiang Ding, Si-Si Fan, Shou-Gang Chen. 《Chinese Chemical Letters》 SCIE CAS CSCD, 2017, No. 3, pp. 670-674 (5 pages)
P-type NiO foam with a rough nanostructured surface was prepared by surface treatment of Ni foam, and then decorated with n-type ZnO nanopyramids to construct a 3D p–n junction foam. The p–n junction foam was used for electrochemical detection of dopamine, and the sensing performance was improved significantly compared with single NiO and ZnO. High sensitivity (171 mμA/mmol/L), fast response (2 s), and excellent selectivity and stability were achieved. This was attributed to the introduction of numerous p–n junction interfaces, whose interfacial potential barrier acted as a tuning factor for the electrochemical determination of dopamine. The results demonstrate that introducing p–n junction interfaces is an effective way to improve biosensing performance.
Keywords: ZnO/NiO p–n junction, 3D architecture, electrochemical detection, dopamine