Journal Articles (3 results found)
1. Point Cloud Processing Methods for 3D Point Cloud Detection Tasks
Authors: WANG Chongchong, LI Yao, WANG Beibei, CAO Hong, ZHANG Yanyong. ZTE Communications, 2023, Issue 4, pp. 38-46 (9 pages).
Abstract: Light detection and ranging (LiDAR) sensors play a vital role in acquiring 3D point cloud data and extracting valuable information about objects for tasks such as autonomous driving, robotics, and virtual reality (VR). However, the sparse and disordered nature of the 3D point cloud poses significant challenges to feature extraction, and overcoming these limitations is critical for 3D point cloud processing. 3D point cloud object detection is a challenging and crucial task, in which point cloud processing and feature extraction methods play a central role and have a significant impact on subsequent detection performance. In this overview of outstanding work in object detection from the 3D point cloud, we specifically focus on summarizing the methods employed in 3D point cloud processing. We introduce the way point clouds are processed in classical 3D object detection algorithms and the improvements made to solve problems in point cloud processing. Different voxelization methods and point cloud sampling strategies influence the extracted features and thereby the final detection performance.
Keywords: point cloud processing; 3D object detection; point cloud voxelization; bird's eye view; deep learning
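As a hedged illustration of the voxelization step this survey covers, the sketch below (not taken from the paper; the detection ranges, voxel sizes, and function name are illustrative assumptions) shows one common way to bin a raw LiDAR point cloud into a fixed grid and collapse it into a bird's eye view count map that a detector could consume.

```python
import numpy as np

def voxelize_to_bev(points, x_range=(0.0, 70.4), y_range=(-40.0, 40.0),
                    z_range=(-3.0, 1.0), voxel_size=(0.2, 0.2, 4.0)):
    """Bin an (N, 3) LiDAR point cloud into a grid and return a bird's eye
    view (BEV) point-count map. Ranges and voxel sizes are illustrative
    defaults, not values from the surveyed paper."""
    # Keep only points inside the cropped detection range.
    mask = ((points[:, 0] >= x_range[0]) & (points[:, 0] < x_range[1]) &
            (points[:, 1] >= y_range[0]) & (points[:, 1] < y_range[1]) &
            (points[:, 2] >= z_range[0]) & (points[:, 2] < z_range[1]))
    pts = points[mask]

    # Convert metric coordinates to integer cell indices.
    ix = ((pts[:, 0] - x_range[0]) / voxel_size[0]).astype(np.int64)
    iy = ((pts[:, 1] - y_range[0]) / voxel_size[1]).astype(np.int64)

    nx = int(round((x_range[1] - x_range[0]) / voxel_size[0]))
    ny = int(round((y_range[1] - y_range[0]) / voxel_size[1]))

    # BEV count map: number of points falling into each (x, y) cell.
    bev = np.zeros((nx, ny), dtype=np.float32)
    np.add.at(bev, (ix, iy), 1.0)
    return bev

if __name__ == "__main__":
    cloud = np.random.uniform(low=[0, -40, -3], high=[70, 40, 1], size=(10000, 3))
    print(voxelize_to_bev(cloud).shape)  # (352, 400) with the defaults above
```

Changing the voxel size or sampling strategy changes what this grid encodes, which is exactly the design choice the abstract points to as driving detection performance.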
2. Quantifying Design Parameters of Symbology Page for Automotive Head up Display
Authors: Gupta Sharad, Karar Vinod, Saini Surender Singh, Jaggi Neena, Bajpai Phun Phun. Computer Technology and Application, 2011, Issue 8, pp. 658-662 (5 pages).
Abstract: This paper gives an overview of studies on parameters displayed on the Automotive Head Up Display (A-HUD), including the calculation and construction of a symbology page based on the study results. A study has been made of the vital parameters required by car drivers, and design calculations have been made based on design parameters such as field of view, distance from the design eye position, minimum character size viewable at a distance of 1.5 m between the driver and the projected image, and optical magnification factor. The display format suitable for A-HUD applications depends upon the parameters required to be displayed. The aspect ratio chosen is 4:3. This paper also provides a method to design the symbology page embedding six vital parameters with their relative positioning and size, considering the relative position between the display device and the optical elements, with a magnification factor of 2.5. The field of view obtained is 6.7° × 4.8°.
Keywords: automotive head up display (A-HUD); magnification; symbology; character size; human factors; field of view (FOV); design eye position (DEP)
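The kind of design-eye calculation this abstract describes can be sketched as below. Only the 1.5 m viewing distance and the reported 6.7° × 4.8° field of view come from the abstract; the 20 arcmin legibility subtense and the assumed virtual-image dimensions are illustrative assumptions, not the paper's derivation.

```python
import math

def min_character_height_mm(view_distance_m, subtense_arcmin=20.0):
    """Minimum character height for legibility at a given viewing distance.
    The 20 arcmin visual subtense is a common human-factors rule of thumb,
    not a value quoted in the paper."""
    theta = math.radians(subtense_arcmin / 60.0)
    return 2.0 * view_distance_m * math.tan(theta / 2.0) * 1000.0

def field_of_view_deg(image_width_m, image_height_m, view_distance_m):
    """Horizontal and vertical angles subtended by the projected image."""
    h = 2.0 * math.degrees(math.atan(image_width_m / (2.0 * view_distance_m)))
    v = 2.0 * math.degrees(math.atan(image_height_m / (2.0 * view_distance_m)))
    return h, v

if __name__ == "__main__":
    d = 1.5  # driver-to-image distance quoted in the abstract (m)
    print(f"min character height: {min_character_height_mm(d):.1f} mm")
    # An assumed virtual image of roughly 0.175 m x 0.126 m at 1.5 m subtends
    # about the 6.7 deg x 4.8 deg field of view reported in the abstract.
    print(field_of_view_deg(0.175, 0.126, d))
```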
3. Recognition of field roads based on improved U-Net++ Network
Authors: Lili Yang, Yuanbo Li, Mengshuai Chang, Yuanyuan Xu, Bingbing Hu, Xinxin Wang, Caicong Wu. International Journal of Agricultural and Biological Engineering (SCIE), 2023, Issue 2, pp. 171-178 (8 pages).
Abstract: Unmanned driving of agricultural machinery has garnered significant attention in recent years, especially with the development of precision farming and sensor technologies. To achieve high performance at low cost, perception tasks are of great importance. In this study, a low-cost and high-safety method was proposed for field road recognition in unmanned agricultural machinery. The approach utilized point clouds, taking low-resolution LiDAR point clouds as inputs and generating high-resolution point clouds and bird's eye view (BEV) images encoded with several basic statistics. Using a BEV representation, road detection was reduced to a single-scale problem that could be addressed with an improved U-Net++ neural network. Three enhancements were proposed for U-Net++: 1) replacing the convolutional kernel in the original U-Net++ with an Asymmetric Convolution Block (ACBlock); 2) adding a multi-branch Asymmetric Dilated Convolutional Block (MADC) in the highest semantic information layer; 3) adding an Attention Gate (AG) model to the long skip connections in the decoding stage. Experimental results showed that the proposed algorithm achieved a mean intersection over union of 96.54% on the 16-channel point clouds, 7.35 percentage points higher than U-Net++. Furthermore, the average processing time of the model was about 70 ms, meeting the time requirements of unmanned driving of agricultural machinery. The proposed method can be applied to enhance the perception ability of unmanned agricultural machinery, thereby increasing the safety of field road driving.
Keywords: image segmentation; unmanned agricultural machinery; field roads; point cloud super-resolution; point cloud; bird's eye view
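The first enhancement named in this abstract, the Asymmetric Convolution Block, can be sketched roughly as below. This follows the general ACNet-style design (a square 3x3 branch plus 1x3 and 3x1 branches whose outputs are summed); channel counts, normalization, and activation choices are assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn

class ACBlock(nn.Module):
    """Asymmetric Convolution Block sketch: a 3x3 branch plus horizontal
    (1x3) and vertical (3x1) branches, summed before the activation."""
    def __init__(self, in_ch, out_ch):
        super().__init__()
        self.square = nn.Sequential(
            nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=1, bias=False),
            nn.BatchNorm2d(out_ch))
        self.hor = nn.Sequential(
            nn.Conv2d(in_ch, out_ch, kernel_size=(1, 3), padding=(0, 1), bias=False),
            nn.BatchNorm2d(out_ch))
        self.ver = nn.Sequential(
            nn.Conv2d(in_ch, out_ch, kernel_size=(3, 1), padding=(1, 0), bias=False),
            nn.BatchNorm2d(out_ch))
        self.act = nn.ReLU(inplace=True)

    def forward(self, x):
        # Sum the three branch outputs, then apply the nonlinearity.
        return self.act(self.square(x) + self.hor(x) + self.ver(x))

if __name__ == "__main__":
    bev = torch.randn(1, 3, 256, 256)   # a dummy BEV image batch
    print(ACBlock(3, 32)(bev).shape)    # torch.Size([1, 32, 256, 256])
```

Dropping such a block into each U-Net++ convolution stage, as the abstract describes, keeps the output shape of the original 3x3 convolution while strengthening responses along horizontal and vertical road edges.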