Abstract: Light detection and ranging (LiDAR) sensors play a vital role in acquiring 3D point cloud data and extracting valuable information about objects for tasks such as autonomous driving, robotics, and virtual reality (VR). However, the sparse and disordered nature of 3D point clouds poses significant challenges to feature extraction, and overcoming these limitations is critical for 3D point cloud processing. 3D point cloud object detection is a challenging and important task, in which point cloud processing and feature extraction methods play a central role and have a significant impact on subsequent detection performance. In this overview of notable work on object detection from 3D point clouds, we focus on summarizing the methods employed in 3D point cloud processing. We introduce how point clouds are processed in classical 3D object detection algorithms, and the improvements proposed to address the problems in point cloud processing. Different voxelization methods and point cloud sampling strategies influence the extracted features, thereby affecting the final detection performance.
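To make the voxelization and sampling choices the abstract refers to concrete, the following is a minimal sketch of grid voxelization with per-voxel random sub-sampling, the kind of preprocessing step used by classical voxel-based detectors. The voxel size, point cap, and function name are illustrative assumptions, not taken from any specific paper surveyed.

```python
import numpy as np

def voxelize(points, voxel_size=(0.2, 0.2, 0.4), max_points_per_voxel=32):
    """Group a point cloud of shape (N, 3) into voxels.

    Both `voxel_size` and `max_points_per_voxel` are design choices that
    change which points survive, and hence the features extracted later.
    """
    voxel_size = np.asarray(voxel_size, dtype=np.float64)
    # Integer voxel index of every point along x, y, z.
    coords = np.floor(points / voxel_size).astype(np.int64)
    # Map each occupied voxel to the points that fall inside it.
    uniq, inverse = np.unique(coords, axis=0, return_inverse=True)
    voxels = []
    for v in range(len(uniq)):
        pts = points[inverse == v]
        # Random sub-sampling caps the per-voxel point count -- one of the
        # sampling strategies whose choice affects detection performance.
        if len(pts) > max_points_per_voxel:
            keep = np.random.choice(len(pts), max_points_per_voxel, replace=False)
            pts = pts[keep]
        voxels.append(pts)
    return uniq, voxels

# Toy cloud: 1000 random points in a 10 m x 10 m x 4 m volume.
cloud = np.random.rand(1000, 3) * np.array([10.0, 10.0, 4.0])
coords, voxels = voxelize(cloud)
print(len(coords), max(len(v) for v in voxels))
```

Coarser voxels or a lower point cap discard more geometry but shrink the tensor handed to the network, which is exactly the accuracy/efficiency trade-off the survey discusses.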
Abstract: This paper gives an overview of studies on parameters displayed on the Automotive Head-Up Display (A-HUD), including the calculation and construction of the symbology page based on the study results. A study has been made of the vital parameters required by car drivers, and design calculations have been made based on design parameters such as field of view, distance from the design eye position, minimum character size viewable from a distance of 1.5 m between the driver and the projected image, and the optical magnification factor. The display format suitable for A-HUD applications depends on the parameters to be displayed. The aspect ratio chosen is 4:3. This paper also provides a method to design the symbology page embedding six vital parameters with their relative positioning and size, considering the relative position between the display device and the optical elements, for which a magnification factor of 2.5 has been used. The field of view obtained is 6.7° × 4.8°.
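The relationship between viewing distance, image size, and field of view used in such design calculations can be sketched with basic geometry. The snippet below back-computes the virtual image size from the stated 1.5 m eye-to-image distance and the stated 6.7° × 4.8° field of view; the image dimensions it derives are inferred for illustration, not quoted from the paper.

```python
import math

def fov_deg(image_size_m, viewing_distance_m):
    """Angle (degrees) subtended at the eye by a flat image of the
    given size seen face-on from the given distance."""
    return math.degrees(2 * math.atan(image_size_m / (2 * viewing_distance_m)))

d = 1.5  # m, driver eye to projected image (as stated in the abstract)

# Invert the FOV formula to recover the virtual image dimensions that
# would produce the stated 6.7 deg x 4.8 deg field of view.
w = 2 * d * math.tan(math.radians(6.7 / 2))   # image width,  ~0.176 m
h = 2 * d * math.tan(math.radians(4.8 / 2))   # image height, ~0.126 m

print(round(w, 3), round(h, 3))
print(round(fov_deg(w, d), 1), round(fov_deg(h, d), 1))  # recovers 6.7 4.8
```

The same `fov_deg` helper can then be reused to check minimum character height: a symbol must subtend a sufficient angle at 1.5 m to remain legible, which constrains the character size on the display device once the 2.5× magnification factor is applied.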
Funding: Financially supported by the National Key R&D Program of China and Shandong Province, China (Grant No. 2021YFB3901300).
Abstract: Unmanned driving of agricultural machinery has garnered significant attention in recent years, especially with the development of precision farming and sensor technologies. To achieve high performance at low cost, perception tasks are of great importance. In this study, a low-cost, high-safety method was proposed for field road recognition in unmanned agricultural machinery. The approach uses point clouds: low-resolution LiDAR point clouds are taken as inputs, from which high-resolution point clouds and Bird's Eye View (BEV) images encoded with several basic statistics are generated. Using the BEV representation, road detection is reduced to a single-scale problem that can be addressed with an improved U-Net++ neural network. Three enhancements were proposed for U-Net++: 1) replacing the convolutional kernel in the original U-Net++ with an Asymmetric Convolution Block (ACBlock); 2) adding a multi-branch Asymmetric Dilated Convolution block (MADC) in the highest semantic information layer; 3) adding an Attention Gate (AG) model to the long skip connections in the decoding stage. Experimental results showed that the algorithm achieved a Mean Intersection over Union of 96.54% on the 16-channel point clouds, 7.35 percentage points higher than U-Net++. Furthermore, the average processing time of the model was about 70 ms, meeting the timing requirements of unmanned driving in agricultural machinery. The proposed method can be applied to enhance the perception ability of unmanned agricultural machinery, thereby increasing the safety of field road driving.
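The abstract's "BEV images encoded with several basic statistics" can be sketched as follows: each ground-plane grid cell accumulates simple per-cell statistics (here point count, maximum height, and mean height), producing a multi-channel image a 2D network such as U-Net++ can consume. The grid size, ranges, and choice of statistics are illustrative assumptions; the paper's exact encoding may differ.

```python
import numpy as np

def bev_stats(points, grid=(200, 200), x_range=(0.0, 20.0), y_range=(-10.0, 10.0)):
    """Encode a point cloud (N, 3) as a (3, H, W) BEV image whose channels
    are per-cell point count, max height, and mean height."""
    H, W = grid
    count = np.zeros((H, W))
    zmax = np.full((H, W), -np.inf)
    zsum = np.zeros((H, W))
    # Keep only points inside the chosen ground-plane window.
    m = ((points[:, 0] >= x_range[0]) & (points[:, 0] < x_range[1]) &
         (points[:, 1] >= y_range[0]) & (points[:, 1] < y_range[1]))
    pts = points[m]
    # Map metric x/y coordinates to integer row/column indices.
    xi = ((pts[:, 0] - x_range[0]) / (x_range[1] - x_range[0]) * H).astype(int)
    yi = ((pts[:, 1] - y_range[0]) / (y_range[1] - y_range[0]) * W).astype(int)
    for i, j, z in zip(xi, yi, pts[:, 2]):
        count[i, j] += 1
        zmax[i, j] = max(zmax[i, j], z)
        zsum[i, j] += z
    zmean = np.divide(zsum, count, out=np.zeros_like(zsum), where=count > 0)
    zmax[count == 0] = 0.0  # empty cells get a neutral height
    return np.stack([count, zmax, zmean], axis=0)

# Toy cloud: 5000 random points spanning the configured window.
cloud = np.random.rand(5000, 3) * np.array([20.0, 20.0, 2.0]) + np.array([0.0, -10.0, 0.0])
bev = bev_stats(cloud)
print(bev.shape)  # (3, 200, 200)
```

Because the BEV image has a fixed resolution regardless of how many points the sensor returns, the downstream segmentation network sees a single-scale 2D input, which is what lets the road-detection problem be handled by an image-style architecture.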