Funding: National Natural Science Foundation of China (Grant No. 62101138), Shandong Natural Science Foundation (Grant No. ZR2021QD148), Guangdong Natural Science Foundation (Grant No. 2022A1515012573), and Guangzhou Basic and Applied Basic Research Project (Grant No. 202102020701), which provided funds for publishing this paper.
Abstract: As positioning sensors, edge computation power, and communication technologies continue to develop, a moving agent can now sense its surroundings and communicate with other agents. By receiving spatial information from both its environment and other agents, an agent can use various methods and sensor types to localize itself. With its high flexibility and robustness, collaborative positioning has become a widely used method in both military and civilian applications. This paper introduces the fundamental concepts and applications of collaborative positioning, and reviews recent progress in the field based on cameras, LiDAR (Light Detection and Ranging), wireless sensors, and their integration. The paper compares current methods with respect to their sensor type, summarizes their main paradigms, and analyzes their evaluation experiments. Finally, the paper discusses the main challenges and open issues that require further research.
Abstract: The potential of sparse convolution for single-object tracking in LiDAR point clouds has not been fully explored. At present, most point cloud tracking algorithms use backbone networks based on ball-neighborhood queries, which consume large amounts of GPU memory and computation and model target-aware relations insufficiently. To address this problem, this paper proposes a LiDAR (Light Detection and Ranging) point cloud tracking algorithm built on a sparse convolution architecture and introduces a relation modelling module that fuses a spatial-point channel with a voxel channel, efficiently embedding target-discriminative information within the sparse framework. First, a 3D sparse convolutional residual network extracts features from the template and the search region separately, and deconvolution recovers point-wise features to satisfy the tracking task's demand for spatial localization. Second, the relation modelling module computes a semantic similarity lookup table between template and search-region features. To capture fine-grained correlations between the template and the search region, the module, in the point channel, uses a nearest-neighbor algorithm to find the template neighbors of each search-region point and gathers the corresponding features from the lookup table; in the voxel channel, it builds local multi-scale voxels centered on each search-region point and accumulates the lookup-table values indexed by the template points falling into each voxel cell. Finally, the features of the two channels are fused and fed into a bird's-eye-view-based proposal generation module to regress the target bounding box. To verify the superiority of the proposed method, tests were conducted on the KITTI and NuScenes datasets; compared with other sparse-convolution-based algorithms, the method improves average success and precision by 11.0% and 12.0%, respectively. The method inherits the efficiency of sparse convolution while also improving tracking accuracy.
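The point-channel lookup described above (finding the nearest template points of each search-region point and gathering entries from a template/search similarity table) can be sketched as follows. This is an illustrative NumPy sketch, not the paper's implementation; the cosine-similarity table, the averaging over the k neighbors, and all function names are assumptions.

```python
import numpy as np

def build_similarity_table(template_feats, search_feats):
    """Cosine-similarity lookup table between template and search features,
    shape (num_search, num_template)."""
    t = template_feats / np.linalg.norm(template_feats, axis=1, keepdims=True)
    s = search_feats / np.linalg.norm(search_feats, axis=1, keepdims=True)
    return s @ t.T

def gather_knn_features(search_pts, template_pts, sim_table, k=2):
    """For each search-region point, find its k nearest template points in
    3D space and average the corresponding similarity-table entries."""
    # pairwise Euclidean distances, shape (num_search, num_template)
    d = np.linalg.norm(search_pts[:, None, :] - template_pts[None, :, :], axis=2)
    knn_idx = np.argsort(d, axis=1)[:, :k]
    return np.take_along_axis(sim_table, knn_idx, axis=1).mean(axis=1)
```

In the full method these gathered values would be concatenated with the voxel-channel accumulation before proposal generation; here only the point-channel lookup is shown.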
Funding: the National Key R&D Program of China (2018AAA0103103).
Abstract: The perception module of advanced driver assistance systems plays a vital role. Perception schemes often use a single sensor for data processing and environmental perception, or adopt the information processing results of various sensors for fusion at the detection layer. This paper proposes a multi-scale, multi-sensor data fusion strategy at the front end of perception and accomplishes a multi-sensor disparity map generation scheme. A binocular stereo vision sensor composed of two cameras and a light detection and ranging (LiDAR) sensor are used to jointly perceive the environment, and a multi-scale fusion scheme is employed to improve the accuracy of the disparity map. This solution not only has the advantage of the dense perception of binocular stereo vision sensors but also benefits from the perception accuracy of LiDAR sensors. Experiments demonstrate that the proposed multi-scale, multi-sensor scheme significantly improves disparity map estimation.
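The core of such a front-end fusion can be illustrated by converting LiDAR depth to disparity (d = f·B/Z, with focal length f in pixels and baseline B in metres) and blending it with the dense stereo estimate where LiDAR returns exist. This is a minimal sketch under assumed names and a simple fixed weighting, not the paper's multi-scale scheme.

```python
import numpy as np

def lidar_depth_to_disparity(depth, focal_px, baseline_m):
    """Convert a sparse LiDAR depth map (metres, 0 = no return)
    to stereo disparity in pixels via d = f * B / Z."""
    disp = np.zeros_like(depth)
    valid = depth > 0
    disp[valid] = focal_px * baseline_m / depth[valid]
    return disp

def fuse_disparity(stereo_disp, lidar_disp, w_lidar=0.7):
    """Weighted fusion: trust the (sparser but more accurate) LiDAR
    disparity where it has returns, keep dense stereo elsewhere."""
    valid = lidar_disp > 0
    fused = stereo_disp.copy()
    fused[valid] = w_lidar * lidar_disp[valid] + (1 - w_lidar) * stereo_disp[valid]
    return fused
```

A multi-scale variant would repeat this blend on an image pyramid with scale-dependent weights; the single-scale version above only shows the unit operation.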
Funding: supported in part by the National Natural Science Foundation of China (61533017, U1501251).
Abstract: In this paper, a novel fusion framework is proposed for night-vision applications such as pedestrian recognition, vehicle navigation, and surveillance. The underlying concept is to combine low-light visible and infrared imagery into a single output to enhance visual perception. The proposed framework is computationally simple since it operates entirely in the spatial domain. The core idea is to obtain an initial fused image by averaging all the source images; the initial fused image is then enhanced by selecting the most salient features, guided by the root mean square error (RMSE) and fractal dimension of the visible and infrared images, to obtain the final fused image. Extensive experiments on different scene imagery demonstrate that the method is consistently superior to conventional image fusion methods in terms of visual and quantitative evaluations.
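A minimal sketch of the averaging step and one possible enhancement rule is shown below. The `enhance` rule (injecting detail from whichever source deviates more, by RMSE, from the initial average) is a hypothetical stand-in for the paper's RMSE/fractal-dimension guidance; all names and the `alpha` parameter are assumptions.

```python
import numpy as np

def fuse_average(vis, ir):
    """Initial fusion: pixel-wise average of the two source images."""
    return (vis.astype(float) + ir.astype(float)) / 2.0

def rmse(a, b):
    """Root mean square error between two images."""
    return float(np.sqrt(np.mean((a.astype(float) - b.astype(float)) ** 2)))

def enhance(initial, vis, ir, alpha=0.5):
    """Hypothetical enhancement: add back detail from the source image
    that deviates more (larger RMSE) from the initial average."""
    src = vis if rmse(vis, initial) >= rmse(ir, initial) else ir
    return initial + alpha * (src.astype(float) - initial)
```

The actual framework makes this selection per salient feature rather than globally, and additionally weighs fractal dimension; the sketch only conveys the spatial-domain, average-then-enhance structure.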
Funding: made possible by a scholarship from the Australian Government (International Postgraduate Research Scholarship, awarded in 2009) and a Southern Cross University Postgraduate Research Scholarship (SCUPRS, 2009).
Abstract: We investigated a strategy to improve the predictive capacity of plot-scale above-ground biomass (AGB) estimation by fusing LiDAR- and Landsat5 TM-derived biophysical variables for subtropical rainforest and eucalypt-dominated forest in topographically complex landscapes in north-eastern Australia. The investigation was carried out in two study areas, separately and in combination. For each plot in both study areas, LiDAR-derived structural parameters of vegetation, the reflectance of all Landsat bands, and vegetation indices were employed. Regression analyses were carried out for the LiDAR- and Landsat-derived variables individually and in combination. LiDAR alone produced stronger relationships for the eucalypt-dominated forest and the combined sites than the AGB estimates from Landsat data. Fusing LiDAR with Landsat5 TM-derived variables increased overall performance for the eucalypt forest and combined-sites data by describing extra variation (3% for eucalypt forest and 2% for combined sites) in field-estimated plot-scale AGB. In contrast, separate LiDAR and imagery data, and the fusion of LiDAR and Landsat data, performed poorly across the structurally complex, closed-canopy subtropical rainforest. These findings reinforce that obtaining accurate estimates of above-ground biomass using remotely sensed data is a function of the complexity of the horizontal and vertical structural diversity of vegetation.
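The reported gains amount to comparing the explained variance of regressions on LiDAR variables alone versus LiDAR and Landsat variables combined. A generic R² helper, assuming ordinary least squares with an intercept (the paper's exact model form is not specified here), can be sketched as:

```python
import numpy as np

def r_squared(X, y):
    """R^2 of an ordinary least-squares fit of y on X (intercept added)."""
    A = np.column_stack([np.ones(len(y)), X])     # design matrix with intercept
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)  # OLS coefficients
    resid = y - A @ coef
    ss_res = np.sum(resid ** 2)
    ss_tot = np.sum((y - y.mean()) ** 2)
    return 1.0 - ss_res / ss_tot
```

Comparing `r_squared(lidar_vars, agb)` against `r_squared(np.column_stack([lidar_vars, landsat_vars]), agb)` reproduces the kind of "extra variation described" comparison the abstract reports.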
Funding: supported by the National Natural Science Foundation of China (Project No. 42171361), the Research Grants Council of the Hong Kong Special Administrative Region, China, under Project PolyU 25211819, and the Hong Kong Polytechnic University under Projects 1-ZE8E and 1-ZVN6.
Abstract: Light detection and ranging (LiDAR) has contributed immensely to forest mapping and 3D tree modelling. From the perspective of data acquisition, the integration of LiDAR data from different platforms can enrich forest information at the tree and plot levels. This research develops a general framework to integrate ground-based and UAV-LiDAR (ULS) data to better estimate tree parameters based on quantitative structure modelling (QSM). This is accomplished in three sequential steps. First, the ground-based/ULS LiDAR data are co-registered based on the local density peaks of the clustered canopy. Next, redundancy and noise are removed for the ground-based/ULS LiDAR data fusion. Finally, tree modelling and biophysical parameter retrieval are performed based on QSM. Experiments were performed on Backpack/Handheld/UAV-based multi-platform mobile LiDAR data from a subtropical forest, including poplar and dawn redwood species. Generally, ground-based/ULS LiDAR data fusion outperforms ground-based LiDAR alone in tree parameter estimation when compared against field data. The fusion-derived tree height, tree volume, and crown volume improved significantly, by up to 9.01%, 5.28%, and 18.61%, respectively, in terms of rRMSE. By contrast, the diameter at breast height (DBH) benefits least from fusion, and its rRMSE remains approximately the same, because stems are already well sampled by the ground data. Additionally, particularly for dense forests, the fusion-derived tree parameters improved compared with those derived from ground-based LiDAR alone. Ground-based LiDAR can potentially be used on its own to estimate tree parameters in low-stand-density forests, where the improvement owing to fusion is not significant.
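The rRMSE figures quoted above are conventionally computed as the RMSE of the estimates against field reference values, expressed as a percentage of the reference mean. A small helper, assuming that standard definition:

```python
import numpy as np

def rrmse(estimates, field_values):
    """Relative RMSE (%): RMSE of estimates vs. field reference,
    normalised by the mean of the field reference values."""
    e = np.asarray(estimates, dtype=float)
    f = np.asarray(field_values, dtype=float)
    rmse = np.sqrt(np.mean((e - f) ** 2))
    return 100.0 * rmse / f.mean()
```

An "improvement of up to 9.01% in terms of rRMSE" for tree height then means the fusion-derived `rrmse` is up to 9.01 percentage points lower than the ground-only value.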
Funding: the Science and Technology Development Program of Beijing Municipal Commission of Education (No. KM201010011002) and the National College Students' Scientific Research and Entrepreneurial Action Plan (SJ201401011).
Abstract: The rise of urban traffic flow highlights the growing importance of traffic safety. To reduce the rate of traffic accidents and improve the forward visual information available to vehicle drivers, a method for improving the driver's visual information in low-visibility conditions is put forward based on infrared and visible image fusion. A wavelet decomposition algorithm is adopted to decompose each image into a low-frequency approximation component and high-frequency detail components. The low-frequency component contains information representing gray-value differences; the high-frequency components contain the detail information of the image, whose quality is frequently assessed by the gray-level standard deviation. To extract the feature information of the low-frequency and high-frequency components with different emphases, different fusion operators are applied to each. For the low-frequency component, a fusion rule weighted by regional energy proportion is adopted to improve image brightness, while for all three high-frequency components a fusion rule weighted by the regional proportion of standard deviation is used to enhance image contrast. Experiments on fusing infrared and visible-light images demonstrate that this fusion method effectively improves image brightness and contrast and is suitable for vision enhancement of low-visibility imagery.
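The two fusion rules can be sketched on already-decomposed subbands (the wavelet transform itself is omitted). Weighting by global rather than regional energy/standard deviation, and the function names, are simplifying assumptions:

```python
import numpy as np

def fuse_lowfreq(a_ll, b_ll):
    """Low-frequency rule: weight each source's approximation subband
    by its proportion of energy (sum of squared coefficients)."""
    ea, eb = np.sum(a_ll ** 2), np.sum(b_ll ** 2)
    wa = ea / (ea + eb) if (ea + eb) > 0 else 0.5
    return wa * a_ll + (1 - wa) * b_ll

def fuse_highfreq(a_hf, b_hf):
    """High-frequency rule: weight each detail subband by its proportion
    of standard deviation, favouring the higher-contrast source."""
    sa, sb = np.std(a_hf), np.std(b_hf)
    if sa + sb == 0:
        return (a_hf + b_hf) / 2.0
    wa = sa / (sa + sb)
    return wa * a_hf + (1 - wa) * b_hf
```

In the full method these rules are applied per region to the approximation subband and to all three detail subbands (horizontal, vertical, diagonal) before inverse wavelet reconstruction.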
Funding: supported by the National Natural Science Foundation of China (Grant Nos. U20A20333, 61906076, 51875255, U1764257, U1762264), the Jiangsu Provincial Natural Science Foundation of China (Grant Nos. BK20180100, BK20190853), the Six Talent Peaks Project of Jiangsu Province (Grant No. 2018-TD-GDZB-022), the China Postdoctoral Science Foundation (Grant No. 2020T130258), and the Jiangsu Provincial Key Research and Development Program of China (Grant No. BE2020083-2).
Abstract: Radar and LiDAR are two environmental sensors commonly used in autonomous vehicles. LiDARs are accurate in determining objects' positions but significantly less accurate than radars at measuring their velocities; conversely, radars are more accurate than LiDARs at measuring object velocities but less accurate at determining their positions, as they have a lower spatial resolution. To compensate for the low detection accuracy, incomplete target attributes, and poor environmental adaptability of single sensors such as radar and LiDAR, this paper proposes an effective method for high-precision detection and tracking of the targets surrounding an autonomous vehicle. By employing the unscented Kalman filter, radar and LiDAR information is effectively fused to achieve high-precision estimation of the position and speed of targets around the autonomous vehicle. Finally, real-vehicle tests under various driving scenarios were carried out. The experimental results show that the proposed sensor fusion method can effectively detect and track vehicle-peripheral targets with high accuracy; compared with a single sensor, it has obvious advantages and can improve the intelligence level of autonomous cars.
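The fusion step can be illustrated with a standard Kalman measurement update on a [position, velocity] state, applying a LiDAR position measurement and a radar velocity measurement in turn. The paper uses the unscented variant for nonlinear measurement models; the linear update below, with assumed noise values, is only a minimal sketch of why fusing the two sensors tightens both state components.

```python
import numpy as np

def kalman_update(x, P, z, H, R):
    """Standard linear Kalman measurement update:
    x, P  : state mean and covariance
    z     : measurement vector
    H, R  : measurement matrix and measurement noise covariance."""
    S = H @ P @ H.T + R                    # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)         # Kalman gain
    x_new = x + K @ (z - H @ x)            # corrected state
    P_new = (np.eye(len(x)) - K @ H) @ P   # corrected covariance
    return x_new, P_new
```

Applying a LiDAR update with H = [[1, 0]] (position only) and a radar update with H = [[0, 1]] (velocity only) lets each sensor correct the component it measures well, which is the intuition behind the paper's radar/LiDAR fusion.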