Abstract: When a three-dimensional electric field sensor (3D EFS) built from orthogonally arranged capacitive sensing units measures the ambient electric field, inter-axis coupling severely degrades its measurement accuracy. To address this, an electric-field shielding electrode is proposed to reduce the inter-axis coupling of the 3D EFS and improve its accuracy. First, an electric field distribution model is built in multiphysics simulation software. Next, based on the simulation results, a shielding-electrode model of the capacitive sensing unit of the 3D EFS is established. Finally, a test platform supporting arbitrary orientation angles is set up to compare the 3D EFS with shielding electrodes against one without. The results show that the measurement deviation of the shielded 3D EFS stays within 3.2%, a 12% reduction relative to the unshielded sensor. The proposed shielding structure therefore makes the decoupling matrix more reliable and effectively reduces the deviation of spatial electric field measurements.
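The decoupling step mentioned in the abstract can be illustrated with a minimal sketch: the three channel outputs are modeled as a linear mix of the true field components, and a calibrated coupling matrix is inverted to recover the field. The matrix values and channel readings below are hypothetical placeholders, not data from the paper.

```python
import numpy as np

# Hypothetical calibrated coupling matrix C: output_i = sum_j C[i, j] * E[j].
# With effective shielding, the off-diagonal (inter-axis) terms become small.
C = np.array([
    [1.00, 0.03, 0.02],   # x-channel response to (Ex, Ey, Ez)
    [0.02, 1.00, 0.04],   # y-channel response
    [0.03, 0.02, 1.00],   # z-channel response
])

def decouple(readings: np.ndarray) -> np.ndarray:
    """Recover the field vector (Ex, Ey, Ez) from the three channel readings via C^-1."""
    return np.linalg.solve(C, readings)

# Example with placeholder channel readings.
readings = np.array([10.3, 5.1, 2.2])
print("Estimated field components:", decouple(readings))
```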
Abstract: In autonomous driving perception systems, cameras and LiDAR are the key sources of information, yet in current 3D object detection tasks most point-cloud-only networks outperform networks that fuse images with LiDAR point clouds. Existing studies attribute this to the viewpoint misalignment between image and LiDAR data and the difficulty of matching heterogeneous features, so single-stage fusion algorithms struggle to fully fuse the two modalities. This paper therefore proposes a new multi-level multi-modal fusion method for 3D object detection. First, in the early-fusion stage, the points inside the frustum formed by each 2D detection box are painted with locally ordered color (Red Green Blue, RGB) encodings. The encoded point cloud is then fed into a channel-expanded PointPillars detection network augmented with self-attention-based context awareness. In the late-fusion stage, the 2D and 3D candidate boxes are encoded, before non-maximum suppression, into two sets of sparse tensors, and a camera-LiDAR object candidate fusion network produces the final 3D detection results. Experiments on the KITTI dataset show that the proposed fusion method yields a significant improvement over the point-cloud-only baseline, raising the average mAP by 6.24%.
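A minimal sketch of the early-fusion painting step described above: LiDAR points are projected into the image, and points falling inside a 2D detection box (its frustum) are augmented with the RGB values of their projected pixels. The projection matrix, box format, and array shapes are assumptions for illustration, not the paper's exact encoding.

```python
import numpy as np

def paint_points_in_frustum(points, image, P, box2d):
    """Append RGB channels to LiDAR points whose projection falls inside box2d.

    points : (N, 4) array of [x, y, z, reflectance] in the LiDAR frame
    image  : (H, W, 3) uint8 RGB image
    P      : (3, 4) projection matrix assumed to map LiDAR coordinates to pixels
    box2d  : (x_min, y_min, x_max, y_max) 2D detection box in pixels
    Returns an (M, 7) array of painted points [x, y, z, r, R, G, B].
    """
    xyz1 = np.hstack([points[:, :3], np.ones((len(points), 1))])   # homogeneous coordinates
    uvw = xyz1 @ P.T                                               # project to the image plane
    valid = uvw[:, 2] > 0                                          # keep points in front of the camera
    uv = uvw[valid, :2] / uvw[valid, 2:3]

    x_min, y_min, x_max, y_max = box2d
    in_box = (uv[:, 0] >= x_min) & (uv[:, 0] < x_max) & \
             (uv[:, 1] >= y_min) & (uv[:, 1] < y_max)

    kept = points[valid][in_box]
    px = np.clip(uv[in_box], [0, 0],
                 [image.shape[1] - 1, image.shape[0] - 1]).astype(int)
    rgb = image[px[:, 1], px[:, 0]] / 255.0                        # sample pixel colors
    return np.hstack([kept, rgb])
```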
Abstract: To address missed and false detections of on-road objects in autonomous driving, a 3D object detection model based on an improved CenterFusion is proposed. The model fuses camera information with radar features to form a multi-channel feature input, which strengthens the robustness of the detection network and reduces missed detections. To obtain more accurate and richer 3D detection information, an improved attention mechanism is introduced to enhance the fusion of radar point clouds and visual information within the frustum grid, and an improved loss function is used to optimize the accuracy of bounding box prediction. The model is validated and compared on the nuScenes dataset; the experimental results show that, relative to the original CenterFusion model, the proposed model improves the mean Average Precision (mAP) by 1.3% and the nuScenes Detection Score (NDS) by 1.2%.
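The multi-channel fusion idea can be illustrated with a minimal sketch: a camera feature map and a radar feature map are concatenated along the channel dimension and re-weighted by a lightweight squeeze-and-excitation style channel attention block. This is a generic illustration under assumed tensor shapes, not the paper's specific improved attention module.

```python
import torch
import torch.nn as nn

class ChannelAttentionFusion(nn.Module):
    """Concatenate camera and radar feature maps, then re-weight channels (SE-style)."""

    def __init__(self, cam_channels: int, radar_channels: int, reduction: int = 8):
        super().__init__()
        channels = cam_channels + radar_channels
        self.squeeze = nn.AdaptiveAvgPool2d(1)           # global spatial pooling
        self.excite = nn.Sequential(                     # per-channel gating weights
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
            nn.Sigmoid(),
        )

    def forward(self, cam_feat: torch.Tensor, radar_feat: torch.Tensor) -> torch.Tensor:
        x = torch.cat([cam_feat, radar_feat], dim=1)     # (B, C_cam + C_radar, H, W)
        b, c, _, _ = x.shape
        w = self.excite(self.squeeze(x).view(b, c)).view(b, c, 1, 1)
        return x * w                                     # attention-weighted fused features

# Example with hypothetical shapes: 64 camera channels, 3 radar channels.
fusion = ChannelAttentionFusion(cam_channels=64, radar_channels=3)
fused = fusion(torch.randn(2, 64, 112, 200), torch.randn(2, 3, 112, 200))
print(fused.shape)  # torch.Size([2, 67, 112, 200])
```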
Funding: National Natural Science Foundation of China (NSFC Nos. 61774157, 81771388, 61874121, and 61874012); Beijing Natural Science Foundation (No. 4182075); the Capital Science and Technology Conditions Platform Project (Project ID: Z181100009518014).
Abstract: Flexible tactile sensors have broad applications in human physiological monitoring, robotic operation, and human-machine interaction. However, developing wearable, flexible tactile sensors that combine high sensitivity, a wide sensing range, and the ability to detect three-dimensional (3D) force remains very challenging. Herein, a flexible tactile electronic-skin sensor based on carbon nanotube (CNT)/polydimethylsiloxane (PDMS) nanocomposites is presented for 3D contact force detection. The 3D force is resolved from the combined outputs of four specially designed cells in a sensing element. Owing to the double-sided rough porous structure and specific surface morphology of the nanocomposites, the piezoresistive sensor achieves a high sensitivity of 12.1 kPa⁻¹ within 600 Pa and 0.68 kPa⁻¹ above 1 kPa for normal pressure, as well as 59.9 N⁻¹ below 0.05 N and above 2.3 N⁻¹ below 0.6 N for tangential force, with an ultra-low response time of 3.1 ms. In addition, multi-functional human-body monitoring was demonstrated with a single sensing cell, and the sensor array was integrated into a robotic arm for object-grasping control, indicating its potential for intelligent robot applications.
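A minimal sketch of how four cell readings in one element could be decomposed into a 3D force: the sum of the four signals tracks the normal component, while the left-right and top-bottom differences track the two tangential components. The gain constants and the linear model are illustrative assumptions, not the paper's calibration.

```python
import numpy as np

# Hypothetical calibration gains (signal units per newton) for the linear model below.
K_NORMAL = 4.0      # sum of all four cells vs. normal force
K_TANGENTIAL = 2.5  # opposite-cell difference vs. tangential force

def decode_3d_force(cells: np.ndarray) -> tuple[float, float, float]:
    """Estimate (Fx, Fy, Fz) from the four cell signals [top, bottom, left, right]."""
    top, bottom, left, right = cells
    fz = (top + bottom + left + right) / K_NORMAL   # normal force from the common-mode signal
    fx = (right - left) / K_TANGENTIAL              # x shear from the left-right imbalance
    fy = (top - bottom) / K_TANGENTIAL              # y shear from the top-bottom imbalance
    return fx, fy, fz

# Example with placeholder readings.
print(decode_3d_force(np.array([1.2, 0.8, 0.9, 1.1])))
```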
Abstract: Recent advances in sensing and display technologies have been transforming our living environments drastically. In this paper, a new technique is introduced to accurately reconstruct indoor environments in three dimensions using a mobile platform. The system incorporates a scanner with four ultrasonic sensors, an HD web camera, and an inertial measurement unit (IMU). The whole platform can be mounted on mobile facilities such as a wheelchair. The proposed mapping approach exploits the precision of the 3D point clouds produced by the ultrasonic sensor system, despite their sparsity, to build a more definite 3D scene. Using a robust iterative algorithm, it combines the structure-from-motion point clouds with the point clouds generated from the ultrasonic sensors and IMU, deriving a much more precise point cloud from the ultrasonic depth measurements. Because features of objects in the target scene can be recognized in the ultrasonic point clouds, feature extraction is performed on consecutive point clouds to ensure accurate alignment. The ranges measured by the ultrasonic sensors contribute to the depth correction of the generated 3D scenes. Experiments revealed that the system generates 3D maps of the environment that are not only dense but also precise. The results show that the designed 3D modeling platform can support assisted living with self-navigation, obstacle alerts, and other driving-assistance tasks.
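A minimal sketch of the depth-correction idea: structure-from-motion reconstructions are only defined up to scale, so the metric ranges from the ultrasonic sensors can be used to estimate a scale factor and rescale the SfM point cloud. The pairing of SfM depths with ultrasonic ranges and the least-squares scale estimate are illustrative assumptions, not the paper's exact algorithm.

```python
import numpy as np

def correct_sfm_scale(sfm_points, sfm_depths, ultrasonic_ranges):
    """Rescale an up-to-scale SfM point cloud using metric ultrasonic ranges.

    sfm_points        : (N, 3) SfM point cloud (arbitrary scale)
    sfm_depths        : (M,) SfM depths of points matched to ultrasonic returns
    ultrasonic_ranges : (M,) metric ranges measured for the same points
    Returns the rescaled point cloud and the estimated scale factor.
    """
    # Least-squares scale s minimizing || s * sfm_depths - ultrasonic_ranges ||^2.
    scale = float(np.dot(sfm_depths, ultrasonic_ranges) / np.dot(sfm_depths, sfm_depths))
    return sfm_points * scale, scale

# Example with placeholder values: three SfM depths paired with metric ultrasonic ranges.
points = np.random.rand(100, 3)
rescaled, s = correct_sfm_scale(points, np.array([0.5, 1.0, 1.5]), np.array([1.0, 2.1, 2.9]))
print("estimated scale:", s)
```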