Abstract: The Extreme Ultraviolet Camera (EUVC) onboard the Chang'e-3 (CE-3) lander observes the structure and dynamics of Earth's plasmasphere from the Moon. By detecting the resonance line emission of helium ions (He+) at 30.4 nm, the EUVC images the entire plasmasphere with a time resolution of 10 min and a spatial resolution of about 0.1 Earth radius (RE) in a single frame. We first present details of the EUVC data processing and of the data acquisition during the commissioning phase, and then report initial results, which reflect the basic features of the plasmasphere well. The photon counts and emission intensities from the EUVC are consistent with previous observations and models, indicating that the EUVC works normally and can provide high-quality data for future studies.
Abstract: Imagine that hundreds of video streams, taken by mobile phones during a rock concert, are uploaded to a server. One attractive application of such a dataset is to allow users to create their own video along a deliberately chosen, virtual camera trajectory. In this paper we present algorithms for the main sub-tasks (spatial calibration, image interpolation) related to this problem. Calibration: Spatial calibration of the individual video streams is one of the most basic tasks in creating such a video. At its core, this requires estimating the pairwise relative geometry of images taken by different cameras. This is also known as the relative pose problem [1] and is fundamental to many computer vision algorithms. In practice, efficiency and robustness are of the highest relevance for big-data applications such as those addressed in the EU-FET_SME project SceneNet. We present an improved algorithm that exploits additional data from inertial sensors, such as accelerometers, magnetometers, or gyroscopes, which are now available in most mobile phones. Experimental results on synthetic and real data demonstrate the accuracy and efficiency of our algorithm. Interpolation: Given the calibrated cameras, we present a second algorithm that generates novel synthetic images along a predefined camera trajectory. Each frame is produced from two "neighboring" video streams selected from the database. The interpolation algorithm is based on the point cloud reconstructed in the spatial calibration phase and iteratively projects triangular patches from the existing images into the new view. We present convincing images synthesized with the proposed algorithm.
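The abstract does not spell out how inertial data simplifies relative pose estimation, but a standard way to use it is this: once the relative rotation R between two views is known (e.g. from gyroscope/magnetometer fusion), the epipolar constraint x2ᵀ[t]ₓR x1 = 0 becomes linear in the translation t, so t can be recovered (up to scale) from as few as two correspondences. The sketch below illustrates that reduction; it is an assumption about the general approach, not the paper's actual algorithm, and the function name is ours.

```python
import numpy as np

def translation_from_known_rotation(R, x1, x2):
    """Estimate the relative translation direction given the relative rotation.

    With R known (e.g. from inertial sensors), the epipolar constraint
    x2^T [t]_x R x1 = 0 is linear in t: each correspondence i contributes
    one row ((R x1_i) x x2_i) . t = 0.  t is the null vector of the
    stacked constraint matrix, recovered up to scale and sign via SVD.

    x1, x2: (N, 3) homogeneous image points in normalized camera coordinates.
    """
    a = np.cross((R @ x1.T).T, x2)      # (N, 3) constraint rows
    _, _, vt = np.linalg.svd(a)
    t = vt[-1]                          # right singular vector, smallest s.v.
    return t / np.linalg.norm(t)
```

With noiseless correspondences the constraint matrix has rank 2 and the null vector is exact; with noise, the SVD gives the least-squares solution, which is one reason this two-point formulation is more robust than the general five-point problem.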
Abstract: Time-of-Flight (ToF) 3D imaging has broad application prospects in vision tasks such as face detection, 3D object recognition, and 3D reconstruction. However, the depth information obtained with a ToF camera is typically corrupted by noise related to the pixel, temperature, depth distortion, multipath interference, and background light. Existing ToF optimization algorithms are time-consuming and struggle to preserve the detail of the target, problems that seriously limit the practical application of ToF cameras. To address these issues, this paper proposes a real-time, amplitude-image-based ToF depth-map optimization method. First, a noisy amplitude image is generated from the raw data collected by the ToF receiver. To suppress this noise, a fast and efficient bilateral-grid filter is applied to the amplitude image. The denoised amplitude image is then used to generate a mask that segments the depth map into foreground and background regions. At the same time, noise and erroneous pixels in the depth map are corrected by filtering. Finally, the optimized depth map and the mask are fused to produce the final depth map. Experimental results show that the proposed algorithm filters depth-map noise in real time, removes background-noise interference, and preserves the detail of target objects in the depth map well, enabling a wider range of application scenarios for ToF cameras.
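The mask-and-fuse pipeline described above can be sketched in a few lines. This is a deliberately simplified illustration: a plain amplitude threshold stands in for the paper's mask generation, and a 3x3 median filter stands in for its depth filtering and the bilateral-grid step; the function name and threshold are our assumptions, not the paper's implementation.

```python
import numpy as np
from numpy.lib.stride_tricks import sliding_window_view

def refine_depth(depth, amplitude, amp_thresh=0.1):
    """Sketch of the mask-and-fuse pipeline: threshold the (already
    denoised) amplitude image into a foreground mask, median-filter the
    depth map to suppress noisy pixels, then fuse the two."""
    # Foreground mask: low-amplitude pixels are treated as background.
    mask = amplitude > amp_thresh
    # 3x3 median filter via sliding windows (border pixels left unfiltered).
    win = sliding_window_view(depth, (3, 3))            # (H-2, W-2, 3, 3)
    filtered = depth.copy()
    filtered[1:-1, 1:-1] = np.median(win, axis=(-2, -1))
    # Fuse: keep filtered depth on the foreground, zero out the background.
    return np.where(mask, filtered, 0.0)
```

The real-time claim in the abstract rests on the bilateral grid, which approximates an edge-preserving filter at a cost far below a full per-pixel bilateral filter; the median filter here is only a stand-in for exposition.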
Abstract: Accurate identification and localization of bolt-support positions in coal-mine driving roadways is a key technology that drilling-and-bolting robots must master to achieve intelligent permanent support. We propose an intelligent identification and localization method for roadway bolt-hole positions based on the fusion of visual images and laser point clouds, comprising three steps: image target recognition, point-cloud/image feature fusion, and localization coordinate extraction. (1) To address the blurred imaging of bolt-hole contours caused by underground conditions such as low illumination, water mist, and dust, an IA (Image-Adaptive)-SimAM-YOLOv7-tiny network performs visual recognition of the roadway holes to be bolted; the network adaptively enhances image brightness and contrast, restores high-frequency information along the bolt-hole edges, and focuses the model on bolt-hole features, improving the detection success rate. (2) The extrinsic matrix from joint calibration of the lidar and the industrial camera is solved, and each bolt-hole bounding box detected in the image is projected through the perspective relation into a conical Region Of Interest (ROI), yielding the corresponding target point-cloud cluster. (3) Point-cloud processing algorithms extract the boundary points of each bolt hole, giving the hole-center coordinates and normal vector, and the correctness of bolt-hole recognition is verified by comparing coordinate depth differences. A drilling localization system based on the manipulator of a bolting rig was built to verify the precision and accuracy of the algorithm's autonomous localization. The results show that the IA-SimAM-YOLOv7-tiny model achieves a mean Average Precision (mAP) of 87.3%, 4.6% higher than YOLOv7-tiny; the proposed fusion algorithm has a localization error of 3 mm, and the average recognition time for a single bolt hole is 0.77 s. Compared with a vision-only method, laser-vision multi-source fusion not only reduces the impact of the environment and of small-sample training on localization performance, but also provides the normal vector of the bolt hole, giving the manipulator a basis for adjusting its drilling pose to achieve precise anchoring.
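Steps (2) and (3) above follow a common lidar-camera fusion pattern, which can be sketched as follows: project the point cloud into the image with the calibrated extrinsics and intrinsics, keep the points that fall inside the detected bounding box (the conical ROI), then fit a plane to the cluster by PCA to get a center and normal. This is a generic illustration under our own assumptions (function names, a pinhole model, and a PCA plane fit), not the paper's algorithm.

```python
import numpy as np

def frustum_points(points, K, R, t, box):
    """Keep the lidar points whose image projection falls inside the
    detected bounding box (u_min, v_min, u_max, v_max) -- the conical
    ROI of step (2).  R, t are the lidar-to-camera extrinsics, K the
    pinhole intrinsic matrix."""
    cam = (R @ points.T).T + t                 # lidar -> camera frame
    in_front = cam[:, 2] > 0                   # discard points behind camera
    uv = (K @ cam.T).T
    uv = uv[:, :2] / uv[:, 2:3]                # perspective division
    u_min, v_min, u_max, v_max = box
    inside = (in_front
              & (uv[:, 0] >= u_min) & (uv[:, 0] <= u_max)
              & (uv[:, 1] >= v_min) & (uv[:, 1] <= v_max))
    return points[inside]

def center_and_normal(cluster):
    """Hole-center coordinates and normal vector from the ROI cluster,
    as in step (3): the normal is taken as the eigenvector of the
    covariance matrix with the smallest eigenvalue (PCA plane fit)."""
    center = cluster.mean(axis=0)
    cov = np.cov((cluster - center).T)
    eigvals, eigvecs = np.linalg.eigh(cov)     # eigenvalues ascending
    return center, eigvecs[:, 0]               # smallest-eigenvalue direction
```

The normal vector is exactly what a vision-only detector cannot provide, which is why the abstract highlights it as the basis for adjusting the manipulator's drilling pose.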