Journal Articles
1,576 articles found
Survey and evaluation of monocular visual-inertial SLAM algorithms for augmented reality (Cited by 4)
1
Authors: Jinyu LI, Bangbang YANG, Danpeng CHEN, Nan WANG, Guofeng ZHANG, Hujun BAO. Virtual Reality & Intelligent Hardware, 2019, No. 4, pp. 386-410 (25 pages)
Although VSLAM/VISLAM has achieved great success, it is still difficult to quantitatively evaluate the localization results of different kinds of SLAM systems from the perspective of augmented reality, due to the lack of an appropriate benchmark. In practical AR applications, a variety of challenging situations (e.g., fast motion, strong rotation, serious motion blur, dynamic interference) are easily encountered, since a home user may not move the AR device carefully and the real environment may be quite complex. In addition, for a good AR experience, the frequency of tracking loss should be minimized, and recovery from failure should be fast and accurate. Existing SLAM datasets/benchmarks generally only evaluate pose accuracy, and their camera motions are relatively simple and do not fit the common cases in mobile AR applications well. With the above motivation, we build a new visual-inertial dataset together with a series of evaluation criteria for AR. We also review existing monocular VSLAM/VISLAM approaches with detailed analyses and comparisons. In particular, we select 8 representative monocular VSLAM/VISLAM approaches/systems and quantitatively evaluate them on our benchmark. Our dataset, sample code and corresponding evaluation tools are available at the benchmark website http://www.zjucvg.net/eval-vislam/.
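The evaluation criteria above build on standard trajectory-error metrics. As a rough illustration only (this is not the benchmark's published evaluation tool; the function and variable names are invented), the absolute trajectory error over already-aligned position pairs reduces to an RMSE:

```python
import math

def ate_rmse(gt, est):
    """Root-mean-square absolute trajectory error between matched
    ground-truth and estimated camera positions (already aligned)."""
    assert len(gt) == len(est) and len(gt) > 0
    sq = [sum((g - e) ** 2 for g, e in zip(p, q)) for p, q in zip(gt, est)]
    return math.sqrt(sum(sq) / len(sq))

# toy trajectory: estimate wobbles 0.1 m off the ground truth
gt  = [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0), (2.0, 0.0, 0.0)]
est = [(0.0, 0.1, 0.0), (1.0, -0.1, 0.0), (2.0, 0.1, 0.0)]
print(round(ate_rmse(gt, est), 3))  # 0.1
```

Real benchmarks additionally align the trajectories (e.g., with a similarity transform) before computing the error; that step is omitted here.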
Keywords: visual-inertial SLAM, odometry, tracking, localization, mapping, augmented reality
Monocular Visual-Inertial and Robotic-Arm Calibration in a Unifying Framework
2
Authors: Yinlong Zhang, Wei Liang, Mingze Yuan, Hongsheng He, Jindong Tan, Zhibo Pang. IEEE/CAA Journal of Automatica Sinica (SCIE, EI, CSCD), 2022, No. 1, pp. 146-159 (14 pages)
Reliable and accurate calibration of the camera, inertial measurement unit (IMU) and robot is a critical prerequisite for visual-inertial-based robot pose estimation and environment perception. However, traditional calibrations suffer from inaccuracy and inconsistency. To address these problems, this paper proposes a unifying framework for monocular visual-inertial and robotic-arm calibration. In our method, the spatial relationship between the sensing units and the robotic arm is geometrically correlated. Decoupled estimation of rotation and translation reduces coupled errors during optimization. Additionally, the robotic calibration trajectory is designed in a spiral pattern that enables full excitation of 6-DOF motions repeatably and consistently. The calibration has been evaluated on our developed platform. In the experiments, it achieves rotation and translation RMSEs of less than 0.7° and 0.01 m, respectively. Comparisons with state-of-the-art results demonstrate the consistency, accuracy and effectiveness of our calibration.
Keywords: calibration, inertial measurement unit (IMU), monocular camera, robotic arm, spiral moving trajectory
KLT-VIO: Real-time Monocular Visual-Inertial Odometry
3
Authors: Yuhao Jin, Hang Li, Shoulin Yin. IJLAI Transactions on Science and Engineering, 2024, No. 1, pp. 8-16 (9 pages)
This paper proposes a Visual-Inertial Odometry (VIO) algorithm that relies solely on a monocular camera and an Inertial Measurement Unit (IMU), capable of real-time self-position estimation for robots during movement. By integrating the optical flow method, the algorithm tracks both point and line features in images simultaneously, significantly reducing computational complexity and the matching time for line feature descriptors. Additionally, this paper advances the triangulation method for line features, using depth information from line segment endpoints to determine their Plücker coordinates in three-dimensional space. Tests on the EuRoC datasets show that the proposed algorithm outperforms PL-VIO in terms of processing speed per frame, with an approximate 5% to 10% improvement in both relative pose error (RPE) and absolute trajectory error (ATE). These results demonstrate that the proposed VIO algorithm is an efficient solution suitable for low-computing platforms requiring real-time localization and navigation.
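The line triangulation mentioned above relies on Plücker coordinates. As a minimal sketch (not the paper's implementation; the helper names are invented), a 3D line through two endpoints with known depth can be encoded as a direction vector plus a moment vector:

```python
def cross(a, b):
    """3D cross product of two length-3 tuples."""
    return (a[1]*b[2] - a[2]*b[1],
            a[2]*b[0] - a[0]*b[2],
            a[0]*b[1] - a[1]*b[0])

def plucker_from_endpoints(p1, p2):
    """Plücker coordinates (d, m) of the 3D line through p1 and p2:
    d = p2 - p1 is the direction, m = p1 x p2 is the moment.
    The moment is always orthogonal to the direction."""
    d = tuple(b - a for a, b in zip(p1, p2))
    m = cross(p1, p2)
    return d, m

# a segment at height z = 2 running along +Y
d, m = plucker_from_endpoints((1.0, 0.0, 2.0), (1.0, 1.0, 2.0))
assert sum(di * mi for di, mi in zip(d, m)) == 0.0  # d ⟂ m holds by construction
print(d, m)  # (0.0, 1.0, 0.0) (-2.0, 0.0, 1.0)
```

In a VIO pipeline the endpoints would come from back-projecting detected line-segment endpoints with their estimated depths; here they are given directly.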
Keywords: visual-inertial odometry, optical flow, point features, line features, bundle adjustment
Depth-Guided Vision Transformer With Normalizing Flows for Monocular 3D Object Detection
4
Authors: Cong Pan, Junran Peng, Zhaoxiang Zhang. IEEE/CAA Journal of Automatica Sinica (SCIE, EI, CSCD), 2024, No. 3, pp. 673-689 (17 pages)
Monocular 3D object detection is challenging due to the lack of accurate depth information. Some methods estimate pixel-wise depth maps from off-the-shelf depth estimators and then use them as an additional input to augment the RGB images. Depth-based methods attempt to convert estimated depth maps to pseudo-LiDAR and then use LiDAR-based object detectors, or focus on image-depth fusion learning. However, they demonstrate limited performance and efficiency as a result of depth inaccuracy and complex convolutional fusion modes. Different from these approaches, our proposed depth-guided vision transformer with normalizing flows (NF-DVT) network uses normalizing flows to build priors in depth maps to achieve more accurate depth information. We then develop a novel Swin-Transformer-based backbone with a fusion module that processes RGB image patches and depth map patches in two separate branches and fuses them using cross-attention to exchange information. Furthermore, with the help of pixel-wise relative depth values in depth maps, we develop new relative position embeddings in the cross-attention mechanism to capture more accurate sequence ordering of input tokens. Our method is the first Swin-Transformer-based backbone architecture for monocular 3D object detection. The experimental results on the KITTI and the challenging Waymo Open datasets show the effectiveness of our proposed method and superior performance over previous counterparts.
Keywords: monocular 3D object detection, normalizing flows, Swin Transformer
PC-VINS-Mono: A Robust Mono Visual-Inertial Odometry with Photometric Calibration
5
Authors: Yao Xiao, Xiaogang Ruan, Xiaoqing Zhu. Journal of Autonomous Intelligence, 2018, No. 2, pp. 29-35 (7 pages)
Feature detection and tracking, which relies heavily on the gray-value information of images, is a very important procedure for Visual-Inertial Odometry (VIO), and the tracking results significantly affect the accuracy of the estimation and the robustness of the VIO system. In environments with high-contrast lighting, images captured by an auto-exposure camera change frequently with the exposure time. As a result, the gray value of the same feature varies from frame to frame, which poses a large challenge to the feature detection and tracking procedure. Moreover, this problem is further aggravated by the nonlinear camera response function and lens attenuation. However, very few VIO methods take full advantage of photometric camera calibration or discuss its influence on VIO. In this paper, we propose a robust monocular visual-inertial odometry, PC-VINS-Mono, which can be understood as an extension of the open-source VIO pipeline VINS-Mono with the capability of photometric calibration. We evaluate the proposed algorithm on a public dataset. Experimental results show that, with photometric calibration, our algorithm achieves better performance than VINS-Mono.
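As a sketch of what photometric calibration does before feature tracking, assume a simple gamma response curve and a quadratic vignetting falloff (both illustrative; the actual PC-VINS-Mono calibration model is not reproduced here):

```python
def photometrically_correct(observed, r_norm, gamma=2.2, k=0.3):
    """Recover a quantity proportional to scene irradiance from an
    observed pixel value in [0, 1].

    observed : pixel value after the camera response function
    r_norm   : distance from the image center, normalized to [0, 1]
    Assumed models (illustrative only): response f(I) = I^(1/gamma),
    so f^-1(O) = O^gamma; vignetting V(r) = 1 - k*r^2."""
    irradiance = observed ** gamma        # invert the response function
    vignette = 1.0 - k * r_norm ** 2      # divide out lens attenuation
    return irradiance / vignette

# the same physical brightness seen at the center and at a corner:
# the corner pixel is darker on sensor, but corrects to the same value
center_val = 0.5
corner_obs = (center_val ** 2.2 * (1.0 - 0.3)) ** (1 / 2.2)
center = photometrically_correct(center_val, 0.0)
corner = photometrically_correct(corner_obs, 1.0)
print(abs(center - corner) < 1e-12)  # True
```

After this correction, a feature's gray value becomes stable across frames with different exposure, which is exactly what KLT-style trackers assume.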
Keywords: photometric calibration, visual-inertial odometry, simultaneous localization and mapping, robot navigation
Monocular Depth Estimation with Sharp Boundary
6
Authors: Xin Yang, Qingling Chang, Shiting Xu, Xinlin Liu, Yan Cui. Computer Modeling in Engineering & Sciences (SCIE, EI), 2023, No. 7, pp. 573-592 (20 pages)
Monocular depth estimation is a basic task in computer vision. Its accuracy has improved tremendously over the past decade with the development of deep learning. However, blurry boundaries in the depth map remain a serious problem. Researchers have found that blurry boundaries are mainly caused by two factors. First, low-level features containing boundary and structure information may be lost in deep networks during the convolution process. Second, the model ignores the errors introduced by the boundary area during backpropagation, because the boundary occupies only a small portion of the whole image. Focusing on these factors, two countermeasures are proposed to mitigate the boundary-blur problem. First, we design a scene understanding module and a scale transform module to build a lightweight fused feature pyramid, which handles low-level feature loss effectively. Second, we propose a boundary-aware depth loss function that attends to the depth values in the boundary area. Extensive experiments show that our method predicts depth maps with clearer boundaries, and its depth accuracy on NYU-Depth V2, SUN RGB-D, and iBims-1 is competitive.
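The boundary-aware loss idea can be illustrated with a 1D toy example, where pixels next to a large ground-truth depth step receive extra weight (the weighting scheme and threshold here are invented for illustration, not the paper's loss):

```python
def boundary_aware_l1(pred, gt, alpha=4.0, step=0.5):
    """Weighted L1 depth loss over a 1D row of pixels: a pixel counts
    (1 + alpha) times as much when the ground-truth depth gradient at
    that pixel exceeds `step` (i.e., it sits on an object boundary).
    alpha and step are illustrative hyperparameters."""
    loss, total_w = 0.0, 0.0
    for i, (p, g) in enumerate(zip(pred, gt)):
        grad = abs(gt[i] - gt[i - 1]) if i > 0 else 0.0
        w = 1.0 + (alpha if grad > step else 0.0)  # boundary weight
        loss += w * abs(p - g)
        total_w += w
    return loss / total_w

gt   = [1.0, 1.0, 3.0, 3.0]   # a depth step (boundary) at index 2
pred = [1.0, 1.0, 2.0, 3.0]   # the only error sits exactly on the boundary
plain = sum(abs(p - g) for p, g in zip(pred, gt)) / len(gt)
print(plain, boundary_aware_l1(pred, gt))  # 0.25 0.625
```

The same error contributes more to the weighted loss than to a plain L1 average, so gradient descent is pushed to sharpen exactly the boundary pixels.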
Keywords: monocular depth estimation, object boundary, blurry boundary, scene global information, feature fusion, scale transform, boundary aware
Monocular 3D object detection with Pseudo-LiDAR confidence sampling and hierarchical geometric feature extraction in 6G network
7
Authors: Jianlong Zhang, Guangzu Fang, Bin Wang, Xiaobo Zhou, Qingqi Pei, Chen Chen. Digital Communications and Networks (SCIE, CSCD), 2023, No. 4, pp. 827-835 (9 pages)
The high bandwidth and low latency of 6G network technology enable the successful application of monocular 3D object detection on vehicle platforms. Monocular 3D object detection based on Pseudo-LiDAR is a low-cost, low-power alternative to LiDAR solutions in autonomous driving. However, this technique has two problems: (1) the poor quality of the generated Pseudo-LiDAR point clouds, resulting from the nonlinear error distribution of monocular depth estimation, and (2) the weak representation capability of point cloud features, since LiDAR-based 3D detection networks neglect the global geometric structure of point clouds. Therefore, we propose a Pseudo-LiDAR confidence sampling strategy and a hierarchical geometric feature extraction module for monocular 3D object detection. We first design a point cloud confidence sampling strategy based on a 3D Gaussian distribution, which assigns low confidence to points with large depth-estimation error and filters them out accordingly. We then present a hierarchical geometric feature extraction module that aggregates local neighborhood features and uses a dual transformer to capture global geometric features in the point cloud. Finally, our detection framework is based on Point-Voxel-RCNN (PV-RCNN), with high-quality Pseudo-LiDAR and enriched geometric features as input. The experimental results show that our method achieves satisfactory results in monocular 3D object detection.
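The confidence-sampling idea, reduced to a 1D illustration (the paper's 3D Gaussian formulation is not reproduced; the names, smoothing source, and threshold are invented), assigns each pseudo-LiDAR point a Gaussian confidence from its depth residual and drops low-confidence points:

```python
import math

def depth_confidence(depth, depth_ref, sigma=1.0):
    """Gaussian confidence in (0, 1]: points whose estimated depth
    deviates strongly from a reference depth (e.g., a locally smoothed
    depth map) get low confidence."""
    r = depth - depth_ref
    return math.exp(-0.5 * (r / sigma) ** 2)

def filter_points(points, refs, thresh=0.5):
    """Keep pseudo-LiDAR points (x, y, depth) whose confidence against
    the matching reference depth is at least thresh."""
    return [p for p, ref in zip(points, refs)
            if depth_confidence(p[2], ref) >= thresh]

pts  = [(0.0, 0.0, 10.0), (0.1, 0.0, 14.0)]  # second point is a depth outlier
refs = [10.2, 10.1]                           # e.g. median-filtered depths
print(filter_points(pts, refs))  # [(0.0, 0.0, 10.0)]
```

Filtering before detection keeps the outlier "tails" of monocular depth error out of the point cloud fed to the PV-RCNN-style detector.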
Keywords: monocular 3D object detection, Pseudo-LiDAR, confidence sampling, hierarchical geometric feature extraction
Dual Branch PnP Based Network for Monocular 6D Pose Estimation
8
Authors: Jia-Yu Liang, Hong-Bo Zhang, Qing Lei, Ji-Xiang Du, Tian-Liang Lin. Intelligent Automation & Soft Computing (SCIE), 2023, No. 6, pp. 3243-3256 (14 pages)
Monocular 6D pose estimation is a fundamental task in computer vision and robotics. In recent years, 2D-3D correspondence-based methods have achieved improved performance in multiview and depth-data-based scenes. For monocular 6D pose estimation, however, these methods are limited by the predicted 2D-3D correspondences and the robustness of the perspective-n-point (PnP) algorithm, and there remains a gap to the expected estimation performance. To obtain a more effective feature representation, edge enhancement is proposed to increase the shape information of the object, based on an analysis of how inaccurate 2D-3D matching affects 6D pose regression and a comparison of intermediate representations. Furthermore, although the transformation matrix is composed of rotation and translation from 3D model points to 2D pixel points, the two variables are essentially different, and the same network cannot be used for both in the regression process. Therefore, to improve the effectiveness of the PnP algorithm, this paper designs a dual-branch PnP network to predict rotation and translation separately. Finally, the proposed method is verified on the public LM, LM-O and YCB-Video datasets. The ADD(S) values of the proposed method are 94.2 and 62.84 on the LM and LM-O datasets, respectively, and the AUC of ADD(-S) on YCB-Video is 81.1. These experimental results show that the performance of the proposed method is superior to that of similar methods.
Keywords: 6D pose, monocular RGB, edge enhancement, dual-branch PnP, 2D-3D correspondence
Changes in the angle of deviation before and after the monocular occlusion test in children with basic-type intermittent exotropia
9
Authors: 刘育榕, 刘彦孜, 孙思宇, 王丽晶. 国际眼科杂志 (International Eye Science) (CAS), 2024, No. 7, pp. 1165-1167 (3 pages)
AIM: To compare the angle of deviation before and after the monocular occlusion test in children with basic-type intermittent exotropia. METHODS: Prospective clinical study. A total of 258 children with basic-type intermittent exotropia who underwent strabismus surgery in our hospital from July 2021 to September 2022 were enrolled (122 boys, 136 girls; aged 5-12 years, mean 8.0±3.1 years). Before surgery, the angle of deviation at distance (6 m) and near (33 cm) was measured with the prism and alternate cover test. The non-dominant eye was then occluded for 40 min and the distance and near deviations were measured again; during occlusion the child was not allowed to close the eyes or view near targets, the fellow eye was covered before the occluded eye was uncovered, and the deviation was then measured by alternate cover testing. RESULTS: The distance (6 m) deviations before and after monocular occlusion were 28.23Δ±10.79Δ and 29.79Δ±10.85Δ (t=-0.903, P=0.368); the near (33 cm) deviations were 33.14Δ±8.89Δ and 36.90Δ±10.76Δ (t=-2.377, P=0.019). CONCLUSION: In children with basic-type intermittent exotropia, the preoperative monocular occlusion test has a large effect on the near deviation; it can expose the maximum deviation, reduce the postoperative undercorrection rate, and provide a more reliable basis for surgical planning.
Keywords: intermittent exotropia, monocular occlusion test, angle of deviation
A monocular visual ranging algorithm for quadrotor UAVs
10
Authors: 顾兆军, 韩强, 王家亮, 陈辉, 董楷. 小型微型计算机系统 (Journal of Chinese Computer Systems) (CSCD, Peking University Core), 2024, No. 1, pp. 199-206 (8 pages)
To address the problem that a quadrotor UAV performing object detection and tracking cannot judge the safe distance between itself and obstacles or tracked targets, a monocular visual ranging algorithm for quadrotors is proposed, which fuses a horizontal ranging model based on inter-frame differencing with a vertical ranging model based on pinhole imaging. First, a YOLOv4-Tiny-based detection algorithm recognizes object categories in video frames and draws bounding boxes around the measured objects, and the most suitable ranging method is selected according to the scene. Second, the system terminal executes the ranging method, controlling the quadrotor's flight based on the difference between the imaged position of the measured object in the video frame and the set parameters, while recording flight data. Finally, the ranging model processes the flight data to compute the distances between the quadrotor at its initial and final positions and the measured object. Actual flight tests on a Tello quadrotor platform show that the proposed algorithm can effectively estimate the depth of objects in video frames, with a mean ranging error of 4.07%. When the quadrotor is close to an obstacle or tracked target but cannot assess whether the distance is safe, the proposed algorithm provides reliable ranging support for quadrotors equipped with a monocular camera.
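The core of the inter-frame differencing step used by the horizontal ranging model can be sketched as follows (grayscale frames as nested lists of 0-255 values; the threshold is illustrative, not the paper's value):

```python
def frame_difference(prev, curr, thresh=25):
    """Binary motion mask from two grayscale frames of equal size:
    a pixel is flagged as 'moving' (1) when its absolute inter-frame
    intensity difference exceeds thresh, otherwise 0."""
    return [[1 if abs(c - p) > thresh else 0
             for p, c in zip(prow, crow)]
            for prow, crow in zip(prev, curr)]

prev = [[10, 10, 10], [10, 10, 10]]
curr = [[10, 10, 90], [10, 80, 10]]   # two pixels changed between frames
mask = frame_difference(prev, curr)
print(mask)  # [[0, 0, 1], [0, 1, 0]]
```

The resulting mask isolates the image region whose apparent shift between frames feeds the horizontal ranging model; a real pipeline would also denoise the mask (e.g., morphological opening).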
Keywords: monocular vision, quadrotor, target tracking, inter-frame differencing, ranging
A method for detecting the positioning accuracy of a 3D water phantom using a laser tracker and monocular vision
11
Authors: 韩连福, 褚芃, 罗明哲, 白娴靓, 李加福, 朱小平. 中国测试 (China Measurement & Test) (CAS, Peking University Core), 2024, No. 2, pp. 14-21 (8 pages)
To address the scarcity and complexity of inspection methods for 3D water phantoms, a positioning-accuracy detection method based on a laser tracker and monocular vision is proposed. A comparison experiment with the laser tracker verified that the positioning accuracy of the 3D water phantom is the same with and without water. The monocular-vision experiments show that the positioning errors along the X, Y and Z axes and the spatial diagonal are all within 0.050 mm, with repeatabilities of 0.030 mm, 0.064 mm, 0.056 mm and 0.140 mm, respectively; the perpendicularity between the X and Y axes is 0.800 mm, between the Y and Z axes 0.761 mm, and between the X and Z axes 0.503 mm, on the same order of magnitude as the laser tracker's accuracy. Finally, uncertainty analysis gives a standard deviation of 0.0187 mm for the measurement error of the distance between adjacent points, meeting the inspection requirements. Both methods achieve non-contact measurement, simplify the cumbersome inspection process, and can meet hospitals' routine inspection needs for medical linear accelerators.
Keywords: 3D water phantom, laser tracker, monocular vision, positioning accuracy
Performance evaluation of visual/inertial autonomous soldier positioning in complex environments
12
Authors: 常伟, 黄土顺, 韩枫, 陈刚. 现代电子技术 (Modern Electronics Technique) (Peking University Core), 2024, No. 13, pp. 117-122 (6 pages)
To address the low accuracy of autonomous soldier positioning in complex environments such as satellite-signal denial, an autonomous soldier positioning technique based on vision and an inertial measurement unit (IMU) is proposed. An efficient IMU initialization method is integrated into the ORB-SLAM3 algorithm and applied to soldier positioning; the overall real-time positioning accuracy is evaluated, with emphasis on the accuracy and robustness of the improved ORB-SLAM3 under monocular/IMU and stereo/IMU configurations. Experiments in a complex forest environment show that the improved algorithm reduces IMU initialization time by at least 10 s, and the stereo/IMU estimated trajectory matches the actual motion trajectory more closely. Across the absolute trajectory error metrics, stereo/IMU improves the RMSE by 24.67% over monocular/IMU, with better positioning accuracy and robustness, providing a practical approach to high-precision autonomous soldier positioning in urban street combat.
Keywords: autonomous soldier positioning, inertial, vision, complex environment, monocular, stereo
A monocular 3D object detection method based on fused depth and instance segmentation
13
Authors: 孙逊, 冯睿锋, 陈彦如. 计算机应用 (Journal of Computer Applications) (CSCD, Peking University Core), 2024, No. 7, pp. 2208-2215 (8 pages)
To address the poor performance of monocular 3D object detection under object-size changes caused by viewpoint variation and under occlusion, a new monocular 3D object detection method fusing depth information and instance segmentation masks is proposed. First, a Depth-Mask Attention Fusion (DMAF) module combines depth information with instance segmentation masks to provide more accurate object boundaries. Second, dynamic convolution is introduced, with the fused features from the DMAF module guiding the generation of the dynamic convolution kernels to handle objects at different scales. Third, a 2D-3D bounding-box consistency loss is added to the loss function, adjusting the predicted 3D box to agree with the corresponding 2D detection box and improving both instance segmentation and 3D detection. Finally, ablation studies verify the method's effectiveness, and the method is validated on the KITTI test set. Experimental results show that, compared with methods using only estimated depth maps and instance masks, the average precision for the car category at moderate difficulty improves by 6.36 percentage points, and both 3D detection and bird's-eye-view detection outperform comparison methods such as D4LCN (Depth-guided Dynamic-Depthwise-Dilated Local Convolutional Network) and M3D-RPN (Monocular 3D Region Proposal Network).
Keywords: monocular 3D object detection, deep learning, dynamic convolution, instance segmentation
Monocular vehicle ranging based on the bottom edge of the detection box
14
Authors: 刘宏利, 王雨林, 邵磊, 李季. 计算机科学 (Computer Science) (CSCD, Peking University Core), 2024, No. S01, pp. 415-420 (6 pages)
Vehicle ranging is a hot research topic in the driving domain. To address the problems that the accuracy of traditional ranging methods is affected by vehicle size and that the lead vehicle may be offset along the X axis, a vehicle ranging model based on the center point of the detection box's bottom edge is proposed. The model obtains the position of the vehicle ahead using a monocular camera and a vehicle detection algorithm, and builds the ranging model from the coordinates of the bottom-edge center point of the vehicle detection box together with the camera's installation pitch angle, eliminating the error caused by vehicle size. A trigonometric model resolves the X-axis component of the lead vehicle relative to the test vehicle and improves the judgment of the safe distance to the vehicle ahead. A ratio λ between the horizontal coordinate of the rear-box center point and the width of the vehicle's bounding box is defined, and cases are handled according to the value of λ, making the model better fit application scenarios. An inverse perspective mapping model based on ranging keypoints is also proposed to reduce ranging error. Experiments show that the improved model's accuracy is unaffected by vehicle size and accounts for the X-axis component of the lead vehicle's position; compared with the traditional model, the ranging error is reduced by about 1.5% and the accuracy is clearly improved.
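The geometry behind bottom-edge ranging is the standard pinhole ground-plane relation: with camera height h, downward pitch θ, focal length f_y and principal-point row c_y, a ground point imaged at pixel row v lies at longitudinal distance d = h / tan(θ + atan((v − c_y)/f_y)). A sketch with invented parameter values (not the paper's calibration):

```python
import math

def ground_distance(v, h, pitch_rad, fy, cy):
    """Longitudinal distance to a ground-plane point imaged at pixel
    row v, for a camera at height h (m) tilted down by pitch_rad,
    using d = h / tan(pitch + atan((v - cy) / fy))."""
    ray_angle = pitch_rad + math.atan((v - cy) / fy)
    return h / math.tan(ray_angle)

# camera 1.3 m above the road, tilted 2° down, fy = 800 px, cy = 240 px;
# the bottom-edge center of a detection box sits at image row 300
d = ground_distance(v=300.0, h=1.3, pitch_rad=math.radians(2.0), fy=800.0, cy=240.0)
print(round(d, 2))  # 11.8
```

Because the bottom edge of the detection box touches the road, this relation is independent of the vehicle's physical size, which is exactly the error source the paper's model removes.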
Keywords: monocular vision, ranging, inverse perspective mapping, bottom-edge center point of detection box, object detection
Research progress in monocular 3D vision measurement technology
15
Authors: 宋乐, 路斯莹, 侯宇鹏. 传感技术学报 (Chinese Journal of Sensors and Actuators) (CAS, CSCD, Peking University Core), 2024, No. 3, pp. 365-380 (16 pages)
Monocular 3D vision measurement offers low cost, simplicity and structural compactness within the field of vision measurement, and is one of the representative technologies of intelligent, networked advanced manufacturing. Through continued development, it has been successfully applied in UAV navigation, intelligent robotics, industrial inspection, healthcare and other fields, and now shows trends toward higher precision, speed, miniaturization, automation and dynamic operation. Taking the number of apertures as the criterion, monocular 3D vision measurement techniques are divided into single-aperture and multi-aperture categories; the research status and development history of both categories are reviewed, with emphasis on the widely applied Structure From Motion (SFM) and light-field 3D measurement methods, and future directions for monocular 3D vision measurement are discussed.
Keywords: monocular vision, 3D measurement, SFM, light field, review
Diesel engine noise visualization based on acoustic holography and monocular vision
16
Authors: 毕玉华, 梁加宝, 姚国仲, 吴彪. 机械设计与制造 (Machinery Design & Manufacture) (Peking University Core), 2024, No. 8, pp. 276-281 (6 pages)
A diesel engine is a power unit with multiple coupled noise sources, and identifying and controlling its noise is a difficult topic in internal-combustion-engine research. To visualize diesel engine noise, this study combines near-field acoustic holography with monocular vision and develops a LabVIEW-based sound-image matching module; the test system has a friendly human-machine interface, and the correctness of the sound-image matching module was verified with known-source experiments. The verified system was applied to a high-pressure common-rail diesel engine to identify noise sources on the major and minor thrust sides. The results show that, at an atmospheric pressure of 80 kPa and the maximum-torque operating condition of 1800 r/min, the locations of large radiated noise on the major thrust side are the intercooler intake pipe, the crankshaft timing belt pulley, the exhaust pipe, the mounting brackets and the turbocharger; on the minor thrust side, the radiated-noise peak appears at the starter.
Keywords: near-field acoustic holography, monocular vision system, sound-image matching, diesel engine
A monocular spatial attitude measurement method guided by two-dimensional active pose positioning
17
Authors: 刘峰, 郭英华, 王霖, 高裴裴, 张月桐. 红外与激光工程 (Infrared and Laser Engineering) (EI, CSCD, Peking University Core), 2024, No. 2, pp. 79-89 (11 pages)
For rapid attitude measurement of spatial objects, and based on the requirement of building a minimal vision system, a monocular spatial attitude measurement method guided by two-dimensional active posing is studied. An attitude measurement model among the monocular camera, the two-axis stage and the inclinometer is established, enabling measurement of the attitude angles of spatial objects. The method unifies the measurement datum of the system in the geodetic inclinometer coordinate frame; the precision two-axis stage guides the monocular camera to cover a large ground-to-sky 3D field of view, and tooling calibration between the monocular camera and the two-axis stage is completed through a prior calibration design. An attitude-transfer model among the stage frame, the camera frame and the geodetic inclinometer frame is established, realizing attitude solving and angle measurement of spatial objects under fixed-axis-rotation dual-view imaging. An experimental validation environment was built; the angle-measurement results show that, in the system's datum coordinate frame, the pitch-angle measurement error is ≤0.82° with a relative error ≤6.1%, and the roll-angle measurement error is ≤0.43° with a relative error ≤3.4%.
Keywords: vision measurement, attitude measurement, monocular vision, two-axis stage
Monocular visual detection and tracking in infrared images for unmanned surface vessels
18
Authors: 熊守丽. 舰船科学技术 (Ship Science and Technology) (Peking University Core), 2024, No. 7, pp. 159-162 (4 pages)
Unmanned surface vessels (USVs) are used in ocean exploration, military reconnaissance, military strikes and other fields; combining GPS, AIS, wireless communication and image-vision technology enables intelligent USV control. Monocular visual detection in infrared images plays a crucial role in target detection and tracking for USVs. This paper applies the YOLOv7 algorithm to recognize targets in infrared images: an infrared-image training set is determined, the images are preprocessed, and the model is trained and updated, finally achieving recognition of infrared targets. The KCF algorithm is then applied to infrared target tracking, and its tracking pipeline is studied. Simulation comparing the KCF and DCF algorithms shows that, under the same conditions, KCF achieves a tracking accuracy of 78%, outperforming DCF.
Keywords: unmanned surface vessel, monocular visual detection, YOLOv7, KCF algorithm, target tracking
Research on a monocular vision assembly system for industrial robots
19
Authors: 王新, 郭俊. 机械设计与制造 (Machinery Design & Manufacture) (Peking University Core), 2024, No. 6, pp. 342-347 (6 pages)
To overcome the limitation that traditional robots can only execute fixed point-to-point assembly tasks through teaching or offline programming, vision-guided industrial-robot assembly is studied. First, a mathematical model of the industrial robot's visual assembly process is established and calibrated; then machine-vision-based pose estimation of parts is investigated. For flange-type metal parts with weak texture and no corner features, a monocular pose computation method based on the parts' intrinsic features assisted by external markers is proposed. Finally, an experimental platform for monocular-vision industrial-robot assembly is built to verify the feasibility of the pose computation method. The experimental results show that the system can locate and assemble flange-type parts, with a minimum position error of 2.07 mm and a minimum attitude error of 0.13.
Keywords: machine vision, monocular vision, industrial robot, image processing, pose estimation
An omnidirectional monocular indoor positioning measurement method based on a two-degree-of-freedom turntable
20
Authors: 吴军, 王豪爽, 单腾飞, 郭润夏, 张晓瑜, 陈玖圣. 中国光学(中英文) (Chinese Optics) (EI, CAS, CSCD, Peking University Core), 2024, No. 3, pp. 605-616 (12 pages)
To address the limited field of view of traditional monocular measurement systems, this paper proposes an omnidirectional monocular measurement method based on a two-degree-of-freedom rotating platform. First, the rotation-axis parameters of the platform are calibrated: an auxiliary camera photographs a checkerboard calibration board fixed on the platform, the positions of the checkerboard corners are extracted and transformed into a common camera coordinate frame; PCA (principal component analysis) plane fitting yields the direction vector of the rotation axis at the initial position, and spatial least-squares circle fitting yields the axis's position parameters at the initial position. Then, using the platform's rotation angles and the Rodrigues formula, the data captured by the camera at different positions are unified into one coordinate frame, enabling target measurement over the full horizontal and vertical space. Finally, the measurement accuracy of the method is verified with a high-precision laser rangefinder, and comparison experiments against a stereo vision measurement system and a wMPS measurement system verify its omnidirectional measurement capability. The experimental results show that the method's accuracy essentially reaches the level of stereo vision measurement while its measurement range far exceeds it, satisfying omnidirectional measurement requirements.
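Unifying measurements taken at different turntable positions uses the Rodrigues rotation formula, v' = v·cosθ + (k×v)·sinθ + k·(k·v)·(1 − cosθ), about the calibrated axis k. A minimal sketch (generic axis and angle, not the calibrated turntable parameters):

```python
import math

def rodrigues_rotate(v, k, theta):
    """Rotate vector v about the unit axis k by theta radians using
    Rodrigues' formula: v*cos + (k x v)*sin + k*(k.v)*(1 - cos)."""
    c, s = math.cos(theta), math.sin(theta)
    kxv = (k[1]*v[2] - k[2]*v[1],
           k[2]*v[0] - k[0]*v[2],
           k[0]*v[1] - k[1]*v[0])
    kdv = sum(ki * vi for ki, vi in zip(k, v))
    return tuple(vi*c + kxvi*s + ki*kdv*(1 - c)
                 for vi, kxvi, ki in zip(v, kxv, k))

# rotate the x axis 90° about z: it should land on the y axis
x_rot = rodrigues_rotate((1.0, 0.0, 0.0), (0.0, 0.0, 1.0), math.pi / 2)
print(tuple(round(c, 6) for c in x_rot))  # (0.0, 1.0, 0.0)
```

In the measurement system, k and a point on the axis come from the PCA plane fit and circle fit described above, and θ from the turntable encoder; points observed at each station are rotated back into the common frame with this formula.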
Keywords: two-degree-of-freedom turntable, monocular vision, omnidirectional measurement, indoor positioning