Journal Articles
8 articles found
1. Multilevel Disparity Reconstruction Network for Real-Time Stereo Matching (Cited: 1)
Authors: Liu Zhuoran, Zhao Xu. Journal of Shanghai Jiaotong University (Science), EI, 2022, No. 5, pp. 715-722 (8 pages)
Recently, stereo matching algorithms based on end-to-end convolutional neural networks have achieved excellent performance, far exceeding traditional algorithms. Current state-of-the-art stereo matching networks mostly rely on a full cost volume and 3D convolutions to regress dense disparity maps. These modules are computationally complex, consume large amounts of memory, and are difficult to deploy in real-time applications. To overcome this problem, we propose the multilevel disparity reconstruction network, MDRNet, a lightweight stereo matching network without any 3D convolutions. We use stacked residual pyramids to gradually reconstruct disparity maps from low resolution to full resolution, replacing the common 3D computation and optimization convolutions. Our approach achieves competitive performance compared with other algorithms on stereo benchmarks, with real-time inference at 30 frames per second at 4×10^4 resolution.
Keywords: stereo matching, disparity reconstruction, real-time, stacked residual pyramid
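The stacked-residual-pyramid idea in the MDRNet abstract (reconstructing disparity coarse-to-fine instead of regressing it with 3D convolutions) can be sketched in NumPy. The pyramid depth, the nearest-neighbour upsampling, and the fixed residual maps below are illustrative assumptions, not the authors' implementation; in the actual network each residual would be predicted by a small 2D CNN at its level.

```python
import numpy as np

def upsample2x(d):
    """Nearest-neighbour 2x upsampling; disparity values double with resolution."""
    return np.repeat(np.repeat(d, 2, axis=0), 2, axis=1) * 2.0

def refine(d, residual):
    """One pyramid level: apply a residual correction (learned in the paper,
    supplied directly here to keep the sketch self-contained)."""
    return d + residual

def reconstruct(coarse, residuals):
    """Coarse-to-fine reconstruction: upsample, then refine, level by level."""
    d = coarse
    for r in residuals:  # lowest-resolution residual first
        d = refine(upsample2x(d), r)
    return d

# Toy example: a 2x2 coarse disparity map refined to 8x8 full resolution.
coarse = np.full((2, 2), 4.0)
residuals = [np.zeros((4, 4)), np.ones((8, 8))]
full = reconstruct(coarse, residuals)
```

Replacing the cost-volume filtering with per-level 2D refinement like this is what keeps the approach lightweight enough for real-time use.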
2. Development of a multi-view and geo-event-driven real-time collaborative GIS
Authors: Sun Yaqin, Xu Chen, Wu Jinxiong, Yuan Hang, Shi Haibo, Zhan Xiumei, Zhang Lin. International Journal of Digital Earth, SCIE EI, 2022, No. 1, pp. 134-147 (14 pages)
Supporting distributed real-time collaborative work has become an important development trend in GIS. It enables group users to cooperate and improves the use of scientific data and collaboration technologies. Real-time collaborative GIS (RCGIS) provides a platform for supporting synchronous collaboration using geospatial data in various forms. Traditional RCGIS generally works on a single map view, which leads to a confusing collaborative interface. The traditional driving mechanism of RCGIS is computer-interaction events, which makes collaborative awareness unfriendly. This paper presents the design and a prototype of an RCGIS based on multi-view and geo-event-driven mechanisms. The geo-event-driven mechanism provides users with smoother collaborative awareness and interactions that are more natural and friendly. The collaboration process for a GIS driven by geo-events is also discussed. A multi-view technique is used to make real-time GIS collaboration more orderly. The paper also proposes a synchronization strategy for public–private views. Finally, walk-through examples demonstrate the use of RCGIS within a web environment.
Keywords: geographical information, collaboration, real-time, multi-view, geo-event driven
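The geo-event-driven mechanism described in the abstract, where collaborators react to semantically meaningful geographic events rather than raw mouse and keyboard events, can be illustrated with a minimal publish/subscribe sketch. The `GeoEventBus` class, the event name `feature_moved`, and the payload shape are hypothetical, invented for illustration, and not part of the paper's system.

```python
from collections import defaultdict

class GeoEventBus:
    """Minimal publish/subscribe bus for geo-events (hypothetical API)."""
    def __init__(self):
        self._handlers = defaultdict(list)

    def subscribe(self, event_type, handler):
        self._handlers[event_type].append(handler)

    def publish(self, event_type, payload):
        # Deliver the geo-event to every subscribed view, in subscription order.
        for handler in self._handlers[event_type]:
            handler(payload)

bus = GeoEventBus()
log = []
# Each collaborator's view subscribes to geo-events rather than raw UI events,
# so all views stay synchronized on the same semantic action.
bus.subscribe("feature_moved", lambda p: log.append(("viewA", p["id"])))
bus.subscribe("feature_moved", lambda p: log.append(("viewB", p["id"])))
bus.publish("feature_moved", {"id": 42, "lonlat": (120.1, 30.2)})
```

Driving synchronization from geo-events like this is what gives each participant a view-independent, semantically meaningful notion of what collaborators are doing.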
3. A real-time 3D reconstruction system for dynamic scenes based on multiple RGBD cameras (Cited: 4)
Authors: Duan Yong, Pei Mingtao. Transactions of Beijing Institute of Technology, EI, CAS, CSCD, PKU Core, 2014, No. 11, pp. 1157-1162 (6 pages)
A real-time 3D reconstruction system for dynamic scenes is built using multiple RGBD cameras with FPGA-embedded stereo computation. Each RGBD camera outputs color (RGB) images of the scene and corresponding dense disparity images at video rate, and depth maps of the scene can be derived from the disparity images. The multiple RGBD cameras run under a unified external clock and control signal, enabling synchronized acquisition of the target scene. To improve the quality of the depth maps obtained at each viewpoint, and given that the viewpoints of the multi-RGBD-camera system are sparsely distributed, a probability density estimation method is used to fuse the multi-view depth maps. The fused depth maps are processed by a PC cluster to generate a 3D point cloud of the captured scene in real time. Experimental results show that the system can effectively reconstruct large dynamic scenes containing multiple moving objects.
Keywords: stereo vision, real-time multi-view 3D reconstruction, depth map fusion
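The probability-density-based depth fusion mentioned in the abstract can be sketched as a kernel density estimate over the per-view depth hypotheses for one scene point, taking the mode of the density as the fused depth. The Gaussian kernel, the bandwidth, and the evaluation grid below are illustrative assumptions rather than the authors' exact estimator.

```python
import numpy as np

def fuse_depths(depths, bandwidth=0.05, grid=None):
    """Fuse per-view depth hypotheses for one scene point by taking the mode
    of a Gaussian kernel density estimate (a sketch of density-based fusion)."""
    depths = np.asarray(depths, dtype=float)
    if grid is None:
        grid = np.linspace(depths.min() - 3 * bandwidth,
                           depths.max() + 3 * bandwidth, 1001)
    # Sum of Gaussian kernels centred on each view's depth estimate.
    density = np.exp(
        -0.5 * ((grid[:, None] - depths[None, :]) / bandwidth) ** 2
    ).sum(axis=1)
    return grid[np.argmax(density)]

# Three views agree near 2.0 m; one outlier view reports 3.5 m.
fused = fuse_depths([1.98, 2.00, 2.02, 3.50])
```

Because the mode of the density is taken rather than a mean, an isolated outlier view cannot drag the fused depth away from the cluster where most views agree, which is the point of using density estimation with sparse viewpoints.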
4. Design of a real-time multi-view video acquisition system for glasses-free 3D display (Cited: 1)
Authors: Zhang Ping, Tang Yongming, Xia Jun, Wu Zhong. Chinese Journal of Electron Devices, CAS, PKU Core, 2014, No. 2, pp. 210-214 (5 pages)
A real-time multi-view video acquisition system is developed for glasses-free 3D displays. The system uses an array of high-definition network cameras to capture raw multi-viewpoint footage and transmits the video streams over a network; after decoding, rectification, and synthesis, the video data become real-time video images with multiple (horizontal) viewpoints. Experimental results show that the system effectively solves the distortion and image-alignment problems encountered in actual shooting, achieves real-time full-HD output with four horizontal viewpoints, and can be viewed directly on a specific glasses-free 3D display.
Keywords: stereo vision, real-time multi-view video, camera array, rectification
5. CNLPA-MVS: Coarse-Hypotheses Guided Non-Local PAtchMatch Multi-View Stereo (Cited: 1)
Authors: Qitong Zhang, Shan Luo, Lei Wang, Jieqing Feng. Journal of Computer Science & Technology, SCIE EI CSCD, 2021, No. 3, pp. 572-587 (16 pages)
In multi-view stereo, unreliable matching in low-textured regions has a negative impact on the completeness of reconstructed models. Since the photometric consistency of low-textured regions is not discriminative under a local window, non-local information provided by the Markov Random Field (MRF) model can alleviate the matching ambiguity, but it is limited in continuous space and has high computational complexity. Owing to their sampling and propagation strategy, PatchMatch multi-view stereo methods have advantages in optimizing the continuous labeling problem. In this paper, we propose a novel method to address this problem, namely the Coarse-Hypotheses Guided Non-Local PAtchMatch Multi-View Stereo (CNLPA-MVS), which takes advantage of both MRF-based non-local methods and PatchMatch multi-view stereo and mutually compensates for their defects. First, we combine dynamic programming (DP) and sequential propagation along scanlines in parallel to perform CNLPA-MVS, thereby obtaining the optimal depth and normal hypotheses. Second, we introduce coarse inference within a universal window provided by winner-takes-all to eliminate the stripe artifacts caused by DP and improve completeness. Third, we add a local consistency strategy, based on the hypothesis that pixels of similar color share approximate values, into CNLPA-MVS to further improve completeness. CNLPA-MVS was validated on public benchmarks and achieved state-of-the-art performance with high completeness.
Keywords: 3D reconstruction, multi-view stereo, PatchMatch, dynamic programming
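The dynamic-programming propagation along scanlines can be illustrated with a 1-D Viterbi-style sketch over a few discrete labels: each pixel has a matching (unary) cost per label, and a smoothness penalty discourages label jumps between neighbouring pixels. The linear penalty and the toy costs are stand-ins for the paper's continuous depth/normal hypotheses and cost function.

```python
import numpy as np

def scanline_dp(unary, smooth=1.0):
    """Viterbi-style DP along one scanline: unary[i, l] is the matching cost
    of label l at pixel i; a linear smoothness penalty on label changes is a
    toy stand-in for the paper's propagation cost."""
    n, L = unary.shape
    cost = unary[0].copy()
    back = np.zeros((n, L), dtype=int)
    labels = np.arange(L)
    for i in range(1, n):
        # trans[l_prev, l] = accumulated cost of ending at l via l_prev.
        trans = cost[:, None] + smooth * np.abs(labels[:, None] - labels[None, :])
        back[i] = trans.argmin(axis=0)
        cost = trans.min(axis=0) + unary[i]
    # Backtrack the optimal label path.
    best = np.empty(n, dtype=int)
    best[-1] = int(cost.argmin())
    for i in range(n - 1, 0, -1):
        best[i - 1] = back[i, best[i]]
    return best

# Pixels strongly prefer label 1 except pixel 2, which weakly prefers label 0;
# the smoothness term suppresses the single outlier.
unary = np.array([[3.0, 0.0, 3.0],
                  [3.0, 0.0, 3.0],
                  [0.5, 1.0, 3.0],
                  [3.0, 0.0, 3.0]])
path = scanline_dp(unary, smooth=1.0)
```

This also shows where the stripe artifacts mentioned in the abstract come from: each scanline is optimized independently, so nothing couples adjacent rows, which is what the paper's coarse winner-takes-all inference is introduced to correct.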
6. Practical BRDF reconstruction using reliable geometric regions from multi-view stereo
Authors: Taishi Ono, Hiroyuki Kubo, Kenichiro Tanaka, Takuya Funatomi, Yasuhiro Mukaigawa. Computational Visual Media, CSCD, 2019, No. 4, pp. 325-336 (12 pages)
In this paper, we present a practical method for reconstructing the bidirectional reflectance distribution function (BRDF) from multiple images of a real object composed of a homogeneous material. The key idea is that the BRDF can be sampled after geometry estimation using multi-view stereo (MVS) techniques. Our contribution is the selection of reliable samples of lighting, surface normal, and viewing directions for robustness against estimation errors of MVS. Our method is quantitatively evaluated using synthesized images, and its effectiveness is shown via real-world experiments.
Keywords: BRDF, reconstruction, multi-view stereo (MVS), photogrammetry, rendering
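The idea of restricting BRDF sampling to reliable geometry can be sketched with a toy reflectance fit that discards samples flagged as geometrically unreliable or taken at grazing angles. Lambertian reflectance, the reliability flags, and the cosine threshold below are simplifying assumptions for illustration; the paper handles general BRDFs.

```python
import numpy as np

def fit_albedo(intensity, cos_theta, reliable, min_cos=0.2):
    """Least-squares Lambertian fit I = albedo * cos_theta, using only samples
    whose MVS geometry is flagged reliable and whose incidence angle is not
    grazing (illustrative thresholds; the paper fits general BRDFs)."""
    mask = reliable & (cos_theta > min_cos)
    c, i = cos_theta[mask], intensity[mask]
    return float((c @ i) / (c @ c))

cos_theta = np.array([0.9, 0.8, 0.5, 0.1, 0.7])
albedo_true = 0.6
intensity = albedo_true * cos_theta
intensity[4] = 5.0  # corrupted sample caused by an MVS geometry error
reliable = np.array([True, True, True, True, False])
albedo = fit_albedo(intensity, cos_theta, reliable)
```

Without the reliability mask, the single corrupted sample would bias the fit; selecting samples first is what makes the reconstruction robust to MVS estimation errors.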
7. DFPC-SLAM: A Dynamic Feature Point Constraints-Based SLAM Using Stereo Vision for Dynamic Environment
Authors: Bo Zeng, Chengqun Song, Cheng Jun, Yuhang Kang. Guidance, Navigation and Control, 2023, No. 1, pp. 46-60 (15 pages)
Visual SLAM methods usually presuppose that the scene is static, so SLAM algorithms for mobile robots in dynamic scenes often suffer a significant decrease in accuracy due to the influence of dynamic objects. In this paper, feature points are divided into dynamic and static using semantic information and multi-view geometry information; static-region feature points are then added to the pose optimization, and static scene maps are established for dynamic scenes. Finally, experiments are conducted in dynamic scenes using the KITTI dataset, and the results show that the proposed algorithm has higher accuracy in highly dynamic scenes compared to the visual SLAM baseline.
Keywords: SLAM, stereo vision, semantic segmentation, multi-view geometry, dynamic scenes
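The division of feature points into dynamic and static from semantic and multi-view geometric cues can be sketched as a simple mask: a feature is kept for pose optimization only if its semantic class is not a potentially moving object and its multi-view reprojection error is small. The class list, the error threshold, and the conjunction rule are illustrative simplifications of the paper's constraints.

```python
import numpy as np

DYNAMIC_CLASSES = {"person", "car"}  # illustrative set of movable classes

def static_mask(labels, reproj_err, err_thresh=2.0):
    """Keep a feature point if it is semantically static AND geometrically
    consistent across views (a simplified sketch; the threshold in pixels
    is an assumption)."""
    semantic_ok = np.array([l not in DYNAMIC_CLASSES for l in labels])
    geometric_ok = np.asarray(reproj_err) < err_thresh
    return semantic_ok & geometric_ok

labels = ["building", "person", "road", "tree"]
reproj_err = [0.5, 0.4, 8.0, 1.1]  # pixels; the road point is inconsistent
mask = static_mask(labels, reproj_err)
```

Only the features surviving this mask would enter the pose optimization, which is why accuracy holds up even when large parts of the image are covered by moving objects.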
8. Robust Local Light Field Synthesis via Occlusion-aware Sampling and Deep Visual Feature Fusion
Authors: Wenpeng Xing, Jie Chen, Yike Guo. Machine Intelligence Research, EI CSCD, 2023, No. 3, pp. 408-420 (13 pages)
Novel view synthesis has recently attracted tremendous research attention for its applications in virtual reality and immersive telepresence. Rendering a locally immersive light field (LF) based on arbitrary large-baseline RGB references is a challenging problem that lacks efficient solutions with existing novel view synthesis techniques. In this work, we aim at truthfully rendering local immersive novel views/LF images based on large-baseline LF captures and a single RGB image in the target view. To fully explore the precious information from source LF captures, we propose a novel occlusion-aware source sampler (OSS) module which efficiently transfers the pixels of source views to the target view's frustum in an occlusion-aware manner. An attention-based deep visual fusion module is proposed to fuse the revealed occluded background content with a preliminary LF into a final refined LF. The proposed source sampling and fusion mechanism not only helps to provide information for occluded regions from varying observation angles, but also proves able to effectively enhance the visual rendering quality. Experimental results show that our proposed method renders high-quality LF images/novel views with sparse RGB references and outperforms state-of-the-art LF rendering and novel view synthesis methods.
Keywords: novel view synthesis, light field (LF) imaging, multi-view stereo, occlusion sampling, deep visual feature (DVF) fusion
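The occlusion-aware transfer of source pixels into the target view's frustum can be sketched in one dimension as forward warping with a z-buffer: when several source pixels land on the same target pixel, the nearest one wins, so occluded background cannot overwrite foreground. The 1-D geometry and the precomputed target coordinates are illustrative assumptions, not the OSS module itself.

```python
import numpy as np

def warp_with_zbuffer(src_color, src_depth, target_x, width):
    """Forward-warp source pixels to target coordinates with a z-buffer
    (a 1-D toy of occlusion-aware source sampling)."""
    color = np.zeros(width)
    zbuf = np.full(width, np.inf)
    for c, d, x in zip(src_color, src_depth, target_x):
        # Depth test: only the nearest surface writes to each target pixel.
        if 0 <= x < width and d < zbuf[x]:
            zbuf[x] = d
            color[x] = c
    return color

# Two source pixels project to target pixel 1; the nearer one (depth 1.0) wins.
src_color = np.array([0.2, 0.9, 0.5])
src_depth = np.array([3.0, 1.0, 2.0])
target_x = np.array([1, 1, 2])
out = warp_with_zbuffer(src_color, src_depth, target_x, width=4)
```

The pixels rejected by the depth test are exactly the occluded-background samples that, in the paper, are set aside and later fused back in by the attention-based deep visual fusion module.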