Abstract
Most existing dynamic-scene depth recovery methods require many synchronized cameras to achieve good depth estimation, which is impractical. This paper proposes a robust and convenient method that automatically recovers high-quality dense depth map sequences from synchronized videos captured by two or three handheld cameras. Initial depth maps are computed by matching the frames of the different sequences at the same time instant. A novel bilayer segmentation method, designed for freely moving handheld cameras, then classifies the pixels of each frame as static or dynamic, so that the two classes can be optimized under different spatio-temporal coherence constraints. In particular, an iterative optimization framework based on multi-frame statistics alternates between depth optimization and bilayer segmentation, finally yielding temporally consistent segmentation and high-quality depth maps of the dynamic scene. A variety of dynamic scene examples demonstrate the robustness and effectiveness of the proposed method.
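The alternation the abstract describes (initialize depth, segment pixels into static/dynamic layers, refine depth per layer, repeat) can be sketched in outline. The sketch below is a minimal illustration of that loop structure only, not the paper's actual algorithm: `init_depth`, `bilayer_segment`, and `refine_depth` are hypothetical placeholders standing in for the stereo matching, segmentation, and spatio-temporal optimization steps.

```python
import numpy as np

def init_depth(frames_t):
    """Placeholder for stereo initialization: in the paper, depth comes from
    matching synchronized frames at one time instant; here a dummy average."""
    return np.mean(frames_t, axis=0)

def bilayer_segment(depth, prev_mask):
    """Placeholder bilayer segmentation: label each pixel static (False) or
    dynamic (True). A simple threshold stands in for the real method."""
    return depth > np.median(depth)

def refine_depth(depth, dynamic_mask):
    """Placeholder refinement: treat static and dynamic pixels differently,
    mimicking the per-class spatio-temporal coherence constraints."""
    refined = depth.copy()
    # Dynamic pixels: pull toward the median of their own layer (dummy rule).
    refined[dynamic_mask] = 0.5 * refined[dynamic_mask] \
        + 0.5 * np.median(depth[dynamic_mask])
    return refined

def recover(frames_t, n_iters=3):
    """Alternate segmentation and depth refinement, as in the abstract's
    iterative framework (all inner steps are stand-ins)."""
    depth = init_depth(frames_t)
    mask = np.zeros(depth.shape, dtype=bool)
    for _ in range(n_iters):
        mask = bilayer_segment(depth, mask)
        depth = refine_depth(depth, mask)
    return depth, mask
```

The point of the structure is that each pass gives the segmenter a cleaner depth map and the depth optimizer a better static/dynamic labeling, which is why the paper runs the two steps jointly rather than once each.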
Source
《计算机辅助设计与图形学学报》
EI
CSCD
Peking University Core Journal
2013, No. 2, pp. 137-145 (9 pages)
Journal of Computer-Aided Design & Computer Graphics
Funding
National Key Technology R&D Program of China (2012BAH35B02)
National Natural Science Foundation of China (61103104)
Fundamental Research Funds for the Central Universities
Keywords
dynamic scene
spatio-temporal consistent depth recovery
bilayer segmentation