Funding: the National Natural Science Foundation of China (No. 60472100), the Natural Science Foundation of Zhejiang Province (No. Y105577), and the Key Project of the Chinese Ministry of Education (No. 206059).
Abstract: Ray-space based arbitrary viewpoint rendering, which requires neither complex object segmentation nor model construction, is the key technology for realizing a Free Viewpoint Video (FVV) system for complex scenes. Ray-space interpolation and compression are the two key techniques of this solution. In this paper, the correlation among multiple epipolar lines in ray-space data is analyzed, and a new ray-space interpolation method based on multi-epipolar-line matching is proposed. Compared with pixel-based and block-based matching interpolation methods, the proposed method achieves a higher Peak Signal-to-Noise Ratio (PSNR) when interpolating ray-space data and rendering arbitrary viewpoint images.
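The PSNR criterion used to compare the interpolation methods above can be sketched as follows. This is a generic, minimal implementation of the standard metric, not code from the paper; the `peak` default assumes 8-bit image data:

```python
import numpy as np

def psnr(reference, reconstructed, peak=255.0):
    """Peak Signal-to-Noise Ratio between two equally sized images,
    in dB; higher means the reconstruction is closer to the reference."""
    ref = np.asarray(reference, dtype=np.float64)
    rec = np.asarray(reconstructed, dtype=np.float64)
    mse = np.mean((ref - rec) ** 2)
    if mse == 0:
        return float("inf")  # identical images
    return 10.0 * np.log10(peak ** 2 / mse)
```

In the paper's setting, `reference` would be a held-out captured view (or the original ray-space slice) and `reconstructed` the interpolated one.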
Funding: This work was supported by the Foundation of Technology Supporting the Creation of Digital Media Contents project (CREST, JST), Japan.
Abstract: In recent years, many image-based rendering techniques have advanced from static to dynamic scenes, becoming video-based rendering (VBR) methods. In practice, however, only a few of them can render new views on-line. We present a new VBR system that creates new views of a live dynamic scene. The system provides high-quality images and does not require any background subtraction. Our method follows a plane-sweep approach and achieves real-time rendering on consumer graphics hardware, i.e., a graphics processing unit (GPU). A single computer is used for both acquisition and rendering. Video streams are acquired from at least three webcams, and we propose an additional video-stream management scheme that extends the number of webcams to ten or more. These considerations make our system low-cost and hence accessible to everyone. We also present an adaptation of our plane-sweep method that creates multiple views of the scene simultaneously in real time. Our system is especially designed for stereovision using autostereoscopic displays: the new views are computed from four webcams connected to a computer and are compressed for transfer to a mobile phone. Using GPU programming, our method provides up to 16 images of the scene in real time. The use of both the GPU and the CPU allows the method to run on a single consumer-grade computer.
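The plane-sweep idea behind the system above can be illustrated in its simplest form: for each candidate depth plane, warp one view onto the other and keep, per pixel, the plane with the best photo-consistency. The sketch below is a heavily simplified CPU version for a rectified grayscale pair (where each plane reduces to a horizontal shift); the paper performs the equivalent warps and scoring on the GPU with multiple unrectified cameras:

```python
import numpy as np

def plane_sweep_disparity(left, right, max_disp=16):
    """Per-pixel disparity via a fronto-parallel plane sweep.

    For each candidate disparity d (one 'plane'), the right image is
    shifted by d and scored against the left image by absolute
    intensity difference; the lowest-cost plane wins per pixel.
    """
    h, w = left.shape
    cost = np.full((max_disp, h, w), np.inf)
    for d in range(max_disp):
        shifted = np.roll(right, d, axis=1)          # warp onto plane d
        diff = np.abs(left.astype(np.float64) - shifted)
        cost[d, :, d:] = diff[:, d:]                 # ignore wrapped columns
    return np.argmin(cost, axis=0)
```

A real plane-sweep renderer would additionally blend the colors of the winning plane across input views to synthesize the new image, rather than only extracting disparity.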
Funding: This work received funding from the European Union's Horizon 2020 research and innovation programme, grant No. 761934, Hyper 360 ("Enriching 360 media with 3D storytelling and personalisation elements").
Abstract: We describe four fundamental challenges that complex real-life Virtual Reality (VR) productions face today (multi-camera management, quality control, automatic annotation of cinematography, and 360° depth estimation) and describe an integrated solution, called Hyper 360, that addresses them. We demonstrate our solution and its evaluation in the context of practical productions and present the related results.
Funding: This work was supported by the National Natural Science Foundation of China (Grant No. 60832003), the Key Laboratory of Advanced Display and System Application (Shanghai University), Ministry of Education, China (Grant No. P200902), the Science and Technology Commission of Shanghai Municipality (Grant No. 10510500500), and the Natural Science Foundation of Anhui Higher Education Institutions of China (Grant No. KJ2011Z008).
Abstract: The quality of virtual views based on the multi-view video plus depth (MVD) format is often evaluated by PSNR or judged subjectively. However, because arbitrary view images are synthesized, the virtual view images mostly have no reference images and can only be assessed with no-reference methods. Virtual view images synthesized by the depth estimation reference software (DERS) and the view synthesis reference software (VSRS) are often accompanied by blockiness and other distortions along edges. In addition, how well the depth map matches the corresponding texture maps of the left and right views also affects the quality of the virtual view. This paper evaluates the quality of the virtual view by comparing the edge similarity of the depth map and the corresponding texture maps that generate the intermediate virtual view, combined with the blockiness of the virtual view, which causes blur. Experimental results show that the proposed method reflects the quality of the virtual view better.
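One way to realize the depth/texture edge-similarity comparison described above is to extract binary edge maps from both images and measure their overlap. The sketch below is an illustrative stand-in, not the paper's exact metric: it uses gradient-magnitude edges (via finite differences) and Jaccard overlap, with a hypothetical relative threshold `thresh`:

```python
import numpy as np

def edge_map(img, thresh=0.25):
    """Binary edge map: pixels whose gradient magnitude exceeds a
    fraction `thresh` of the image's peak gradient magnitude."""
    gy, gx = np.gradient(np.asarray(img, dtype=np.float64))
    mag = np.hypot(gx, gy)
    peak = mag.max()
    if peak == 0:
        return np.zeros_like(mag, dtype=bool)  # flat image: no edges
    return mag > thresh * peak

def edge_similarity(depth, texture, thresh=0.25):
    """Jaccard overlap between depth-map and texture-map edges,
    in [0, 1]; higher suggests better depth/texture alignment."""
    e_d = edge_map(depth, thresh)
    e_t = edge_map(texture, thresh)
    union = np.logical_or(e_d, e_t).sum()
    if union == 0:
        return 1.0  # both edge-free: trivially consistent
    return np.logical_and(e_d, e_t).sum() / union
```

A full quality score along the lines of the paper would combine such an edge-consistency term with a blockiness measure on the synthesized view.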