Abstract: Recently, reference functions for the synthesis and analysis of autostereoscopic multiview and integral images in three-dimensional displays were introduced. In this paper, we propose wavelets for the analysis of such images. The wavelets are built on these reference functions, which serve as the scaling functions of the wavelet analysis. The continuous wavelet transform was successfully applied to test wireframe binary objects, and the restored locations correspond to the structure of those objects.
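The following is a minimal, generic sketch of the analysis step described above: a continuous wavelet transform applied to a binary wireframe-like test profile, with line locations recovered from the wavelet modulus maxima. The Mexican-hat mother wavelet and the synthetic test object are illustrative stand-ins, not the reference/scaling functions introduced in the paper.

```python
# Generic CWT sketch: locate the lines of a binary "wireframe" profile from
# wavelet modulus maxima.  Mexican-hat wavelet and test data are assumptions,
# not the paper's reference functions.
import numpy as np

def mexican_hat(t, scale):
    """Mexican-hat (Ricker) wavelet sampled at points t for a given scale."""
    x = t / scale
    return (1.0 - x**2) * np.exp(-0.5 * x**2) / np.sqrt(scale)

def cwt_1d(signal, scales):
    """Continuous wavelet transform by direct convolution at each scale."""
    n = len(signal)
    coeffs = np.zeros((len(scales), n))
    for i, s in enumerate(scales):
        half = int(4 * s)                       # wavelet support ~ 8 * scale
        t = np.arange(-half, half + 1, dtype=float)
        coeffs[i] = np.convolve(signal, mexican_hat(t, s), mode="same")
    return coeffs

# Synthetic wireframe slice: a 1-D binary profile with lines at known positions.
n = 512
true_lines = [80, 200, 330, 450]
profile = np.zeros(n)
profile[true_lines] = 1.0

coeffs = cwt_1d(profile, scales=np.arange(1, 16))

# Restored locations: local maxima of the wavelet modulus at a fine scale.
row = np.abs(coeffs[2])
restored = [i for i in range(1, n - 1)
            if row[i] > row[i - 1] and row[i] >= row[i + 1]
            and row[i] > 0.5 * row.max()]
print("true lines:    ", true_lines)
print("restored lines:", restored)
```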
Funding: Supported by the Youth Innovation Promotion Association, Chinese Academy of Sciences (Nos. 2018251 and 2017264) and the National Natural Science Foundation of China (Nos. 11704378 and 61705221).
Abstract: In this Letter, we present a display system based on a curved screen and a parallax barrier that provides stereo images with a 360° horizontal field of view without any eyewear, achieving an immersive autostereoscopic effect. The display principle and the characteristics of the system are studied theoretically in detail. Three consecutive pixels on the curved screen, together with the parallax barrier, form a display unit that generates separate viewing zones for the left and right eyes. Simulation and experimental results show that crosstalk-free images can be obtained in the viewing zones, which proves the effectiveness of the display system. This study offers new ideas for improving autostereoscopic displays and enables envisioned applications in virtual reality technology.
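As background to the viewing-zone analysis, the sketch below evaluates the classic design equations for a flat two-view parallax barrier derived from similar triangles. It is a simplified stand-in for, not a model of, the curved-screen geometry analyzed in the Letter, and all numerical values are assumptions.

```python
# Classic FLAT two-view parallax-barrier design equations (similar triangles).
# p: pixel pitch of one view's columns, e: interocular distance,
# D: design viewing distance measured from the barrier.  Values are illustrative.

def flat_parallax_barrier(pixel_pitch_mm, eye_separation_mm, viewing_distance_mm):
    """Return (barrier gap, barrier slit pitch) for a two-view flat barrier."""
    p, e, D = pixel_pitch_mm, eye_separation_mm, viewing_distance_mm
    gap = p * D / e                    # barrier-to-pixel-plane distance: p/g = e/D
    pitch = 2 * p * D / (D + gap)      # slit pitch, slightly under two pixel pitches
    return gap, pitch

if __name__ == "__main__":
    # Example: 0.1 mm pixel columns, 65 mm eye separation, 500 mm viewing distance.
    g, b = flat_parallax_barrier(0.1, 65.0, 500.0)
    print(f"barrier gap   : {g:.3f} mm")    # ~0.769 mm
    print(f"barrier pitch : {b:.4f} mm")    # just under 0.2 mm
```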
Funding: This work was supported by the Foundation of Technology Supporting the Creation of Digital Media Contents project (CREST, JST), Japan.
Abstract: In recent years, many image-based rendering techniques have advanced from static to dynamic scenes and thus become video-based rendering (VBR) methods, but only a few of them can render new views on-line. We present a new VBR system that creates new views of a live dynamic scene. The system provides high-quality images and does not require any background subtraction. Our method follows a plane-sweep approach and reaches real-time rendering on consumer graphics hardware, i.e., a graphics processing unit (GPU). Only one computer is used for both acquisition and rendering. Video streams are acquired from at least 3 webcams, and we propose an additional video-stream management scheme that extends the number of webcams to 10 or more. These considerations make our system low-cost and hence accessible to everyone. We also present an adaptation of our plane-sweep method that creates multiple views of the scene simultaneously in real time. Our system is especially designed for stereovision on autostereoscopic displays: the new views are computed from 4 webcams connected to one computer and are compressed so that they can be transferred to a mobile phone. Using GPU programming, our method provides up to 16 images of the scene in real time. The combined use of GPU and CPU allows the method to run on a single consumer-grade computer.
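The sketch below illustrates the plane-sweep principle in a deliberately simplified CPU form, assuming rectified cameras on a shared horizontal baseline so that each swept depth plane reduces to a per-camera horizontal shift (disparity). The actual system performs the projective warps and consistency scoring in GPU fragment shaders; the names and parameters here are illustrative assumptions.

```python
# Simplified CPU plane sweep: warp each input image onto a candidate depth
# plane, score color consistency across cameras, and keep the most consistent
# plane per pixel of the synthesized view.
import numpy as np

def plane_sweep_novel_view(images, camera_offsets, disparities):
    """Synthesize the view of a virtual camera at horizontal offset 0.

    images         : list of HxWx3 float arrays from the input webcams
    camera_offsets : horizontal offset of each camera from the virtual view
                     (in pixels of shift per unit disparity)
    disparities    : candidate disparities, one per swept depth plane
    """
    h, w, _ = images[0].shape
    best_score = np.full((h, w), np.inf)          # lower = more color-consistent
    novel_view = np.zeros((h, w, 3))

    for d in disparities:                         # sweep the plane through depth
        # Shift every input image as if the whole scene lay on this plane.
        warped = np.stack([np.roll(img, int(round(d * off)), axis=1)
                           for img, off in zip(images, camera_offsets)])
        mean = warped.mean(axis=0)
        score = ((warped - mean) ** 2).sum(axis=(0, 3))   # variance across cameras

        better = score < best_score               # keep the best plane per pixel
        best_score[better] = score[better]
        novel_view[better] = mean[better]
    return novel_view

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Four toy "webcam" frames: one random texture shifted by each camera offset.
    base = rng.random((120, 160, 3))
    offsets = [-1.5, -0.5, 0.5, 1.5]              # cameras straddle the virtual viewpoint
    frames = [np.roll(base, int(round(-3 * off)), axis=1) for off in offsets]
    view = plane_sweep_novel_view(frames, offsets, disparities=range(0, 8))
    print("synthesized view shape:", view.shape)
```

In this toy example the texture lies on a single plane at disparity 3, so the sweep recovers the original image almost exactly; real scenes select a different plane per pixel, which is what makes per-pixel scoring on the GPU attractive.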