Funding: This work was supported by the National Natural Science Foundation of China under Grant Nos. 61532002, 61672237, 61672077 and 61672149, the Natural Science Foundation of the USA under Grant Nos. IIS-1715985, IIS-0949467, IIS-1047715, and IIS-1049448, and the National High Technology Research and Development 863 Program of China under Grant No. 2015AA016404.
Abstract: Hybrid approaches that combine video data with pure physics-based simulation have become popular in computer graphics over the past decade. The key motivation is to retain the salient advantages of both data-driven methods and model-centric numerical simulation while overcoming certain difficulties of each. The Eulerian method, which has been widely employed in flow simulation, stores variables such as velocity and density on regular Cartesian grids, so it can be associated with (volumetric) video data defined on the same domain. This paper proposes a novel method for flow simulation that tightly couples video-based reconstruction with physically based simulation and makes use of meaningful physical attributes during re-simulation. First, we reconstruct the density field from a single-view video. Second, we estimate the velocity field using the reconstructed density field as a prior. In this iterative process, the pressure projection is treated as a physical constraint, and the result of each step is corrected by the obtained velocity field in the Eulerian framework. Third, we use the reconstructed density and velocity fields to guide the Eulerian simulation toward anticipated new results. Through the guidance of video data, we can produce new flows that closely match the real scene captured during data acquisition. Moreover, in the multigrid Eulerian simulation, we can generate new visual effects that cannot be created from raw video acquisition alone, with the goal of easily producing many more visually interesting results while respecting true physical attributes. We demonstrate the salient advantages of our hybrid method with a variety of animation examples.
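To make the "guidance plus pressure projection" idea concrete, the following is a minimal sketch, not the authors' implementation: one guided Eulerian step on a 2D collocated grid in which the simulated velocity is blended toward a reconstructed target field and then made (nearly) divergence-free by a Jacobi pressure solve. The function name, the blending weight alpha, and the iteration count are illustrative assumptions.

```python
# Minimal sketch of a video-guided Eulerian step (illustrative, not the paper's code).
import numpy as np

def guided_step(u, v, u_rec, v_rec, alpha=0.2, iters=60, h=1.0):
    # Blend the simulated velocity toward the video-reconstructed field (guidance term).
    u = (1.0 - alpha) * u + alpha * u_rec
    v = (1.0 - alpha) * v + alpha * v_rec

    # Divergence of the blended field (central differences on the interior).
    div = np.zeros_like(u)
    div[1:-1, 1:-1] = ((u[1:-1, 2:] - u[1:-1, :-2]) +
                       (v[2:, 1:-1] - v[:-2, 1:-1])) / (2.0 * h)

    # Jacobi iterations for the pressure Poisson equation  lap(p) = div.
    p = np.zeros_like(u)
    for _ in range(iters):
        p[1:-1, 1:-1] = 0.25 * (p[1:-1, 2:] + p[1:-1, :-2] +
                                p[2:, 1:-1] + p[:-2, 1:-1] -
                                h * h * div[1:-1, 1:-1])

    # Subtract the pressure gradient so the corrected velocity is (nearly) divergence-free.
    u[1:-1, 1:-1] -= (p[1:-1, 2:] - p[1:-1, :-2]) / (2.0 * h)
    v[1:-1, 1:-1] -= (p[2:, 1:-1] - p[:-2, 1:-1]) / (2.0 * h)
    return u, v
```

In the actual method the projection would sit inside a full advection/force loop on a multigrid solver; this sketch only isolates how guidance toward reconstructed data can coexist with the pressure constraint.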
Funding: Supported by the National Natural Science Foundation of China (61320106006, 61532006, 61502042).
Abstract: Existing learning-based super-resolution (SR) reconstruction algorithms are mainly designed for single images and ignore the spatio-temporal relationship between video frames. To bring the advantages of learning-based algorithms to the video SR field, a novel video SR reconstruction algorithm based on a deep convolutional neural network (CNN) and spatio-temporal similarity (STCNN-SR) is proposed in this paper. It is a deep learning method for video SR reconstruction that considers not only the mapping relationship between associated low-resolution (LR) and high-resolution (HR) image blocks, but also the spatio-temporal non-local complementary and redundant information between adjacent low-resolution video frames. The reconstruction speed is improved considerably by the pre-trained end-to-end reconstruction coefficients. Moreover, the performance of video SR is further improved by an optimization process based on spatio-temporal similarity. Experimental results demonstrate that the proposed algorithm achieves competitive SR quality in both subjective and objective evaluations compared with other state-of-the-art algorithms.
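As a rough illustration of how adjacent frames can feed a CNN-based SR model, here is a minimal sketch, assuming a simple SRCNN-style design rather than the published STCNN-SR network: a stack of neighboring bicubically upscaled LR frames is fed as input channels so the convolutions can exploit temporal neighbors when reconstructing the center frame. The class name, layer widths, and kernel sizes are illustrative assumptions.

```python
# Minimal frame-stacking SR sketch (illustrative, not the published STCNN-SR architecture).
import torch
import torch.nn as nn

class FrameStackSR(nn.Module):
    def __init__(self, num_frames=3):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(num_frames, 64, kernel_size=9, padding=4),  # patch extraction over the frame stack
            nn.ReLU(inplace=True),
            nn.Conv2d(64, 32, kernel_size=1),                     # non-linear mapping
            nn.ReLU(inplace=True),
            nn.Conv2d(32, 1, kernel_size=5, padding=2),           # HR reconstruction of the center frame
        )

    def forward(self, lr_frames):
        # lr_frames: (batch, num_frames, H, W), already upscaled to HR size;
        # the center frame is the one being super-resolved.
        return self.body(lr_frames)

# Example usage: three stacked 128x128 frames -> one super-resolved frame.
sr = FrameStackSR()(torch.rand(1, 3, 128, 128))  # shape (1, 1, 128, 128)
```

The paper's spatio-temporal similarity optimization would act on top of such a network's output; this sketch only shows the end-to-end mapping from an LR frame stack to an HR estimate.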