Journal Articles
2 articles found
Augmented Flow Simulation Based on Tight Coupling Between Video Reconstruction and Eulerian Models
1
Authors: Feng-Yu Li, Chang-Bo Wang, Hong Qin, Hong-Yan Quan 《Journal of Computer Science & Technology》 SCIE EI CSCD, 2018(3): 452-462 (11 pages)
Hybrid approaches that combine video data with pure physics-based simulation have been popular in the recent decade in computer graphics. The key motivation is to retain the salient advantages of both data-driven methods and model-centric numerical simulation, while overcoming certain difficulties of each. The Eulerian method, which has been widely employed in flow simulation, stores variables such as velocity and density on regular Cartesian grids, so it can be associated with (volumetric) video data on the same domain. This paper proposes a novel method for flow simulation that tightly couples video-based reconstruction with physically-based simulation and makes use of meaningful physical attributes during re-simulation. First, we reconstruct the density field from a single-view video. Second, we estimate the velocity field using the reconstructed density field as a prior. In the iterative process, the pressure projection can be treated as a physical constraint, and the results of each step are corrected by the obtained velocity field in the Eulerian framework. Third, we use the reconstructed density field and velocity field to guide the Eulerian simulation toward anticipated new results. Through the guidance of video data, we can produce new flows that closely match the real scene exhibited in data acquisition. Moreover, in the multigrid Eulerian simulation, we can generate new visual effects that cannot be created from raw video acquisition, with the goal of easily producing many more visually interesting results while respecting true physical attributes at the same time. We demonstrate the salient advantages of our hybrid method with a variety of animation examples.
Keywords: video reconstruction, velocity estimation, fluid simulation, volume modeling and re-simulation
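The pressure projection mentioned in the abstract is the standard step in Eulerian fluid solvers that enforces incompressibility: solve a Poisson equation for pressure, then subtract its gradient from the velocity field. The sketch below is not the authors' implementation; it is a minimal, self-contained illustration of that constraint on a periodic collocated grid, with a damped Jacobi solver standing in for the multigrid solver the paper would use in practice.

```python
import numpy as np

def divergence(u, v):
    # Backward differences, unit spacing, periodic boundaries.
    return (u - np.roll(u, 1, axis=1)) + (v - np.roll(v, 1, axis=0))

def project(u, v, iters=800, w=2.0 / 3.0):
    """Make (u, v) discretely divergence-free by subtracting a pressure gradient.

    Solves the 5-point Poisson equation  laplace(p) = div(u, v)  with damped
    Jacobi iterations, then applies the forward-difference gradient, which is
    adjoint-consistent with the backward-difference divergence above.
    """
    div = divergence(u, v)
    p = np.zeros_like(u)
    for _ in range(iters):
        nb = (np.roll(p, 1, 0) + np.roll(p, -1, 0) +
              np.roll(p, 1, 1) + np.roll(p, -1, 1))
        p = (1.0 - w) * p + w * (nb - div) * 0.25
    u2 = u - (np.roll(p, -1, axis=1) - p)
    v2 = v - (np.roll(p, -1, axis=0) - p)
    return u2, v2
```

In a video-guided pipeline like the one described, this projection is what lets a velocity field estimated from reconstructed density frames be "corrected" back onto the space of physically valid (divergence-free) flows at each step.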
Video super-resolution reconstruction based on deep convolutional neural network and spatio-temporal similarity
2
Authors: Li Linghui, Du Junping, Liang Meiyu, Ren Nan, Fan Dan 《The Journal of China Universities of Posts and Telecommunications》 EI CSCD, 2016(5): 68-81 (14 pages)
Existing learning-based super-resolution (SR) reconstruction algorithms are mainly designed for single images and ignore the spatio-temporal relationship between video frames. To bring the advantages of learning-based algorithms to the video SR field, a novel video SR reconstruction algorithm based on a deep convolutional neural network (CNN) and spatio-temporal similarity (STCNN-SR) was proposed in this paper. It is a deep learning method for video SR reconstruction that considers not only the mapping relationship among associated low-resolution (LR) and high-resolution (HR) image blocks, but also the spatio-temporal non-local complementary and redundant information between adjacent low-resolution video frames. The reconstruction speed is noticeably improved by the pre-trained end-to-end reconstruction coefficients. Moreover, the performance of video SR is further improved by an optimization process based on spatio-temporal similarity. Experimental results demonstrated that the proposed algorithm achieves competitive SR quality on both subjective and objective evaluations when compared to other state-of-the-art algorithms.
Keywords: video SR reconstruction, deep convolutional neural network, spatio-temporal similarity, Zernike moment feature
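The "spatio-temporal non-local complementary information" exploited by STCNN-SR rests on a simple idea: a patch in one frame usually reappears, slightly shifted, in adjacent frames, and those redundant observations can be fused. The sketch below is an assumed, brute-force illustration of that matching-and-fusing step (the function names `best_match` and `temporal_fuse` are our own, not from the paper); the actual algorithm embeds this inside a trained CNN and uses Zernike moment features rather than raw SSD search.

```python
import numpy as np

def best_match(patch, frame):
    """Exhaustive SSD search: locate the patch's best match in `frame`.

    Returns the (row, col) of the top-left corner with minimal sum of
    squared differences.
    """
    ph, pw = patch.shape
    H, W = frame.shape
    best, pos = np.inf, (0, 0)
    for r in range(H - ph + 1):
        for c in range(W - pw + 1):
            ssd = np.sum((frame[r:r + ph, c:c + pw] - patch) ** 2)
            if ssd < best:
                best, pos = ssd, (r, c)
    return pos

def temporal_fuse(frames, t, r, c, size=5):
    """Average the patch at (r, c) in frame t with its best matches
    in the temporally adjacent frames t-1 and t+1 (when they exist)."""
    patch = frames[t][r:r + size, c:c + size]
    acc, n = patch.astype(float).copy(), 1
    for s in (t - 1, t + 1):
        if 0 <= s < len(frames):
            mr, mc = best_match(patch, frames[s])
            acc += frames[s][mr:mr + size, mc:mc + size]
            n += 1
    return acc / n
```

Fusing aligned patches across frames averages out independent noise while preserving shared structure, which is the extra signal a video SR method has over a single-image one.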