Journal Articles: 6 results found
1. Temporal consistency maintenance on multiprocessor platforms with instance skipping
Authors: BAI Tian, LI Zhi-jie, FAN Bo. Journal of Central South University (SCIE, EI, CAS, CSCD), 2020, No. 11, pp. 3364-3374 (11 pages).
Maintaining temporal consistency of real-time data is important for cyber-physical systems. Most previous studies focus on uniprocessor systems. In this paper, the problem of temporal consistency maintenance on multiprocessor platforms with instance skipping was formulated based on the (m,k)-constrained model, and a partitioned scheduling method, SC-AD, was proposed to solve it. SC-AD uses a derived sufficient schedulability condition to calculate the initial value of m for each sensor transaction. It then partitions the transactions among the processors in a balanced way. To further reduce the average relative invalid time of real-time data, SC-AD judiciously increases the values of m for the transactions assigned to each processor. Experimental results show that SC-AD outperforms the baseline methods in terms of both the average relative invalid time and the average valid ratio under different system workloads.
Keywords: cyber-physical systems; sensor transactions; multiprocessor scheduling; temporal consistency
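To make the partitioning step concrete, here is a minimal Python sketch of balanced partitioning of sensor transactions across processors. It is not the SC-AD algorithm from the paper: the `Transaction` class, the half-half period assignment, and the worst-fit-decreasing heuristic are all illustrative assumptions.

```python
from dataclasses import dataclass

@dataclass
class Transaction:
    name: str
    wcet: float      # worst-case execution time of one sensing job
    validity: float  # validity interval of the real-time data item

def utilization(t: Transaction) -> float:
    # Half-half scheme: period = deadline = validity / 2, so two
    # consecutive instances always fall within the validity interval.
    return t.wcet / (t.validity / 2)

def partition_balanced(transactions, num_procs):
    """Worst-fit decreasing: assign each transaction to the least-loaded
    processor, which tends to balance utilization across processors."""
    procs = [{"load": 0.0, "txns": []} for _ in range(num_procs)]
    for txn in sorted(transactions, key=utilization, reverse=True):
        target = min(procs, key=lambda p: p["load"])
        target["txns"].append(txn)
        target["load"] += utilization(txn)
    return procs

txns = [Transaction("T1", 1.0, 10.0), Transaction("T2", 2.0, 16.0),
        Transaction("T3", 0.5, 6.0), Transaction("T4", 1.5, 12.0)]
for i, p in enumerate(partition_balanced(txns, 2)):
    print(f"P{i}: load={p['load']:.2f}", [t.name for t in p["txns"]])
```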
2. Deep Video Harmonization by Improving Spatial-temporal Consistency
Authors: Xiuwen Chen, Li Fang, Long Ye, Qin Zhang. Machine Intelligence Research (EI, CSCD), 2024, No. 1, pp. 46-54 (9 pages).
Video harmonization is an important step in video editing that achieves visual consistency by adjusting foreground appearances in both the spatial and temporal dimensions. Previous methods either harmonize only on a single scale or ignore the inaccuracy of flow estimation, which limits harmonization performance. In this work, we propose a novel architecture for video harmonization that makes full use of spatiotemporal features and yields temporally consistent harmonized results. We introduce multiscale harmonization, using nonlocal similarity on each scale to make the foreground more consistent with the background. We also propose a foreground temporal aggregator that dynamically aggregates neighboring frames at the feature level, alleviating the effect of inaccurately estimated flow and ensuring temporal consistency. Experimental results demonstrate the superiority of our method over other state-of-the-art methods in both quantitative and visual comparisons.
Keywords: harmonization; temporal consistency; video editing; video composition; nonlocal similarity
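As a rough illustration of the nonlocal-similarity idea, the following NumPy sketch attends each foreground feature vector to all background feature vectors and returns a similarity-weighted background reference that a harmonization head could move the foreground toward. The function name and shapes are assumptions, not the paper's network.

```python
import numpy as np

def nonlocal_similarity(fg, bg, temperature=None):
    """Attend each foreground feature vector to all background vectors
    and return the similarity-weighted aggregation of background features.

    fg: (Nf, C) foreground features; bg: (Nb, C) background features.
    """
    c = fg.shape[1]
    temperature = temperature or np.sqrt(c)
    scores = fg @ bg.T / temperature             # (Nf, Nb) dot-product similarity
    scores -= scores.max(axis=1, keepdims=True)  # numerical stability
    weights = np.exp(scores)
    weights /= weights.sum(axis=1, keepdims=True)  # softmax over background
    return weights @ bg                          # (Nf, C) aggregated reference

rng = np.random.default_rng(0)
fg, bg = rng.normal(size=(4, 8)), rng.normal(size=(16, 8))
ref = nonlocal_similarity(fg, bg)
# A harmonization head could now shift fg toward ref, e.g. via a residual.
print(ref.shape)  # (4, 8)
```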
3. Temporally Consistent Depth Map Estimation for 3D Video Generation and Coding (cited by 2)
Authors: Sang-Beom Lee, Yo-Sung Ho. China Communications (SCIE, CSCD), 2013, No. 5, pp. 39-49 (11 pages).
In this paper, we propose a new algorithm for temporally consistent depth map estimation for generating three-dimensional video. The proposed algorithm adaptively computes the matching cost using a temporal weighting function, which is obtained by block-based moving-object detection and motion estimation with variable block sizes. Experimental results show that the proposed algorithm improves the temporal consistency of the depth video and reduces both the flickering artefacts in the synthesized view and the number of coding bits for depth video coding by about 38%.
Keywords: three-dimensional television; multiview video; depth estimation; temporal consistency; temporal weighting function
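The abstract's temporal weighting idea can be sketched as follows: a motion-dependent weight blends a per-pixel spatial cost volume with a penalty for deviating from the previous frame's depth, so static blocks stay stable while moving blocks follow the spatial cost. All names and the exponential weight are hypothetical, not the authors' exact formulation.

```python
import numpy as np

def temporal_cost(spatial_cost, prev_depth, motion, lam=0.5):
    """Add a temporal term to a per-pixel, per-candidate matching cost.

    spatial_cost: (H, W, D) cost volume over D depth candidates.
    prev_depth:   (H, W) depth labels chosen for the previous frame.
    motion:       (H, W) per-block motion magnitude from motion estimation.
    """
    h, w, d = spatial_cost.shape
    # Temporal weight decays with motion: static blocks (motion ~ 0)
    # are pulled strongly toward the previous frame's depth.
    weight = np.exp(-motion)[..., None]                    # (H, W, 1)
    candidates = np.arange(d)[None, None, :]               # (1, 1, D)
    temporal = np.abs(candidates - prev_depth[..., None])  # label distance
    return spatial_cost + lam * weight * temporal

cost = np.random.rand(4, 4, 8)
prev = np.random.randint(0, 8, size=(4, 4))
motion = np.zeros((4, 4))            # fully static scene
depth = temporal_cost(cost, prev, motion).argmin(axis=2)
print((depth == prev).mean())        # mostly sticks to previous depths
```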
4. Temporally consistent video colorization with deep feature propagation and self-regularization learning (cited by 1)
Authors: Yihao Liu, Hengyuan Zhao, Kelvin C.K. Chan, Xintao Wang, Chen Change Loy, Yu Qiao, Chao Dong. Computational Visual Media (SCIE, EI, CSCD), 2024, No. 2, pp. 375-395 (21 pages).
Video colorization is a challenging and highly ill-posed problem. Although recent years have witnessed remarkable progress in single-image colorization, there has been relatively little research effort on video colorization, and existing methods suffer from severe flickering artifacts (temporal inconsistency) or unsatisfactory colorization. We address this problem from a new perspective, jointly considering colorization and temporal consistency in a unified framework. Specifically, we propose a novel temporally consistent video colorization (TCVC) framework. TCVC effectively propagates frame-level deep features in a bidirectional way to enhance the temporal consistency of colorization. Furthermore, TCVC introduces a self-regularization learning (SRL) scheme to minimize the differences between predictions obtained using different time steps. SRL does not require any ground-truth color videos for training and can further improve temporal consistency. Experiments demonstrate that our method not only produces visually pleasing colorized video, but also achieves clearly better temporal consistency than state-of-the-art methods. A video demo is provided at https://www.youtube.com/watch?v=c7dczMs-olE, and code is available at https://github.com/lyh-18/TCVC-Temporally-Consistent-Video-Colorization.
Keywords: video colorization; temporal consistency; feature propagation; self-regularization
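A toy version of bidirectional frame-level feature propagation is sketched below: an exponential-moving-average state is propagated forward and backward over per-frame features, and the two passes are fused. TCVC propagates learned deep features with flow guidance; this simplified blend, and the `decay` parameter, are assumptions for illustration only.

```python
import numpy as np

def propagate_bidirectional(feats, decay=0.6):
    """Blend each frame's features with recurrent states propagated
    forward and backward along the sequence.

    feats: (T, C) one feature vector per frame (spatial dims omitted).
    """
    t, c = feats.shape
    fwd, bwd = np.zeros((t, c)), np.zeros((t, c))
    state = np.zeros(c)
    for i in range(t):                  # forward pass
        state = decay * state + (1 - decay) * feats[i]
        fwd[i] = state
    state = np.zeros(c)
    for i in reversed(range(t)):        # backward pass
        state = decay * state + (1 - decay) * feats[i]
        bwd[i] = state
    return (fwd + bwd) / 2              # fused, temporally smoothed features

video_feats = np.random.rand(10, 64)
fused = propagate_bidirectional(video_feats)
# A self-regularization loss would then compare predictions decoded from
# sequences processed with different time steps, without ground truth.
print(fused.shape)
```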
5. Motion-Inspired Real-Time Garment Synthesis with Temporal-Consistency
Authors: 魏育坤, 石敏, 冯文科, 朱登明, 毛天露. Journal of Computer Science & Technology (SCIE, EI, CSCD), 2023, No. 6, pp. 1356-1368 (13 pages).
Synthesizing garment dynamics according to body motions is a vital technique in computer graphics. Physics-based simulation depends on an accurate model of the kinetics of cloth, which is time-consuming, hard to implement, and complex to control. Existing data-driven approaches either lack temporal consistency or fail to handle garments whose topology differs from that of the body. In this paper, we present a motion-inspired real-time garment synthesis workflow that enables high-level control of garment shape. Given a sequence of body motions, our workflow generates the corresponding garment dynamics with both spatial and temporal coherence. To that end, we develop a transformer-based garment synthesis network to learn the mapping from body motions to garment dynamics, employing frame-level attention to capture the dependency between garments and body motions. A post-processing procedure then performs penetration removal and auto-texturing, producing textured clothing animation that is collision-free and temporally consistent. We evaluated the proposed workflow quantitatively and qualitatively from different aspects. Extensive experiments demonstrate that our network delivers clothing dynamics that retain the wrinkles of physics-based simulation while running 1000 times faster, and our workflow achieves superior synthesis performance compared with alternative approaches. To stimulate further research in this direction, our code will be made publicly available soon.
Keywords: clothing animation; computer graphics; transformer; temporal consistency
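The penetration-removal post-processing step can be illustrated with a minimal sketch that projects any garment vertex penetrating a body proxy back outside the surface along the outward normal. A sphere stands in for the body here purely to keep the example self-contained; the paper's method operates on an actual body mesh.

```python
import numpy as np

def remove_penetration(verts, center, radius, margin=0.005):
    """Push garment vertices that penetrate a spherical body proxy back
    outside the surface along the outward normal.

    verts: (N, 3) garment vertex positions. A real body would provide
    signed distances from a mesh; a sphere keeps the sketch self-contained.
    """
    offsets = verts - center                        # (N, 3)
    dist = np.linalg.norm(offsets, axis=1)          # distance to body center
    inside = dist < radius + margin                 # penetrating vertices
    normals = offsets[inside] / dist[inside, None]  # outward unit normals
    fixed = verts.copy()
    fixed[inside] = center + normals * (radius + margin)
    return fixed

verts = np.random.uniform(-1, 1, size=(100, 3))
fixed = remove_penetration(verts, center=np.zeros(3), radius=0.8)
print((np.linalg.norm(fixed, axis=1) >= 0.8).all())  # True: collision-free
```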
6. Temporally Consistent Depth Map Prediction Using Deep Convolutional Neural Network and Spatial-Temporal Conditional Random Field
Authors: Xu-Ran Zhao, Xun Wang, Qi-Chao Chen. Journal of Computer Science & Technology (SCIE, EI, CSCD), 2017, No. 3, pp. 443-456 (14 pages).
Methods based on deep convolutional neural networks (DCNNs) have recently kept setting new records on the task of predicting depth maps from monocular images. When dealing with video-based applications such as 2D (two-dimensional) to 3D (three-dimensional) video conversion, however, these approaches tend to produce temporally inconsistent depth maps, since their CNN models are optimized over single frames. In this paper, we address this problem by introducing a novel spatial-temporal conditional random field (CRF) model into the DCNN architecture, which is able to enforce temporal consistency between depth map estimations over consecutive video frames. In our approach, temporally consistent superpixels (TSP) are first extracted from an image sequence to establish the correspondence of targets in consecutive frames. A DCNN then regresses the depth value of each temporal superpixel, followed by a spatial-temporal CRF layer that models the relationships among the estimated depths in both the spatial and temporal domains. The parameters of the DCNN and CRF models are jointly optimized with back-propagation. Experimental results show that our approach not only significantly enhances the temporal consistency of estimated depth maps over existing single-frame-based approaches, but also improves depth estimation accuracy in terms of various evaluation metrics.
Keywords: depth estimation; temporal consistency; convolutional neural network; conditional random fields
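To show the flavor of the spatial-temporal CRF, the sketch below minimizes a quadratic energy whose unary terms anchor each temporal superpixel's depth to the CNN prediction and whose pairwise terms encourage agreement between spatial and temporal neighbors, solved with Gauss-Seidel updates. This stand-alone solver is an assumption for illustration; in the paper the CRF layer is trained jointly with the DCNN.

```python
import numpy as np

def crf_smooth(unary_depth, edges, weights, lam=1.0, iters=50):
    """Minimize sum_i (d_i - u_i)^2 + lam * sum_(i,j) w_ij (d_i - d_j)^2
    by Gauss-Seidel updates; edges connect spatial AND temporal neighbors.

    unary_depth: (N,) CNN depth per temporal superpixel.
    edges: list of (i, j) neighbor pairs; weights: matching w_ij values.
    """
    n = len(unary_depth)
    nbrs = [[] for _ in range(n)]            # adjacency with edge weights
    for (i, j), w in zip(edges, weights):
        nbrs[i].append((j, w))
        nbrs[j].append((i, w))
    d = unary_depth.astype(float).copy()
    for _ in range(iters):
        for i in range(n):
            # Setting dE/dd_i = 0 gives the closed-form coordinate update.
            wsum = sum(w for _, w in nbrs[i])
            dsum = sum(w * d[j] for j, w in nbrs[i])
            d[i] = (unary_depth[i] + lam * dsum) / (1 + lam * wsum)
    return d

# Three superpixels tracked over frames; edges mix spatial and temporal links.
u = np.array([1.0, 5.0, 1.2])
d = crf_smooth(u, edges=[(0, 1), (1, 2), (0, 2)], weights=[1.0, 1.0, 1.0])
print(d)  # the outlier at index 1 is pulled toward its neighbors
```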