Fund: Project (2020JJ4032) supported by the Hunan Provincial Natural Science Foundation of China.
Abstract: Maintaining temporal consistency of real-time data is important for cyber-physical systems. Most previous studies focus on uniprocessor systems. In this paper, the problem of temporal consistency maintenance on multiprocessor platforms with instance skipping is formulated based on the (m,k)-constrained model, and a partitioned scheduling method, SC-AD, is proposed to solve it. SC-AD uses a derived sufficient schedulability condition to calculate the initial value of m for each sensor transaction and then partitions the transactions among the processors in a balanced way. To further reduce the average relative invalid time of the real-time data, SC-AD judiciously increases the values of m for the transactions assigned to each processor. Experimental results show that SC-AD outperforms the baseline methods in terms of average relative invalid time and average valid ratio under different system workloads.
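SC-AD's actual schedulability test and m-adjustment step are not given in this abstract; the Python sketch below only illustrates the balanced-partitioning idea, using a hypothetical Transaction record and a simple worst-fit-decreasing heuristic on an assumed (m/k)-scaled utilization metric.

```python
# Hypothetical sketch of a balanced partitioning step for (m,k)-constrained
# sensor transactions. The Transaction fields and the utilization metric are
# illustrative assumptions, not the paper's SC-AD formulas.
from dataclasses import dataclass

@dataclass
class Transaction:
    name: str
    C: float  # worst-case execution time
    P: float  # period
    m: int    # required instances out of every k
    k: int

    @property
    def utilization(self) -> float:
        # Effective demand when only m out of every k instances must complete.
        return (self.C / self.P) * (self.m / self.k)

def partition_balanced(transactions, num_procs):
    """Worst-fit decreasing: place each transaction on the least-loaded
    processor so per-processor load stays balanced."""
    loads = [0.0] * num_procs
    assignment = {p: [] for p in range(num_procs)}
    for tx in sorted(transactions, key=lambda t: t.utilization, reverse=True):
        target = min(range(num_procs), key=lambda p: loads[p])
        assignment[target].append(tx)
        loads[target] += tx.utilization
    return assignment, loads

if __name__ == "__main__":
    txs = [Transaction(f"T{i}", C=1.0 + 0.2 * i, P=10.0 + i, m=2, k=4)
           for i in range(8)]
    assignment, loads = partition_balanced(txs, num_procs=3)
    for p, group in assignment.items():
        print(p, [t.name for t in group], round(loads[p], 3))
```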
Fund: This work was supported by the National Natural Science Foundation of China (No. 62001432) and the Fundamental Research Funds for the Central Universities, China (Nos. CUC18LG024 and CUC22JG001).
Abstract: Video harmonization is an important step in video editing that achieves visual consistency by adjusting foreground appearances in both the spatial and temporal dimensions. Previous methods typically harmonize at only a single scale or ignore the inaccuracy of flow estimation, which limits harmonization performance. In this work, we propose a novel architecture for video harmonization that makes full use of spatiotemporal features and yields temporally consistent harmonized results. We introduce multiscale harmonization, using nonlocal similarity at each scale to make the foreground more consistent with the background. We also propose a foreground temporal aggregator that dynamically aggregates neighboring frames at the feature level to alleviate the effect of inaccurately estimated flow and to ensure temporal consistency. Experimental results demonstrate the superiority of our method over other state-of-the-art methods in both quantitative and visual comparisons.
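As a loose illustration of feature-level temporal aggregation (not the paper's actual foreground temporal aggregator), the PyTorch sketch below fuses neighboring-frame features with learned per-pixel attention weights, so the fused feature does not rely solely on warped flow; the layer sizes and the similarity measure are assumptions.

```python
# Illustrative sketch: aggregate neighboring-frame features with learned
# per-pixel weights instead of trusting a single warped (possibly wrong) flow.
import torch
import torch.nn as nn

class TemporalAggregator(nn.Module):
    def __init__(self, channels: int = 64):
        super().__init__()
        self.key = nn.Conv2d(channels, channels, kernel_size=1)
        self.query = nn.Conv2d(channels, channels, kernel_size=1)

    def forward(self, center: torch.Tensor, neighbors: torch.Tensor) -> torch.Tensor:
        # center:    (B, C, H, W) features of the current frame
        # neighbors: (B, T, C, H, W) features of T neighboring frames
        q = self.query(center)
        sims = []
        for t in range(neighbors.shape[1]):
            k = self.key(neighbors[:, t])
            sims.append((q * k).sum(dim=1, keepdim=True))   # per-pixel similarity
        weights = torch.softmax(torch.stack(sims, dim=1), dim=1)  # (B, T, 1, H, W)
        return (weights * neighbors).sum(dim=1)             # weighted fusion

if __name__ == "__main__":
    agg = TemporalAggregator(channels=64)
    center = torch.randn(2, 64, 32, 32)
    neighbors = torch.randn(2, 3, 64, 32, 32)
    print(agg(center, neighbors).shape)  # torch.Size([2, 64, 32, 32])
```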
Fund: Supported by the National Research Foundation of Korea grant funded by the Korea Ministry of Science and Technology under Grant No. 2012-0009228.
Abstract: In this paper, we propose a new algorithm for temporally consistent depth map estimation for generating three-dimensional video. The proposed algorithm adaptively computes the matching cost using a temporal weighting function, which is obtained by block-based moving object detection and motion estimation with variable block sizes. Experimental results show that the proposed algorithm improves the temporal consistency of the depth video and reduces both the flickering artefacts in the synthesized view and the number of coding bits for depth video coding by about 38%.
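A minimal sketch of the general idea of temporally weighting a matching cost is given below; the motion map and blending weight are placeholders, whereas the paper derives its weighting function from block-based moving-object detection and motion estimation with variable block sizes.

```python
# Minimal sketch of adding a temporal term to a matching-cost volume.
# The motion_map and alpha are placeholders, not the paper's weighting function.
import numpy as np

def temporally_weighted_cost(cost_current: np.ndarray,
                             cost_previous: np.ndarray,
                             motion_map: np.ndarray,
                             alpha: float = 0.5) -> np.ndarray:
    """Blend the current matching cost with the previous frame's cost.

    cost_current / cost_previous: (H, W, D) cost volumes over D disparities.
    motion_map: (H, W) in [0, 1]; 1 = strong motion (trust the current frame),
    0 = static (reuse the previous cost to suppress flicker).
    """
    w = alpha * (1.0 - motion_map)[..., None]   # per-pixel temporal weight
    return (1.0 - w) * cost_current + w * cost_previous

if __name__ == "__main__":
    H, W, D = 4, 5, 16
    cur = np.random.rand(H, W, D)
    prev = np.random.rand(H, W, D)
    motion = np.zeros((H, W))                   # fully static scene
    print(temporally_weighted_cost(cur, prev, motion).shape)
```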
Fund: Supported by grants from the National Natural Science Foundation of China (61906184), the Joint Lab of CAS–HK, and the Shanghai Committee of Science and Technology, China (20DZ1100800, 21DZ1100100).
Abstract: Video colorization is a challenging and highly ill-posed problem. Although recent years have witnessed remarkable progress in single-image colorization, there has been relatively little research effort on video colorization, and existing methods often suffer from severe flickering artifacts (temporal inconsistency) or unsatisfactory colorization. We address this problem from a new perspective, by jointly considering colorization and temporal consistency in a unified framework. Specifically, we propose a novel temporally consistent video colorization (TCVC) framework. TCVC effectively propagates frame-level deep features in a bidirectional way to enhance the temporal consistency of colorization. Furthermore, TCVC introduces a self-regularization learning (SRL) scheme to minimize the differences between predictions obtained using different time steps. SRL does not require any ground-truth color videos for training and can further improve temporal consistency. Experiments demonstrate that our method not only provides visually pleasing colorized video but also achieves clearly better temporal consistency than state-of-the-art methods. A video demo is provided at https://www.youtube.com/watch?v=c7dczMs-olE, and code is available at https://github.com/lyh-18/TCVC-Temporally-Consistent-Video-Colorization.
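The SRL scheme is only described at a high level here; a toy sketch of a self-regularization style objective, penalizing the discrepancy between two colorizations of the same frame obtained with different temporal contexts, might look as follows (the interface is hypothetical, not TCVC's actual loss).

```python
# Toy sketch: a self-regularization style loss that needs no ground-truth
# color video, only two predictions of the same frame from different
# temporal contexts (e.g., forward vs. backward propagation).
import torch

def self_regularization_loss(pred_a: torch.Tensor, pred_b: torch.Tensor) -> torch.Tensor:
    # pred_a, pred_b: (B, C, H, W) color predictions for the same frame.
    return torch.mean(torch.abs(pred_a - pred_b))

if __name__ == "__main__":
    a = torch.rand(1, 2, 64, 64)   # e.g., predicted chrominance channels
    b = torch.rand(1, 2, 64, 64)
    print(self_regularization_loss(a, b).item())
```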
Fund: Supported by the National Natural Science Foundation of China under Grant No. 61972379.
Abstract: Synthesizing garment dynamics according to body motions is a vital technique in computer graphics. Physics-based simulation depends on an accurate model of cloth kinetics, which is time-consuming, hard to implement, and complex to control. Existing data-driven approaches either lack temporal consistency or fail to handle garments whose topology differs from the body's. In this paper, we present a motion-inspired real-time garment synthesis workflow that enables high-level control of garment shape. Given a sequence of body motions, our workflow generates the corresponding garment dynamics with both spatial and temporal coherence. To that end, we develop a transformer-based garment synthesis network to learn the mapping from body motions to garment dynamics, employing frame-level attention to capture the dependency between garments and body motions. A post-processing procedure then performs penetration removal and auto-texturing, producing textured clothing animation that is collision-free and temporally consistent. We evaluate the proposed workflow quantitatively and qualitatively from different aspects. Extensive experiments demonstrate that our network delivers clothing dynamics that retain the wrinkles of physics-based simulation while running 1000 times faster, and that our workflow achieves superior synthesis performance compared with alternative approaches. To stimulate further research in this direction, our code will be made publicly available soon.
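For intuition only, the sketch below shows one way a transformer could map a body-motion sequence to per-frame garment vertex displacements; the pose dimension, vertex count, and layer sizes are invented for illustration and are not the paper's configuration.

```python
# Rough sketch: transformer encoder over a body-motion sequence regressing
# per-frame, per-vertex garment displacements. All sizes are assumptions.
import torch
import torch.nn as nn

class MotionToGarment(nn.Module):
    def __init__(self, pose_dim: int = 72, num_vertices: int = 4096, d_model: int = 256):
        super().__init__()
        self.embed = nn.Linear(pose_dim, d_model)
        layer = nn.TransformerEncoderLayer(d_model=d_model, nhead=8, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=4)
        self.head = nn.Linear(d_model, num_vertices * 3)

    def forward(self, poses: torch.Tensor) -> torch.Tensor:
        # poses: (B, T, pose_dim); frame-level attention lets each frame
        # attend to the whole motion clip.
        x = self.encoder(self.embed(poses))          # (B, T, d_model)
        out = self.head(x)                           # (B, T, num_vertices * 3)
        return out.view(*poses.shape[:2], -1, 3)     # per-frame vertex offsets

if __name__ == "__main__":
    net = MotionToGarment()
    motion = torch.randn(1, 30, 72)                  # 30 frames of pose parameters
    print(net(motion).shape)                         # torch.Size([1, 30, 4096, 3])
```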
Fund: This work is supported in part by the Natural Science Foundation of Zhejiang Province of China under Grant No. LQ17F030001, the National Natural Science Foundation of China under Grant No. U1609215, the Qianjiang Talent Program of Zhejiang Province of China under Grant No. QJD1602021, the National Key Technology Research and Development Program of the Ministry of Science and Technology of China under Grant No. 2014BAK14B01, and the Beihang University Virtual Reality Technology and System National Key Laboratory Open Project under Grant No. BUAA-VR-16KF-17.
Abstract: Methods based on deep convolutional neural networks (DCNNs) have recently kept setting new records on the task of predicting depth maps from monocular images. When dealing with video-based applications such as 2D (2-dimensional) to 3D (3-dimensional) video conversion, however, these approaches tend to produce temporally inconsistent depth maps, since their CNN models are optimized over single frames. In this paper, we address this problem by introducing a novel spatial-temporal conditional random field (CRF) model into the DCNN architecture, which enforces temporal consistency between depth map estimates over consecutive video frames. In our approach, temporally consistent superpixels (TSP) are first computed over the image sequence to establish the correspondence of targets in consecutive frames. A DCNN is then used to regress the depth value of each temporal superpixel, followed by a spatial-temporal CRF layer that models the relationship of the estimated depths in both the spatial and temporal domains. The parameters of the DCNN and CRF models are jointly optimized with backpropagation. Experimental results show that our approach not only significantly enhances the temporal consistency of the estimated depth maps over existing single-frame-based approaches, but also improves depth estimation accuracy in terms of various evaluation metrics.
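To make the CRF term concrete, the toy sketch below writes out the kind of energy a spatial-temporal CRF would minimize: a unary term anchoring each superpixel to its DCNN-regressed depth plus pairwise smoothness terms over spatial and temporal neighbors. The graph construction and weights are assumptions, not the paper's model.

```python
# Toy sketch of a spatial-temporal CRF-style energy over superpixel depths.
# Unary: stay close to DCNN prediction; pairwise: smoothness across spatial
# and temporal neighbor pairs. Edge lists and weights are assumptions.
import numpy as np

def crf_energy(depths, unary_depths, spatial_edges, temporal_edges,
               w_spatial=1.0, w_temporal=1.0):
    """depths, unary_depths: (N,) current / DCNN-predicted superpixel depths.
    spatial_edges / temporal_edges: lists of (i, j) superpixel index pairs."""
    energy = np.sum((depths - unary_depths) ** 2)          # unary term
    for i, j in spatial_edges:                             # spatial smoothness
        energy += w_spatial * (depths[i] - depths[j]) ** 2
    for i, j in temporal_edges:                            # temporal consistency
        energy += w_temporal * (depths[i] - depths[j]) ** 2
    return energy

if __name__ == "__main__":
    pred = np.array([1.0, 1.1, 3.0, 2.9])
    print(crf_energy(pred.copy(), pred,
                     spatial_edges=[(0, 1), (2, 3)],
                     temporal_edges=[(0, 2)]))
```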