Deep Video Harmonization by Improving Spatial-temporal Consistency

Abstract: Video harmonization is an important step in video editing that achieves visual consistency by adjusting foreground appearances in both spatial and temporal dimensions. Previous methods typically harmonize only on a single scale or ignore the inaccuracy of flow estimation, which limits harmonization performance. In this work, we propose a novel architecture for video harmonization that makes full use of spatiotemporal features and yields temporally consistent harmonized results. We introduce multiscale harmonization, using nonlocal similarity on each scale to make the foreground more consistent with the background. We also propose a foreground temporal aggregator that dynamically aggregates neighboring frames at the feature level, alleviating the effect of inaccurately estimated flow and ensuring temporal consistency. Experimental results demonstrate the superiority of our method over other state-of-the-art methods in both quantitative and visual comparisons.
Source: Machine Intelligence Research (机器智能研究(英文版)), EI CSCD, 2024, Issue 1, pp. 46-54 (9 pages).
Funding: This work was supported by the National Natural Science Foundation of China (No. 62001432) and the Fundamental Research Funds for the Central Universities, China (Nos. CUC18LG024 and CUC22JG001).
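The abstract does not include implementation details, but the idea behind the foreground temporal aggregator can be illustrated with a minimal PyTorch-style sketch: neighboring-frame features are warped to the current frame with estimated optical flow, then blended through learned per-pixel weights so that regions with unreliable flow contribute less, and only the foreground region is replaced. All names here (warp_with_flow, ForegroundTemporalAggregator, weight_net) are hypothetical and not the authors' implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


def warp_with_flow(feat, flow):
    """Backward-warp a feature map (B, C, H, W) using flow (B, 2, H, W)."""
    b, _, h, w = feat.shape
    ys, xs = torch.meshgrid(
        torch.arange(h, device=feat.device, dtype=feat.dtype),
        torch.arange(w, device=feat.device, dtype=feat.dtype),
        indexing="ij",
    )
    grid_x = xs.unsqueeze(0) + flow[:, 0]
    grid_y = ys.unsqueeze(0) + flow[:, 1]
    # Normalize sampling coordinates to [-1, 1] for grid_sample.
    grid = torch.stack(
        (2.0 * grid_x / max(w - 1, 1) - 1.0, 2.0 * grid_y / max(h - 1, 1) - 1.0),
        dim=-1,
    )
    return F.grid_sample(feat, grid, align_corners=True)


class ForegroundTemporalAggregator(nn.Module):
    """Dynamically weight warped neighbor features against the current frame."""

    def __init__(self, channels):
        super().__init__()
        # Predict a per-pixel confidence for each warped neighbor feature,
        # so regions with inaccurately estimated flow contribute less.
        self.weight_net = nn.Sequential(
            nn.Conv2d(channels * 2, channels, 3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels, 1, 3, padding=1),
        )

    def forward(self, curr_feat, neighbor_feats, flows, fg_mask):
        # curr_feat: (B, C, H, W); neighbor_feats: list of (B, C, H, W)
        # flows: list of (B, 2, H, W), each neighbor -> current frame
        # fg_mask: (B, 1, H, W) foreground mask of the composite
        warped = [warp_with_flow(f, fl) for f, fl in zip(neighbor_feats, flows)]
        logits = [self.weight_net(torch.cat([curr_feat, w], dim=1)) for w in warped]
        weights = torch.softmax(torch.stack(logits, dim=0), dim=0)
        aggregated = sum(w * feat for w, feat in zip(weights, warped))
        # Aggregate only inside the foreground; the background keeps its own features.
        return fg_mask * aggregated + (1.0 - fg_mask) * curr_feat
```

The dynamic per-pixel weighting is the key design choice suggested by the abstract: instead of trusting warped features directly, the aggregator learns where the flow is reliable, which is what keeps the harmonized foreground temporally consistent despite flow errors.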