Abstract
With the development of computer vision, image-based virtual try-on methods have made great progress. However, when these methods are applied directly to video virtual try-on, the lack of spatio-temporal consistency leads to inconsistency and incoherence between video frames, which severely degrades the visual quality of the video. To address this problem, this paper proposes a video virtual try-on model improved with optical flow. First, the clothing is warped using a thin-plate spline interpolation transformation with a regularized correction loss; then a U-Net network performs the clothing try-on, while the optical flow of the clothing region in the original video supervises the optical flow of the clothing region in the synthesized video. Experimental results show that the proposed optical-flow-based method effectively solves the spatio-temporal consistency problem in virtual try-on videos.
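The optical-flow supervision described above compares the flow of the clothing region in the original video with that of the synthesized video. A minimal sketch of such a masked flow-consistency loss is shown below; the function name, the `(H, W, 2)` flow layout, and the masking scheme are illustrative assumptions, not details taken from the paper.

```python
import numpy as np

def flow_consistency_loss(flow_real, flow_synth, clothing_mask):
    """L1 loss between two dense flow fields, restricted to the clothing region.

    flow_real, flow_synth: (H, W, 2) arrays of per-pixel (dx, dy) displacements.
    clothing_mask: (H, W) array, 1 inside the clothing region, 0 elsewhere.
    (Hypothetical sketch; the paper's actual loss may differ.)
    """
    diff = np.abs(flow_real - flow_synth)       # per-pixel flow error, (H, W, 2)
    masked = diff * clothing_mask[..., None]    # zero out non-clothing pixels
    denom = clothing_mask.sum() * 2 + 1e-8      # 2 flow channels per pixel
    return masked.sum() / denom
```

Averaging only over masked pixels keeps the loss magnitude independent of how much of the frame the garment covers.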
Author
HU Ankang (School of Computer Science and Technology, Donghua University, Shanghai 201620, China)
Source
《智能计算机与应用》
2023, No. 12, pp. 114-119 (6 pages)
Intelligent Computer and Applications
Keywords
video virtual try-on
optical flow
spatio-temporal consistency
thin-plate spline interpolation transform