Journal articles
3 articles found
1. Face animation based on multiple sources and perspective alignment
Authors: Yuanzong MEI, Wenyi WANG, Xi LIU, Wei YONG, Weijie WU, Yifan ZHU, Shuai WANG, Jianwen CHEN. 《虚拟现实与智能硬件(中英文)》 (Virtual Reality & Intelligent Hardware), EI, 2024, Issue 3, pp. 252-266 (15 pages)
Abstract: Background: Face image animation generates a synthetic human face video that harmoniously integrates the identity derived from the source image and facial motion obtained from the driving video. This technology could be beneficial in multiple medical fields, such as diagnosis and privacy protection. Previous studies on face animation often relied on a single source image to generate an output video. With a significant pose difference between the source image and the driving frame, the quality of the generated video is likely to be suboptimal because the source image may not provide sufficient features for the warped feature map. Methods: In this study, we propose a novel face-animation scheme based on multiple sources and perspective alignment to address these issues. We first introduce a multiple-source sampling and selection module to screen the optimal source image set from the provided driving video. We then propose an inter-frame interpolation and alignment module to further eliminate the misalignment between the selected source image and the driving frame. Conclusions: The proposed method exhibits superior performance in terms of objective metrics and visual quality in large-angle animation scenes compared to other state-of-the-art face animation methods. This indicates the effectiveness of the proposed method in addressing the distortion issues in large-angle animation.
Keywords: face animation, multiple-source driving, generative adversarial network, medical diagnostics
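To make the idea of multiple-source selection concrete, here is a minimal illustrative sketch (not the authors' implementation): for each driving frame, candidate source frames are ranked by how close their head poses are, assuming pose triplets (yaw, pitch, roll) are already available from some pose estimator. All names and values below are hypothetical.

```python
# Illustrative sketch of pose-based multi-source selection (not the paper's code).
# Assumes head poses (yaw, pitch, roll) in degrees are already estimated for both
# the candidate source frames and the driving frames.
import numpy as np

def select_sources(source_poses, driving_poses, k=1):
    """For each driving pose, return indices of the k source frames whose
    head poses are closest in Euclidean angle distance."""
    src = np.asarray(source_poses, dtype=float)   # shape (S, 3)
    drv = np.asarray(driving_poses, dtype=float)  # shape (D, 3)
    dists = np.linalg.norm(drv[:, None, :] - src[None, :, :], axis=-1)  # (D, S)
    return np.argsort(dists, axis=1)[:, :k]

if __name__ == "__main__":
    sources = [(0, 0, 0), (30, 5, 0), (-30, 5, 0)]   # hypothetical source poses
    driving = [(25, 0, 0), (-20, 3, 0), (2, 1, 0)]   # hypothetical driving poses
    print(select_sources(sources, driving, k=1))     # nearest source per driving frame
```

The paper additionally interpolates and aligns the selected sources to the driving frame; the nearest-pose ranking above only illustrates why a diverse source set helps when the pose difference is large.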
2. Ultra-Lightweight Face Animation Method for Ultra-Low Bitrate Video Conferencing
Authors: LU Jianguo, ZHENG Qingfang. ZTE Communications, 2023, Issue 1, pp. 64-71 (8 pages)
Abstract: Video conferencing systems face a dilemma between smooth streaming and decent visual quality because traditional video compression algorithms fail to produce bitstreams low enough for bandwidth-constrained networks. An ultra-lightweight face-animation-based method that enables a better video conferencing experience is proposed in this paper. The proposed method compresses high-quality upper-body videos at ultra-low bitrates and runs efficiently on mobile devices without high-end graphics processing units (GPUs). Moreover, a visual quality evaluation algorithm is used to avoid image degradation caused by extreme face poses and/or expressions, and a full-resolution image composition algorithm is used to reduce unnaturalness, which guarantees the user experience. Experiments show that the proposed method is efficient and can generate high-quality videos at ultra-low bitrates.
Keywords: talking heads, face animation, video conferencing, generative adversarial network
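As a rough, back-of-envelope illustration of why animating a face from transmitted keypoints can reach ultra-low bitrates (all figures below are my own assumptions, not taken from the paper), compare a keypoint-only stream with a typical conventional-codec bitrate:

```python
# Back-of-envelope comparison (illustrative assumptions, not from the paper):
# a keypoint-only stream versus a conventional low-bitrate video stream.

def keypoint_stream_kbps(num_keypoints=10, bytes_per_coord=2, fps=25):
    """Bitrate in kbit/s if each frame carries num_keypoints (x, y) pairs,
    each coordinate quantized to bytes_per_coord bytes."""
    bytes_per_frame = num_keypoints * 2 * bytes_per_coord
    return bytes_per_frame * 8 * fps / 1000.0

if __name__ == "__main__":
    assumed_codec_kbps = 300.0  # assumed figure for a conventional conferencing codec
    kp_kbps = keypoint_stream_kbps()
    print(f"keypoint stream : {kp_kbps:.1f} kbit/s")   # 8.0 kbit/s with these defaults
    print(f"conventional    : {assumed_codec_kbps:.1f} kbit/s")
```

A one-time reference image still has to be transmitted, and the abstract notes that visual quality is evaluated to handle extreme poses or expressions, but the tiny per-frame payload is what makes the ultra-low bitrate possible.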
3. Personalized Multi-View Face Animation with Lifelike Textures
Authors: LIU Yanghua (柳杨华), XU Guangyou (徐光祐). Tsinghua Science and Technology, SCIE, EI, CAS, 2007, Issue 1, pp. 51-57 (7 pages)
Abstract: Realistic personalized face animation mainly depends on a picture-perfect appearance and natural head rotation. This paper describes a face model for the generation of novel-view facial textures with various realistic expressions and poses. The model is learned from corpora of a talking person using machine learning techniques. In face modeling, the facial texture variation is expressed by a multi-view facial texture space model, with the facial shape variation represented by a compact 3-D point distribution model (PDM). The facial texture space and the shape space are connected by bridging 2-D mesh structures. Levenberg-Marquardt optimization is employed for fine model fitting. The animation trajectory is trained for smooth and continuous image sequences. Test results show that this approach can achieve a vivid talking-face sequence in various views. Moreover, the animation complexity is significantly reduced by the vector representation.
Keywords: face animation, point distribution model (PDM), texture, multi-view
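For readers unfamiliar with point distribution models, the following minimal sketch (not the authors' code) builds a 2-D PDM with PCA over a set of landmark shapes and reconstructs a shape from mode weights; the synthetic data and mode count are assumptions made purely for illustration.

```python
# Minimal 2-D point distribution model (PDM) sketch built with PCA; illustrative only.
import numpy as np

def build_pdm(shapes, n_modes=4):
    """shapes: (N, K, 2) array of N landmark sets with K points each.
    Returns the mean shape and the top n_modes principal modes of variation."""
    X = np.asarray(shapes, dtype=float).reshape(len(shapes), -1)  # (N, 2K)
    mean = X.mean(axis=0)
    _, s, vt = np.linalg.svd(X - mean, full_matrices=False)       # PCA via SVD
    variances = (s[:n_modes] ** 2) / max(len(shapes) - 1, 1)
    return mean, vt[:n_modes], variances

def synthesize(mean, modes, b):
    """Reconstruct a shape from mode weights b: x = mean + b @ modes."""
    return (mean + np.asarray(b, dtype=float) @ modes).reshape(-1, 2)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    base = rng.normal(size=(12, 2))                       # a nominal 12-point shape
    shapes = base + 0.05 * rng.normal(size=(40, 12, 2))   # 40 noisy synthetic samples
    mean, modes, var = build_pdm(shapes, n_modes=2)
    print(synthesize(mean, modes, [0.1, -0.2]).shape)     # -> (12, 2)
```

In the paper itself, the shape PDM is coupled with a multi-view texture space and fitted with Levenberg-Marquardt optimization; the sketch only covers the shape-model half of that pipeline.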