Abstract
In order to generate video-driven facial expression animation, a performance-driven 2-D facial expression synthesis method is presented. First, the key points on the face are located with the active appearance model (AAM) algorithm, and the motion parameters of the face are extracted from these key points. Second, the face is divided into several regions, and several example expression images of the target face are acquired. Finally, the interpolation coefficients are computed from the face motion parameters, and the corresponding expression images of the target face are synthesized by a linear combination of the example images. The method is simple and effective, generates highly realistic results, and can be applied in fields such as digital entertainment and video conferencing.
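The final synthesis step described above, blending several example expression images with interpolation coefficients, can be sketched as follows. This is an illustrative reconstruction, not the paper's implementation; the function name, the weight normalization, and the toy images are assumptions.

```python
import numpy as np

def blend_examples(examples, weights):
    """Linearly combine example expression images with interpolation
    weights. A minimal sketch of example-based synthesis: names and
    the normalization step are illustrative, not from the paper."""
    w = np.asarray(weights, dtype=float)
    w = w / w.sum()  # normalize so pixel values stay in range
    stack = np.stack([np.asarray(img, dtype=float) for img in examples])
    # Weighted sum over the example axis: sum_i w[i] * stack[i]
    return np.tensordot(w, stack, axes=1)

# Two toy 2x2 "expression images"; equal weights yield their
# pixel-wise average.
neutral = np.zeros((2, 2))
smile = np.full((2, 2), 200.0)
mid = blend_examples([neutral, smile], [1.0, 1.0])
```

In the paper's setting the weights would come from the AAM motion parameters of the performer, computed per face region rather than globally, so that, e.g., the mouth and eye regions can be blended with different coefficients.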
Source
Computer Engineering and Design (《计算机工程与设计》)
CSCD
Peking University Core Journal (北大核心)
2012, No. 8, pp. 3144-3148 (5 pages)
Funding
National Natural Science Foundation of China (60903145)
Xuzhou Institute of Technology Foundation Projects (XKY2009115, XKY2008107)
Keywords
face modeling
morphable model
expression animation
animation data interpolation
eye processing
performance-driven
region division
expression parameter
expression map
example images