
An Unsupervised Real-Time Facial Animation Algorithm Based on Geometric Measurements (Cited by: 1)

Unsupervised Algorithm of Real-Time Facial Animation by Geometric Measurements
Abstract: Current facial-expression animation algorithms are difficult to popularize among ordinary users because they typically rely on expensive capture devices, per-user pre-collection of expression data, or operators with specialist knowledge. To address these shortcomings, this paper adopts the affordable and easy-to-operate Kinect as the capture device and proposes a facial-expression capture algorithm that requires no preprocessing. First, facial feature points are extracted from the captured RGBD data, geometric measurements are used to link the low-level feature points to high-level expression semantics, and a sample set of geometric measurements is built according to weighting and compensation strategies. Second, the sample distribution is analyzed automatically in an unsupervised manner to infer the variation interval of each expression unit, enabling real-time extraction of expression parameters. Finally, the expression parameters drive a universal Blendshape expression basis generated offline, producing facial animation that reflects the user's mood. During basis generation, the concept of a control-point influence region is introduced for the first time to constrain the Laplacian deformation algorithm and thereby improve the accuracy of the universal Blendshape basis. Experimental results show that the method is simple and practical: without pre-collecting expression data for each user, it generates facial animation resembling the user in real time and robustly, even when multiple people appear simultaneously or the face is partially occluded. In subjective evaluation, the method was judged to offer flexible capture, ease of use, and reliable real-time performance, making it well suited for adoption by ordinary users.
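The abstract's pipeline (infer each expression unit's variation interval from unlabeled geometric-measurement samples, normalize a live measurement into an expression parameter, then drive a Blendshape basis) can be sketched roughly as below. This is a minimal illustration under stated assumptions: the function names, the simple min/max interval estimate, and the linear delta-blendshape formula are assumptions for the sketch, not the authors' exact formulation.

```python
import numpy as np

def infer_unit_range(samples):
    """Estimate the variation interval of one expression unit from
    unlabeled geometric-measurement samples (assumption: here simply
    the observed extremes; the paper's unsupervised analysis is richer)."""
    return float(np.min(samples)), float(np.max(samples))

def measurement_to_parameter(m, lo, hi):
    """Map a raw geometric measurement into a [0, 1] expression
    parameter using the inferred interval."""
    if hi <= lo:
        return 0.0
    return float(np.clip((m - lo) / (hi - lo), 0.0, 1.0))

def drive_blendshapes(neutral, basis, params):
    """Standard delta-blendshape synthesis: the neutral mesh plus a
    weighted sum of expression-basis offsets."""
    out = neutral.copy()
    for b, w in zip(basis, params):
        out += w * (b - neutral)
    return out

# Toy example: one expression unit and a 3-vertex "mesh" in 3D.
samples = np.array([0.2, 0.5, 0.9, 0.4])   # geometric measurements over time
lo, hi = infer_unit_range(samples)
p = measurement_to_parameter(0.55, lo, hi)  # midpoint of [0.2, 0.9] -> 0.5
neutral = np.zeros((3, 3))
smile = np.ones((3, 3))                     # hypothetical expression basis shape
mesh = drive_blendshapes(neutral, [smile], [p])
```

In practice one such parameter would be extracted per expression unit per frame, and the full basis would contain one target shape per unit.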
Authors: JIANG Na, LIU Shao-Long, SHI Feng, ZHOU Zhong (State Key Laboratory of Virtual Reality Technology and Systems, Beihang University, Beijing 100191)
Source: Chinese Journal of Computers (《计算机学报》), indexed in EI / CSCD / PKU Core, 2017, No. 11, pp. 2478-2491 (14 pages)
Funding: National High-Tech R&D Program of China (863 Program) (2015AA016403); National Natural Science Foundation of China (61472020)
Keywords: Kinect; face tracking; Blendshape models; facial animation; performance-driven
