
Statistical learning based facial animation

Abstract: To synthesize real-time and realistic facial animation, we present an effective algorithm which combines image- and geometry-based methods for facial animation simulation. Considering the numerous motion units in the expression coding system, we present a novel simplified motion unit based on the basic facial expression, and construct the corresponding basic action for a head model. As image features are difficult to obtain using the performance-driven method, we develop an automatic image feature recognition method based on statistical learning, and a semi-automatic expression image labeling method with rotation-invariant face detection, which improve the accuracy and efficiency of expression feature identification and training. After facial animation redirection, each basic action weight needs to be computed and mapped automatically. We apply the blend shape method to construct and train the corresponding expression database according to each basic action, and adopt the least squares method to compute the corresponding control parameters for facial animation. Moreover, we pre-integrate the diffuse and specular light distributions using a physically based method, to improve the plausibility and efficiency of facial rendering. Our work simplifies the facial motion unit, optimizes the statistical training and recognition processes for facial animation, solves for the expression parameters, and simulates the subsurface scattering effect in real time. Experimental results indicate that our method is effective and efficient, and suitable for computer animation and interactive applications.
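As a rough illustration of the weight-solving step described above, the sketch below assumes tracked feature coordinates are stacked into a vector and each basic action contributes one column of a blend shape basis; the function name, variable names, and the bounded least-squares solver are illustrative assumptions, not the authors' implementation.

# Minimal sketch, assuming feature vectors are already extracted and aligned;
# names and the bounded solver are illustrative, not the paper's code.
import numpy as np
from scipy.optimize import lsq_linear

def solve_blendshape_weights(frame_feats, neutral_feats, basis):
    # frame_feats   : (m,) tracked feature coordinates for the current frame
    # neutral_feats : (m,) the same features on the neutral expression
    # basis         : (m, k) column j = feature displacement of basic action j at full strength
    delta = frame_feats - neutral_feats
    # Bound weights to [0, 1] so each basic action stays in its valid range;
    # a plain np.linalg.lstsq solve would equally match the least-squares step in the abstract.
    return lsq_linear(basis, delta, bounds=(0.0, 1.0)).x

# Synthetic usage example: 40 feature coordinates, 6 basic actions.
rng = np.random.default_rng(0)
basis = rng.normal(size=(40, 6))
true_w = rng.uniform(0.0, 1.0, size=6)
neutral = rng.normal(size=40)
frame = neutral + basis @ true_w + 0.01 * rng.normal(size=40)
print(solve_blendshape_weights(frame, neutral, basis))

Per-frame weights obtained this way can then drive the corresponding basic actions on the head model, as the abstract describes.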
Source: Journal of Zhejiang University-Science C (Computers and Electronics) [浙江大学学报C辑(计算机与电子)(英文版)], SCIE, EI, 2013, Issue 7, pp. 542-550 (9 pages)
Funding: Supported by the 2013 Annual Beijing Technological and Cultural Fusion for Demonstrated Base Construction and Industrial Nurture (No. Z131100000113007), and the National Natural Science Foundation of China (Nos. 61202324, 61271431, and 61271430)
Keywords: Facial animation, Motion unit, Statistical learning, Realistic rendering, Pre-integration

