Abstract
Background With the development of virtual reality (VR) technology, there is a growing need for customized 3D avatars. However, traditional methods for 3D avatar modeling are either time-consuming or fail to retain similarity to the person being modeled. This study presents a novel framework for generating animatable 3D cartoon faces from a single portrait image.
Methods First, we transferred an input real-world portrait to a stylized cartoon image using StyleGAN. We then proposed a two-stage reconstruction method to recover a 3D cartoon face with detailed texture. Our two-stage strategy initially performs a coarse estimation based on template models and subsequently refines the model by non-rigid deformation under landmark supervision. Finally, we proposed a semantic-preserving face-rigging method based on manually created templates and deformation transfer.
Conclusions Compared with prior art, the qualitative and quantitative results show that our method achieves better accuracy, aesthetics, and similarity. Furthermore, we demonstrated the capability of the proposed 3D model for real-time facial animation.
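The landmark-supervised non-rigid refinement mentioned in the Methods summary can be sketched roughly as follows. This is only an illustrative outline under assumed data structures, not the authors' implementation; names such as coarse_verts, landmark_idx, target_landmarks, and edges are hypothetical.

```python
# Minimal sketch (assumption, not the paper's code) of refining a coarse template
# mesh with per-vertex offsets so that selected vertices match facial landmarks,
# with a simple smoothness term keeping the deformation gentle.
import torch

def refine_mesh(coarse_verts, landmark_idx, target_landmarks,
                edges, steps=200, lr=1e-2, w_smooth=10.0):
    """coarse_verts: (V, 3) template vertices from the coarse stage.
    landmark_idx: (L,) indices of mesh vertices tied to detected landmarks.
    target_landmarks: (L, 3) landmark positions recovered from the cartoon image.
    edges: (E, 2) vertex index pairs used for the smoothness regularizer."""
    offsets = torch.zeros_like(coarse_verts, requires_grad=True)
    opt = torch.optim.Adam([offsets], lr=lr)
    for _ in range(steps):
        verts = coarse_verts + offsets
        # Landmark term: pull the selected vertices toward the target landmarks.
        lm_loss = ((verts[landmark_idx] - target_landmarks) ** 2).mean()
        # Smoothness term: neighboring vertices should receive similar offsets.
        smooth = ((offsets[edges[:, 0]] - offsets[edges[:, 1]]) ** 2).mean()
        loss = lm_loss + w_smooth * smooth
        opt.zero_grad()
        loss.backward()
        opt.step()
    return (coarse_verts + offsets).detach()
```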
Source
《虚拟现实与智能硬件(中英文)》
EI
2024, Issue 4, pp. 292-307 (16 pages)
Virtual Reality & Intelligent Hardware