Abstract
To address the limitations of existing monocular 3D face reconstruction methods in capturing fine details and preserving identity information, this paper proposes a coarse-to-fine framework for 3D face reconstruction. The framework first generates an initial 3D face model from feature parameters extracted from a 2D face image and employs a multi-scale identity feature extractor to capture personalized characteristics. An adaptive weighting strategy then selects the features that contribute most to the reconstruction task. The fine reconstruction stage focuses on geometric detail: identity and expression codes are fed into a geometric detail generation network to produce details specific to the individual's identity and expression. Finally, a differentiable renderer converts the 3D face model back into a 2D face image for self-supervised training. Experimental results on the CelebA and AFLW2000-3D datasets show that the proposed framework reconstructs more realistic, natural, and highly personalized 3D face models from a single image, outperforming existing methods in both detail capture and identity preservation, and thus holds promise for a wide range of applications.
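As a purely illustrative aid (not the authors' implementation), the sketch below shows one way such a coarse-to-fine pipeline could be organized: a parameter encoder for the coarse model, multi-scale identity branches combined by adaptive weights, and a detail network conditioned on identity and expression codes. All module names, dimensions, and layer choices are hypothetical assumptions; the differentiable rendering and self-supervised losses described in the abstract are omitted.

# Illustrative sketch only: a minimal coarse-to-fine 3D face reconstruction
# pipeline in the spirit of the abstract. Module names, dimensions, and layer
# choices are hypothetical placeholders, not the paper's actual architecture.
import torch
import torch.nn as nn

class CoarseToFineFace(nn.Module):
    def __init__(self, n_id=80, n_exp=64, n_verts=35709, detail_dim=128):
        super().__init__()
        # Coarse stage: regress identity/expression parameters from the image
        # (a 3DMM-style parametric face model is assumed here).
        self.param_encoder = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(64, n_id + n_exp),
        )
        # Multi-scale identity features, combined by learned adaptive weights.
        self.id_branches = nn.ModuleList([
            nn.Sequential(nn.Conv2d(3, 16, k, padding=k // 2),
                          nn.AdaptiveAvgPool2d(1), nn.Flatten())
            for k in (3, 5, 7)
        ])
        self.branch_weights = nn.Parameter(torch.ones(3))
        # Fine stage: identity + expression codes -> per-vertex displacements.
        self.detail_net = nn.Sequential(
            nn.Linear(n_id + n_exp + 16, detail_dim), nn.ReLU(),
            nn.Linear(detail_dim, n_verts * 3),
        )
        self.n_id, self.n_verts = n_id, n_verts

    def forward(self, img):
        params = self.param_encoder(img)
        id_code, exp_code = params[:, :self.n_id], params[:, self.n_id:]
        # Adaptive weighting: softmax over the multi-scale branch importances.
        w = torch.softmax(self.branch_weights, dim=0)
        id_feat = sum(wi * branch(img) for wi, branch in zip(w, self.id_branches))
        # Geometric detail conditioned on identity and expression codes.
        detail = self.detail_net(torch.cat([id_code, exp_code, id_feat], dim=1))
        return id_code, exp_code, detail.view(-1, self.n_verts, 3)

model = CoarseToFineFace()
ids, exps, displacements = model(torch.randn(2, 3, 224, 224))
print(displacements.shape)  # torch.Size([2, 35709, 3])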
Source
Computer Science and Application (《计算机科学与应用》)
2024, No. 4, pp. 255-267 (13 pages)