Abstract
In the field of facial animation generation, overcoming the complexity of facial geometry has long been a highly challenging task. To better address this challenge, this paper proposes an innovative approach: audio features extracted through stacked one-dimensional convolutions and self-attention are fed into a Transformer model, which generates facial animation directly from the audio signal. During this process, a temporal autoregressive model progressively synthesizes the facial motion. Experiments on the BIWI dataset show that the method reduces the lip vertex error rate to a satisfactory 6.123%, with a synchronization rate 79.64% higher than that of MeshTalk. This indicates that the proposed method performs well in lip synchronization and facial expression generation, shows high potential for facial animation generation tasks, and provides direction and reference for future related research.
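As a rough illustration of the pipeline the abstract describes (stacked 1-D convolutions and self-attention for audio encoding, followed by a Transformer that autoregressively synthesizes facial motion), here is a minimal PyTorch sketch. All module names, layer sizes, the mel-spectrogram input, and the vertex count are illustrative assumptions, not the authors' implementation.

```python
# Hypothetical sketch of the described pipeline; dimensions and names are assumed.
import torch
import torch.nn as nn

class AudioEncoder(nn.Module):
    """Stacked 1-D convolutions followed by self-attention over time."""
    def __init__(self, in_dim=80, hid=256, heads=4):
        super().__init__()
        self.convs = nn.Sequential(                        # 1-D conv stack
            nn.Conv1d(in_dim, hid, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv1d(hid, hid, kernel_size=3, padding=1), nn.ReLU(),
        )
        self.attn = nn.MultiheadAttention(hid, heads, batch_first=True)

    def forward(self, mel):                                # mel: (B, T, in_dim)
        h = self.convs(mel.transpose(1, 2)).transpose(1, 2)  # (B, T, hid)
        h, _ = self.attn(h, h, h)                          # self-attention over frames
        return h

class FaceDecoder(nn.Module):
    """Transformer decoder that emits per-frame vertex positions autoregressively."""
    def __init__(self, hid=256, n_verts=23370):            # BIWI-like vertex count (assumed)
        super().__init__()
        layer = nn.TransformerDecoderLayer(hid, nhead=4, batch_first=True)
        self.decoder = nn.TransformerDecoder(layer, num_layers=4)
        self.in_proj = nn.Linear(n_verts * 3, hid)
        self.out_proj = nn.Linear(hid, n_verts * 3)

    @torch.no_grad()
    def generate(self, audio_feat, n_frames):
        # Time-autoregressive synthesis: each new frame is conditioned on all
        # previously generated frames (queries) and the audio features (memory).
        B = audio_feat.size(0)
        frames = [torch.zeros(B, 1, self.out_proj.out_features,
                              device=audio_feat.device)]   # neutral start frame
        for _ in range(n_frames):
            prev = torch.cat(frames, dim=1)                # (B, t, 3V) motion so far
            q = self.in_proj(prev)                         # embed past motion
            h = self.decoder(q, audio_feat)                # cross-attend to audio
            frames.append(self.out_proj(h[:, -1:]))        # next-frame vertices
        return torch.cat(frames[1:], dim=1)                # (B, n_frames, 3V)

enc, dec = AudioEncoder(), FaceDecoder()
mel = torch.randn(1, 100, 80)                              # 100 frames of assumed mel features
motion = dec.generate(enc(mel), n_frames=100)
print(motion.shape)                                        # torch.Size([1, 100, 70110])
```

At each step the decoder cross-attends to the encoded audio and to all previously generated frames before emitting the next one, which is the time-autoregressive scheme the abstract refers to.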
Authors
DOU Ziwen; LI Wenshu (School of Computer Science and Technology, Zhejiang Sci-Tech University, Hangzhou 310018, China)
Source
《软件工程》 (Software Engineering)
2023, Issue 12, pp. 59-62 (4 pages)