Abstract: In recent years, speech-driven 3D facial animation has been widely studied. Although previous work can generate coherent 3D facial animation from speech data, the scarcity of audio-visual data means the generated animations lack realism and vividness, and the accuracy of lip movements is limited. To improve the accuracy and vividness of lip movements, this paper proposes a new end-to-end neural network model, HBF Talk. It uses the HuBERT (Hidden-Unit BERT) pre-trained model to extract and encode features from the speech data, and introduces a Flash module to further encode the extracted speech feature representations, yielding richer contextual representations of the speech features. Finally, a biased cross-modal Transformer decoder performs the decoding. Quantitative and qualitative experiments comparing against existing baseline models show that the proposed HBF Talk model performs better, improving the accuracy and vividness of speech-driven lip movements.
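The pipeline described in the abstract (pre-trained speech encoder → further contextual encoding → cross-modal Transformer decoder → facial motion) can be sketched in PyTorch. This is a hypothetical, minimal sketch only: the projection layer, the stand-ins for HuBERT features and the Flash module, the layer sizes, and the vertex dimension are all illustrative assumptions, not the paper's actual implementation.

```python
import torch
import torch.nn as nn

class HBFTalkSketch(nn.Module):
    """Illustrative sketch of the speech-to-3D-face pipeline in the abstract.

    Assumptions (not from the paper): HuBERT features are pre-extracted
    (768-dim frames); the Flash module is approximated here by a single
    Transformer encoder layer; the output is a flat per-frame vertex
    offset vector (5023 vertices x 3 coordinates, as in FLAME-style meshes).
    """

    def __init__(self, feat_dim=768, d_model=256, vertex_dim=5023 * 3):
        super().__init__()
        # Project pre-extracted HuBERT speech features into the model dim.
        self.audio_proj = nn.Linear(feat_dim, d_model)
        # Placeholder for the Flash module: further contextual encoding
        # of the speech feature sequence.
        self.flash = nn.TransformerEncoderLayer(d_model, nhead=4,
                                                batch_first=True)
        # Cross-modal Transformer decoder: motion queries attend to the
        # encoded speech features (the "biased" attention of the paper
        # is omitted in this sketch).
        dec_layer = nn.TransformerDecoderLayer(d_model, nhead=4,
                                               batch_first=True)
        self.decoder = nn.TransformerDecoder(dec_layer, num_layers=2)
        self.head = nn.Linear(d_model, vertex_dim)

    def forward(self, speech_feats, motion_queries):
        # speech_feats:   (batch, audio_frames, feat_dim)
        # motion_queries: (batch, video_frames, d_model)
        memory = self.flash(self.audio_proj(speech_feats))
        h = self.decoder(motion_queries, memory)
        return self.head(h)  # (batch, video_frames, vertex_dim)
```

A forward pass with 10 audio frames and 8 animation frames produces one flat vertex-offset vector per animation frame; in the actual model these offsets would deform a template face mesh.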