Journal Articles
206 articles found
1. Application and Analysis of Computer Virtual Simulation Technology in Three-dimensional Animation Production
Authors: Le Qiu 《管理科学与研究(中英文版)》 2019, No. 1, pp. 19-21
With the continuous advance of computer technology, virtual simulation systems have been further optimized and improved, and are now widely used in many fields of social development, such as urban construction, interior design, industrial simulation, and tourism teaching. China's three-dimensional animation production started relatively late, but has achieved good results with the support of related advanced technology. Computer virtual simulation technology is an important technical foundation of three-dimensional animation production. This paper first introduces computer virtual simulation technology, then elaborates its specific applications in three-dimensional animation production, so as to provide a reference for improving production quality in the future.
Keywords: computer virtual simulation technology; three-dimensional animation production; application
2. Data-driven facial animation based on manifold Bayesian regression (cited by 3)
Authors: WANG Yu-shun, ZHUANG Yue-ting, WU Fei 《Journal of Zhejiang University-Science A (Applied Physics & Engineering)》 SCIE EI CAS CSCD 2006, No. 4, pp. 556-563
Driving facial animation from tens of tracked markers is a challenging task due to the complex topology and the non-rigid nature of human faces. We propose a solution named manifold Bayesian regression. First, a novel distance metric, the geodesic manifold distance, is introduced to replace the Euclidean distance. Facial animation can then be formulated as a sparse warping-kernel regression problem in which the geodesic manifold distance models the topology and discontinuities of the face. The geodesic manifold distance can be adopted in traditional regression methods, e.g., radial basis functions, without much tuning. We cast facial animation in a Bayesian regression framework, which provides an elegant way of dealing with noise and uncertainty. After the covariance matrix is properly modulated, Hybrid Monte Carlo is used to approximate the integration of probabilities and obtain deformation results. Experimental results show that our algorithm robustly produces facial animation with large motions and complex face models.
Keywords: facial animation; manifold; geodesic distance; Bayesian regression
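The geodesic-distance substitution described in the entry above can be sketched in a few lines: compute all-pairs shortest-path (geodesic) distances on a mesh graph, then feed that distance matrix into an ordinary RBF kernel solve in place of Euclidean distances. This is a minimal illustration, not the paper's Bayesian formulation; the toy graph, kernel width, and regularizer are invented for the example.

```python
import numpy as np

def geodesic_distances(adj):
    """All-pairs geodesic (shortest-path) distances on a mesh graph
    via Floyd-Warshall; adj[i, j] is the edge length or np.inf."""
    d = adj.copy()
    n = d.shape[0]
    np.fill_diagonal(d, 0.0)
    for k in range(n):
        d = np.minimum(d, d[:, k:k + 1] + d[k:k + 1, :])
    return d

def rbf_weights(dist, values, sigma=1.0, reg=1e-6):
    """Solve for RBF kernel weights from a precomputed distance matrix
    (geodesic here, Euclidean in the classical formulation)."""
    K = np.exp(-(dist / sigma) ** 2)
    return np.linalg.solve(K + reg * np.eye(len(values)), values)

# Toy 4-vertex chain: the geodesic distance differs from the Euclidean
# one whenever the path must follow the surface.
inf = np.inf
adj = np.array([[inf, 1.0, inf, inf],
                [1.0, inf, 1.0, inf],
                [inf, 1.0, inf, 1.0],
                [inf, inf, 1.0, inf]])
D = geodesic_distances(adj)
w = rbf_weights(D, np.array([0.0, 1.0, 1.0, 0.0]))
print(D[0, 3])  # 3.0: path 0-1-2-3
```

Any regression method that consumes a pairwise distance matrix can be swapped in the same way, which is the "without much tuning" point the abstract makes.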
3. Prenatal diagnosis of isolated lateral facial cleft by ultrasonography and three-dimensional printing: A case report (cited by 1)
Authors: Wen-Ling Song, Hai-Ou Ma, Yu Nan, Yu-Jia Li, Na Qi, Li-Ying Zhang, Xin Xu, Yuan-Yi Wang 《World Journal of Clinical Cases》 SCIE 2021, No. 24, pp. 7196-7204
BACKGROUND: Lateral facial clefts are atypical, with a low incidence within the facial cleft spectrum. With the development of ultrasonography (US) prenatal screening, such facial malformations can be detected and diagnosed prenatally rather than at birth. Although three-dimensional US (3DUS) can render the fetal face via 3D reconstruction, the images are displayed on two-dimensional screens without depth of field, which impedes the understanding of untrained individuals. In contrast, a 3D-printed model of the fetal face is more interactive and helps both parents and doctors develop a more comprehensive understanding of the malformation. Herein, we present a case of isolated lateral facial cleft diagnosed via US combined with a 3D-printed model.
CASE SUMMARY: A 31-year-old G2P1 patient presented for routine prenatal screening in the 22nd week of gestation. The coronal nostril-lip section of two-dimensional US (2DUS) showed that the fetus's bilateral oral commissures were asymmetrical and the left oral commissure was abnormally wide. The left oblique-coronal section showed a cleft at the left oral commissure extending to the left cheek. 3DUS confirmed the cleft. Furthermore, we created a model of the fetal face using 3D printing, which clearly presented the malformation. The fetus was diagnosed with a left lateral facial cleft, categorized as a No. 7 facial cleft according to the Tessier facial cleft classification. The parents terminated the pregnancy in the 24th week of gestation after counseling.
CONCLUSION: In the diagnostic course of this case, in addition to the traditional application of 2DUS and 3DUS, the 3D-printed model of the fetus enhanced the diagnostic evidence, benefited the education of junior doctors, improved parental counseling, and has the potential to guide surgical planning.
Keywords: prenatal diagnosis; isolated lateral facial cleft; three-dimensional printing; facial malformations; ultrasonography; Tessier No. 7 facial cleft; case report
4. Fast Individual Facial Animation Framework Based on Motion Capture Data
Authors: 张满囤, 葛新杰, 刘爽, 肖智东, 游理华, 张建军 《Journal of Donghua University (English Edition)》 EI CAS 2014, No. 3, pp. 256-261
Based on motion capture, a semi-automatic technique for fast facial animation was implemented. While capturing facial expressions from a performer, a camera recorded his or her front face as a texture map. The radial basis function (RBF) technique was used to deform a generic facial model, and the texture was remapped to generate a personalized face. The personalized face was partitioned into three regions and, using the captured facial expression data, the RBF, the Laplacian operator, and mean-value coordinates were applied to deform each region respectively. With shape blending, the three regions were combined to construct the final face model. Our results show that the technique is efficient in generating realistic facial animation.
Keywords: motion capture; facial animation; radial basis function (RBF); facial expression
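The final step of the entry above, combining three separately deformed regions by shape blending, reduces to a per-vertex weighted sum. The sketch below assumes each region's deformation is stored as a full mesh copy and that per-vertex membership weights sum to one; the weights and the two-region toy mesh are invented for illustration.

```python
import numpy as np

def blend_regions(base, region_results, memberships):
    """Combine separately deformed face regions into one mesh by
    per-vertex weighted blending. Each entry of region_results is a
    full copy of the mesh deformed for one region; memberships[r][i]
    is vertex i's weight for region r (weights sum to 1 per vertex,
    overlapping softly at the seams)."""
    out = np.zeros_like(base)
    for result, w in zip(region_results, memberships):
        out += w[:, None] * result
    return out

# Toy mesh with 3 vertices in 2D and two regions overlapping at vertex 1.
base = np.zeros((3, 2))
upper = base + [0.0, 1.0]    # deformation trusted in region 0
lower = base + [0.0, -1.0]   # deformation trusted in region 1
memberships = np.array([[1.0, 0.5, 0.0],
                        [0.0, 0.5, 1.0]])
print(blend_regions(base, [upper, lower], memberships))
# seam vertex 1 blends to (0, 0)
```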
5. Synthesizing Performance-driven Facial Animation
Authors: LUO Chang-Wei, YU Jun, WANG Zeng-Fu 《自动化学报》 EI CSCD 北大核心 2014, No. 10, pp. 2245-2252
Keywords: facial animation; performance-driven; facial expression; animation system; face model; real-time output; digital characters; surface deformation
6. An Improved Three-Dimensional Model for Emotion Based on Fuzzy Theory
Authors: Zijiang Zhu, Junshan Li, Xiaoguang Deng, Yi Hu 《Journal of Computer and Communications》 2018, No. 8, pp. 101-111
An emotion model is the basis of a facial expression recognition system. The constructed model should not only match facial expressions to emotions, but also reflect the positional relationships between different emotions, so that an individual's current emotion can be understood from the acquired facial expression information. This paper constructs an improved three-dimensional emotion model based on fuzzy theory, which maps facial features to the basic emotions proposed by Ekman. Moreover, the three-dimensional emotion model divides every emotion into three groups, showing the positional relationships visually and quantitatively while determining the degree of each emotion using fuzzy theory.
Keywords: emotion model; facial expression; fuzzy theory; three-dimensional state-space
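The "degree of emotion based on fuzzy theory" idea in the entry above is typically realized with membership functions that grade an emotion's strength into overlapping groups. The sketch below uses triangular memberships with three groups (weak/medium/strong); the breakpoints are illustrative assumptions, not the paper's calibration.

```python
def triangular(x, a, b, c):
    """Triangular fuzzy membership: rises from a to a peak at b,
    falls back to zero at c."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def intensity_grades(strength):
    """Fuzzy degrees of 'weak', 'medium', and 'strong' for an emotion
    strength in [0, 1]; breakpoints are illustrative, not the paper's."""
    return {"weak": triangular(strength, -0.01, 0.0, 0.5),
            "medium": triangular(strength, 0.0, 0.5, 1.0),
            "strong": triangular(strength, 0.5, 1.0, 1.01)}

print(intensity_grades(0.25))  # weak and medium both 0.5, strong 0.0
```

The overlap between groups is the point: one expression can belong partially to two intensity grades, which is what lets the model quantify positional relationships between emotions rather than forcing hard labels.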
7. Experimental Studies on the Irradiation of Facial Bones in Animals: A Review
Authors: Lucas Poort, Bernd Lethaus, Roland Bockmann, Doke Buurman, Jos De Jong, Frank Hoebers, Peter Kessler 《International Journal of Otolaryngology and Head & Neck Surgery》 2014, No. 3, pp. 113-127
Introduction: Radiotherapy is often used to treat head and neck malignancies, with inevitable effects on the surrounding healthy tissues. We reviewed the literature on the experimental irradiation of facial bones in animals. Materials and Methods: A PubMed search was performed to retrieve animal experiments on the irradiation of facial bones published between January 1992 and January 2012, using the search terms "irradiation facial bone" and "irradiation osteoradionecrosis". Results: Thirty-six publications were included. The irradiation sources were cobalt-60, orthovoltage, 4-6 megavolt photons, and brachytherapy. The total dose varied between 8 and 60 Gy in single or multiple fractions. The literature presents a broad range of animal studies that differ in the in vivo model, irradiation, observation period, and evaluation of results. Discussion: The diversity of animal models leaves many questions unanswered. A detailed, standardized description of methodology and results would make future studies easier to compare.
Keywords: irradiation; animal experiments; facial bones; osteoradionecrosis
8. Facial Expression Recognition Using Enhanced Convolution Neural Network with Attention Mechanism (cited by 2)
Authors: K. Prabhu, S. SathishKumar, M. Sivachitra, S. Dineshkumar, P. Sathiyabama 《Computer Systems Science & Engineering》 SCIE EI 2022, No. 4, pp. 415-426
Facial Expression Recognition (FER) has been an interesting area of research wherever there is human-computer interaction, since human psychology, emotions, and behaviors can be analyzed through FER. Classifiers used in FER have performed well on normal faces but are constrained on occluded faces. Recently, Deep Learning Techniques (DLT) have gained popularity in real-world applications, including the recognition of human emotions. The human face reflects emotional states and human intentions, and an expression is the most natural and powerful way of communicating non-verbally. Systems that mediate such communication are termed Human Machine Interaction (HMI) systems; FER can improve HMI systems because human expressions convey useful information to an observer. This paper proposes a FER scheme called EECNN (Enhanced Convolution Neural Network with Attention mechanism) to recognize seven types of human emotions, with satisfying experimental results: the proposed EECNN achieved 89.8% accuracy in classifying the images.
Keywords: facial expression recognition; linear discriminant analysis; animal migration optimization; regions of interest; enhanced convolution neural network with attention mechanism
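The attention mechanism named in the entry above generally means reweighting feature-map locations so salient facial regions dominate the pooled descriptor. The sketch below shows one generic form, a spatial softmax over a (C, H, W) feature map; it is a stand-in illustration and not the EECNN paper's exact module.

```python
import numpy as np

def spatial_attention(features):
    """Softmax spatial attention over a (C, H, W) feature map: score
    each location by its mean activation across channels, softmax the
    scores, and return the attention-weighted channel descriptor."""
    c, h, w = features.shape
    scores = features.mean(axis=0).reshape(-1)   # (H*W,) location scores
    attn = np.exp(scores - scores.max())
    attn /= attn.sum()                           # spatial softmax
    pooled = features.reshape(c, -1) @ attn      # (C,) weighted descriptor
    return pooled, attn.reshape(h, w)

pooled, attn = spatial_attention(np.ones((2, 4, 4)))
print(attn.sum())  # 1.0 — uniform input gives uniform attention
```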
9. Research on Facial Expression Capture Based on Two-Stage Neural Network
Authors: Zhenzhou Wang, Shao Cui, Xiang Wang, JiaFeng Tian 《Computers, Materials & Continua》 SCIE EI 2022, No. 9, pp. 4709-4725
To generate realistic three-dimensional animation of a virtual character, capturing real facial expressions is the primary task. Owing to diverse facial expressions and complex backgrounds, facial landmarks recognized by existing strategies suffer from deviations and low accuracy. Therefore, this paper proposes a facial expression capture method based on a two-stage neural network that combines an improved multi-task cascaded convolutional network (MTCNN) with a high-resolution network. First, the convolution operations of the traditional MTCNN are improved: face information in the input image is quickly filtered by feature fusion in the first stage, and Octave Convolution replaces the original convolutions in the second stage to enhance the network's feature extraction ability and reject a large number of false candidates. The model outputs more accurate face candidate windows for landmark recognition and locates the faces. The images cropped after face detection are then fed into the high-resolution network, where multi-scale feature fusion is realized by parallel multi-resolution streams, yielding rich high-resolution heatmaps of facial landmarks. Finally, changes in the recognized landmarks are tracked in real time; the expression parameters are extracted and transmitted to the Unity3D engine to drive the virtual character's face, realizing synchronized facial expression animation. Extensive experimental results on the WFLW database demonstrate the superiority of the proposed method in accuracy and robustness, especially for diverse expressions and complex backgrounds. The method can accurately capture facial expressions and generate three-dimensional animation effects, making online entertainment and social interaction more immersive in shared virtual spaces.
Keywords: facial expression capture; facial landmarks; multi-task cascaded convolutional networks; high-resolution network; animation generation
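The entry above ends its second stage with "high-resolution heatmaps of facial landmarks". The standard readout for such heatmaps is an argmax per landmark channel, sketched below; the toy 8x8 maps are invented for the example and real pipelines usually add sub-pixel refinement.

```python
import numpy as np

def decode_heatmaps(heatmaps):
    """Decode (K, H, W) landmark heatmaps into K (x, y) coordinates by
    taking each map's argmax — the usual readout for heatmap-based
    landmark networks."""
    k, h, w = heatmaps.shape
    flat = heatmaps.reshape(k, -1).argmax(axis=1)
    ys, xs = np.unravel_index(flat, (h, w))
    return np.stack([xs, ys], axis=1)

hm = np.zeros((2, 8, 8))
hm[0, 3, 5] = 1.0   # landmark 0 peaks at (x=5, y=3)
hm[1, 6, 2] = 1.0   # landmark 1 peaks at (x=2, y=6)
print(decode_heatmaps(hm))  # [[5 3] [2 6]]
```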
10. Robust facial expression recognition system in higher poses
Authors: Ebenezer Owusu, Justice Kwame Appati, Percy Okae 《Visual Computing for Industry, Biomedicine, and Art》 EI 2022, No. 1, pp. 159-173
Facial expression recognition (FER) has numerous applications in computer security, neuroscience, psychology, and engineering. Owing to its non-intrusiveness, it is considered a useful technology for combating crime. However, FER is plagued by several challenges, the most serious of which is its poor prediction accuracy under severe head poses. The aim of this study is therefore to improve recognition accuracy in severe head poses by proposing a robust 3D head-tracking algorithm based on an ellipsoidal model, an advanced ensemble of AdaBoost, and a saturated vector machine (SVM). The FER features are tracked from one frame to the next using the ellipsoidal tracking model, and the visible expressive facial key points are extracted using Gabor filters. The ensemble algorithm (Ada-AdaSVM) is then used for feature selection and classification. The proposed technique is evaluated using the Bosphorus, BU-3DFE, MMI, CK+, and BP4D-Spontaneous facial expression databases. The overall performance is outstanding.
Keywords: facial expressions; three-dimensional head pose; ellipsoidal model; Gabor filters; Ada-AdaSVM
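The Ada-AdaSVM ensemble in the entry above builds on the AdaBoost loop: reweight training samples toward the examples the current weak learner gets wrong, then combine learners by weighted vote. The sketch below shows that generic loop with 1D threshold stumps standing in for the paper's SVM base learners; data and round count are invented.

```python
import numpy as np

def adaboost_stumps(X, y, rounds=10):
    """Minimal AdaBoost with 1D threshold stumps (labels in {-1, +1}).
    Illustrates the boosting loop only; the paper's base learners are
    SVMs, swapped here for stumps to stay self-contained."""
    n = len(y)
    w = np.full(n, 1.0 / n)
    model = []
    for _ in range(rounds):
        best = None
        for thr in X:                      # candidate thresholds
            for sign in (1, -1):
                pred = sign * np.where(X > thr, 1, -1)
                err = w[pred != y].sum()   # weighted error
                if best is None or err < best[0]:
                    best = (err, thr, sign)
        err, thr, sign = best
        err = max(err, 1e-10)
        alpha = 0.5 * np.log((1 - err) / err)
        pred = sign * np.where(X > thr, 1, -1)
        w *= np.exp(-alpha * y * pred)     # upweight mistakes
        w /= w.sum()
        model.append((alpha, thr, sign))
    return model

def predict(model, X):
    score = sum(a * s * np.where(X > t, 1, -1) for a, t, s in model)
    return np.sign(score)

X = np.array([0.0, 1.0, 2.0, 3.0])
y = np.array([-1, -1, 1, 1])
print(predict(adaboost_stumps(X, y), X))  # [-1. -1.  1.  1.]
```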
11. Multimodal Emotion Recognition Based on Facial Expression and ECG Signal
Authors: NIU Jian-wei, AN Yue-qi, NI Jie, JIANG Chang-hua 《包装工程》 CAS 北大核心 2022, No. 4, pp. 71-79
As a key link in human-computer interaction, emotion recognition enables robots to correctly perceive user emotions and provide dynamic, adjustable services according to the emotional needs of different users, which is key to improving the cognitive level of robot services. Emotion recognition based on facial expressions and electrocardiogram (ECG) signals has numerous industrial applications. First, a three-dimensional convolutional neural network deep learning architecture is used to extract spatial and temporal features from facial expression video data and ECG data, and emotion classification is carried out. The two modalities are then fused at the data level and the decision level, respectively, and the emotion recognition results are given. Finally, the single-modality and multi-modality results are compared and analyzed. The comparison of experimental results under the two fusion methods shows that multi-modal emotion recognition is considerably more accurate than single-modal recognition, and that decision-level fusion is easier to operate and more effective than data-level fusion.
Keywords: multi-modal emotion recognition; facial expression; ECG signal; three-dimensional convolutional neural network
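The two fusion strategies compared in the entry above differ only in where the modalities meet: data-level fusion concatenates features before a single classifier, while decision-level fusion averages per-modality predictions. A minimal sketch, with toy probabilities and an equal-weight average as assumptions:

```python
import numpy as np

def data_level_fusion(face_feat, ecg_feat, classify):
    """Concatenate modality features, then classify once."""
    return classify(np.concatenate([face_feat, ecg_feat]))

def decision_level_fusion(p_face, p_ecg, w_face=0.5):
    """Average per-modality class probabilities, then pick the class."""
    p = w_face * p_face + (1 - w_face) * p_ecg
    return int(np.argmax(p))

# Toy example: the face model favors class 2, the ECG model class 1.
p_face = np.array([0.1, 0.3, 0.6])
p_ecg = np.array([0.2, 0.5, 0.3])
print(decision_level_fusion(p_face, p_ecg))  # 2 (averaged: 0.45 vs 0.40)
```

Decision-level fusion is "easier to operate" in the abstract's sense partly because each modality's model can be trained and replaced independently; only the probability vectors need to agree on a class set.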
12. A Latent-Space Method for Anime Face Style Transfer and Editing
Authors: 邓海欣, 张凤全, 王楠, 张万才, 雷劼睿 《系统仿真学报》 CAS CSCD 北大核心 2024, No. 12, pp. 2834-2849
To address the image distortion and limited style diversity of existing anime style transfer networks in image simulation, this paper proposes TGFE-TrebleStyleGAN (text-guided facial editing with TrebleStyleGAN), a network framework for anime face style transfer and editing. Latent-space vectors guide the generation of face images, and a detail control module and a feature control module are designed in TrebleStyleGAN to constrain the appearance of the generated images. The image produced by the transfer network serves not only as a style control signal but also as a constraint on the fine-grained segmented editing regions. Text-to-image generation is introduced to capture the correlation between style-transferred images and semantic information. Experiments on an open-source dataset and a self-built anime face dataset with paired labels show that, compared with the baseline DualStyleGAN, the proposed model reduces FID by 2.819 and improves SSIM and NIMA by 0.028 and 0.074, respectively. Integrating style transfer and editing preserves the detailed style of the original anime face during generation while providing flexible editing capability, reducing image distortion and performing better on the consistency of generated image features and the style similarity of anime face images.
Keywords: anime style transfer; generative adversarial networks; latent space; anime face editing; text-guided image generation
13. Text-Driven Facial Animation Generation with Diverse Emotions
Authors: 刘增科, 殷继彬 《计算机科学》 CSCD 北大核心 2024, No. S02, pp. 313-320
This paper presents a novel text-driven facial animation synthesis technique that integrates an emotion model to enhance the expressiveness of facial expressions. The technique has two core components: facial emotion simulation and lip-speech consistency. First, deep analysis of the input text identifies the emotion types it contains and their intensities. Based on this emotional information, a three-dimensional free-form deformation algorithm (DFFD) generates the corresponding facial expressions. Meanwhile, speech phonemes and lip-shape data from human articulation are collected, and forced alignment matches these data precisely in time to the phonemes of the text, producing a sequence of lip key-point changes. Intermediate frames are then generated by linear interpolation to refine the temporal sequence of lip motion. Finally, the DFFD algorithm synthesizes the lip animation from these time series. By carefully weighting facial emotion against lip animation, highly realistic virtual facial expression animation is achieved. This work not only addresses the missing-information problem in text-driven facial expression synthesis, but also overcomes the challenges of monotonous expressions and of mismatch between facial expressions and lip shapes, offering an innovative solution for human-computer interaction, game development, and film and television production.
Keywords: text-driven animation; emotion model; DFFD; facial animation synthesis; emotion intensity; lip-speech consistency
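The "intermediate frames by linear interpolation" step in the entry above can be sketched directly: given phoneme-aligned keyframes of lip key points, sample a dense frame sequence by lerping between the bracketing keyframes. Keyframe times, values, and the frame rate below are invented for the example.

```python
import numpy as np

def interpolate_keyframes(key_pts, key_times, fps=25):
    """Linearly interpolate lip key points between phoneme-aligned
    keyframes to produce a dense per-frame sequence."""
    frames = []
    for t in np.arange(0.0, key_times[-1] + 1e-9, 1.0 / fps):
        i = np.searchsorted(key_times, t, side="right") - 1
        i = min(i, len(key_times) - 2)             # clamp final segment
        u = (t - key_times[i]) / (key_times[i + 1] - key_times[i])
        frames.append((1 - u) * key_pts[i] + u * key_pts[i + 1])
    return np.array(frames)

# Two keyframes one second apart, sampled at 5 fps.
frames = interpolate_keyframes(np.array([0.0, 1.0]),
                               np.array([0.0, 1.0]), fps=5)
print(frames)  # ~[0.0, 0.2, 0.4, 0.6, 0.8, 1.0]
```

In practice key_pts would be arrays of lip key-point coordinates per keyframe rather than scalars; the same broadcasting lerp applies unchanged.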
14. A Facial Animation Algorithm Based on an Improved Thin Plate Spline Motion Model
Authors: 杨硕, 王一丁 《计算机工程》 CAS CSCD 北大核心 2024, No. 6, pp. 255-265
Facial animation plays a key role in film, games, and virtual reality, where realistic, vivid faces and convincing emotional expression are essential. When facial shape, pose, and expression vary simultaneously, thin plate spline nonlinear transformations can produce good motion estimates, but the estimation remains too coarse for complex facial textures and mouth motion, demanding stronger image restoration capability. This paper therefore proposes a facial animation algorithm based on an improved thin plate spline motion model (TPSMM). First, a Farneback optical flow pyramid algorithm is introduced on top of the TPSMM; combined with the thin plate spline transformation and a background affine transformation, it makes local facial motion estimation more precise. Second, to restore the detailed texture of missing regions more faithfully, a multi-scale detail-aware network is proposed: in the encoder, an embedded efficient channel attention (ECA) module reduces the loss of facial detail caused by repeated downsampling of the source image, while in the decoder a coordinate attention (CA) module effectively captures important features at different positions of the motion estimation feature map, improving the quality of the generated face images. Experimental results show that, compared with the first-order motion model (FOMM), motion representations for articulated animation (MRAA), and TPSMM, the proposed algorithm achieves the best L1, average keypoint distance (AKD), and average Euclidean distance (AED) on the MUG, UvA-Nemo, and Oulu-CASIA datasets, averaging 0.0129, 0.923, and 0.00099, respectively.
Keywords: facial animation; optical flow estimation; thin plate spline; multi-scale feature fusion; channel attention mechanism; coordinate attention mechanism
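The core transformation named in the entry above, the 2D thin plate spline, fits a smooth warp through control-point correspondences: a radial term with kernel U(r) = r^2 log r plus an affine term, solved as one linear system. The sketch below is the generic TPS fit-and-apply, not the paper's full pipeline (no optical flow or attention modules); the control points and regularizer are invented.

```python
import numpy as np

def tps_kernel(r):
    """Thin plate spline radial basis U(r) = r^2 log(r), with U(0) = 0."""
    out = np.zeros_like(r)
    nz = r > 0
    out[nz] = r[nz] ** 2 * np.log(r[nz])
    return out

def tps_warp(ctrl_src, ctrl_dst, pts, reg=1e-8):
    """Fit the 2D thin plate spline mapping ctrl_src -> ctrl_dst,
    then apply it to pts. Solves the standard bordered system
    [[K + reg*I, P], [P^T, 0]] [w; a] = [dst; 0]."""
    n = len(ctrl_src)
    K = tps_kernel(np.linalg.norm(ctrl_src[:, None] - ctrl_src[None], axis=2))
    P = np.hstack([np.ones((n, 1)), ctrl_src])
    A = np.zeros((n + 3, n + 3))
    A[:n, :n] = K + reg * np.eye(n)
    A[:n, n:] = P
    A[n:, :n] = P.T
    b = np.vstack([ctrl_dst, np.zeros((3, 2))])
    params = np.linalg.solve(A, b)
    w, a = params[:n], params[n:]
    U = tps_kernel(np.linalg.norm(pts[:, None] - ctrl_src[None], axis=2))
    return U @ w + np.hstack([np.ones((len(pts), 1)), pts]) @ a

# A pure translation of the control points is recovered exactly by the
# affine part, so interior points translate too.
src = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
out = tps_warp(src, src + [1.0, 0.0], np.array([[0.5, 0.5]]))
print(out)  # ~[[1.5 0.5]]
```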
15. Text-Audio-Driven Facial Animation Generation with an Improved Wav2Lip
Authors: 孙瑜, 朱欣娟 《计算机系统应用》 2024, No. 2, pp. 276-283
To improve the realism of Chinese lip-synchronized facial animation videos, this paper proposes a text-audio-driven facial animation generation technique based on an improved Wav2Lip model. First, a Chinese lip-sync dataset is constructed and used to pre-train the lip discriminator so that it judges Chinese lip synchronization more accurately. Then, text features are introduced into the Wav2Lip model to improve the temporal synchronization of lips and audio and thereby the realism of the videos. The model combines the extracted text, audio, and speaker facial information and, under the supervision of the pre-trained lip discriminator and a video quality discriminator, generates highly realistic lip-synchronized facial animation videos. Comparative experiments with the ATVGnet and Wav2Lip models show that the videos generated by the proposed model improve the synchronization between lip shape and audio as well as the overall realism of the facial animation. This work offers a solution for current facial animation generation needs.
Keywords: text-audio driven; facial animation; Wav2Lip model; animation generation
16. AIGC-Driven Facial Effects for Virtual Film Characters
Authors: 吴方强, 徐沁雪, 周冰 《现代电影技术》 2024, No. 11, pp. 11-18
To study how AIGC technology is transforming the production of facial expressions for virtual non-human characters in visual effects, this paper conducts experiments on film-grade virtual character expression effects using current mainstream domestic Chinese AI large models, and proposes a production approach and an optimized workflow. The experimental results largely meet the technical requirements for rapid previsualization (PreViz) of film and television works and can assist in the production of visual effects shots. The paper concludes by summarizing the extensibility of the experimental method and directions for iteration.
Keywords: AIGC; motion capture; text-to-video; expression effects; domestic large models
17. Speech-driven facial animation with spectral gathering and temporal attention (cited by 1)
Authors: Yujin CHAI, Yanlin WENG, Lvdi WANG, Kun ZHOU 《Frontiers of Computer Science》 SCIE EI CSCD 2022, No. 3, pp. 153-162
In this paper, we present an efficient algorithm that generates lip-synchronized facial animation from a given vocal audio clip. By combining spectral-dimensional bidirectional long short-term memory and a temporal attention mechanism, we design a lightweight speech encoder that learns useful and robust vocal features from the input audio without resorting to pre-trained speech recognition modules or large training data. To learn subject-independent facial motion, we use deformation gradients as the internal representation, which allows nuanced local motions to be synthesized better than with vertex offsets. Compared with state-of-the-art automatic-speech-recognition-based methods, our model is much smaller but achieves similar robustness and quality most of the time, and noticeably better results in certain challenging cases.
Keywords: speech-driven facial animation; spectral-dimensional bidirectional long short-term memory; temporal attention; deformation gradients
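The temporal attention named in the entry above, in its generic form, scores each time step of a feature sequence against a query, softmaxes over time, and returns the weighted sum. The sketch below is that generic dot-product layer, not the paper's exact architecture; the toy features and query are invented.

```python
import numpy as np

def temporal_attention(features, query):
    """Dot-product temporal attention: score each time step of a (T, D)
    feature sequence against a (D,) query, softmax over time, and
    return the attention-weighted context vector plus the weights."""
    scores = features @ query / np.sqrt(features.shape[1])
    weights = np.exp(scores - scores.max())
    weights /= weights.sum()                     # softmax over T
    return weights @ features, weights

# Frames 0 and 2 resemble the query; frame 1 does not.
feats = np.array([[1.0, 0.0], [0.0, 1.0], [0.9, 0.1]])
ctx, wts = temporal_attention(feats, np.array([1.0, 0.0]))
print(wts)  # weight on frame 1 is the smallest
```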
18. Surface Detail Capturing for Realistic Facial Animation
Authors: Pei-Hsuan Tu, I-Chen Lin, Jeng-Sheng Yeh, Rung-Huei Liang, Ming Ouhyoung 《Journal of Computer Science & Technology》 SCIE EI CSCD 2004, No. 5, pp. 618-625
In this paper, a facial animation system is proposed that simultaneously captures from video clips both the geometric information and the illumination changes of surface details, called expression details; the captured data can be widely applied to different 2D face images and 3D face models. While tracking the geometric data, we record the expression details as ratio images. For 2D facial animation synthesis, these ratio images are used to generate dynamic textures. Because a ratio image is obtained by dividing the colors of an expressive face by those of a neutral face, pixels with a ratio value smaller than one are where a wrinkle or crease appears. Therefore, the gradients of the ratio value at each pixel are regarded as changes of the face surface, and the original surface normals can be adjusted according to these gradients. Based on this idea, we convert the ratio images into a sequence of normal maps and apply them to animated 3D model rendering. With expression detail mapping, the resulting facial animations are more lifelike and more expressive.
Keywords: facial animation; facial expression deformations; morphing; bump mapping
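The ratio-image definition in the entry above is a per-pixel division, after which values below one flag wrinkles and creases. A minimal sketch with an invented 2x2 grayscale "face":

```python
import numpy as np

def ratio_image(expressive, neutral, eps=1e-6):
    """Per-pixel ratio of an expressive face to the neutral face.
    Values < 1 mark darkened pixels where a wrinkle or crease appears."""
    return expressive / np.maximum(neutral, eps)

neutral = np.full((2, 2), 200.0)
expressive = np.array([[200.0, 120.0],
                       [200.0, 200.0]])   # one darkened crease pixel
R = ratio_image(expressive, neutral)
print(R < 1.0)  # only the crease pixel at row 0, col 1 is True
```

The paper's next step, converting the ratio gradients into normal-map perturbations, amounts to treating the spatial gradient of R as an offset to the surface normal before rendering.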
19. Driving 3D Animated Expressions with an Adaptive Regression Model and Video-Based Facial Tracking
Authors: 米娜 《佳木斯大学学报(自然科学版)》 CAS 2024, No. 1, pp. 51-54
With the rapid development of science and technology, 3D face recognition and expression driving are finding more applications. However, the technology's high equipment requirements raise its cost, so lowering the cost while still achieving 3D animated expression driving, and thereby promoting practical adoption, is an important problem. To address it, this study builds a 3D animated expression driving model that fuses an adaptive regression model with video-based facial tracking. The proposed method is compared with a 3D shape regression model and a constrained local model, and their running time and accuracy are analyzed. The measured per-frame running time and model accuracy verify the feasibility of the proposed method for driving 3D animated expressions.
Keywords: adaptive regression model; expression driving; facial tracking; three-dimensional animation
20. Statistical learning based facial animation
Authors: Shibiao XU, Guanghui MA, Weiliang MENG, Xiaopeng ZHANG 《Journal of Zhejiang University-Science C (Computers and Electronics)》 SCIE EI 2013, No. 7, pp. 542-550
To synthesize real-time, realistic facial animation, we present an effective algorithm that combines image- and geometry-based methods for facial animation simulation. Considering the numerous motion units in the expression coding system, we present a novel simplified motion unit based on the basic facial expressions, and construct the corresponding basic actions for a head model. Because image features are difficult to obtain with the performance-driven method, we develop an automatic image feature recognition method based on statistical learning, and a semi-automatic expression image labeling method with rotation-invariant face detection, which improve the accuracy and efficiency of expression feature identification and training. After facial animation retargeting, each basic action weight is computed and mapped automatically. We apply the blend shape method to construct and train the corresponding expression database for each basic action, and adopt the least squares method to compute the corresponding control parameters for facial animation. Moreover, diffuse and specular light distributions are pre-integrated using a physically based method to improve the plausibility and efficiency of facial rendering. Our work simplifies the facial motion unit, optimizes the statistical training and recognition processes for facial animation, solves for the expression parameters, and simulates the subsurface scattering effect in real time. Experimental results indicate that our method is effective and efficient, and suitable for computer animation and interactive applications.
Keywords: facial animation; motion unit; statistical learning; realistic rendering; pre-integration
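The "least squares method to compute the corresponding control parameters" in the entry above is the standard blend shape solve: stack each basic action's vertex offsets as a column and fit the weight vector that best reproduces the target offsets. A minimal sketch with an invented 3-vertex, 2-action basis:

```python
import numpy as np

def solve_blendshape_weights(basis, target):
    """Least-squares fit of blend shape weights: find w minimising
    ||basis @ w - target||. `basis` stacks each basic action's vertex
    offsets as a column; `target` is the observed offset vector."""
    w, *_ = np.linalg.lstsq(basis, target, rcond=None)
    return w

basis = np.array([[1.0, 0.0],
                  [0.0, 2.0],
                  [1.0, 1.0]])
target = basis @ np.array([0.3, 0.7])   # synthesized from known weights
print(np.round(solve_blendshape_weights(basis, target), 3))  # [0.3 0.7]
```

Production solvers usually add non-negativity or sum-to-one constraints on w; the unconstrained `lstsq` above is the plainest version of the idea.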