Journal Articles
5 articles found
1. Mesh representation matters: investigating the influence of different mesh features on perceptual and spatial fidelity of deep 3D morphable models
Authors: Robert KOSK, Richard SOUTHERN, Lihua YOU, Shaojun BIAN, Willem KOKKE, Greg MAGUIRE. Virtual Reality & Intelligent Hardware, EI, 2024, No. 5, pp. 383-395 (13 pages).
Background: Deep 3D morphable models (deep 3DMMs) play an essential role in computer vision. They are used in facial synthesis, compression, reconstruction and animation, avatar creation, virtual try-on, facial recognition systems and medical imaging. These applications require high spatial and perceptual quality of the synthesised meshes. Despite their significance, these models have not been compared across different mesh representations or evaluated jointly with point-wise distance and perceptual metrics. Methods: We compare the influence of different mesh representation features, fed to various deep 3DMMs, on the spatial and perceptual fidelity of the reconstructed meshes. This paper supports the hypothesis that building deep 3DMMs from meshes with global representations leads to lower spatial reconstruction error, measured with L1 and L2 norm metrics, but underperforms on perceptual metrics. In contrast, using differential mesh representations, which describe differential surface properties, yields better (lower) perceptual FMPD and DAME scores and higher spatial fidelity error. The influence of mesh feature normalisation and standardisation is also compared and analysed from perceptual and spatial fidelity perspectives. Results: The results presented in this paper provide guidance for selecting mesh representations when building deep 3DMMs according to spatial and perceptual quality objectives, and propose combinations of mesh representations and deep 3DMMs that improve either the perceptual or the spatial fidelity of existing methods.
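Note: the paper's evaluation code is not shown here. As a rough, illustrative sketch of the point-wise spatial fidelity metrics the abstract refers to (mean per-vertex L1 and L2 reconstruction error), assuming registered meshes with shared topology and vertex ordering, one might compute:

```python
import numpy as np

def spatial_fidelity(gt_vertices: np.ndarray, rec_vertices: np.ndarray):
    """Per-vertex spatial reconstruction error between a ground-truth mesh and a
    mesh reconstructed by a deep 3DMM. Both arrays have shape [N, 3] and are
    assumed to share the same topology and vertex ordering."""
    diff = rec_vertices - gt_vertices
    l1 = np.abs(diff).sum(axis=1).mean()        # mean per-vertex L1 error
    l2 = np.linalg.norm(diff, axis=1).mean()    # mean per-vertex L2 (Euclidean) error
    return l1, l2

# Toy usage with random vertices standing in for registered face meshes
rng = np.random.default_rng(0)
gt = rng.normal(size=(5023, 3))                     # vertex count is an assumption
rec = gt + rng.normal(scale=0.01, size=gt.shape)    # a slightly perturbed "reconstruction"
print(spatial_fidelity(gt, rec))
```

The perceptual metrics mentioned in the abstract (FMPD, DAME) are separate measures from the literature and are not reproduced here.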
Keywords: shape modelling; deep 3D morphable models; representation learning; feature engineering; perceptual metrics
2. Sphere Face Model: A 3D morphable model with hypersphere manifold latent space using joint 2D/3D training (cited 1 time)
Authors: Diqiong Jiang, Yiwei Jin, Fang-Lue Zhang, Zhe Zhu, Yun Zhang, Ruofeng Tong, Min Tang. Computational Visual Media, SCIE/EI/CSCD, 2023, No. 2, pp. 279-296 (18 pages).
3D morphable models (3DMMs) are generative models of face shape and appearance. Recent works impose face recognition constraints on 3DMM shape parameters so that the face shapes of the same person remain consistent. However, the shape parameters of traditional 3DMMs follow a multivariate Gaussian distribution, whereas identity embeddings lie on a hypersphere; this mismatch makes it challenging for face reconstruction models to preserve faithfulness and shape consistency simultaneously. In other words, the recognition loss and the reconstruction loss cannot decrease jointly because of their conflicting distributions. To address this issue, we propose the Sphere Face Model (SFM), a novel 3DMM for monocular face reconstruction that preserves both shape fidelity and identity consistency. The core of SFM is a basis matrix used to reconstruct 3D face shapes; the basis matrix is learned with a two-stage training approach, in which 3D and 2D training data are used in the first and second stages, respectively. We design a novel loss to resolve the distribution mismatch, enforcing that the shape parameters follow a hyperspherical distribution. Our model accepts both 2D and 3D data for constructing sphere face models. Extensive experiments show that SFM has high representation ability and strong clustering performance in its shape parameter space. Moreover, it consistently produces high-fidelity face shapes under challenging conditions in monocular face reconstruction. The code will be released at https://github.com/a686432/SIR.
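Note: the following is not the released SIR code, only a minimal sketch of the idea described in the abstract: reconstructing a face shape from a learned basis matrix while constraining the shape parameters to the unit hypersphere. The function name, dimensions, and the use of an identity embedding as parameters are assumptions.

```python
import numpy as np

def reconstruct_shape(mean_shape, basis, alpha):
    """Reconstruct 3D face vertices from shape parameters via a learned basis matrix.
    mean_shape: [3N], basis: [3N, K], alpha: [K] shape parameters."""
    # Project the parameters onto the unit hypersphere so that they follow the same
    # kind of distribution as face-recognition identity embeddings (the key constraint
    # described in the abstract).
    alpha_sphere = alpha / (np.linalg.norm(alpha) + 1e-8)
    verts = mean_shape + basis @ alpha_sphere
    return verts.reshape(-1, 3)

# Toy usage with random data standing in for a learned model
K, N = 256, 5023                          # illustrative dimensions only
rng = np.random.default_rng(1)
mean_shape = rng.normal(size=3 * N)
basis = rng.normal(size=(3 * N, K)) * 0.01
alpha = rng.normal(size=K)                # e.g. an identity embedding from a recognition net
vertices = reconstruct_shape(mean_shape, basis, alpha)
print(vertices.shape)                     # (5023, 3)
```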
Keywords: facial modeling; deep learning; face reconstruction; 3D morphable model (3DMM)
3. An Effective Surface Modeling Method for Car Styling from a Side-View Image (cited 1 time)
Authors: LI Bao-jun, ZHANG Xue-fang, LV Zhang-quan, QI Yi-chao. Computer Aided Drafting, Design and Manufacturing, 2014, No. 4, pp. 49-55 (7 pages).
We introduce an almost-automatic technique for generating 3D car styling surface models from a single side-view image. Our approach combines prior knowledge of car styling with a deformable curve network model to obtain an automatic modeling process. First, we define consistent parameterized curve templates for the 2D and 3D cases, respectively, by analyzing the characteristic lines of car styling. Second, a semi-automatic extraction from a side-view car image is performed. Third, a statistical morphable model of the 3D curve network is used to obtain an initial solution under sparse point constraints. With only a few post-processing operations, optimized curve network models for creating surfaces are obtained. Finally, the styling surfaces are automatically generated using a template-based parametric surface modeling method. More than 50 3D curve network models are constructed as the morphable database. We show that this intelligent modeling tool simplifies an otherwise exhausting modeling task, and we demonstrate meaningful results of our approach.
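Note: the abstract gives no formulas, so the following is a generic, hedged sketch of fitting a PCA-style statistical morphable model of a curve network to sparse point constraints via regularized least squares. The function name, regularization weight, and dimensions are illustrative assumptions, not the authors' method.

```python
import numpy as np

def fit_curve_network(mean, basis, idx, targets, reg=1e-2):
    """Fit a statistical (PCA) morphable model of a 3D curve network to sparse
    point constraints as a regularized least-squares problem.
    mean: [3M] stacked control-point coordinates, basis: [3M, K] PCA modes,
    idx: indices of constrained coordinates, targets: their desired values."""
    A = basis[idx]                         # rows of the basis touched by the constraints
    b = targets - mean[idx]
    # Tikhonov regularization keeps the solution close to the mean shape
    coeffs = np.linalg.solve(A.T @ A + reg * np.eye(basis.shape[1]), A.T @ b)
    return (mean + basis @ coeffs).reshape(-1, 3)

# Toy usage: constrain a handful of control-point coordinates
K, M = 20, 300
rng = np.random.default_rng(2)
mean = rng.normal(size=3 * M)
basis = rng.normal(size=(3 * M, K)) * 0.05
idx = np.array([0, 1, 2, 30, 31, 32])      # two fully constrained control points
fitted = fit_curve_network(mean, basis, idx, mean[idx] + 0.1)
print(fitted.shape)                        # (300, 3)
```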
Keywords: surface modeling; curve network; car styling; statistical morphable model
4. Measuring 3D face deformations from RGB images of expression rehabilitation exercises
Authors: Claudio FERRARI, Stefano BERRETTI, Pietro PALA, Alberto Del BIMBO. Virtual Reality & Intelligent Hardware, 2022, No. 4, pp. 306-323 (18 pages).
Background: The accurate (quantitative) analysis of 3D face deformation is a problem of increasing interest in many applications. In particular, fitting a 3D model of face deformation to a 2D target image so as to capture local and asymmetric deformations remains a challenge in the existing literature. A measure of such local deformations may be a relevant index for monitoring the rehabilitation exercises of patients suffering from Parkinson's or Alzheimer's disease, or of those recovering from a stroke. Methods: This paper presents a complete framework for constructing a 3D morphable shape model (3DMM) of the face and fitting it to a target RGB image. The model has the specific characteristic of being based on localized components of deformation. The fitting transformation is performed from 3D to 2D and is guided by the correspondence between landmarks detected in the target image and those manually annotated on the average 3DMM. The fitting is also performed in two steps, so as to disentangle face deformations related to the identity of the target subject from those induced by facial actions. Results: The method was experimentally validated on the MICC-3D dataset, which includes 11 subjects. Each subject was imaged in one neutral pose and while performing 18 facial actions that deform the face in localized and asymmetric ways. For each acquisition, the 3DMM was fit to an RGB frame, and the extent of the deformation was computed from the apex of the facial action and the neutral frame. The results indicate that the proposed approach can accurately capture face deformation, even when the deformations are localized and asymmetric. Conclusion: The proposed framework demonstrates that it is possible to measure deformations of a reconstructed 3D face model in order to monitor facial actions performed in response to a set of targets. Interestingly, these results were obtained using only RGB targets, without the need for 3D scans captured with costly devices. This paves the way for the use of the proposed tool in remote medical rehabilitation monitoring.
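Note: this is an illustrative sketch only, not the authors' implementation. It shows one common way to guide a 3D-to-2D fitting by landmark correspondence: estimating a simple affine (scaled-orthographic) camera that maps annotated 3DMM landmark vertices onto landmarks detected in the RGB image. Function names and array shapes are assumptions.

```python
import numpy as np

def fit_affine_camera(landmarks_3d, landmarks_2d):
    """Estimate an affine camera mapping the 3DMM's annotated landmark vertices
    onto landmarks detected in the target image.
    landmarks_3d: [L, 3], landmarks_2d: [L, 2]."""
    X = np.hstack([landmarks_3d, np.ones((len(landmarks_3d), 1))])  # homogeneous 3D points
    # Least-squares camera P (4x2) such that X @ P approximates the 2D landmarks
    P, *_ = np.linalg.lstsq(X, landmarks_2d, rcond=None)
    return P

def project(vertices, P):
    """Project all 3DMM vertices into the image with the fitted camera."""
    return np.hstack([vertices, np.ones((len(vertices), 1))]) @ P

# Toy usage with random points standing in for detected/annotated landmarks
rng = np.random.default_rng(3)
lm3d = rng.normal(size=(68, 3))
lm2d = lm3d[:, :2] * 120 + 256 + rng.normal(scale=1.0, size=(68, 2))
P = fit_affine_camera(lm3d, lm2d)
print(project(lm3d, P).shape)              # (68, 2)
```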
Keywords: 3D morphable face model; sparse and locally coherent 3DMM components; local and asymmetric face deformations; face rehabilitation; face deformation measure
5. One-shot Face Reenactment with Dense Correspondence Estimation
Authors: Yunfan Liu, Qi Li, Zhenan Sun. Machine Intelligence Research, EI, CSCD, 2024, No. 5, pp. 941-953 (13 pages).
One-shot face reenactment is a challenging task due to the identity mismatch between source and driving faces. Most existing methods fail to completely eliminate the interference of the driving subject's identity information, which may lead to face shape distortion and undermine the realism of the reenactment results. To solve this problem, in this paper we propose using a 3D morphable model (3DMM) for explicit facial semantic decomposition and identity disentanglement. Instead of using 3D coefficients alone for reenactment control, we take advantage of the generative ability of the 3DMM to render textured face proxies. These proxies contain abundant yet compact geometric and semantic information about human faces, which enables us to compute the face motion field between the source and driving images by estimating their dense correspondence. In this way, we can approximate reenactment results by warping source images according to the motion field, and a generative adversarial network (GAN) is adopted to further improve the visual quality of the warping results. Extensive experiments on various datasets demonstrate the advantages of the proposed method over existing state-of-the-art benchmarks in both identity preservation and reenactment fulfillment.
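Note: a minimal, hedged sketch of the warping step described in the abstract: backward-warping the source image with a dense motion field, here using SciPy's bilinear sampling. It is not the authors' pipeline; the flow convention and function names are assumptions, and the GAN refinement stage is not reproduced.

```python
import numpy as np
from scipy.ndimage import map_coordinates

def warp_source(source, flow):
    """Backward-warp a source image with a dense motion (flow) field.
    source: [H, W, 3] float image, flow: [H, W, 2] per-pixel (dy, dx) offsets,
    e.g. estimated from the correspondence between rendered 3DMM proxies."""
    H, W = flow.shape[:2]
    grid_y, grid_x = np.meshgrid(np.arange(H), np.arange(W), indexing="ij")
    coords = [grid_y + flow[..., 0], grid_x + flow[..., 1]]
    # Sample each colour channel at the displaced coordinates (bilinear interpolation)
    return np.stack(
        [map_coordinates(source[..., c], coords, order=1, mode="nearest")
         for c in range(source.shape[-1])],
        axis=-1,
    )

# Toy usage: a small constant shift applied to a random image
rng = np.random.default_rng(4)
img = rng.random((64, 64, 3))
flow = np.full((64, 64, 2), 1.5)           # shift every pixel by (1.5, 1.5)
print(warp_source(img, flow).shape)        # (64, 64, 3)
```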
Keywords: generative adversarial networks; face image manipulation; face image synthesis; face reenactment; 3D morphable model