Background: The accurate (quantitative) analysis of 3D face deformation is a problem of increasing interest in many applications. In particular, fitting a 3D model of face deformation to a 2D target image so as to capture local and asymmetric deformations remains a challenge in the existing literature. A measure of such local deformations may be a relevant index for monitoring the rehabilitation exercises of patients suffering from Parkinson's or Alzheimer's disease, or of those recovering from a stroke.
Methods: In this paper, a complete framework is presented that allows the construction of a 3D morphable shape model (3DMM) of the face and its fitting to a target RGB image. The model has the specific characteristic of being based on localized components of deformation. The fitting transformation is performed from 3D to 2D and is guided by the correspondence between landmarks detected in the target image and those manually annotated on the average 3DMM. The fitting is also performed in two steps, to disentangle face deformations related to the identity of the target subject from those induced by facial actions.
Results: The method was experimentally validated on the MICC-3D dataset, which includes 11 subjects. Each subject was imaged in one neutral pose and while performing 18 facial actions that deform the face in localized and asymmetric ways. For each acquisition, the 3DMM was fit to an RGB frame, and the extent of the deformation was computed from the apex facial-action frame and the neutral frame. The results indicate that the proposed approach can accurately capture face deformation, even when it is localized and asymmetric.
Conclusion: The proposed framework demonstrated that it is possible to measure deformations of a reconstructed 3D face model to monitor facial actions performed in response to a set of targets. Notably, these results were obtained using only RGB targets, without the need for 3D scans captured with costly devices. This paves the way for the use of the proposed tool in remote medical rehabilitation monitoring.
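As a minimal sketch of the kind of measurement this framework enables, the per-vertex extent of deformation between the neutral and apex fits can be computed once both meshes are in dense correspondence (same vertex order, as fitting the same 3DMM to both frames guarantees). The function name and toy data below are illustrative, not from the paper:

```python
import numpy as np

def per_vertex_deformation(neutral: np.ndarray, apex: np.ndarray) -> np.ndarray:
    """Euclidean displacement of each vertex between two (N, 3) meshes in
    dense correspondence, e.g. the neutral and apex fits of the same 3DMM."""
    assert neutral.shape == apex.shape and neutral.shape[1] == 3
    return np.linalg.norm(apex - neutral, axis=1)

# Toy example: a 4-vertex "mesh" where only one vertex moves,
# mimicking a localized, asymmetric facial action.
neutral = np.zeros((4, 3))
apex = neutral.copy()
apex[2] = [0.0, 0.3, 0.4]          # localized deformation at vertex 2
d = per_vertex_deformation(neutral, apex)
print(d)  # [0.  0.  0.5 0. ]
```

Summary statistics of `d` over a facial region (mean, max) would then serve as the kind of deformation index the abstract describes.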
3D morphable models (3DMMs) are generative models for face shape and appearance. Recent works impose face-recognition constraints on the 3DMM shape parameters so that the face shapes of the same person remain consistent. However, the shape parameters of traditional 3DMMs follow a multivariate Gaussian distribution, whereas identity embeddings lie on a hypersphere; this conflict makes it challenging for face reconstruction models to preserve faithfulness and shape consistency simultaneously. In other words, the recognition loss and the reconstruction loss cannot decrease jointly because of their conflicting distributions. To address this issue, we propose the Sphere Face Model (SFM), a novel 3DMM for monocular face reconstruction that preserves both shape fidelity and identity consistency. The core of SFM is its basis matrix, which is used to reconstruct 3D face shapes and is learned with a two-stage training approach in which 3D and 2D training data are used in the first and second stages, respectively. We design a novel loss to resolve the distribution mismatch, enforcing a hyperspherical distribution on the shape parameters. Our model accepts both 2D and 3D data for constructing the sphere face model. Extensive experiments show that SFM has high representation ability and good clustering performance in its shape-parameter space. Moreover, it consistently produces high-fidelity face shapes under challenging conditions in monocular face reconstruction. The code will be released at https://github.com/a686432/SIR.
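To make the distribution-mismatch idea concrete, the following is an illustrative sketch (not the paper's actual loss) of a penalty that pushes a shape-parameter vector onto a hypersphere while aligning its direction with a unit-norm identity embedding. The function name, weighting scheme, and radius are assumptions for illustration only:

```python
import numpy as np

def sphere_shape_loss(alpha, id_embedding, radius=1.0, w_norm=1.0, w_id=1.0):
    """Illustrative hypersphere-constraint loss: penalize deviation of
    ||alpha|| from the sphere radius, and misalignment between alpha's
    direction and the identity embedding (cosine similarity)."""
    norm = np.linalg.norm(alpha)
    norm_penalty = (norm - radius) ** 2                  # keep ||alpha|| ~ radius
    cos_sim = alpha @ id_embedding / (
        norm * np.linalg.norm(id_embedding) + 1e-8)      # direction agreement
    id_penalty = 1.0 - cos_sim                           # 0 when perfectly aligned
    return w_norm * norm_penalty + w_id * id_penalty

e = np.array([0.6, 0.8, 0.0])       # unit-norm identity embedding
print(sphere_shape_loss(e, e))      # ≈ 0: on the sphere and aligned
```

Under such a constraint, shape parameters and identity embeddings live on the same kind of manifold, so a recognition-style loss and a reconstruction loss no longer pull the parameters toward incompatible distributions.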
3D face similarity is a critical issue in computer vision, computer graphics, face recognition, and related fields. Since the Fréchet distance is an effective metric for measuring curve similarity, a novel 3D face similarity measure based on the Fréchet distances of geodesics is proposed in this paper. In our method, the surface similarity between two 3D faces is measured by the similarity between two sets of 3D curves on them. Owing to the intrinsic property of geodesics, we select geodesics as the comparison curves. First, the geodesics on each 3D facial model emanating from the nose-tip point are extracted in the same initial direction with equal angular increments. Second, the Fréchet distances between the two sets of geodesics on the two compared facial models are computed. Finally, the similarity between the two facial models is computed from the Fréchet distances of the geodesics obtained in the second step. We verify our method both theoretically and practically. In theory, we prove that our similarity measure satisfies three properties: reflexivity, symmetry, and the triangle inequality. In practice, experiments are conducted on the open 3D face database GavaDB, the Texas 3D Face Recognition database, and our own 3D face database. Compared with the iso-geodesic and Hausdorff-distance methods, the results show that our method has good discrimination ability: it can not only identify facial models of the same person but also distinguish the facial models of any two different persons.
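On sampled geodesic curves, the Fréchet distance is typically computed in its discrete form via the Eiter–Mannila dynamic program, sketched below. This is a standard approximation of the continuous distance the abstract refers to, not necessarily the paper's exact implementation:

```python
import numpy as np

def discrete_frechet(P, Q):
    """Discrete Fréchet distance between two polygonal curves P and Q
    (sequences of points), via the Eiter-Mannila dynamic program:
    ca[i, j] is the minimal 'leash length' needed to jointly traverse
    P[:i+1] and Q[:j+1] monotonically."""
    P, Q = np.asarray(P, float), np.asarray(Q, float)
    n, m = len(P), len(Q)
    ca = np.empty((n, m))
    for i in range(n):
        for j in range(m):
            dij = np.linalg.norm(P[i] - Q[j])
            if i == 0 and j == 0:
                ca[i, j] = dij
            elif i == 0:
                ca[i, j] = max(ca[0, j - 1], dij)
            elif j == 0:
                ca[i, j] = max(ca[i - 1, 0], dij)
            else:
                ca[i, j] = max(min(ca[i - 1, j], ca[i - 1, j - 1],
                                   ca[i, j - 1]), dij)
    return ca[-1, -1]

# Two parallel polylines one unit apart: the leash never needs to exceed 1.
P = [(0, 0), (1, 0), (2, 0)]
Q = [(0, 1), (1, 1), (2, 1)]
print(discrete_frechet(P, Q))  # 1.0
```

A face-level similarity score could then aggregate (e.g. average) these distances over the matched pairs of geodesics emanating from the two nose tips.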
It is a long-standing question which genes define the characteristic facial features of different ethnic groups. In this study, we use Uyghurs, an ancient admixed population, to query the genetic basis of why Europeans and Han Chinese look different. Facial traits were analyzed based on high-density 3D facial images; numerous biometric spaces were examined for divergent facial features between Europeans and Han Chinese, ranging from inter-landmark distances to dense shape geometrics. Genome-wide association studies (GWAS) were conducted on a discovery panel of Uyghurs. Six significant loci were identified, four of which, rs1868752, rs118078182, and rs60159418, at or near UBASH3B, COL23A1, and PCDH7, together with rs17868256, were replicated in independent cohorts of Uyghurs or Southern Han Chinese. A prospective model was also developed to predict 3D faces based on top GWAS signals and tested in hypothetical forensic scenarios.
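The kind of prediction model described here is commonly a linear additive model over minor-allele counts at the top SNPs. The simulation below is purely illustrative (the effect sizes, SNP count, and the scalar "face feature" are hypothetical, not the paper's data or model); it shows the additive-genotype setup and recovers the simulated effects by least squares:

```python
import numpy as np

rng = np.random.default_rng(0)
n_people, n_snps = 100, 4                 # e.g. four replicated loci
# Genotype matrix: 0/1/2 minor-allele counts per person per SNP
G = rng.integers(0, 3, size=(n_people, n_snps)).astype(float)
beta = np.array([0.8, -0.5, 0.3, 0.2])    # hypothetical per-SNP effect sizes
# A scalar shape feature as an additive genetic signal plus noise
y = G @ beta + rng.normal(0.0, 0.1, n_people)

# Least-squares recovery of the effect sizes from the simulated data
beta_hat, *_ = np.linalg.lstsq(G, y, rcond=None)
```

Predicting a dense 3D face rather than a scalar amounts to fitting one such regression per shape coordinate (or per principal component of face shape), with the genotype vector shared across all of them.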
Funding (Sphere Face Model paper): supported in part by the National Natural Science Foundation of China (61972342, 61832016); the Science and Technology Department of Zhejiang Province (2018C01080); Zhejiang Province Public Welfare Technology Application Research (LGG22F020009); the Key Laboratory of Film and TV Media Technology of Zhejiang Province (2020E10015); and the Teaching Reform Project of Communication University of Zhejiang (jgxm202131).
Funding (3D face similarity paper): this work was supported by the National Natural Science Foundation of China under Grant Nos. 61702293, 61772294, and 61572078, and the Open Research Fund of the Ministry of Education Engineering Research Center of Virtual Reality Application of China under Grant No. MEOBNUEVRA201601. It was also partially supported by the National High Technology Research and Development 863 Program of China under Grant No. 2015AA020506 and the National Science and Technology Pillar Program during the 12th Five-Year Plan Period of China under Grant No. 2013BAI01B03.
Funding (facial-feature GWAS paper): funded by the Max-Planck-Gesellschaft Partner Group Grant (KT); the National Natural Science Foundation of China (Nos. 31371267, 31322030, and 91331108 (KT); 91731303, 31771388, and 31711530221 (SX); 91631307 (SW); 31501011 (YL); and 31260263 (YG)); the Strategic Priority Research Program of the Chinese Academy of Sciences (CAS) (XDB13040100, SX; XDB13041000, SW); the National Science Fund for Distinguished Young Scholars (31525014, SX); the Program of Shanghai Academic Research Leader (16XD1404700, SX); a National Thousand Young Talents Award and a Max Planck-CAS Paul Gerson Unna Independent Research Group Leadership Award (SW); and the Science and Technology Commission of Shanghai Municipality (16JC1400504, SW; 14YF1406800, YL; 16YF1413900, HL).