Funding: This research was supported by the Basic Science Research Program through the National Research Foundation of Korea (NRF), funded by the Ministry of Education (2021R1I1A3058103).
Abstract: In this paper, we propose a combined PCA-LPP algorithm to improve 3D face reconstruction performance. Principal component analysis (PCA) is commonly used to compress images and extract features. One disadvantage of PCA is the loss of local features. To address this, various studies have proposed PCA-LPP algorithms that combine PCA with locality preserving projection (LPP). However, the existing PCA-LPP method is unsuitable for 3D face reconstruction because it focuses on data classification and clustering. In the existing PCA-LPP, the adjacency graph, which primarily encodes the connection relationships between data points, is constructed with ε- or k-nearest-neighbor techniques. By contrast, in this study, complex and detailed parts, such as wrinkles around the eyes and mouth, can be reconstructed by using the topology of the 3D face model as the adjacency graph and extracting local features from the connection relationships between the 3D model vertices. Experiments verified the effectiveness of the proposed method. When the proposed method was applied to the 3D face reconstruction evaluation set, a performance improvement of 10% to 20% was observed compared with the existing PCA-based method.
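The abstract above builds the LPP adjacency graph from the mesh topology of the 3D face model rather than from ε- or k-nearest neighbors. The following is a minimal sketch of that idea, assuming a triangle list for the mesh and a data matrix `X` with one column per vertex; the function names and data layout are illustrative assumptions, not the authors' implementation.

```python
import numpy as np
from scipy.linalg import eigh

def mesh_adjacency(num_vertices, faces):
    """Binary adjacency matrix built from the triangle list of a 3D face mesh."""
    W = np.zeros((num_vertices, num_vertices))
    for i, j, k in faces:                        # each triangle connects three vertices
        W[i, j] = W[j, i] = 1.0
        W[j, k] = W[k, j] = 1.0
        W[i, k] = W[k, i] = 1.0
    return W

def lpp_projection(X, W, n_components):
    """Locality preserving projection with a precomputed adjacency graph.

    X : (d, n) data matrix with one column per graph node (here, one per mesh vertex).
    """
    D = np.diag(W.sum(axis=1))                   # degree matrix
    L = D - W                                    # graph Laplacian
    A = X @ L @ X.T
    B = X @ D @ X.T + 1e-8 * np.eye(X.shape[0])  # small ridge for numerical stability
    eigvals, eigvecs = eigh(A, B)                # generalized eigenproblem A v = lambda B v
    return eigvecs[:, :n_components]             # directions with the smallest eigenvalues
```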
Abstract: Background: The accurate (quantitative) analysis of 3D face deformation is a problem of increasing interest in many applications. In particular, fitting a 3D model of face deformation to a 2D target image so as to capture local and asymmetric deformations remains a challenge in the existing literature. A measure of such local deformations may be a relevant index for monitoring the rehabilitation exercises of patients suffering from Parkinson's or Alzheimer's disease or those recovering from a stroke. Methods: This paper presents a complete framework for constructing a 3D morphable shape model (3DMM) of the face and fitting it to a target RGB image. The model has the specific characteristic of being based on localized components of deformation. The fitting transformation is performed from 3D to 2D and guided by the correspondence between landmarks detected in the target image and those manually annotated on the average 3DMM. The fitting is also performed in two steps to disentangle face deformations related to the identity of the target subject from those induced by facial actions. Results: The method was experimentally validated using the MICC-3D dataset, which includes 11 subjects. Each subject was imaged in one neutral pose and while performing 18 facial actions that deform the face in localized and asymmetric ways. For each acquisition, the 3DMM was fitted to an RGB frame, and the extent of the deformation was computed from the apex of the facial action and the neutral frame. The results indicate that the proposed approach can accurately capture face deformation, even when it is localized and asymmetric. Conclusion: The proposed framework demonstrates that it is possible to measure deformations of a reconstructed 3D face model to monitor facial actions performed in response to a set of targets. Interestingly, these results were obtained using only RGB targets, without the need for 3D scans captured with costly devices. This paves the way for the use of the proposed tool in remote medical rehabilitation monitoring.
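The fitting described above is guided by correspondences between 2D landmarks detected in the image and 3D landmarks annotated on the average 3DMM. Below is a minimal, hypothetical sketch of one standard way such correspondences can drive a 3D-to-2D alignment: a least-squares affine (weak-perspective) camera estimated from landmark pairs. It is a sketch of the projection step under stated assumptions, not the paper's full two-step 3DMM fitting.

```python
import numpy as np

def fit_affine_camera(lms_3d, lms_2d):
    """Least-squares affine (weak-perspective) camera from 3D-2D landmark pairs.

    lms_3d : (n, 3) landmarks annotated on the average 3DMM.
    lms_2d : (n, 2) landmarks detected in the target RGB image.
    Returns P of shape (2, 4) so that lms_2d is approximated by [lms_3d, 1] @ P.T.
    """
    n = lms_3d.shape[0]
    X = np.hstack([lms_3d, np.ones((n, 1))])         # homogeneous 3D landmark coordinates
    P, *_ = np.linalg.lstsq(X, lms_2d, rcond=None)   # (4, 2) least-squares solution
    return P.T

def reprojection_error(P, lms_3d, lms_2d):
    """Mean landmark reprojection error of the fitted camera, in pixels."""
    X = np.hstack([lms_3d, np.ones((lms_3d.shape[0], 1))])
    return float(np.mean(np.linalg.norm(X @ P.T - lms_2d, axis=1)))
```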
Abstract: At a constant pressure of 101.3 kPa, vapor-liquid equilibrium (VLE) data for the binary system methanol-DMM3 (polyoxymethylene dimethyl ethers with degree of polymerization n, i.e., DMMn) were measured using a modified Rose equilibrium still. The measured data were checked for thermodynamic consistency and were shown to satisfy the Gibbs-Duhem consistency test. Using Aspen Plus v7.1, the data were correlated with the Wilson, NRTL, and UNIQUAC activity coefficient models; the objective function was optimized by the maximum likelihood method, and the corresponding binary interaction parameters were regressed. Comparison of the correlated and experimental results gave mean absolute deviations in temperature and vapor-phase composition of less than 0.65 K and 0.0065, respectively. The results enrich the chemical engineering database and lay a foundation for process design and further study of systems containing methanol and DMM3.
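As a point of reference for the correlation step described above, the sketch below evaluates the binary Wilson activity-coefficient model; the interaction parameters shown are placeholder assumptions, whereas in the study they were regressed from the measured VLE data with a maximum-likelihood objective in Aspen Plus.

```python
import numpy as np

def wilson_binary(x1, lam12, lam21):
    """Binary Wilson activity coefficients (gamma1, gamma2) at liquid mole fraction x1.

    lam12 and lam21 are the Wilson interaction parameters; the values used below are
    placeholders, not regressed values for the methanol-DMM3 system.
    """
    x2 = 1.0 - x1
    s12 = x1 + lam12 * x2
    s21 = x2 + lam21 * x1
    common = lam12 / s12 - lam21 / s21
    ln_g1 = -np.log(s12) + x2 * common
    ln_g2 = -np.log(s21) - x1 * common
    return np.exp(ln_g1), np.exp(ln_g2)

# Illustrative evaluation across the composition range.
for x1 in (0.1, 0.5, 0.9):
    g1, g2 = wilson_binary(x1, lam12=0.8, lam21=1.2)
    print(f"x1={x1:.1f}  gamma1={g1:.3f}  gamma2={g2:.3f}")
```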
Funding: Supported in part by the National Natural Science Foundation of China (61972342, 61832016); the Science and Technology Department of Zhejiang Province (2018C01080); Zhejiang Province Public Welfare Technology Application Research (LGG22F020009); the Key Laboratory of Film and TV Media Technology of Zhejiang Province (2020E10015); and the Teaching Reform Project of Communication University of Zhejiang (jgxm202131).
Abstract: 3D morphable models (3DMMs) are generative models for face shape and appearance. Recent works impose face recognition constraints on 3DMM shape parameters so that the face shapes of the same person remain consistent. However, the shape parameters of traditional 3DMMs follow a multivariate Gaussian distribution, whereas identity embeddings follow a hyperspherical distribution, and this conflict makes it challenging for face reconstruction models to preserve faithfulness and shape consistency simultaneously. In other words, the recognition loss and the reconstruction loss cannot decrease jointly because of the conflicting distributions. To address this issue, we propose the Sphere Face Model (SFM), a novel 3DMM for monocular face reconstruction that preserves both shape fidelity and identity consistency. The core of our SFM is the basis matrix used to reconstruct 3D face shapes; the basis matrix is learned with a two-stage training approach in which 3D and 2D training data are used in the first and second stages, respectively. We design a novel loss to resolve the distribution mismatch, enforcing that the shape parameters follow a hyperspherical distribution. Our model accepts 2D and 3D data for constructing the sphere face model. Extensive experiments show that SFM has high representation ability and clustering performance in its shape parameter space. Moreover, it consistently produces high-fidelity face shapes under challenging conditions in monocular face reconstruction. The code will be released at https://github.com/a686432/SIR.
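A minimal sketch of the kind of hyperspherical constraint the abstract describes is given below, assuming shape-parameter vectors stored as NumPy arrays; the exact SFM loss is defined in the paper and its released code, so these penalties are only illustrative.

```python
import numpy as np

def sphere_constraint_loss(alpha, radius=1.0):
    """Penalize deviation of shape-parameter vectors from a hypersphere of given radius.

    alpha : (batch, k) shape parameters. This is an illustrative penalty, not the
    exact loss defined in the SFM paper.
    """
    norms = np.linalg.norm(alpha, axis=1)
    return float(np.mean((norms - radius) ** 2))

def identity_consistency_loss(alpha_a, alpha_b):
    """Cosine-based consistency: parameters of the same identity should align on the sphere."""
    a = alpha_a / np.linalg.norm(alpha_a, axis=1, keepdims=True)
    b = alpha_b / np.linalg.norm(alpha_b, axis=1, keepdims=True)
    return float(np.mean(1.0 - np.sum(a * b, axis=1)))
```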
Abstract: Objective: To address the problems of incomplete information and unrealistic texture details when recovering a facial texture map from a single face image, a generative adversarial network (GAN)-based method for generating a panoramic face texture map is proposed. Method: The feature relationship between the 2D face image and the 3D face model is converted into conditional parameters of the encoder; the probability distribution of the latent data is obtained from the multivariate Gaussian distribution of the image data and the face condition parameters and is used in the generator to learn the head and facial texture features of the subject. A panoramic texture map generation model is trained on a newly created face texture map dataset, and discriminators with different attributes evaluate the output and provide feedback, improving the completeness and realism of the generated texture maps. Results: The method was compared with current state-of-the-art methods. Single frontal face test images were randomly selected from the CelebA-HQ and LFW (labeled faces in the wild) datasets; in visual comparisons of the generated results and of the 3D-mapped renderings, the completeness and rendering quality of the texture maps were superior to those of the other methods. In terms of global and face-region pixel metrics, compared with UVGAN, the global peak signal-to-noise ratio (PSNR) and global structural similarity (SSIM) improved by 7.9 dB and 0.088, and the local PSNR and local SSIM improved by 2.8 dB and 0.053, respectively; compared with OSTeC, the global PSNR and global SSIM improved by 5.45 dB and 0.043, and the local PSNR and local SSIM improved by 0.4 dB and 0.044, respectively; compared with MVF-Net (multi-view 3D face network), the local PSNR and local SSIM improved by 0.6 dB and 0.119, respectively. The experimental results show that the proposed method solves the problem of incomplete facial texture reconstruction from a single face image and improves the detail of the generated texture maps. Conclusion: The proposed panoramic face texture map generation method exploits face parameters and the characteristics of the network model to produce more complete face texture maps; in particular, for regions invisible in the original image, pixels are recovered naturally and coherently, and the texture details are more realistic.
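The comparison above reports global and face-region (local) PSNR and SSIM. A minimal sketch of how such pixel metrics can be computed is shown below; the mask argument is a hypothetical boolean face-region mask, and SSIM would typically come from an existing library such as scikit-image rather than being reimplemented.

```python
import numpy as np

def psnr(img_a, img_b, max_val=255.0):
    """Global peak signal-to-noise ratio (dB) between two images of identical size."""
    mse = np.mean((img_a.astype(np.float64) - img_b.astype(np.float64)) ** 2)
    return float("inf") if mse == 0 else 10.0 * np.log10(max_val ** 2 / mse)

def masked_psnr(img_a, img_b, mask, max_val=255.0):
    """PSNR restricted to a boolean face-region mask, i.e. the 'local' variant."""
    diff = img_a.astype(np.float64)[mask] - img_b.astype(np.float64)[mask]
    mse = np.mean(diff ** 2)
    return float("inf") if mse == 0 else 10.0 * np.log10(max_val ** 2 / mse)
```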
Abstract: Objective: Face pose deflection is an important factor affecting face recognition accuracy. Using the 3D morphable model commonly employed in 3D face reconstruction together with deep convolutional neural networks, this paper proposes a face pose correction algorithm for multi-pose face recognition, which improves recognition accuracy under large poses to a certain extent. Method: The traditional 3D morphable model fitting method is improved: the model is parameterized with face shape and expression parameters, different weights are assigned to landmarks in different facial regions, and the model is fitted with these weights, so that face images with different poses and facial expressions are fitted more accurately. The 3D face model is then pose-corrected, and deep learning is used to inpaint the face image, filling irregular hole regions; the recent partial convolution technique is used and the convolutional neural network is retrained on a new dataset so that the network parameters are optimized. Results: On the LFW (labeled faces in the wild) face database and the Stirling ESRC (Economic and Social Research Council) 3D face database, the proposed algorithm was compared with other methods, and the experimental results show that its face recognition accuracy improves to a certain extent. On LFW, after pose correction and inpainting of face images with arbitrary poses, the proposed method achieved a face recognition accuracy of 96.57%. On the Stirling ESRC database, for face poses of ±22°, recognition accuracy improved by 5.195% and 2.265%, respectively; for face poses of ±45°, it improved by 5.875% and 11.095%, respectively; and the average recognition rate improved by 5.53% and 7.13%, respectively. The comparative results indicate that the proposed face pose correction algorithm effectively improves face recognition accuracy. Conclusion: The proposed face pose correction algorithm combines the advantages of the 3D morphable model and deep learning models, and improves face recognition accuracy to a certain extent at all face pose angles.
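The weighted 3DMM fitting described above assigns larger weights to landmarks in specific facial regions. A minimal sketch of a weighted, ridge-regularized least-squares solve for the shape/expression coefficients is shown below; the basis, mean, and weight arrays are assumed inputs, and this is only an illustration of the weighting idea, not the paper's actual implementation.

```python
import numpy as np

def weighted_landmark_fit(basis, target, mean, weights, reg=1e-3):
    """Weighted, ridge-regularized least squares for 3DMM shape/expression coefficients.

    basis   : (3m, k) shape and expression basis restricted to m landmark vertices.
    target  : (3m,)   observed landmark positions to fit.
    mean    : (3m,)   mean-face landmark positions.
    weights : (m,)    per-landmark weights (e.g. larger around the eyes and mouth).
    """
    w = np.repeat(weights, 3)                          # one weight per x/y/z coordinate
    A = basis * w[:, None]                             # weight the basis rows
    b = (target - mean) * w                            # weight the residual target
    coeffs = np.linalg.solve(A.T @ A + reg * np.eye(basis.shape[1]), A.T @ b)
    return coeffs
```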