Journal Articles
7 articles found.
3D Face Reconstruction from a Single Image Using a Combined PCA-LPP Method
1
Authors: Jee-Sic Hur, Hyeong-Geun Lee, Shinjin Kang, Yeo Chan Yoon, Soo Kyun Kim. Computers, Materials & Continua (SCIE, EI), 2023, Issue 3, pp. 6213-6227 (15 pages).
In this paper, we propose a combined PCA-LPP algorithm to improve 3D face reconstruction performance. Principal component analysis (PCA) is commonly used to compress images and extract features. One disadvantage of PCA is local feature loss. To address this, various studies have proposed combining PCA with a locality preserving projection (LPP). However, the existing PCA-LPP method is unsuitable for 3D face reconstruction because it focuses on data classification and clustering. In the existing PCA-LPP, the adjacency graph, which primarily shows the connection relationships between data, is composed using the ε- or k-nearest neighbor techniques. By contrast, in this study, complex and detailed parts, such as wrinkles around the eyes and mouth, can be reconstructed by composing the topology of the 3D face model as an adjacency graph and extracting local features from the connection relationships between the 3D model vertices. Experiments verified the effectiveness of the proposed method. When the proposed method was applied to the 3D face reconstruction evaluation set, a performance improvement of 10% to 20% was observed compared with the existing PCA-based method.
Keywords: principal component analysis, locality preserving projection, 3DMM, face reconstruction, face modeling
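The key change described in the abstract above, building the LPP adjacency graph from the 3D mesh topology rather than from ε- or k-nearest neighbors, can be sketched as follows. This is a toy illustration with invented data and a hypothetical 6-vertex mesh, not the authors' implementation; the PCA compression stage is omitted for brevity.

```python
import numpy as np
from scipy.linalg import eigh

# Hypothetical toy setup: 6 mesh vertices observed over 20 face samples.
# Each vertex i has a feature vector x_i (its values across the samples).
rng = np.random.default_rng(0)
n_vertices, n_samples = 6, 20
X = rng.normal(size=(n_samples, n_vertices))   # columns are per-vertex features

# Adjacency W taken from the mesh topology (edge list), not from k-NN,
# which is the change the abstract describes.
edges = [(0, 1), (1, 2), (2, 3), (3, 4), (4, 5), (5, 0)]
W = np.zeros((n_vertices, n_vertices))
for i, j in edges:
    W[i, j] = W[j, i] = 1.0
D = np.diag(W.sum(axis=1))
L = D - W                                      # graph Laplacian

# LPP: minimise a^T X L X^T a subject to a^T X D X^T a = 1,
# i.e. take the smallest generalised eigenvectors of (X L X^T, X D X^T).
A = X @ L @ X.T
B = X @ D @ X.T + 1e-6 * np.eye(n_samples)     # regularised for positive definiteness
vals, vecs = eigh(A, B)                        # ascending eigenvalues
P_lpp = vecs[:, :3]                            # locality-preserving basis

# Embed the vertices: connected vertices stay close in the embedding.
embedded = P_lpp.T @ X                         # shape (3, n_vertices)
print(embedded.shape)
```

In this formulation the locality being preserved is mesh connectivity, so neighboring vertices (e.g., adjacent points around the eyes or mouth) remain neighbors after projection, which is the intuition behind recovering fine wrinkles.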
Advancing Wound Filling Extraction on 3D Faces: An Auto-Segmentation and Wound Face Regeneration Approach
2
Authors: Duong Q. Nguyen, Thinh D. Le, Phuong D. Nguyen, Nga T. K. Le, H. Nguyen-Xuan. Computer Modeling in Engineering & Sciences (SCIE, EI), 2024, Issue 5, pp. 2197-2214 (18 pages).
Facial wound segmentation plays a crucial role in preoperative planning and optimizing patient outcomes in various medical applications. In this paper, we propose an efficient approach for automating 3D facial wound segmentation using a two-stream graph convolutional network. Our method leverages the Cir3D-FaIR dataset and addresses the challenge of data imbalance through extensive experimentation with different loss functions. To achieve accurate segmentation, we conducted thorough experiments and selected a high-performing model from the trained models. The selected model demonstrates exceptional segmentation performance for complex 3D facial wounds. Furthermore, based on the segmentation model, we propose an improved approach for extracting 3D facial wound fillers and compare it to the results of the previous study. Our method achieved a remarkable accuracy of 0.9999993% on the test suite, surpassing the performance of the previous method. From this result, we use 3D printing technology to illustrate the shape of the wound filling. The outcomes of this study have significant implications for physicians involved in preoperative planning and intervention design. By automating facial wound segmentation and improving the accuracy of wound-filling extraction, our approach can assist in carefully assessing and optimizing interventions, leading to enhanced patient outcomes. Additionally, it contributes to advancing facial reconstruction techniques by utilizing machine learning and 3D bioprinting for printing skin tissue implants. Our source code is available at https://github.com/SIMOGroup/WoundFilling3D.
Keywords: 3D printing technology, face reconstruction, 3D segmentation, 3D printed model
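The abstract above mentions experimenting with different loss functions to handle data imbalance; a soft Dice loss is one standard choice for that purpose. The toy data below are invented, and the paper may well use a different loss:

```python
import numpy as np

def dice_loss(pred, target, eps=1e-6):
    """Soft Dice loss, a common choice for imbalanced segmentation.
    pred: per-vertex wound probabilities in [0, 1]; target: {0, 1} labels."""
    inter = np.sum(pred * target)
    return 1.0 - (2.0 * inter + eps) / (np.sum(pred) + np.sum(target) + eps)

# Imbalanced toy case: only 2 of 10 vertices belong to the wound.
target = np.array([0, 0, 0, 0, 1, 1, 0, 0, 0, 0], dtype=float)
good   = np.array([0.1, 0.1, 0.1, 0.1, 0.9, 0.8, 0.1, 0.1, 0.1, 0.1])
bad    = np.full(10, 0.2)    # predicts "a little wound everywhere"

print(round(dice_loss(good, target), 3))
print(round(dice_loss(bad, target), 3))
```

Unlike plain cross-entropy averaged over vertices, the Dice loss is dominated by overlap with the small positive class, so a model cannot score well by simply predicting "no wound" everywhere.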
3D Face Reconstruction Using Images from Cameras with Varying Parameters
3
Authors: Mostafa Merras, Soulaiman El Hazzat, Abderrahim Saaidi, Khalid Satori, Abderrazak Gadhi Nazih. International Journal of Automation and Computing (EI, CSCD), 2017, Issue 6, pp. 661-671 (11 pages).
In this paper, we present a new technique for 3D face reconstruction from a sequence of images taken with cameras having varying parameters, without the need for a calibration grid. The method is based on estimating the projection matrices of the cameras from a symmetry property that characterizes the face. These projection matrices are used with point matching in each pair of images to determine the 3D point cloud; subsequently, the 3D mesh of the face is constructed with the Crust algorithm. Lastly, the 2D image is projected onto the 3D model to generate the texture mapping. The strong point of the proposed approach is that it minimizes the constraints of the calibration system: we calibrate the cameras from a symmetry property that characterizes the face. This property lets us locate some 3D face points in a well-chosen global reference frame and formulate a system of linear and nonlinear equations relating these 3D points, their projections in the image plane, and the elements of the projection matrices. To solve these equations, we use a genetic algorithm, which finds the global optimum without requiring an initial estimate and avoids the local minima of the formulated cost function. Our study is conducted on real data to demonstrate the validity and performance of the proposed approach in terms of robustness, simplicity, stability, and convergence.
Keywords: camera calibration, genetic algorithm, 3D face, 3D mesh, 3D reconstruction
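As a toy illustration of the claim above that a genetic algorithm can recover camera parameters without an initial estimate, the sketch below evolves a population of focal-length guesses to minimize reprojection error. The scene, the true focal length, and the GA settings are all invented for illustration; the paper estimates full projection matrices rather than a single scalar.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical setup: known symmetric 3D face points (mirror pairs about x = 0)
# observed in an image produced by an unknown focal length f_true.
pts3d = np.array([[-30, 0, 100], [30, 0, 100],
                  [-20, 40, 110], [20, 40, 110]], dtype=float)
f_true = 800.0
obs = f_true * pts3d[:, :2] / pts3d[:, 2:3]      # pinhole projection u = f*X/Z

def cost(f):
    """Sum of squared reprojection errors for a candidate focal length."""
    proj = f * pts3d[:, :2] / pts3d[:, 2:3]
    return np.sum((proj - obs) ** 2)

# Minimal genetic algorithm: wide random initialization, so no initial
# estimate is needed, which is the advantage the abstract claims.
pop = rng.uniform(100.0, 2000.0, size=40)        # random focal-length guesses
for _ in range(60):
    fitness = np.array([cost(f) for f in pop])
    parents = pop[np.argsort(fitness)[:10]]      # selection: keep the 10 best
    children = rng.choice(parents, 30) + rng.normal(0.0, 5.0, 30)  # mutation
    pop = np.concatenate([parents, children])    # elitism: parents survive

best = pop[np.argmin([cost(f) for f in pop])]
print(best)
```

Because the ten best individuals are carried over unchanged each generation, the best cost never increases, and the population converges close to f_true without any starting guess.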
Sphere Face Model: A 3D morphable model with hypersphere manifold latent space using joint 2D/3D training (Cited by: 1)
4
Authors: Diqiong Jiang, Yiwei Jin, Fang-Lue Zhang, Zhe Zhu, Yun Zhang, Ruofeng Tong, Min Tang. Computational Visual Media (SCIE, EI, CSCD), 2023, Issue 2, pp. 279-296 (18 pages).
3D morphable models (3DMMs) are generative models for face shape and appearance. Recent works impose face recognition constraints on 3DMM shape parameters so that the face shapes of the same person remain consistent. However, the shape parameters of traditional 3DMMs follow a multivariate Gaussian distribution, whereas identity embeddings lie on a hypersphere; this conflict makes it challenging for face reconstruction models to preserve faithfulness and shape consistency simultaneously. In other words, the recognition loss and the reconstruction loss cannot decrease jointly because of their conflicting distributions. To address this issue, we propose the Sphere Face Model (SFM), a novel 3DMM for monocular face reconstruction that preserves both shape fidelity and identity consistency. The core of our SFM is the basis matrix used to reconstruct 3D face shapes; this basis matrix is learned with a two-stage training approach in which 3D and 2D training data are used in the first and second stages, respectively. We design a novel loss to resolve the distribution mismatch, enforcing that the shape parameters follow a hyperspherical distribution. Our model accepts both 2D and 3D data for constructing the sphere face models. Extensive experiments show that SFM has high representation ability and good clustering performance in its shape parameter space. Moreover, it consistently produces high-fidelity face shapes under challenging conditions in monocular face reconstruction. The code will be released at https://github.com/a686432/SIR.
Keywords: facial modeling, deep learning, face reconstruction, 3D morphable model (3DMM)
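A minimal sketch of the hypersphere idea described above, with an invented basis matrix and dimensions: constraining shape parameters to unit norm makes Gaussian-style codes compatible with cosine-based identity losses. This is not the authors' SFM training code.

```python
import numpy as np

# Toy illustration of the distribution mismatch the abstract describes:
# classical 3DMM shape codes are Gaussian, identity embeddings live on a
# hypersphere. A simple remedy in the spirit of SFM (details assumed) is to
# project shape parameters onto the unit sphere before decoding.
rng = np.random.default_rng(2)
dim, n_basis = 9, 5
basis = rng.normal(size=(dim, n_basis))     # hypothetical shape basis matrix
mean_shape = rng.normal(size=dim)

def decode(alpha):
    """Reconstruct a face shape from spherical shape parameters."""
    alpha = alpha / np.linalg.norm(alpha)   # constrain to the unit hypersphere
    return mean_shape + basis @ alpha

a = rng.normal(size=n_basis)
shape = decode(a)

# A cosine-style consistency loss between two parameter vectors of the same
# identity is now well-defined, since both lie on the sphere.
b = a + 0.05 * rng.normal(size=n_basis)     # slightly perturbed "same identity"
cos = np.dot(a / np.linalg.norm(a), b / np.linalg.norm(b))
identity_loss = 1.0 - cos
print(shape.shape, identity_loss)
```

With both the shape code and the identity embedding on a sphere, a recognition-style cosine loss and a reconstruction loss can pull on the same representation without fighting over its norm, which is the conflict the abstract points out.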
Realistic face modeling based on multiple deformations (Cited by: 2)
5
Authors: GONG Xun, WANG Guo-yin. The Journal of China Universities of Posts and Telecommunications (EI, CSCD), 2007, Issue 4, pp. 110-117 (8 pages).
Keywords: face reconstruction, deforming model, texture synthesis
Joint 3D facial shape reconstruction and texture completion from a single image (Cited by: 1)
6
Authors: Xiaoxing Zeng, Zhelun Wu, Xiaojiang Peng, Yu Qiao. Computational Visual Media (SCIE, EI, CSCD), 2022, Issue 2, pp. 239-256 (18 pages).
Recent years have witnessed significant progress in image-based 3D face reconstruction using deep convolutional neural networks. However, current reconstruction methods often perform poorly in self-occluded regions and can produce inaccurate correspondences between a 2D input image and a 3D face template, hindering their use in real applications. To address these problems, we propose a deep shape reconstruction and texture completion network, SRTC-Net, which jointly reconstructs 3D facial geometry and completes the texture with correspondences from a single input face image. In SRTC-Net, we leverage the geometric cues from the completed 3D texture to reconstruct detailed structures of 3D shapes. The SRTC-Net pipeline has three stages. The first introduces a correspondence network to identify pixel-wise correspondence between the input 2D image and a 3D template model, and transfers the input 2D image to a U-V texture map. We then complete the invisible and occluded areas in the U-V texture map using an inpainting network. To obtain the 3D facial geometry, we predict the coarse shape (U-V position maps) from the face segmented by the correspondence network using a shape network, and then refine the coarse 3D shape by regressing the U-V displacement map from the completed U-V texture map in a pixel-to-pixel way. We evaluate our method on 3D reconstruction tasks as well as face frontalization and pose-invariant face recognition tasks, using both in-the-lab datasets (MICC, MultiPIE) and an in-the-wild dataset (CFP). The qualitative and quantitative results demonstrate the effectiveness of our method at inferring 3D facial geometry and complete texture; it outperforms or is comparable to the state of the art.
Keywords: 3D face reconstruction, U-V completion, pose invariant face recognition, deep learning
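The coarse-to-fine step in the pipeline above reduces, in its simplest form, to adding a regressed U-V displacement map to a coarse U-V position map. The sketch below illustrates this with random arrays and a trivial mean-color stand-in for the inpainting network; all shapes and values are assumptions, not the SRTC-Net architecture.

```python
import numpy as np

# Toy U-V maps: a position map stores a 3D coordinate per texel, and a
# displacement map regressed from the completed texture refines it
# pixel-to-pixel, as the abstract describes.
H, W = 8, 8
rng = np.random.default_rng(3)
coarse_pos = rng.normal(size=(H, W, 3))            # coarse U-V position map
disp = 0.02 * rng.normal(size=(H, W, 3))           # regressed U-V displacement map
refined_pos = coarse_pos + disp                    # refined 3D geometry

# Texture completion, reduced to its simplest possible form: fill occluded
# texels (mask == 0) with the mean color of the visible texels. The real
# pipeline uses a learned inpainting network here.
tex = rng.uniform(size=(H, W, 3))
mask = (rng.uniform(size=(H, W)) > 0.3).astype(float)   # 1 = visible texel
fill = tex[mask == 1].mean(axis=0)
completed = tex * mask[..., None] + fill * (1 - mask[..., None])

print(refined_pos.shape, completed.shape)
```

The point of the U-V parameterization is that both geometry and texture become image-shaped arrays, so standard pixel-to-pixel networks can regress the displacement and inpaint the texture.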
Real-time face view correction for front-facing cameras
7
Authors: Yudong Guo, Juyong Zhang, Yihua Chen, Hongrui Cai, Zhangjin Huang, Bailin Deng. Computational Visual Media (EI, CSCD), 2021, Issue 4, pp. 437-452 (16 pages).
Face views are particularly important in person-to-person communication. Differences between the camera location and the face orientation can result in undesirable facial appearances of the participants during video conferencing. This phenomenon is particularly noticeable when using devices whose front-facing camera is placed in unconventional locations, such as below the display or within the keyboard. In this paper, we take a video stream from a single RGB camera as input and generate a video stream that emulates the view from a virtual camera at a designated location. The most challenging issue is that the corrected view often requires out-of-plane head rotations. To address this challenge, we reconstruct the 3D face shape and re-render it into synthesized frames according to the virtual camera location. To output the corrected video stream with a natural appearance in real time, we propose several novel techniques, including accurate eyebrow reconstruction, high-quality blending between the corrected face image and the background, and template-based 3D reconstruction of glasses. Our system works well for different lighting conditions and skin tones, and can handle users wearing glasses. Extensive experiments and user studies demonstrate that our method provides high-quality results.
Keywords: face view correction, 3D face reconstruction, deep learning, online communication
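The core correction step, re-rendering the reconstructed face from a virtual camera pose, can be illustrated with a minimal pinhole projection. The geometry and numbers below are invented; the paper's pipeline additionally handles blending, eyebrow reconstruction, and glasses.

```python
import numpy as np

def project(points, R, t, f=500.0):
    """Pinhole projection of Nx3 world points under camera pose (R, t)."""
    cam = points @ R.T + t              # world -> camera coordinates
    return f * cam[:, :2] / cam[:, 2:3]

# A few reconstructed 3D face points (hypothetical units, camera-facing).
pts = np.array([[0.0, 0.0, 60.0], [5.0, 2.0, 62.0], [-5.0, 2.0, 62.0]])

# Real camera: looking up at the face from below the display
# (rotation about the x-axis by 15 degrees).
th = np.deg2rad(15.0)
R_real = np.array([[1.0, 0.0, 0.0],
                   [0.0, np.cos(th), -np.sin(th)],
                   [0.0, np.sin(th),  np.cos(th)]])
# Virtual camera: frontal, eye-level view (identity rotation).
R_virt = np.eye(3)
t = np.zeros(3)

uv_real = project(pts, R_real, t)       # distorted "looking up the nose" view
uv_virt = project(pts, R_virt, t)       # corrected frontal view
print(uv_virt)
```

Because the correction involves this out-of-plane rotation, parts of the face invisible to the real camera must be synthesized, which is why the method reconstructs full 3D shape rather than warping the image in 2D.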