Funding: Project supported by the National Natural Science Foundation of China (Nos. 61340046, 60875050, and 60675025), the National High-Tech R&D Program (863) of China (No. 2006AA04Z247), the Scientific and Technical Innovation Commission of Shenzhen Municipality (Nos. JCYJ20120614152234873, CXC201104210010A, JCYJ20130331144631730, and JCYJ20130331144716089), and the Specialized Research Fund for the Doctoral Program of Higher Education, China (No. 20130001110011)
Abstract: Hand-biometric-based personal identification is considered an effective method for automatic recognition. However, existing systems impose strict constraints during data acquisition, such as costly devices, specified postures, simple backgrounds, and stable illumination. In this paper, a contactless personal identification system is proposed based on matching hand geometry features and color features. An inexpensive Kinect sensor is used to acquire depth and color images of the hand. During image acquisition, no pegs or surfaces are used to constrain hand position or posture. We segment the hand from the background using depth images, through a process that is insensitive to illumination and background. Finger orientations and landmark points, such as fingertips and finger valleys, are then obtained by geodesic hand contour analysis. Geometric features are extracted from depth images, and palmprint features from intensity images. In previous systems, hand features such as finger length and width are normalized, which results in the loss of the original geometric information. In our system, we transform 2D image points into real-world coordinates, so that the geometric features remain invariant to distance and perspective effects. Extensive experiments demonstrate that the proposed hand-biometric-based personal identification system is effective and robust in various practical situations.
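The distance-invariance claim above follows from standard pinhole back-projection: given a pixel and its measured depth, the camera intrinsics recover metric 3D coordinates, so a finger length computed between two back-projected landmarks does not change when the hand moves closer or farther. A minimal sketch, assuming Kinect-like intrinsic values (the focal lengths and principal point below are illustrative, not the paper's calibration):

```python
import numpy as np

FX = FY = 525.0        # assumed focal lengths in pixels (Kinect-like)
CX, CY = 319.5, 239.5  # assumed principal point

def pixel_to_world(u, v, depth_mm):
    """Back-project pixel (u, v) with measured depth into camera-space mm."""
    z = depth_mm
    x = (u - CX) * z / FX
    y = (v - CY) * z / FY
    return np.array([x, y, z])

def world_distance(uvz1, uvz2):
    """Euclidean distance in mm between two back-projected landmarks."""
    return float(np.linalg.norm(pixel_to_world(*uvz1) - pixel_to_world(*uvz2)))

# The same hand feature (e.g. a finger length) observed at 800 mm:
near = world_distance((400.0, 200.0, 800.0), (380.0, 300.0, 800.0))
# At twice the depth, pixel offsets from the principal point halve,
# yet the recovered metric length is unchanged:
far = world_distance((359.75, 219.75, 1600.0), (349.75, 269.75, 1600.0))
```

Here `near` and `far` come out equal, illustrating why features measured in world coordinates need no normalization by apparent hand size.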
Funding: Supported by the National Natural Science Foundation of China (Nos. U1613211 and U1813218) and the Shenzhen Research Program (Nos. JCYJ20170818164704758 and JCYJ20150925163005055).
Abstract: Recent years have witnessed significant progress in image-based 3D face reconstruction using deep convolutional neural networks. However, current reconstruction methods often perform poorly in self-occluded regions and can produce inaccurate correspondences between a 2D input image and a 3D face template, hindering their use in real applications. To address these problems, we propose a deep shape reconstruction and texture completion network, SRTC-Net, which jointly reconstructs 3D facial geometry and completes texture with correspondences from a single input face image. In SRTC-Net, we leverage geometric cues from the completed 3D texture to reconstruct detailed structures of 3D shapes. The SRTC-Net pipeline has three stages. The first introduces a correspondence network that identifies pixel-wise correspondences between the input 2D image and a 3D template model, and transfers the input 2D image to a U-V texture map. Second, we complete the invisible and occluded areas of the U-V texture map using an inpainting network. To obtain the 3D facial geometry, we predict a coarse shape (a U-V position map) from the face segmented by the correspondence network using a shape network, and then refine the coarse shape by regressing a U-V displacement map from the completed U-V texture map in a pixel-to-pixel way. We evaluate our methods on 3D reconstruction tasks as well as face frontalization and pose-invariant face recognition tasks, using both in-the-lab datasets (MICC, MultiPIE) and an in-the-wild dataset (CFP). The qualitative and quantitative results demonstrate the effectiveness of our methods for inferring 3D facial geometry and complete texture; they outperform or are comparable to the state of the art.
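The data flow of the last two stages can be sketched as plain tensor operations: occluded U-V pixels are filled from the inpainting output under a visibility mask, and the final shape is the coarse U-V position map plus the regressed per-pixel displacement. This is a hedged interpretation of the pipeline, not the authors' implementation; the tiny 4x4 maps and `visibility` mask below are hypothetical stand-ins for the real network outputs.

```python
import numpy as np

# Hypothetical 4x4 U-V maps with 3 channels, standing in for network outputs.
partial_texture = np.ones((4, 4, 3))            # visible pixels mapped to U-V
inpainted = np.full((4, 4, 3), 0.5)             # inpainting-network output
visibility = np.zeros((4, 4))
visibility[:2] = 1.0                            # top half of the map visible

def composite_texture(partial, filled, mask):
    """Stage 2: keep visible U-V pixels, take occluded ones from inpainting."""
    m = mask[..., None]                         # broadcast mask over channels
    return m * partial + (1.0 - m) * filled

def refine_shape(coarse_positions, displacement):
    """Stage 3: add the regressed U-V displacement map to the coarse shape."""
    return coarse_positions + displacement

completed = composite_texture(partial_texture, inpainted, visibility)
refined = refine_shape(np.zeros((4, 4, 3)), np.full((4, 4, 3), 0.1))
```

Working in U-V space keeps every step a dense pixel-to-pixel mapping, which is why both the inpainting and the displacement regression can be posed as image-to-image problems.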