Journal Articles: 11 results found
1. Deepfake Video Detection Employing Human Facial Features
Authors: Daniel Schilling Weiss Nguyen, Desmond T. Ademiluyi. Journal of Computer and Communications, 2023, Issue 12, pp. 1-13.
Deepfake technology can be used to replace people's faces in videos or pictures to show them saying or doing things they never said or did. Deepfake media are often used to extort, defame, and manipulate public opinion. However, despite deepfake technology's risks, current deepfake detection methods lack generalization and are inconsistent when applied to unknown videos, i.e., videos on which they have not been trained. The purpose of this study is to develop a generalizable deepfake detection model by training convolutional neural networks (CNNs) to classify human facial features in videos. The study formulated the research question: "How effectively does the developed model provide reliable generalizations?" A CNN model was trained to distinguish between real and fake videos using the facial features of human subjects in videos. The model was trained, validated, and tested using the FaceForensics++ dataset, which contains more than 500,000 frames, and subsets of the DFDC dataset, totaling more than 22,000 videos. The study demonstrated high generalizability, as the accuracy on the unknown dataset was only marginally (about 1%) lower than that on the known dataset. The findings indicate that detection systems can be made more generalizable, lighter, and faster by focusing on just a small region (the human face) of an entire video.
Keywords: artificial intelligence; convolutional neural networks; deepfake; GANs; generalization; deep learning; facial features; video frames
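The approach described above, training a CNN on cropped face regions rather than on whole frames, can be pictured with a minimal sketch. The PyTorch model below is not the authors' architecture; the layer sizes, input resolution, and real/fake class layout are illustrative assumptions.

```python
# Minimal sketch (not the authors' code): a frame-level CNN that classifies
# cropped face regions as real or fake. Layer sizes are illustrative.
import torch
import torch.nn as nn

class FaceFrameCNN(nn.Module):
    def __init__(self, num_classes: int = 2):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(64, 128, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.classifier = nn.Linear(128, num_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: batch of face crops, shape (N, 3, H, W)
        return self.classifier(self.features(x).flatten(1))

if __name__ == "__main__":
    model = FaceFrameCNN()
    faces = torch.randn(4, 3, 128, 128)   # four dummy face crops
    logits = model(faces)                 # (4, 2): real vs. fake scores
    print(logits.shape)
```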
2. Facial Features of an Air Gun Array Wavelet in the Time-Frequency Domain Based on Marine Vertical Cables (cited 2 times)
Authors: ZHANG Dong, LIU Huaishan, XING Lei, WEI Jia, WANG Jianhua, ZHOU Heng, GE Xinmin. Journal of Ocean University of China (SCIE, CAS, CSCD), 2021, Issue 6, pp. 1371-1382.
Air gun arrays are often used in marine energy exploration and marine geological surveys. The study of single-bubble dynamics and of the multibubbles produced by air guns interacting with each other is helpful in understanding pressure signals. We used the van der Waals air gun model to simulate the wavelets of a sleeve gun at various offsets and arrival angles. Several factors were taken into account, such as heat transfer, the thermodynamically open quasi-static system, the vertical rise of the bubble, and air gun post throttling. Marine vertical cables are located on the seafloor, but the hydrophones are located in seawater, far from the air gun array vertically. This situation conforms to the acquisition conditions of the air gun far-field wavelet and thus avoids the problems of ship noise, ocean surges, and coupling. High-quality 3D wavelet data of air gun arrays were collected during a vertical cable test in the South China Sea in 2017. We propose an evaluation method based on multidimensional facial features, including zero-peak amplitude, peak-peak amplitude, bubble period, primary-to-bubble ratio, frequency spectrum, instantaneous amplitude, instantaneous phase, and instantaneous frequency, to characterize the 3D air gun wave field. The match between the facial features of the field and simulated data provides confidence in using the van der Waals air gun model to predict air gun wavelets and in using facial features to evaluate air gun arrays.
Keywords: air gun; van der Waals; marine vertical cable; facial features; multidimensional
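The instantaneous amplitude, phase, and frequency attributes listed in the abstract are conventionally derived from the analytic signal. The sketch below is an illustrative computation using SciPy's Hilbert transform, not the authors' processing code; the synthetic wavelet, the sampling interval, and the split point used for the primary-to-bubble ratio are assumptions.

```python
# Illustrative sketch (not the authors' code): instantaneous attributes of a
# recorded air-gun wavelet via the Hilbert transform, plus a simple
# primary-to-bubble ratio.
import numpy as np
from scipy.signal import hilbert

def wavelet_attributes(signal: np.ndarray, dt: float) -> dict:
    analytic = hilbert(signal)
    amplitude = np.abs(analytic)                      # instantaneous amplitude
    phase = np.unwrap(np.angle(analytic))             # instantaneous phase
    frequency = np.diff(phase) / (2.0 * np.pi * dt)   # instantaneous frequency (Hz)
    return {"amplitude": amplitude, "phase": phase, "frequency": frequency}

def primary_to_bubble_ratio(signal: np.ndarray, primary_end: int) -> float:
    # Ratio of the primary peak amplitude to the strongest bubble oscillation
    # after a (user-supplied) end index of the primary pulse.
    primary = np.max(np.abs(signal[:primary_end]))
    bubble = np.max(np.abs(signal[primary_end:]))
    return primary / bubble

if __name__ == "__main__":
    dt = 0.001                                            # 1 ms sampling interval
    t = np.arange(0.0, 1.0, dt)
    demo = np.exp(-5 * t) * np.sin(2 * np.pi * 40 * t)    # synthetic decaying wavelet
    attrs = wavelet_attributes(demo, dt)
    print(attrs["frequency"][:5], primary_to_bubble_ratio(demo, 200))
```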
3. Novel Facial Features Segmentation Algorithm
Authors: 姜微, 沈庭芝, 王晓华, 张健. Journal of Beijing Institute of Technology (EI, CAS), 2008, Issue 4, pp. 478-483.
An efficient algorithm for facial feature extraction is proposed. The facial features segmented are the two eyes, the nose, and the mouth. The algorithm is based on an improved Gabor wavelet edge detector, a morphological approach to detect the face region and the facial feature regions, and an improved T-shape face mask to locate the exact positions of the facial features. Experimental results show that the proposed method is robust against variations in facial expression and illumination, and remains effective when the person is wearing glasses.
Keywords: facial feature; segmentation; Gabor wavelets; morphological approach; T-shape mask
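A rough idea of the two building blocks named in the abstract, a Gabor-wavelet edge detector followed by morphological cleanup, is sketched below with OpenCV. It is not the paper's algorithm; the filter-bank parameters, threshold rule, and structuring element are illustrative assumptions, and the improved T-shape mask step is omitted.

```python
# Hedged sketch (not the paper's implementation): Gabor filter-bank edge
# response followed by simple morphological cleanup. Parameters are illustrative.
import cv2
import numpy as np

def gabor_edge_map(gray: np.ndarray, n_orientations: int = 4) -> np.ndarray:
    responses = []
    for k in range(n_orientations):
        theta = k * np.pi / n_orientations
        kernel = cv2.getGaborKernel((21, 21), sigma=4.0, theta=theta,
                                    lambd=10.0, gamma=0.5, psi=0)
        responses.append(cv2.filter2D(gray.astype(np.float32), cv2.CV_32F, kernel))
    # Take the maximum response over orientations as the edge strength.
    return np.max(np.stack(responses), axis=0)

def clean_regions(edge_map: np.ndarray) -> np.ndarray:
    # Threshold and close small gaps so eye/nose/mouth regions form blobs.
    binary = (edge_map > edge_map.mean() + edge_map.std()).astype(np.uint8) * 255
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (5, 5))
    return cv2.morphologyEx(binary, cv2.MORPH_CLOSE, kernel)

if __name__ == "__main__":
    face = np.random.randint(0, 256, (128, 128), dtype=np.uint8)  # stand-in image
    print(clean_regions(gabor_edge_map(face)).shape)
```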
4. Automatic Location of Main Facial Features in Front-View Images
Authors: Wang Lei, Mo Yulong, Qi Feihu. Advances in Manufacturing (SCIE, CAS), 1998, Issue 4, pp. 4-11.
Due to the increasing demand in personal identification and security work, automatic face recognition has be...
Keywords: facial feature location; integral projection; pixel clustering; face recognition
5. Quantification of Cranial Asymmetry in Infants by Facial Feature Extraction
Authors: Chun-Ming Chang, Wei-Cheng Li, Chung-Lin Huang, Pei-Yeh Chang. Journal of Electronic Science and Technology (CAS), 2014, Issue 4, pp. 410-414.
In this paper, a facial feature extraction method is proposed to transform three-dimensional (3D) head images of infants with deformational plagiocephaly for the assessment of asymmetry. The features of the 3D point cloud of an infant's cranium can be identified by local feature analysis and a two-phase k-means classification algorithm. The 3D images of infants with an asymmetric cranium can then be aligned to the same pose. The mirrored head model obtained from the symmetry plane is compared with the original model to measure asymmetry. Numerical data on the cranial volume can be reviewed by a pediatrician to adjust the treatment plan. The system can also be used to demonstrate treatment progress.
Keywords: cranial asymmetry; deformational plagiocephaly; facial feature; image registration
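The comparison between a head model and its mirror image across the symmetry plane can be sketched in a few lines. The code below is a simplified stand-in for the paper's measurement, assuming the point cloud has already been aligned so that the mid-sagittal plane is x = 0; the nearest-neighbour distance score is an illustrative asymmetry measure, not the authors' metric.

```python
# Illustrative sketch (not the authors' pipeline): mirror a 3D head point cloud
# across an assumed mid-sagittal plane (x = 0 after alignment) and score
# asymmetry as the mean nearest-neighbour distance between the original and
# mirrored clouds.
import numpy as np
from scipy.spatial import cKDTree

def asymmetry_score(points: np.ndarray) -> float:
    # points: (N, 3) array, already aligned so the symmetry plane is x = 0
    mirrored = points * np.array([-1.0, 1.0, 1.0])
    tree = cKDTree(mirrored)
    distances, _ = tree.query(points)   # distance from each original point
    return float(distances.mean())      # 0 for a perfectly symmetric cranium

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    cloud = rng.normal(size=(1000, 3))  # stand-in point cloud
    print(asymmetry_score(cloud))
```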
6. Active Shape Model of Combining PCA and ICA: Application to Facial Feature Extraction
Authors: 邓琳, 饶妮妮, 王刚. Journal of Electronic Science and Technology of China, 2006, Issue 2, pp. 114-117.
The Active Shape Model (ASM) is a powerful statistical tool for extracting the facial features of a face image under frontal view. It mainly relies on Principal Component Analysis (PCA) to statistically model the variability in a training set of example shapes. Independent Component Analysis (ICA) has been proven more efficient than PCA for extracting face features. In this paper, we combine PCA and ICA in a consecutive strategy to form a novel ASM. First, an initial model, which captures the global shape variability in the training set, is generated by the PCA-based ASM. Then, the final shape model, which contains more local characteristics, is established by the ICA-based ASM. Experimental results verify that the accuracy of facial feature extraction is statistically significantly improved by applying the ICA modes after the PCA modes.
Keywords: facial feature extraction; Active Shape Model (ASM); Principal Component Analysis (PCA); Independent Component Analysis (ICA)
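The consecutive PCA-then-ICA strategy can be sketched with scikit-learn. The snippet below is only a schematic of the idea, not the paper's ASM fitting procedure; the landmark count, the 98% variance threshold, and the random training shapes are assumptions.

```python
# Hedged sketch (not the paper's model): fit PCA on aligned landmark shape
# vectors to capture global variation, then FastICA on the PCA-reduced data to
# expose more local, statistically independent modes. Shapes are rows of L
# stacked (x, y) landmarks; the data here are random placeholders.
import numpy as np
from sklearn.decomposition import PCA, FastICA

rng = np.random.default_rng(0)
shapes = rng.normal(size=(200, 2 * 68))     # 200 training shapes, 68 landmarks each

pca = PCA(n_components=0.98)                # keep 98% of the shape variance
global_modes = pca.fit_transform(shapes)

ica = FastICA(n_components=global_modes.shape[1], random_state=0)
local_modes = ica.fit_transform(global_modes)

print(global_modes.shape, local_modes.shape)
```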
7. Live facial feature extraction
Authors: ZHAO JieYu. Science in China (Series F), 2008, Issue 5, pp. 489-498.
Precise facial feature extraction is essential to high-level face recognition and expression analysis. This paper presents a novel method for real-time geometric facial feature extraction from live video. The input image is viewed as a weighted graph, and the segmentation of the pixels corresponding to the edges of the facial components (mouth, eyes, brows, and nose) is implemented by means of random walks on that graph. The graph has an 8-connected lattice structure, and the weight associated with each edge reflects the likelihood that a random walker will cross it. The random walks simulate an anisotropic diffusion process that filters out noise while preserving the facial expression pixels. The seeds for the segmentation are obtained from a color and motion detector. The segmented facial pixels are represented with linked lists in their original geometric form and grouped into parts corresponding to facial components. For the convenience of high-level vision, the geometric description of the facial component pixels is further decomposed into shape and registration information. Shape is defined as the geometric information that is invariant under the registration transformation (translation, rotation, and isotropic scaling). Statistical shape analysis using the Procrustes shape distance measure is carried out to capture global facial features, and a Bayesian approach is used to incorporate high-level prior knowledge of face structure. Experimental results show that the proposed method is capable of real-time extraction of precise geometric facial features from live video, and that the feature extraction is robust against illumination changes, scale variation, head rotation, and hand interference.
Keywords: live facial feature extraction; random walks; anisotropic diffusion process; statistical shape analysis
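The Procrustes shape distance mentioned in the abstract removes translation, rotation, and isotropic scale before comparing landmark configurations. A minimal sketch using SciPy's generic Procrustes routine (an assumed stand-in, not the paper's implementation) is given below.

```python
# Minimal sketch: compare two landmark configurations after removing
# translation, rotation, and isotropic scale with SciPy's Procrustes analysis.
import numpy as np
from scipy.spatial import procrustes

rng = np.random.default_rng(0)
shape_a = rng.normal(size=(20, 2))                  # 20 facial landmarks (x, y)
angle = np.pi / 6
rotation = np.array([[np.cos(angle), -np.sin(angle)],
                     [np.sin(angle),  np.cos(angle)]])
shape_b = 1.5 * shape_a @ rotation.T + np.array([3.0, -2.0])  # similarity transform

_, _, disparity = procrustes(shape_a, shape_b)
print(disparity)   # ~0: the shapes match up to translation, rotation, and scale
```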
8. Facial Index Based 2D Facial Composite Process for Forensic Investigation in Sri Lanka
Authors: P. B. Jayasekara, L. Sivaneasharajah, M. A. S. Perera, J. Perera, D. D. Karunaratne, K. D. Sandaruwan, R. N. Rajapakse. Forensic Medicine and Anatomy Research, 2016, Issue 1, pp. 7-16.
The "facial composite" is one of the major fields in forensic science that helps criminal investigators carry out their investigations. A survey conducted by United States law enforcement agencies confirms that 80% of those agencies use computer-automated composite systems, whereas Sri Lanka is still far behind, with many inefficiencies in its current manual process. Hence this research introduces a novel approach to the manual facial composite process, eliminating the inefficiencies of the manual procedure in Sri Lanka. To overcome this situation, the study introduces an automated, image-processing-based software solution with 2D facial feature templates targeting the Sri Lankan population. This is the first approach to create 2D facial feature templates by incorporating both medically defined indices and relevant aesthetic aspects. The study therefore comprises two separate analyses, of anthropometric indices and of facial feature shapes, carried out on the local population. Several evaluation techniques were subsequently applied, yielding an overall success rate of 70.19%. The ultimate goal of this research is to provide law enforcement agencies with a system for an efficient and effective facial composite process, which can lead to an increased success rate of suspect identification.
Keywords: facial composite; facial index; facial feature templates; Sri Lanka
9. Race Classification Using Deep Learning (cited 2 times)
Authors: Khalil Khan, Rehan Ullah Khan, Jehad Ali, Irfan Uddin, Sahib Khan, Byeong-hee Roh. Computers, Materials & Continua (SCIE, EI), 2021, Issue 9, pp. 3483-3498.
Race classification is a long-standing challenge in the field of face image analysis. The investigation of salient facial features is an important task to avoid processing all face parts. Face segmentation strongly benefits several face analysis tasks, including ethnicity and race classification. We propose a race classification algorithm that uses a prior face segmentation framework. A deep convolutional neural network (DCNN) was used to construct the face segmentation model. To train the DCNN, we label face images according to seven classes: nose, skin, hair, eyes, brows, back, and mouth. The DCNN model developed in the first phase was used to create segmentation results. A probabilistic classification method is used, and probability maps (PMs) are created for each semantic class. We investigated five salient facial features from among the seven that help in race classification. Features are extracted from the PMs of these five classes, and a new model is trained based on the DCNN. We assessed the performance of the proposed race classification method on four standard face datasets, reporting superior results compared with previous studies.
Keywords: deep learning; facial feature; face analysis; learning; race; race classification
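The probability-map step can be pictured as a per-pixel softmax over the segmentation classes, with only the salient class maps kept as input for the second-stage classifier. The sketch below is not the authors' network; the choice of five retained classes and all tensor shapes are illustrative assumptions.

```python
# Hedged sketch (not the authors' network): turn per-pixel segmentation logits
# into per-class probability maps (PMs) with a softmax, then keep the maps of a
# chosen subset of facial classes. The seven class names follow the abstract;
# the five-class subset is an assumption.
import torch
import torch.nn.functional as F

CLASSES = ["nose", "skin", "hair", "eyes", "brows", "back", "mouth"]
SALIENT = ["nose", "skin", "hair", "eyes", "mouth"]   # assumed 5-class subset

def salient_probability_maps(logits: torch.Tensor) -> torch.Tensor:
    # logits: (N, 7, H, W) raw outputs of a segmentation DCNN
    pms = F.softmax(logits, dim=1)                    # per-pixel class probabilities
    keep = [CLASSES.index(name) for name in SALIENT]
    return pms[:, keep]                               # (N, 5, H, W)

if __name__ == "__main__":
    dummy_logits = torch.randn(2, len(CLASSES), 64, 64)
    print(salient_probability_maps(dummy_logits).shape)   # torch.Size([2, 5, 64, 64])
```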
10. Face Recognition Based on Wavelet-Curvelet-Fractal Technique
Authors: Zhang Zhong, Zhuang Peidong, Liu Yong, Ding Qun, Ye Hong'an. Journal of Electronics (China), 2010, Issue 2, pp. 206-211.
In this paper, a novel face recognition method, termed the wavelet-curvelet-fractal technique, is proposed. Based on the similarities embedded in the images, we propose to utilize the wavelet-curvelet-fractal technique to extract facial features. This yields the wavelet details in the diagonal, vertical, and horizontal directions, together with the eight curvelet details at different angles. We then adopt the Euclidean minimum-distance classifier to recognize different faces. Extensive comparison tests on different data sets were carried out, and a higher recognition rate is obtained by the proposed technique.
Keywords: face recognition; wavelet decomposition; curvelet transform; fractal; facial feature extraction
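Of the three feature families in the abstract, the wavelet details and the Euclidean minimum-distance classifier are easy to sketch; the curvelet and fractal components are omitted here. The code below uses PyWavelets and is an illustrative outline, not the paper's method; the Haar wavelet, the single decomposition level, and the per-subject reference vectors are assumptions.

```python
# Illustrative sketch: single-level 2D wavelet detail features plus a
# Euclidean minimum-distance classifier. Curvelet and fractal parts omitted.
import numpy as np
import pywt

def wavelet_features(image: np.ndarray) -> np.ndarray:
    # Horizontal, vertical, and diagonal detail coefficients of a 2D DWT.
    _, (ch, cv, cd) = pywt.dwt2(image.astype(float), "haar")
    return np.concatenate([ch.ravel(), cv.ravel(), cd.ravel()])

def nearest_class(probe: np.ndarray, gallery: dict) -> str:
    # Minimum Euclidean distance to each enrolled subject's feature vector.
    return min(gallery, key=lambda label: np.linalg.norm(probe - gallery[label]))

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    gallery = {"subject_a": wavelet_features(rng.normal(size=(64, 64))),
               "subject_b": wavelet_features(rng.normal(size=(64, 64)))}
    probe = wavelet_features(rng.normal(size=(64, 64)))
    print(nearest_class(probe, gallery))
```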
11. Pupil center detection with a single webcam for gaze tracking
Authors: Ralph Oyini Mbouna, Seong G. Kong. Journal of Measurement Science and Instrumentation (CAS), 2012, Issue 2, pp. 133-136.
This paper presents a user-friendly approach to localizing the pupil center with a single web camera. Several methods have been proposed to determine the coordinates of the pupil center in an image, but they have practical limitations. The proposed method can track the user's eye movements in real time under normal image resolution and lighting conditions using a regular webcam, without special equipment such as infrared illuminators. After pre-processing steps that deal with illumination variations, the pupil center is detected using iterative thresholding with geometric constraints. Experimental results demonstrate robustness and speed in determining the pupil's location in real time for users of various ethnicities, under various lighting conditions, at different distances from the webcam, and with standard-resolution images.
Keywords: pupil detection; eye gaze tracking; feature extraction; facial features tracking
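The iterative-thresholding idea can be sketched with OpenCV: progressively relax a dark-pixel threshold over the eye region until a blob satisfying a simple geometric (area) constraint appears, then take its centroid. This is a hedged outline, not the authors' exact procedure; the threshold range, area bounds, and the synthetic test image are assumptions.

```python
# Hedged sketch (not the authors' exact method): locate a dark pupil blob in a
# grayscale eye region by iteratively raising a threshold until a contour with
# a plausible area is found, then take its centroid.
import cv2
import numpy as np

def pupil_center(eye_gray: np.ndarray, min_area: int = 30, max_area: int = 2000):
    for thresh in range(20, 120, 10):          # progressively looser threshold
        _, binary = cv2.threshold(eye_gray, thresh, 255, cv2.THRESH_BINARY_INV)
        contours, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
        for contour in contours:
            area = cv2.contourArea(contour)
            if min_area <= area <= max_area:   # geometric constraint on blob size
                m = cv2.moments(contour)
                if m["m00"] > 0:
                    return int(m["m10"] / m["m00"]), int(m["m01"] / m["m00"])
    return None                                # no plausible pupil found

if __name__ == "__main__":
    eye = np.full((60, 100), 200, dtype=np.uint8)
    cv2.circle(eye, (50, 30), 8, 0, -1)        # synthetic dark pupil
    print(pupil_center(eye))                   # approximately (50, 30)
```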