Journal Articles
6,917 articles found
Sparse representation scheme with enhanced medium pixel intensity for face recognition
1
Authors: Xuexue Zhang, Yongjun Zhang, Zewei Wang, Wei Long, Weihao Gao, Bob Zhang 《CAAI Transactions on Intelligence Technology》 SCIE EI 2024, No. 1, pp. 116-127 (12 pages)
Sparse representation is an effective data classification algorithm that depends on the known training samples to categorise the test sample. It has been widely used in various image classification tasks. Sparseness in sparse representation means that only a few instances selected from all training samples can effectively convey the essential class-specific information of the test sample, which is very important for classification. For deformable images such as human faces, pixels at the same location of different images of the same subject usually have different intensities. Therefore, extracting features and correctly classifying such deformable objects is very hard. Moreover, lighting, attitude and occlusion cause further difficulty. Considering the problems and challenges listed above, a novel image representation and classification algorithm is proposed. First, the authors' algorithm generates virtual samples by a non-linear variation method. This method can effectively extract the low-frequency information of the space-domain features of the original image, which is very useful for representing deformable objects. The combination of the original and virtual samples is beneficial to improving the classification performance and robustness of the algorithm. The authors' algorithm then calculates the expression coefficients of the original and virtual samples separately using the sparse representation principle and obtains the final score by a designed efficient score fusion scheme, in which the weighting coefficients are set entirely automatically. Finally, the algorithm classifies the samples based on the final scores. The experimental results show that our method performs better classification than conventional sparse representation algorithms.
Keywords: computer vision, face recognition, image classification, image representation
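The pipeline described in the abstract (represent the probe over original and virtual training samples, then fuse per-class scores) can be sketched roughly as follows. This is a minimal illustration, not the authors' implementation: a ridge-regularized least-squares code stands in for a true L1 sparse solver, a square-root intensity transform is a hypothetical "non-linear variation" for generating virtual samples, and equal-weight fusion replaces the paper's automatic weighting scheme.

```python
import numpy as np

def residual_scores(train, labels, test, lam=1e-3):
    """Per-class reconstruction residuals of `test` represented as a
    linear combination of training columns (shape d x n). A real SRC
    would solve an L1-penalized problem; ridge least squares stands in."""
    coef = np.linalg.solve(train.T @ train + lam * np.eye(train.shape[1]),
                           train.T @ test)
    return {c: np.linalg.norm(test - train[:, labels == c] @ coef[labels == c])
            for c in np.unique(labels)}

def classify_fused(train, labels, test):
    # Hypothetical non-linear variation: sqrt compresses intensities,
    # a crude stand-in for the paper's low-frequency virtual samples.
    virt = lambda m: np.sqrt(np.abs(m))
    s1 = residual_scores(train, labels, test)
    s2 = residual_scores(virt(train), labels, virt(test))
    # Equal-weight fusion of normalized residuals (lower residual wins).
    z1, z2 = sum(s1.values()), sum(s2.values())
    fused = {c: s1[c] / z1 + s2[c] / z2 for c in s1}
    return min(fused, key=fused.get)
```

With two well-separated classes, this picks the class whose training columns best reconstruct the probe under both the original and transformed representations.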
Faster Region Convolutional Neural Network (FRCNN) Based Facial Emotion Recognition
2
Authors: J. Sheril Angel, A. Diana Andrushia, T. Mary Neebha, Oussama Accouche, Louai Saker, N. Anand 《Computers, Materials & Continua》 SCIE EI 2024, No. 5, pp. 2427-2448 (22 pages)
Facial emotion recognition (FER) has become a focal point of research due to its widespread applications, ranging from human-computer interaction to affective computing. While traditional FER techniques have relied on handcrafted features and classification models trained on image or video datasets, recent strides in artificial intelligence and deep learning (DL) have ushered in more sophisticated approaches. The research aims to develop a FER system using a Faster Region Convolutional Neural Network (FRCNN) and to design a specialized FRCNN architecture tailored for facial emotion recognition, leveraging its ability to capture spatial hierarchies within localized regions of facial features. The proposed work enhances the accuracy and efficiency of facial emotion recognition and comprises two major components: Inception V3-based feature extraction and FRCNN-based emotion categorization. Extensive experimentation on Kaggle datasets validates the effectiveness of the proposed strategy, showcasing the FRCNN approach's resilience and accuracy in identifying and categorizing facial expressions. The model's overall performance metrics are compelling, with an accuracy of 98.4%, precision of 97.2%, and recall of 96.31%. This work introduces a perceptive deep learning-based FER method, contributing to the evolving landscape of emotion recognition technologies. The high accuracy and resilience demonstrated by the FRCNN approach underscore its potential for real-world applications. This research advances the field of FER and presents a compelling case for the practicality and efficacy of deep learning models in automating the understanding of facial emotions.
Keywords: facial emotions, FRCNN, deep learning, emotion recognition, face, CNN
CapsNet-FR: Capsule Networks for Improved Recognition of Facial Features
3
Authors: Mahmood Ul Haq, Muhammad Athar Javed Sethi, Najib Ben Aoun, Ala Saleh Alluhaidan, Sadique Ahmad, Zahid Farid 《Computers, Materials & Continua》 SCIE EI 2024, No. 5, pp. 2169-2186 (18 pages)
Face recognition (FR) technology has numerous applications in artificial intelligence, including biometrics, security, authentication, law enforcement, and surveillance. Deep learning (DL) models, notably convolutional neural networks (CNNs), have shown promising results in the field of FR. However, CNNs are easily fooled since they do not encode position and orientation correlations between features. Hinton et al. envisioned capsule networks as a more robust design capable of retaining pose information and spatial correlations to recognize objects more like the brain does. Lower-level capsules hold 8-dimensional vectors of attributes like position, hue, texture, and so on, which are routed to higher-level capsules via a new routing-by-agreement algorithm. This provides capsule networks with viewpoint invariance, which has previously evaded CNNs. This research presents a FR model based on capsule networks that was tested using the LFW dataset, the COMSATS face dataset, and our own acquired photos, with images measuring 128 × 128 pixels, 40 × 40 pixels, and 30 × 30 pixels. The trained model outperforms state-of-the-art algorithms, achieving 95.82% test accuracy and performing well on unseen faces that have been blurred or rotated. Additionally, the suggested model outperformed recently released approaches on the COMSATS face dataset, achieving a high accuracy of 92.47%. Based on the results of this research as well as previous results, capsule networks perform better than deeper CNNs on unobserved altered data because of their special equivariance properties.
Keywords: CapsNet, face recognition, artificial intelligence
Masked Face Recognition Using MobileNet V2 with Transfer Learning (Cited by 1)
4
Authors: Ratnesh Kumar Shukla, Arvind Kumar Tiwari 《Computer Systems Science & Engineering》 SCIE EI 2023, No. 4, pp. 293-309 (17 pages)
Coronavirus (COVID-19) is a once-in-a-lifetime calamity that has resulted in thousands of deaths and security concerns. People are using face masks on a regular basis to protect themselves and to help reduce coronavirus transmission. During the ongoing coronavirus outbreak, one of the major priorities for researchers is to discover an effective solution. As important parts of the face are obscured, face identification and verification become exceedingly difficult. The suggested method is a MobileNet V2-based transfer learning technique that uses deep feature extraction and a deep learning model to address the problem of masked face identification. In the first stage, a face mask detector is applied to identify the face mask. Then, the proposed approach is applied to datasets from the Canadian Institute for Advanced Research 10 (CIFAR10), the Modified National Institute of Standards and Technology Database (MNIST), the Real-World Masked Face Recognition Database (RMFRD), and the Simulated Masked Face Recognition Database (SMFRD). The proposed model achieves a recognition accuracy of 99.82% on the proposed dataset. This article also employs four pre-trained models: VGG16, VGG19, ResNet50 and ResNet101. Extracting deep features of faces, VGG16 achieves 99.30% accuracy, VGG19 achieves 99.54%, ResNet50 achieves 78.70% and ResNet101 achieves 98.64% on our own dataset. The comparative analysis shows that our proposed model performs better than all four existing models. The fundamental contribution of this study is to monitor people with and without face masks in order to decrease the spread of coronavirus and to detect persons wearing face masks.
Keywords: Convolutional Neural Network (CNN), deep learning, face recognition system, COVID-19, dataset and machine learning based models
Optimizing Deep Neural Networks for Face Recognition to Increase Training Speed and Improve Model Accuracy
5
Authors: Mostafa Diba, Hossein Khosravi 《Intelligent Automation & Soft Computing》 2023, No. 12, pp. 315-332 (18 pages)
Convolutional neural networks continually evolve to enhance accuracy in addressing various problems, leading to an increase in computational cost and model size. This paper introduces a novel approach for pruning face recognition models based on convolutional neural networks. The proposed method identifies and removes inefficient filters based on the information volume in feature maps. In each layer, some feature maps lack useful information, and there exists correlation between certain feature maps. Filters associated with these two types of feature maps impose additional computational costs on the model, so eliminating them reduces both computational cost and model size. The approach employs a combination of correlation analysis and the summation of matrix elements within each feature map to detect and eliminate inefficient filters. The method was applied to two face recognition models built on the VGG16 and ResNet50V2 backbone architectures. In the proposed approach, the number of filters removed in each layer varies, and the removal process is independent of the adjacent layers. The convolutional layers of both backbone models were initialized with pre-trained weights from ImageNet. The CASIA-WebFace dataset was used for training, and the Labeled Faces in the Wild (LFW) dataset for benchmarking. In the VGG16-based face recognition model, a 0.74% accuracy improvement was achieved while reducing the number of convolution parameters by 26.85% and decreasing floating-point operations (FLOPs) by 47.96%. For the face recognition model based on the ResNet50V2 architecture, the ArcFace method was implemented. The removal of inactive filters in this model led to a slight decrease in accuracy of 0.11%; however, it resulted in enhanced training speed, a reduction of 59.38% in convolution parameters, and a 57.29% decrease in FLOPs.
Keywords: face recognition, network pruning, FLOPs reduction, deep learning, ArcFace
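A rough sketch of the filter-selection idea from the abstract: rank a layer's feature maps by an energy (information) proxy, then drop maps that are near-duplicates of already-kept ones. The thresholds and the specific energy measure here are illustrative assumptions, not the paper's exact criteria.

```python
import numpy as np

def select_filters(fmaps, energy_thresh=0.01, corr_thresh=0.95):
    """fmaps: (n_filters, H, W) feature maps from one layer, for one input.
    Keep a filter only if its map carries enough energy and is not highly
    correlated with an already-kept map. Thresholds are illustrative."""
    n = fmaps.shape[0]
    flat = fmaps.reshape(n, -1)
    energy = np.abs(flat).sum(axis=1)
    energy = energy / (energy.max() + 1e-12)   # normalize to [0, 1]
    keep = []
    for i in np.argsort(-energy):              # strongest maps first
        if energy[i] < energy_thresh:          # near-empty map: prune
            continue
        dup = False
        for j in keep:                         # redundant map: prune
            a = flat[i] - flat[i].mean()
            b = flat[j] - flat[j].mean()
            denom = np.linalg.norm(a) * np.linalg.norm(b) + 1e-12
            if abs(a @ b) / denom > corr_thresh:
                dup = True
                break
        if not dup:
            keep.append(i)
    return sorted(int(i) for i in keep)
```

In a real pruning pass these statistics would be averaged over a batch of inputs and the corresponding convolution kernels removed from the layer.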
Multi-modal Gesture Recognition using Integrated Model of Motion, Audio and Video (Cited by 3)
6
Authors: GOUTSU Yusuke, KOBAYASHI Takaki, OBARA Junya, KUSAJIMA Ikuo, TAKEICHI Kazunari, TAKANO Wataru, NAKAMURA Yoshihiko 《Chinese Journal of Mechanical Engineering》 SCIE EI CAS CSCD 2015, No. 4, pp. 657-665 (9 pages)
Gesture recognition is used in many practical applications such as human-robot interaction, medical rehabilitation and sign language. With increasing motion sensor development, multiple data sources have become available, which leads to the rise of multi-modal gesture recognition. Since our previous approach to gesture recognition depends on a unimodal system, it is difficult to classify similar motion patterns. In order to solve this problem, a novel approach which integrates motion, audio and video models is proposed, using a dataset captured by Kinect. The proposed system recognizes observed gestures using the three models; their recognition results are integrated by the proposed framework, and the output becomes the final result. The motion and audio models are learned with hidden Markov models, and a random forest is used to learn the video model. In the experiments testing the performance of the proposed system, the motion and audio models most suitable for gesture recognition are chosen by varying the feature vectors and learning methods. Additionally, the unimodal and multi-modal models are compared with respect to recognition accuracy. All experiments are conducted on the dataset provided by the competition organizer of MMGRC, a workshop for the Multi-Modal Gesture Recognition Challenge. The comparison results show that the multi-modal model composed of the three models scores the highest recognition rate, meaning that the complementary relationship among the three models improves the accuracy of gesture recognition. The proposed system provides application technology for understanding human actions of daily life more precisely.
Keywords: gesture recognition, multi-modal integration, hidden Markov model, random forests
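The score-level integration described above can be sketched as a simple late-fusion rule. This is a generic illustration (softmax normalization, equal weights by default), not the paper's exact integration framework; the per-modality score vectors stand in for HMM log-likelihoods and random-forest votes.

```python
import numpy as np

def fuse_scores(score_lists, weights=None):
    """Late fusion: each element of score_lists is a per-class score
    vector from one modality (higher = more likely). Scores are
    softmax-normalized per modality so different scales (log-likelihoods,
    vote counts) become comparable, then combined as a weighted sum.
    Returns the winning class index."""
    mats = []
    for s in score_lists:
        s = np.asarray(s, dtype=float)
        e = np.exp(s - s.max())          # numerically stable softmax
        mats.append(e / e.sum())
    mats = np.stack(mats)
    if weights is None:
        weights = np.ones(len(mats)) / len(mats)
    fused = (np.asarray(weights)[:, None] * mats).sum(axis=0)
    return int(np.argmax(fused))
```

A class weakly preferred by two modalities can thus beat a class strongly preferred by only one, which is the complementarity the abstract describes.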
Adaptive cross-fusion learning for multi-modal gesture recognition
7
Authors: Benjia ZHOU, Jun WAN, Yanyan LIANG, Guodong GUO 《Virtual Reality & Intelligent Hardware》 2021, No. 3, pp. 235-247 (13 pages)
Background: Gesture recognition has attracted significant attention because of its wide range of potential applications. Although multi-modal gesture recognition has made significant progress in recent years, a popular method is still simply to fuse prediction scores at the end of each branch, which often ignores complementary features among different modalities in the early stage and does not fuse them into a more discriminative feature. Methods: This paper proposes an Adaptive Cross-modal Weighting (ACmW) scheme to exploit complementary features from RGB-D data. The scheme learns relations among different modalities by combining the features of different data streams. The proposed ACmW module contains two key functions: (1) fusing complementary features from multiple streams through an adaptive one-dimensional convolution; and (2) modeling the correlation of multi-stream complementary features in the time dimension. Through the effective combination of these two functional modules, the proposed ACmW can automatically analyze the relationship between the complementary features from different streams, and can fuse them in the spatial and temporal dimensions. Results: Extensive experiments validate the effectiveness of the proposed method, and show that it outperforms state-of-the-art methods on IsoGD and NVGesture.
Keywords: gesture recognition, multi-modal fusion, RGB-D
Robust video foreground segmentation and face recognition (Cited by 6)
8
Authors: Guan Yepeng 《Journal of Shanghai University (English Edition)》 CAS 2009, No. 4, pp. 311-315 (5 pages)
Face recognition provides a natural visual interface for human-computer interaction (HCI) applications. The process of face recognition, however, is inhibited by variations in the appearance of face images caused by changes in lighting, expression, viewpoint, aging and occlusion. Although various algorithms have been presented for face recognition, it remains a very challenging topic. A novel approach to real-time face recognition for HCI is proposed in this paper. In view of the limits of popular approaches to foreground segmentation, wavelet multi-scale transform based background subtraction is developed to extract foreground objects. The optimal threshold is selected automatically, without any complex supervised training or manual experimental calibration. A robust real-time face recognition algorithm is presented, which combines projection matrices without iteration and kernel Fisher discriminant analysis (KFDA) to overcome difficulties existing in real face recognition. Superior performance of the proposed algorithm is demonstrated by comparison with other algorithms through experiments. The proposed algorithm can also be applied to the video image sequences of natural HCI.
Keywords: face recognition, human-computer interaction (HCI), foreground segmentation, face detection, threshold
Face Image Recognition Based on Convolutional Neural Network (Cited by 11)
9
Authors: Guangxin Lou, Hongzhen Shi 《China Communications》 SCIE CSCD 2020, No. 2, pp. 117-124 (8 pages)
With the continuous progress of the times and the development of technology, the rise of online social media has brought "explosive" growth of image data. As one of the main carriers of daily communication, images are widely used because their content is rich and intuitive. Image recognition based on convolutional neural networks was among the first applications in the field of image recognition: a series of operations such as image eigenvalue extraction, recognition and convolution are used to identify and analyze different images. The rapid development of artificial intelligence makes machine learning more and more important in this research field; algorithms learn from each piece of data and predict the outcome, which has become an important key to opening the door of artificial intelligence. In machine vision, image recognition is the foundation, but how to associate the low-level information in the image with high-level image semantics is the key problem of image recognition. Predecessors have provided many model algorithms, laying a solid foundation for the development of artificial intelligence and image recognition. The multi-level information fusion model based on the VGG16 model is an improvement on the fully connected neural network. Unlike a fully connected network, a convolutional neural network does not fully connect each layer of neurons but uses only some nodes for connection. Although this reduces computation time, the convolutional neural network model loses some useful feature information in the process of propagation and calculation; this paper therefore improves the model into a multi-level information fusion convolution calculation method that recovers the discarded feature information, so as to improve the recognition rate of the image. VGG divides the network into five groups (mimicking the five layers of AlexNet), yet it uses 3×3 filters and combines them into convolution sequences; the deeper the DCNN, the larger the channel number. The recognition rate of the model was verified on the ORL Face Database, BioID Face Database and CASIA Face Image Database.
Keywords: convolutional neural network, face image recognition, machine learning, artificial intelligence, multilayer information fusion
Face Recognition Based on Support Vector Machine and Nearest Neighbor Classifier (Cited by 8)
10
Authors: Zhang Yankun & Liu Chongqing, Institute of Image Processing and Pattern Recognition, Shanghai Jiao Tong University, Shanghai 200030, P.R. China 《Journal of Systems Engineering and Electronics》 SCIE EI CSCD 2003, No. 3, pp. 73-76 (4 pages)
Support vector machine (SVM), as a novel approach in pattern recognition, has demonstrated success in face detection and face recognition. In this paper, a face recognition approach based on the SVM classifier with the nearest neighbor classifier (NNC) is proposed. Principal component analysis (PCA) is used to reduce the dimension and extract features. Then a one-against-all strategy is used to train the SVM classifiers. At the testing stage, we propose an al-...
Keywords: face recognition, support vector machine, nearest neighbor classifier, principal component analysis
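The feature-extraction stage of this pipeline can be sketched as follows: PCA projects centered images onto the leading principal directions, and a nearest-neighbor rule classifies in the reduced space. The SVM stage is omitted here; this minimal sketch shows only the PCA + NNC path, with an illustrative choice of two components.

```python
import numpy as np

def pca_fit(X, k):
    """X: (n_samples, d) flattened images. Returns the sample mean and
    the top-k principal directions (rows of Vt from the SVD)."""
    mu = X.mean(axis=0)
    _, _, Vt = np.linalg.svd(X - mu, full_matrices=False)
    return mu, Vt[:k]

def nn_classify(train, labels, test, k=2):
    """Project training and test data onto k principal components,
    then return the label of the nearest training sample."""
    mu, W = pca_fit(train, k)
    P = (train - mu) @ W.T        # (n_samples, k) projected gallery
    q = (test - mu) @ W.T         # (k,) projected probe
    return labels[np.argmin(np.linalg.norm(P - q, axis=1))]
```

In the paper's full method, one-against-all SVMs would operate on these same PCA features, with the NNC resolving ambiguous cases.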
2DPCA versus PCA for face recognition (Cited by 5)
11
Authors: Hu Jianjun, Tan Guanzheng, Luan Fenggang, A.S.M. LIBDA 《Journal of Central South University》 SCIE EI CAS CSCD 2015, No. 5, pp. 1809-1816 (8 pages)
Dimensionality reduction methods play an important role in face recognition. Principal component analysis (PCA) and two-dimensional principal component analysis (2DPCA) are two important methods in this field. Recent research suggests that the 2DPCA method is superior to the PCA method. To test whether this conclusion is always true, a comprehensive comparison study between the PCA and 2DPCA methods was carried out. A novel concept, called column-image difference (CID), was proposed to analyze the difference between PCA and 2DPCA in theory. It is found that there exist some restrictive conditions under which 2DPCA outperforms PCA. After the theoretical analysis, experiments were conducted on four famous face image databases. The experimental results confirm the validity of the theoretical claim.
Keywords: face recognition, dimensionality reduction, 2DPCA method, PCA method, column-image difference (CID)
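For reference, the core computation that distinguishes 2DPCA from PCA can be sketched directly: 2DPCA builds a small w×w image covariance from the 2D image matrices themselves (no flattening) and projects each image onto its leading eigenvectors. A minimal sketch:

```python
import numpy as np

def twodpca_fit(images, k):
    """images: (n, h, w). Accumulate the 2DPCA image covariance
    G = mean_i (A_i - Abar)^T (A_i - Abar) and return its top-k
    eigenvectors as a (w, k) projection matrix."""
    Abar = images.mean(axis=0)
    G = np.zeros((images.shape[2], images.shape[2]))
    for A in images:
        D = A - Abar
        G += D.T @ D
    G /= len(images)
    vals, vecs = np.linalg.eigh(G)      # eigenvalues in ascending order
    return vecs[:, ::-1][:, :k]         # reorder to take the k largest

def twodpca_features(images, W):
    # Unlike PCA, each feature keeps 2D structure: shape (h, k) per image
    return np.stack([A @ W for A in images])
```

The contrast with PCA is the size of the eigenproblem: PCA diagonalizes an (h·w)×(h·w) covariance of flattened vectors, while 2DPCA needs only a w×w one, which is why it is much cheaper to train.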
A supervised multimanifold method with locality preserving for face recognition using single sample per person (Cited by 3)
12
Authors: Nabipour Mehrasa, Aghagolzadeh Ali, Motameni Homayun 《Journal of Central South University》 SCIE EI CAS CSCD 2017, No. 12, pp. 2853-2861 (9 pages)
Although real-world experience shows that preparing one image per person is more convenient, most appearance-based face recognition methods degrade or fail to work if there is only a single sample per person (SSPP). In this work, we introduce a novel supervised learning method called supervised locality preserving multimanifold (SLPMM) for face recognition with SSPP. In SLPMM, two graphs, a within-manifold graph and a between-manifold graph, are made to represent the information inside every manifold and the information among different manifolds, respectively. SLPMM simultaneously maximizes the between-manifold scatter and minimizes the within-manifold scatter, which leads to a discriminant space, by adopting the locality preserving projection (LPP) concept. Experimental results on two widely used face databases, FERET and the AR face database, are presented to prove the efficacy of the proposed approach.
Keywords: face recognition, locality preserving, manifold learning, single sample per person
Local Robust Sparse Representation for Face Recognition With Single Sample per Person (Cited by 5)
13
Authors: Jianquan Gu, Haifeng Hu, Haoxi Li 《IEEE/CAA Journal of Automatica Sinica》 SCIE EI CSCD 2018, No. 2, pp. 547-554 (8 pages)
The purpose of this paper is to solve the problem of robust face recognition (FR) with a single sample per person (SSPP). In the scenario of FR with SSPP, we present a novel model, local robust sparse representation (LRSR), to tackle the problem of query images with various intra-class variations, e.g., expressions, illuminations, and occlusion. FR with SSPP is a very difficult challenge due to the lack of information with which to predict the possible intra-class variation of the query images. The key idea of the proposed method is to combine a local sparse representation model and a patch-based generic variation dictionary learning model to predict the possible facial intra-class variation of the query images. The experimental results on the AR database, Extended Yale B database, CMU-PIE database and LFW database show that the proposed method is robust to intra-class variations in FR with SSPP, and outperforms state-of-the-art approaches.
Keywords: dictionary learning, face recognition (FR), illumination changes, single sample per person (SSPP), sparse representation
Efficient face recognition method based on DCT and LDA (Cited by 4)
14
Authors: Zhang Yankun, Liu Chongqing 《Journal of Systems Engineering and Electronics》 SCIE EI CSCD 2004, No. 2, pp. 211-216 (6 pages)
It has been demonstrated that linear discriminant analysis (LDA) is an effective approach to face recognition tasks. However, due to the high dimensionality of the image space, many LDA-based approaches first use principal component analysis (PCA) to project an image into a lower-dimensional space, then perform the LDA transform to extract discriminant features. But some discriminant information useful to the following LDA transform is lost in the PCA step. To overcome these defects, a face recognition method based on the discrete cosine transform (DCT) and LDA is proposed. First the DCT is used to achieve dimension reduction, then the LDA transform is performed on the lower-dimensional space to extract features. Two face databases are used to test our method, and correct recognition rates of 97.5% and 96.0% are obtained respectively. The performance of the proposed method is compared with that of the PCA+LDA method, and the results show that the proposed method outperforms PCA+LDA.
Keywords: face recognition, discrete cosine transform, linear discriminant analysis, principal component analysis
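The DCT dimension-reduction step can be sketched with an orthonormal DCT-II matrix: low-frequency coefficients sit in the top-left corner of the transformed image, and keeping that block gives the compact feature the LDA stage then operates on. The block size below is an illustrative choice, and the LDA step itself is omitted.

```python
import numpy as np

def dct_matrix(n):
    """Orthonormal DCT-II basis matrix (n x n): row k holds the
    k-th cosine basis function sampled at the n pixel positions."""
    k = np.arange(n)[:, None]
    i = np.arange(n)[None, :]
    C = np.sqrt(2.0 / n) * np.cos(np.pi * (2 * i + 1) * k / (2 * n))
    C[0] *= 1.0 / np.sqrt(2.0)           # DC row scaled for orthonormality
    return C

def dct_features(img, keep=8):
    """2D DCT of an image; keep the top-left `keep` x `keep` block of
    low-frequency coefficients as a compact feature vector."""
    h, w = img.shape
    F = dct_matrix(h) @ img @ dct_matrix(w).T
    return F[:keep, :keep].ravel()
```

Because the matrix is orthonormal, the transform preserves energy, and for natural images most of that energy concentrates in the retained low-frequency block.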
In-pit coal mine personnel uniqueness detection technology based on personnel positioning and face recognition (Cited by 11)
15
Authors: Sun Jiping, Li Chenxin 《International Journal of Mining Science and Technology》 SCIE EI 2013, No. 3, pp. 357-361 (5 pages)
At present in China, coal mine in-pit personnel positioning systems can neither effectively detect the uniqueness of in-pit coal-mine personnel nor identify and eliminate violations in attendance management, such as one person holding multiple cards or having one's card swiped by others. Therefore, this research introduces a uniqueness detection system and method for in-pit coal-mine personnel, integrated into the in-pit personnel positioning system, establishing a system mode based on face recognition, recognition of the personnel positioning card, and release by automatic detection. Given that in-pit personnel wear helmets and their faces are prone to staining, the study proposes pre-processing face images using the 2D-wavelet-transformation-based Mallat algorithm and extracting three face features, miner lamps, eyes and mouths, using a generalized-symmetry-transformation-based algorithm. The research carried out tests with 40 clean face images without helmets and 40 lightly stained face images, and compared the results with those of a face feature extraction method based on grey-scale transformation and edge detection. The results show that the method described in the paper detects face features accurately in both cases, with an accuracy of 97.5% in the case of helmets and lightly stained faces.
Keywords: coal mine, uniqueness detection, recognition of personnel positioning cards, face recognition, generalized symmetry transformation
A Novel Face Recognition Algorithm for Distinguishing Faces with Various Angles (Cited by 3)
16
Authors: Yong-Zhong Lu 《International Journal of Automation and Computing》 EI 2008, No. 2, pp. 193-197 (5 pages)
In order to distinguish faces of various angles during face recognition, an algorithm combining approximate dynamic programming (ADP), specifically action dependent heuristic dynamic programming (ADHDP), with particle swarm optimization (PSO) is presented. ADP is used to dynamically change the values of the PSO parameters. During the process of face recognition, the discrete cosine transformation (DCT) is first introduced to reduce negative effects. Then, the Karhunen-Loeve (K-L) transformation is used to compress images and decrease data dimensions. According to principal component analysis (PCA), the main parts of the vectors are extracted for data representation. Finally, a radial basis function (RBF) neural network is trained to recognize various faces, with the training carried out by ADP-PSO. Experimental results on the ORL Face Database give a clear view of the algorithm's accuracy and efficiency.
Keywords: face recognition, approximate dynamic programming (ADP), particle swarm optimization (PSO)
Improved Face Recognition Method Using Genetic Principal Component Analysis (Cited by 2)
17
Authors: E. Gomathi, K. Baskaran 《Journal of Electronic Science and Technology》 CAS 2010, No. 4, pp. 372-378 (7 pages)
An improved face recognition method is proposed based on principal component analysis (PCA) combined with a genetic algorithm (GA), named genetic-based principal component analysis (GPCA). Initially the eigenspace is created with eigenvalues and eigenvectors. From this space, the eigenfaces are constructed, and the most relevant eigenfaces are selected using GPCA. With these eigenfaces, the input images are classified based on Euclidean distance. The proposed method was tested on the ORL (Olivetti Research Labs) face database. Experimental results on this database demonstrate that the proposed method yields fewer misclassifications than previous methods.
Keywords: eigenfaces, eigenvectors, face recognition, genetic algorithm, principal component analysis
Pre-detection and dual-dictionary sparse representation based face recognition algorithm in non-sufficient training samples (Cited by 2)
18
Authors: ZHAO Jian, ZHANG Chao, ZHANG Shunli, LU Tingting, SU Weiwen, JIA Jian 《Journal of Systems Engineering and Electronics》 SCIE EI CSCD 2018, No. 1, pp. 196-202 (7 pages)
Face recognition based on few training samples is a challenging task. In daily applications, sufficient training samples may not be obtainable, and most of the available training samples are in various illuminations and poses. Non-sufficient training samples cannot effectively express various facial conditions, so improving the face recognition rate under this condition becomes a demanding task. In our work, a facial pose pre-recognition (FPPR) model and a dual-dictionary sparse representation classification (DD-SRC) are proposed for face recognition. The FPPR model is based on facial geometric characteristics and machine learning, dividing a testing sample into full-face and profile. Different poses in a single dictionary influence each other, which leads to a low face recognition rate. The DD-SRC contains two dictionaries, a full-face dictionary and a profile dictionary, and is able to reduce this interference. After FPPR, the sample is processed by the DD-SRC to find the most similar one among the training samples. The experimental results show the performance of the proposed algorithm on the Olivetti Research Laboratory (ORL) and face recognition technology (FERET) databases, and also reflect comparisons with SRC, linear regression classification (LRC), and two-phase test sample sparse representation (TPTSSR).
Keywords: face recognition; facial pose pre-recognition (FPPR); dual-dictionary sparse representation; machine learning
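A sketch of the dual-dictionary idea on synthetic data. Two simplifications, both assumptions of this sketch rather than the paper's method: the l1 sparse-coding solver is replaced by ridge-regularised least squares (a collaborative-representation stand-in), and the FPPR pose classifier is reduced to a boolean flag; the demo also reuses one synthetic dictionary for both poses for brevity.

```python
import numpy as np

def class_residuals(dictionary, labels, y, lam=0.01):
    """Code y over the dictionary (ridge least squares as a stand-in for the
    l1 sparse solver), then score each class by its reconstruction residual."""
    D = dictionary.T                          # columns are training atoms
    x = np.linalg.solve(D.T @ D + lam * np.eye(D.shape[1]), D.T @ y)
    return {c: np.linalg.norm(y - D[:, labels == c] @ x[labels == c])
            for c in np.unique(labels)}

def dd_src(y, full_dict, full_labels, prof_dict, prof_labels, is_full_face):
    """Route the probe to the full-face or profile dictionary, as FPPR would,
    then pick the class whose atoms reconstruct it best."""
    d, l = (full_dict, full_labels) if is_full_face else (prof_dict, prof_labels)
    r = class_residuals(d, l, y)
    return min(r, key=r.get)

rng = np.random.default_rng(0)
# Two classes of synthetic "full-face" atoms around distinct means.
full = np.vstack([rng.normal(m, 0.1, (4, 32)) for m in (0.0, 2.0)])
flabels = np.repeat([0, 1], 4)
probe = full[5] + rng.normal(0.0, 0.05, 32)   # near a class-1 atom
print(dd_src(probe, full, flabels, full, flabels, is_full_face=True))
```

Separating the dictionaries by pose means the coefficients of full-face atoms never have to compete with profile atoms, which is the interference-reduction argument in the abstract.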
Pose Robust Low-resolution Face Recognition via Coupled Kernel-based Enhanced Discriminant Analysis (Cited by 4)
19
Authors: Xiaoying Wang, Haifeng Hu, Jianquan Gu 《IEEE/CAA Journal of Automatica Sinica》 SCIE EI, 2016, No. 2, pp. 203-212 (10 pages)
Most face recognition techniques have been successful in dealing with high-resolution (HR) frontal face images. However, real-world face recognition systems are often confronted with low-resolution (LR) face images with pose and illumination variations. This is a very challenging issue, especially under the constraint of using only a single gallery image per person. To address the problem, we propose a novel approach called coupled kernel-based enhanced discriminant analysis (CKEDA). CKEDA aims to simultaneously project the features from LR non-frontal probe images and HR frontal gallery ones into a common space where the discrimination property is maximized. There are four advantages of the proposed approach: 1) by using an appropriate kernel function, the data become linearly separable, which is beneficial for recognition; 2) inspired by linear discriminant analysis (LDA), we integrate multiple discriminant factors into our objective function to enhance the discrimination property; 3) we use the gallery extended trick to improve recognition performance for the single-gallery-image-per-person problem; 4) our approach can match LR non-frontal probe images with HR frontal gallery images, which is difficult for most existing face recognition techniques. Experimental evaluation on the Multi-PIE dataset shows highly competitive performance of our algorithm.
Keywords: face recognition; low-resolution (LR); pose variations; discriminant analysis; gallery extended
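A drastically simplified, purely linear stand-in for the cross-resolution matching problem CKEDA addresses: learn a ridge-regression map from HR to LR features on paired training data, project the single HR gallery image per subject into the LR space, and match by nearest neighbour. The kernels, discriminant terms, and gallery extension of the actual method are all omitted; the degradation operator and data below are synthetic assumptions.

```python
import numpy as np

def learn_map(src, dst, lam=1e-3):
    """Ridge regression: find M with src_i @ M ≈ dst_i."""
    A = src.T @ src + lam * np.eye(src.shape[1])
    return np.linalg.solve(A, src.T @ dst)    # (d_src, d_dst)

def match(lr_probe, hr_gallery, labels, M):
    """Project the HR gallery into LR feature space; nearest subject wins."""
    mapped = hr_gallery @ M                   # (n_subjects, d_lr)
    d = np.linalg.norm(mapped - lr_probe, axis=1)
    return labels[np.argmin(d)]

rng = np.random.default_rng(0)
P = rng.normal(size=(16, 8)) / 4              # unknown HR -> LR degradation
hr_train = rng.normal(size=(40, 16))          # paired training images
M = learn_map(hr_train, hr_train @ P)         # approximately recovers P
hr_gallery = rng.normal(size=(3, 16)) * 3     # one HR gallery image per subject
labels = np.array([0, 1, 2])
lr_probe = hr_gallery[2] @ P + rng.normal(0.0, 0.05, 8)
print(match(lr_probe, hr_gallery, labels, M))  # → 2
```

The common-space idea survives even in this linear toy: once both modalities live in the same feature space, single-gallery matching reduces to nearest neighbour.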
Gaussian Mixture Models for Human Face Recognition under Illumination Variations (Cited by 2)
20
Author: Sinjini Mitra 《Applied Mathematics》 2012, No. 12, pp. 2071-2079 (9 pages)
The appearance of a face is severely altered by illumination conditions, which makes automatic face recognition a challenging task. In this paper we propose a Gaussian Mixture Models (GMM)-based human face identification technique built in the Fourier or frequency domain that is robust to illumination changes and, unlike many existing methods, does not require "illumination normalization" (removal of illumination effects) prior to application. The importance of the Fourier domain phase in human face identification is a well-established fact in signal processing. A maximum a posteriori (MAP) estimate based on the posterior likelihood is used to perform identification, achieving misclassification error rates as low as 2% on a database that contains images of 65 individuals under 21 different illumination conditions. Furthermore, a misclassification rate of 3.5% is observed on the Yale database with 10 people and 64 different illumination conditions. Both sets of results are significantly better than those obtained from traditional PCA and LDA classifiers. Statistical analysis pertaining to model selection is also presented.
Keywords: classification; face recognition; mixture models; illumination
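A sketch of the MAP classifier on Fourier-phase features described in this abstract. Two assumptions of the sketch: a single diagonal Gaussian per class stands in for the full mixture model, and 1-D synthetic signals stand in for face images; phase differences are wrapped to [-π, π] since phase is circular.

```python
import numpy as np

def phase_features(images):
    """Fourier-domain phase of each flattened signal (the cue the paper relies on)."""
    return np.angle(np.fft.fft(images, axis=1))

def fit_class_gaussians(feats, labels):
    """One diagonal Gaussian per class — a single-component stand-in for the GMM."""
    models = {}
    for c in np.unique(labels):
        x = feats[labels == c]
        models[c] = (x.mean(axis=0), x.var(axis=0) + 1e-3)  # variance floor
    return models

def map_classify(feat, models, priors):
    """MAP rule: maximise log prior + diagonal-Gaussian log likelihood."""
    def score(c):
        mu, var = models[c]
        diff = np.angle(np.exp(1j * (feat - mu)))  # circular phase difference
        return np.log(priors[c]) - 0.5 * np.sum(diff ** 2 / var + np.log(2 * np.pi * var))
    return max(models, key=score)

rng = np.random.default_rng(0)
base = rng.normal(0.0, 2.0, (2, 32))            # two synthetic "subjects"
images = np.vstack([base[c] + rng.normal(0.0, 0.1, (6, 32)) for c in (0, 1)])
labels = np.repeat([0, 1], 6)

models = fit_class_gaussians(phase_features(images), labels)
priors = {0: 0.5, 1: 0.5}
probe = phase_features(base[1] + rng.normal(0.0, 0.1, (1, 32)))[0]
print(map_classify(probe, models, priors))
```

The robustness claim in the abstract rests on the phase spectrum being far less sensitive to illumination (largely a magnitude effect) than raw pixel intensities, so no explicit illumination normalization is needed before fitting the class models.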