Over the past few decades, face recognition has become one of the most effective biometric techniques for identifying people, and it is widely used in many areas of daily life. However, it remains challenging because facial images vary in rotation, expression, and illumination. To minimize the impact of these challenges, exploiting information from multiple feature extraction methods is recommended, since feature extraction is one of the most critical tasks in a face recognition system. Therefore, this paper presents a new approach to face recognition based on the fusion of Gabor-based feature extraction, Fast Independent Component Analysis (FastICA), and Linear Discriminant Analysis (LDA). In the presented method, face images are first converted to grayscale and resized to a uniform size. Facial features are then extracted from the aligned face images using the Gabor, FastICA, and LDA methods. Finally, a nearest-distance classifier is used to recognize the identity of each individual. The performance of six distance measures, namely Euclidean, Cosine, Bray-Curtis, Mahalanobis, Correlation, and Manhattan, is investigated. Experimental results reveal that the presented method attains a higher rank-one recognition rate than recent approaches in the literature on four benchmark face datasets: ORL, GT, FEI, and Yale. Moreover, the results show that the proposed method not only extracts features more effectively but also improves the overall efficiency of the face recognition system.
Learning from imbalanced data is one of the most challenging problems in binary classification, and it has gained increasing importance in recent years. When the class distribution is imbalanced, classical machine learning algorithms tend to be strongly biased toward the majority class and disregard the minority class. The resulting accuracy may therefore be high, yet the model fails to recognize instances of the minority class, leading to many misclassifications. Various methods have been proposed in the literature to handle the imbalance problem, but most are complicated and tend to introduce unnecessary noise. In this paper, we propose a simple oversampling method based on the multivariate Gaussian distribution and K-means clustering, called GK-Means. The new method aims to avoid generating noise and to control imbalance both between and within classes. Experiments have been carried out with six classifiers and four oversampling methods. Experimental results on several imbalanced datasets show that the proposed GK-Means outperforms other oversampling methods and improves classification performance as measured by F1-score and accuracy.
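The abstract describes GK-Means only at a high level (multivariate Gaussian distribution plus K-means clustering). One plausible reading, sketched below under assumed details that the abstract does not specify, is to cluster the minority class with K-means, fit a Gaussian to each cluster, and draw synthetic minority samples in proportion to cluster size; this is an interpretation, not the paper's exact algorithm:

```python
import numpy as np

def _kmeans_labels(X, k, iters=50, rng=None):
    """Minimal K-means: returns a cluster index for every row of X."""
    if rng is None:
        rng = np.random.default_rng(0)
    centers = X[rng.choice(len(X), size=k, replace=False)].astype(float)
    labels = np.zeros(len(X), dtype=int)
    for _ in range(iters):
        labels = np.argmin(((X[:, None, :] - centers[None]) ** 2).sum(-1), axis=1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = X[labels == j].mean(axis=0)
    return labels

def gaussian_kmeans_oversample(minority, n_new, k=2, seed=0):
    """Cluster the minority class, fit a multivariate Gaussian per cluster,
    and draw synthetic samples in proportion to cluster size."""
    rng = np.random.default_rng(seed)
    labels = _kmeans_labels(minority, k, rng=rng)
    synthetic = []
    for j in range(k):
        pts = minority[labels == j]
        if len(pts) < 2:  # too small to estimate a covariance matrix
            continue
        n_j = int(round(n_new * len(pts) / len(minority)))
        # Small ridge on the covariance keeps sampling stable.
        cov = np.cov(pts, rowvar=False) + 1e-6 * np.eye(pts.shape[1])
        synthetic.append(rng.multivariate_normal(pts.mean(axis=0), cov, size=n_j))
    return np.vstack(synthetic)

# Toy minority class: two well-separated blobs of 10 points each in 2-D.
rng = np.random.default_rng(1)
minority = np.vstack([rng.normal(0.0, 0.3, size=(10, 2)),
                      rng.normal(8.0, 0.3, size=(10, 2))])
new_pts = gaussian_kmeans_oversample(minority, n_new=20, k=2)
print(new_pts.shape)
```

Sampling per cluster rather than from one global Gaussian is what lets a method like this address within-class imbalance as well as the between-class imbalance that plain oversampling targets.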