Funding: Projects (50275150, 61173052) supported by the National Natural Science Foundation of China
Abstract: Dimensionality reduction methods play an important role in face recognition. Principal component analysis (PCA) and two-dimensional principal component analysis (2DPCA) are two important methods in this field. Recent research suggests that the 2DPCA method is superior to the PCA method. To examine whether this conclusion always holds, a comprehensive comparison study between the PCA and 2DPCA methods was carried out. A novel concept, called column-image difference (CID), was proposed to analyze the difference between the PCA and 2DPCA methods in theory. It is found that there exist restrictive conditions under which 2DPCA outperforms PCA. After the theoretical analysis, experiments were conducted on four well-known face image databases. The experimental results confirm the validity of the theoretical claims.
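As a minimal illustration of the difference this abstract discusses, the NumPy sketch below contrasts PCA, which vectorizes each face image before the eigen-decomposition, with 2DPCA, which keeps images in matrix form and builds a much smaller image covariance matrix. Shapes, function names, and the choice of the column-direction covariance are illustrative assumptions, not taken from the paper.

```python
# Minimal sketch (NumPy) contrasting PCA and 2DPCA projections of face images.
# Shapes and variable names are illustrative, not taken from the paper.
import numpy as np

def pca_project(images, k):
    # images: (n, h, w); PCA vectorizes each image before eigen-decomposition.
    X = images.reshape(len(images), -1)          # (n, h*w)
    X_centered = X - X.mean(axis=0)
    cov = X_centered.T @ X_centered / len(X)     # (h*w, h*w) covariance
    eigvals, eigvecs = np.linalg.eigh(cov)
    W = eigvecs[:, -k:]                          # top-k eigenvectors
    return X_centered @ W                        # (n, k) feature vectors

def twodpca_project(images, k):
    # 2DPCA keeps the matrix form and builds a (w, w) image covariance
    # from the rows of the centered images.
    mean = images.mean(axis=0)
    centered = images - mean                     # (n, h, w)
    G = np.einsum('nhw,nhv->wv', centered, centered) / len(images)
    eigvals, eigvecs = np.linalg.eigh(G)
    W = eigvecs[:, -k:]                          # (w, k) projection
    return centered @ W                          # (n, h, k) feature matrices
```

Because the 2DPCA covariance is only w x w, its eigen-decomposition is far cheaper than the (h*w) x (h*w) problem faced by vectorized PCA, which is one practical reason the two methods are so often compared.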
Abstract: As a new dimensionality reduction method, two-dimensional principal component analysis (2DPCA) can be applied well in face recognition, but it is susceptible to outliers. Therefore, this paper proposes a new 2DPCA algorithm based on Angle-2DPCA. To reduce the reconstruction error and maximize the variance simultaneously, we choose the F-norm as the measure and propose the Fp-2DPCA algorithm. Considering that an image has two dimensions, we also present a bilateral Fp-2DPCA algorithm. Experiments show that, compared with other algorithms, the Fp-2DPCA algorithm achieves better dimensionality reduction and better robustness to outliers.
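The sketch below is only a schematic of the general robustness idea behind F-norm criteria: because per-image F-norm residuals grow linearly rather than quadratically, an iteratively reweighted loop down-weights outlying images. It is not the authors' Fp-2DPCA (nor the bilateral variant); the function name and the reweighting rule are hypothetical.

```python
# Schematic iteratively reweighted 2DPCA using a per-image F-norm residual.
# Illustrative only: outlier images receive smaller weights because their
# F-norm reconstruction error is large. This is NOT the paper's Fp-2DPCA.
import numpy as np

def fnorm_weighted_2dpca(images, k, n_iter=10, eps=1e-8):
    n, h, w = images.shape
    mean = images.mean(axis=0)
    centered = images - mean
    weights = np.ones(n)
    for _ in range(n_iter):
        # Weighted image covariance (w, w); cleaner images count more.
        G = np.einsum('n,nhw,nhv->wv', weights, centered, centered) / weights.sum()
        eigvals, eigvecs = np.linalg.eigh(G)
        W = eigvecs[:, -k:]                              # (w, k) projection
        # Per-image F-norm reconstruction error drives the reweighting.
        recon = centered @ W @ W.T                       # (n, h, w)
        resid = np.linalg.norm((centered - recon).reshape(n, -1), axis=1)
        weights = 1.0 / (resid + eps)
    return W, mean
```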
Abstract: Low-dimensional feature representation with enhanced discriminatory power is of paramount importance to face recognition systems. Most traditional linear discriminant analysis (LDA)-based methods suffer from the disadvantage that their optimality criteria are not directly related to the classification ability of the obtained feature representation. Moreover, their classification accuracy is affected by the “small sample size” (SSS) problem, which is often encountered in face recognition tasks. In this paper, we propose a new technique coined Relevance-Weighted Two-Dimensional Linear Discriminant Analysis (RW2DLDA). It overcomes the singularity problem implicitly while achieving efficiency. Moreover, a weighted discriminant hyperplane is used in the between-class scatter matrix, and a relevance-weighting (RW) method is used in the within-class scatter matrix to weight the information and resolve confusable data in these classes. Experiments on two well-known face databases show the effectiveness of the proposed method. Comparisons with other LDA-based methods show that our method improves the LDA classification performance.
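To make the role of the weights concrete, the sketch below shows a generic 2DLDA-style projection in which per-class weights enter the between-class and within-class image scatter matrices. The actual relevance-weighting and discriminant-hyperplane weighting of RW2DLDA are not reproduced; the uniform class_weights default and the column-direction scatters are assumptions for illustration.

```python
# Illustrative 2DLDA-style projection with simple per-class weights.
# The exact RW2DLDA weighting scheme is not reproduced here; this sketch
# only shows where weights enter the between/within-class image scatters.
import numpy as np

def weighted_2dlda(images, labels, k, class_weights=None):
    labels = np.asarray(labels)
    classes = np.unique(labels)
    if class_weights is None:
        class_weights = {c: 1.0 for c in classes}    # hypothetical weights
    overall_mean = images.mean(axis=0)
    w = images.shape[2]
    Sb = np.zeros((w, w))
    Sw = np.zeros((w, w))
    for c in classes:
        Xc = images[labels == c]
        mc = Xc.mean(axis=0)
        d = mc - overall_mean                        # (h, w)
        Sb += class_weights[c] * len(Xc) * d.T @ d   # weighted between-class scatter
        diffs = Xc - mc
        Sw += class_weights[c] * np.einsum('nhw,nhv->wv', diffs, diffs)
    # Solve the eigenproblem for Sw^{-1} Sb (assumes Sw is nonsingular,
    # which typically holds for the small w x w image scatters).
    eigvals, eigvecs = np.linalg.eig(np.linalg.solve(Sw, Sb))
    order = np.argsort(eigvals.real)[::-1]
    return eigvecs[:, order[:k]].real                # (w, k) projection
```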
Abstract: Linear Discriminant Analysis (LDA) is a well-known scheme for feature extraction and dimension reduction. It has been used widely in many applications involving high-dimensional data, such as face recognition and image retrieval. An intrinsic limitation of classical LDA is the so-called singularity problem: it fails when all scatter matrices are singular. A well-known approach to dealing with the singularity problem is to apply an intermediate dimension reduction stage using Principal Component Analysis (PCA) before LDA. This algorithm, called PCA + LDA, is used widely in face recognition. However, PCA + LDA has high costs in time and space, due to the need for an eigen-decomposition involving the scatter matrices. Two-Dimensional Linear Discriminant Analysis (2DLDA) implicitly overcomes the singularity problem while achieving efficiency. The difference between 2DLDA and classical LDA lies in the model for data representation: classical LDA works with a vectorized representation of the data, while 2DLDA works with data in matrix representation. To deal with the singularity problem, we propose a new technique coined Weighted Scatter-Difference-Based Two-Dimensional Discriminant Analysis (WSD2DDA). The algorithm is applied to face recognition and compared with PCA + LDA and 2DLDA. Experiments show that WSD2DDA achieves competitive recognition accuracy while being much more efficient.
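The following sketch illustrates the scatter-difference idea in the 2D (matrix) setting: projecting onto the top eigenvectors of Sb - alpha * Sw avoids inverting a possibly singular within-class scatter matrix. The weighting details of WSD2DDA are not reproduced here, and alpha is a hypothetical trade-off parameter.

```python
# Sketch of a scatter-difference criterion in the 2D (matrix) setting:
# instead of inverting Sw (which may be singular), project onto the top
# eigenvectors of Sb - alpha * Sw. Illustrative only; not WSD2DDA itself.
import numpy as np

def scatter_difference_2dda(images, labels, k, alpha=1.0):
    labels = np.asarray(labels)
    overall_mean = images.mean(axis=0)
    w = images.shape[2]
    Sb = np.zeros((w, w))
    Sw = np.zeros((w, w))
    for c in np.unique(labels):
        Xc = images[labels == c]
        mc = Xc.mean(axis=0)
        d = mc - overall_mean
        Sb += len(Xc) * d.T @ d                      # between-class scatter
        diffs = Xc - mc
        Sw += np.einsum('nhw,nhv->wv', diffs, diffs) # within-class scatter
    # No matrix inversion: the difference criterion sidesteps a singular Sw.
    eigvals, eigvecs = np.linalg.eigh(Sb - alpha * Sw)
    return eigvecs[:, -k:]                           # (w, k) projection
```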