Marginal Fisher analysis (MFA) aims not only to maintain the original relations among neighboring data points of the same class but also to keep neighboring data points of different classes apart. MFA can effectively overcome the limitations of linear discriminant analysis (LDA) arising from its data distribution assumption and its restricted number of available projection directions. However, MFA suffers from the undersampled problem. Generalized marginal Fisher analysis (GMFA), based on a new optimization criterion, is presented, which is applicable to undersampled problems. The solutions to the proposed GMFA criterion are derived and can be characterized in closed form. Among these solutions, two specific algorithms, namely normal MFA (NMFA) and orthogonal MFA (OMFA), are studied, and methods to implement NMFA and OMFA are proposed. A comparative study on the undersampled problem of face recognition is conducted to evaluate NMFA and OMFA in terms of classification accuracy, which demonstrates the effectiveness of the proposed algorithms.
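For context, the graph-embedding setup that MFA builds on can be sketched as follows. This is a minimal illustration only, not the paper's GMFA closed-form solution: the small ridge term used to keep the compactness scatter invertible in the undersampled case stands in for the paper's new criterion, and the function name and parameters (k1, k2, d) are illustrative.

```python
import numpy as np
from scipy.linalg import eigh

def mfa(X, y, k1=5, k2=20, d=10):
    """Minimal MFA sketch: X is (n_samples, n_features), y holds class labels.

    k1: same-class neighbors for the intrinsic (compactness) graph.
    k2: cross-class neighbors for the penalty (separability) graph.
    d : target dimensionality of the projection.
    """
    n = X.shape[0]
    D = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=2)  # pairwise distances
    W_c = np.zeros((n, n))  # intrinsic graph over same-class neighbors
    W_p = np.zeros((n, n))  # penalty graph over different-class neighbors
    for i in range(n):
        same = np.where(y == y[i])[0]
        same = same[same != i]
        diff = np.where(y != y[i])[0]
        for j in same[np.argsort(D[i, same])][:k1]:
            W_c[i, j] = W_c[j, i] = 1.0
        for j in diff[np.argsort(D[i, diff])][:k2]:
            W_p[i, j] = W_p[j, i] = 1.0
    L_c = np.diag(W_c.sum(1)) - W_c  # graph Laplacians
    L_p = np.diag(W_p.sum(1)) - W_p
    S_c = X.T @ L_c @ X  # within-class compactness scatter
    S_p = X.T @ L_p @ X  # between-class separability scatter
    # Maximize separability relative to compactness via a generalized
    # eigenproblem; the ridge is an illustrative fix for singular S_c.
    evals, evecs = eigh(S_p, S_c + 1e-6 * np.eye(S_c.shape[0]))
    return evecs[:, np.argsort(evals)[::-1][:d]]  # top-d projection directions
```

In this formulation the undersampled problem shows up as a singular S_c; GMFA's contribution, per the abstract, is a criterion whose solutions remain well defined there without such ad hoc regularization.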
This paper introduces the idea of generating a kernel from an arbitrary function by embedding the training samples into the function. Based on this idea, we present two nonlinear feature extraction methods: generating kernel principal component analysis (GKPCA) and generating kernel Fisher discriminant (GKFD). These two methods are shown to be equivalent to function-mapping-space PCA (FMS-PCA) and function-mapping-space linear discriminant analysis (FMS-LDA), respectively. This equivalence reveals that the generating kernel is actually determined by the corresponding function map. From the generating kernel point of view, current kernel Fisher discriminant (KFD) algorithms can be classified into two categories: KPCA+LDA based algorithms and straightforward KFD (SKFD) algorithms. The KPCA+LDA based algorithms work directly on the given kernel and are not suitable for non-kernel functions, while the SKFD algorithms essentially work on the kernel generated from a given symmetric function and are therefore suitable for non-kernels as well as kernels. Finally, we outline tensor-based feature extraction methods and discuss ways of extending them to their generating kernel versions.
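One natural reading of "embedding the training samples into the function" is the function map phi(x) = (f(x, x_1), ..., f(x, x_n)) over the n training samples, whose inner products define a kernel that is positive semidefinite even when f itself is not a valid kernel. The sketch below follows that reading under stated assumptions; the function names and the choice of f are illustrative, not the paper's implementation.

```python
import numpy as np

def function_map(f, X_train, X):
    """Embed samples into the function: phi(x) = (f(x, x_1), ..., f(x, x_n))."""
    return np.array([[f(x, xi) for xi in X_train] for x in X])

def generating_kernel(f, X_train, A, B):
    """Kernel induced by the function map: k(a, b) = phi(a) . phi(b)."""
    return function_map(f, X_train, A) @ function_map(f, X_train, B).T

# Example with a symmetric function that is not PSD in general.
f = lambda x, y: np.tanh(x @ y)
X_train = np.random.randn(20, 5)
K = generating_kernel(f, X_train, X_train, X_train)
# K = F F^T is symmetric PSD by construction, so standard KPCA/KFD
# machinery applies whether or not f satisfies Mercer's condition --
# consistent with the claim that SKFD-style algorithms handle non-kernels.
```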
Funding: supported by the Science Foundation of Fujian Province of China (No. 2010J05099).
Funding: supported by the National Natural Science Foundation of China (Grant Nos. 60620160097, 60602038, 60472060, and 60473039), the Natural Science Foundation of Guangdong Province of China (Grant No. 06300862), and the China Postdoctoral Science Foundation (Second-Class Grant No. 2005038202).
Funding: supported by the Program for New Century Excellent Talents in University of China, the NUST Outstanding Scholar Supporting Program, and the National Natural Science Foundation of China (Grant No. 60973098).