Principal component analysis (PCA) is fundamental in many pattern recognition applications. Because conventional L2-norm based PCA (L2-PCA) is sensitive to outliers, much research has focused on minimizing the reconstruction error under the L1 norm (L1-PCA-REM). Recently, a variance maximization formulation of PCA with the L1 norm (L1-PCA-VM) has been proposed, along with new greedy and non-greedy solutions. From a gradient ascent perspective on the optimization, we show that the L1-PCA-VM formulation is problematic for learning principal components and that only the greedy solution achieves the intended robustness; these findings are verified by experiments on synthetic and real-world datasets.
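As a rough illustration of the greedy L1-PCA-VM approach the abstract refers to, the sketch below implements a Kwak-style sign-flipping iteration that seeks a unit vector w maximizing the L1 dispersion ||Xw||_1 for a centered data matrix X. The function name, iteration cap, and convergence test are illustrative assumptions, not the paper's exact algorithm.

```python
import numpy as np

def l1_pca_greedy(X, n_iter=100, seed=0):
    """Greedy L1-PCA-VM sketch: find a unit vector w that (locally)
    maximizes ||X w||_1 via sign-flipping updates.
    X is an (n_samples, n_features) array, assumed already centered."""
    rng = np.random.default_rng(seed)
    w = rng.standard_normal(X.shape[1])
    w /= np.linalg.norm(w)                 # start from a random unit vector
    for _ in range(n_iter):
        s = np.sign(X @ w)                 # per-sample signs of projections
        s[s == 0] = 1.0                    # avoid zero signs stalling the update
        w_new = X.T @ s                    # ascent direction for the L1 objective
        w_new /= np.linalg.norm(w_new)
        if np.allclose(w_new, w):          # fixed point reached: local maximum
            break
        w = w_new
    return w
```

Each update can only increase ||Xw||_1, so the iteration converges to a local maximizer; subsequent components would be extracted greedily after deflating X against the found direction.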
Funding: Project supported by the National Natural Science Foundation of China (Nos. 61071131 and 61271388), the Beijing Natural Science Foundation (No. 4122040), the Research Project of Tsinghua University (No. 2012Z01011), and the United Technologies Research Center (UTRC).