Foley-Sammon linear discriminant analysis (FSLDA) and uncorrelated linear discriminant analysis (ULDA) are two well-known variants of linear discriminant analysis. Both ULDA and FSLDA search for the kth discriminant vector in an (n-k+1)-dimensional subspace, subject to their respective constraints. A rigorous proof shows that ULDA vectors are, in essence, the covariance-orthogonal vectors of the corresponding eigen-equation; algorithms that compute these covariance-orthogonal vectors are therefore equivalent to the original ULDA algorithm, which is time-consuming. Theoretical analysis also reveals, for the first time, that the Fisher criterion value of each FSLDA vector is no less than that of the corresponding ULDA vector. For a discriminant vector, the larger its Fisher criterion value, the greater its discriminating power, so the larger Fisher criterion values of FSLDA vectors are an advantage. On the other hand, any two feature components extracted by FSLDA vectors are in general statistically correlated with each other, which may put the set of discriminant vectors at a disadvantage. In contrast, any two feature components extracted by ULDA vectors are statistically uncorrelated. Two experiments, on the CENPARMI handwritten numeral database and the ORL database, are performed. The experimental results are consistent with the theoretical analysis of the Fisher criterion values of ULDA and FSLDA vectors. The experiments also show that the equivalent ULDA algorithm presented in this paper is much more efficient than the original ULDA algorithm, as the theory predicts.
Moreover, it appears that when there is high statistical correlation between the feature components extracted by FSLDA vectors, FSLDA does not perform well, despite the larger Fisher criterion value of every FSLDA vector. However, when the average correlation coefficient of the feature components extracted by FSLDA vectors is low, the performance of FSLDA is comparable with that of ULDA.
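The equivalence claim above — that ULDA vectors are the covariance-orthogonal vectors of the eigen-equation, so they can be obtained directly from a generalized eigen-problem instead of the original iterative ULDA algorithm — can be sketched as follows. The synthetic three-class data and the helper `fisher` are illustrative assumptions, not the paper's experimental setup:

```python
import numpy as np
from scipy.linalg import eigh

rng = np.random.default_rng(0)
# synthetic 3-class data in 5 dimensions (illustrative only -- not the
# CENPARMI or ORL data used in the paper)
classes = [rng.normal(loc=m, size=(40, 5))
           for m in ([0, 0, 0, 0, 0], [2, 0, 0, 0, 0], [0, 2, 0, 0, 0])]
mean_all = np.vstack(classes).mean(axis=0)

Sb = np.zeros((5, 5))  # between-class scatter
Sw = np.zeros((5, 5))  # within-class scatter
for Xi in classes:
    mi = Xi.mean(axis=0)
    Sb += len(Xi) * np.outer(mi - mean_all, mi - mean_all)
    Sw += (Xi - mi).T @ (Xi - mi)
St = Sb + Sw           # total (mixture) scatter

# Equivalent ULDA computation: solve the generalized eigen-problem
# Sb w = lambda * St w. scipy's eigh normalizes so that V.T @ St @ V = I,
# i.e. the eigenvectors are St-orthogonal (covariance-orthogonal) by
# construction, and the extracted feature components are uncorrelated.
_, V = eigh(Sb, St)
W = V[:, ::-1][:, :2]  # the two leading discriminant vectors

def fisher(w):
    """Fisher criterion J(w) = (w' Sb w) / (w' Sw w)."""
    return (w @ Sb @ w) / (w @ Sw @ w)

cross_cov = W[:, 0] @ St @ W[:, 1]  # ~0: the two features are uncorrelated
```

One eigh call replaces the vector-by-vector constrained optimization of the original algorithm, which is where the efficiency gain reported in the experiments comes from.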
Abstract: Locality preserving projections (LPP) only preserves the local neighborhood information of targets after projection. To better characterize the manifold structure of the data, within-class and between-class local scatter matrices are introduced, and an uncorrelated locality-preserving projection method based on the effective and stable maximum margin criterion (MMC) is presented. When maximizing the trace difference of the scatter matrices, a scale factor α is introduced to weight the within-class and between-class local scatter matrices, so as to find a subspace better suited to classification while avoiding the small-sample-size problem. More importantly, the discriminant features extracted under the maximum margin criterion are in general statistically correlated, which makes the feature set redundant; therefore, an uncorrelatedness constraint is added, and a derived formula is used to extract a statistically uncorrelated discriminant feature set, which is more favorable for correct recognition. Experiments on the Yale face database, the PIE face database and the MNIST handwritten digit database show that the proposed method is effective and stable, and achieves higher recognition rates than LPP, LDA (linear discriminant analysis) and LPMIP (locality-preserved maximum information projection).
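As a concrete illustration of the trace-difference criterion described above, the MMC projection directions can be obtained from an ordinary symmetric eigen-decomposition, with no inverse of the within-class scatter required — which is how the small-sample-size problem is avoided. The toy scatter matrices and the weighting form Sb − α·Sw below are assumptions for illustration; the paper's neighborhood-based local scatter matrices and its uncorrelatedness constraint are not reproduced here:

```python
import numpy as np

rng = np.random.default_rng(1)
# stand-ins for the between-class and within-class LOCAL scatter matrices
# (the real ones are built from neighborhood graphs; these are toy matrices)
A = rng.normal(size=(10, 5))
Sb = A.T @ A / 10
B = rng.normal(size=(3, 5))
Sw = B.T @ B / 3       # rank 3 of 5: singular, as in small-sample settings

alpha = 0.5            # scale factor weighting within- vs between-class scatter

# MMC: maximize trace(W' (Sb - alpha * Sw) W) over orthonormal W,
# solved by the top eigenvectors of the symmetric matrix Sb - alpha * Sw.
# No inverse of Sw appears, so a singular Sw is no obstacle.
evals, V = np.linalg.eigh(Sb - alpha * Sw)
W = V[:, ::-1][:, :2]  # two leading projection directions
```

Note that, unlike Fisher-quotient methods, the criterion stays well defined even when Sw is rank-deficient, since the decomposition acts on the difference matrix rather than on Sw⁻¹Sb.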
Funding: The National Natural Science Foundation of China (Grant Nos. 60472060, 60473039 and 60472061)