Abstract
Direct LDA (DLDA) is an extension of Linear Discriminant Analysis (LDA) proposed to overcome the small sample size problem. It was claimed to exploit all the discriminative information both inside and outside the null space of the within-class scatter matrix; however, many counterexamples show that this is not the case. To better understand the characteristics of DLDA, this paper analyzes it theoretically and concludes that DLDA based on the traditional Fisher criterion makes almost no use of the null space and may therefore lose useful discriminative information, whereas DLDA based on a generalized Fisher criterion is equivalent to null-space LDA and orthogonal LDA, provided that the discriminant vectors are constrained to be orthogonal and a mild condition holds, one generally satisfied in high-dimensional, small-sample applications. Comparative experiments on the ORL and YALE face databases are also consistent with the theoretical analysis.
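The two-step DLDA procedure that the paper analyzes can be sketched as follows. This is a minimal NumPy illustration of the standard DLDA recipe (diagonalize the between-class scatter first, then the projected within-class scatter), not the paper's own code; the function name, tolerance, and synthetic data are our assumptions.

```python
# Minimal sketch of Direct LDA (DLDA); names and tolerances are illustrative.
import numpy as np

def direct_lda(X, y, n_components):
    """X: (n_samples, n_features) data matrix; y: integer class labels."""
    classes = np.unique(y)
    mean_all = X.mean(axis=0)
    n_feat = X.shape[1]
    Sb = np.zeros((n_feat, n_feat))  # between-class scatter
    Sw = np.zeros((n_feat, n_feat))  # within-class scatter
    for c in classes:
        Xc = X[y == c]
        mc = Xc.mean(axis=0)
        d = (mc - mean_all)[:, None]
        Sb += len(Xc) * (d @ d.T)
        Sw += (Xc - mc).T @ (Xc - mc)
    # Step 1: diagonalize Sb, keep only directions with nonzero eigenvalues,
    # and whiten them so that Y.T @ Sb @ Y = I.
    evals, evecs = np.linalg.eigh(Sb)
    keep = evals > 1e-10
    Y = evecs[:, keep] / np.sqrt(evals[keep])
    # Step 2: diagonalize the projected within-class scatter Y.T @ Sw @ Y and
    # keep the eigenvectors with the smallest eigenvalues; these are the
    # directions DLDA was claimed to exploit, including near-null-space ones.
    Sw_proj = Y.T @ Sw @ Y
    _, U = np.linalg.eigh(Sw_proj)   # eigh sorts eigenvalues ascending
    return Y @ U[:, :n_components]
```

Note that Step 1 discards the null space of Sb before the within-class scatter is ever examined, which is the structural property underlying the paper's conclusion about what information DLDA can and cannot use.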
Source
Journal of Electronics & Information Technology (《电子与信息学报》), 2009, No. 11, pp. 2632-2636 (5 pages)
Indexed in: EI, CSCD, Peking University Core (北大核心)
Keywords
Pattern recognition; Fisher criterion; dimensionality reduction; Linear Discriminant Analysis (LDA); small sample size