Abstract: Linear discriminant analysis (LDA) is one of the most popular supervised dimensionality reduction (DR) techniques; it obtains discriminant projections by maximizing the ratio of average-case between-class scatter to average-case within-class scatter. Two recent discriminant analysis (DA) algorithms, minimal distance maximization (MDM) and worst-case LDA (WLDA), obtain projections by optimizing worst-case scatters. In this paper, we develop a new LDA framework, LDA with worst-case between-class separation and average within-class compactness (WSAC), which maximizes the ratio of worst-case between-class scatter to average-case within-class scatter. This can be achieved by relaxing the trace ratio optimization to a distance metric learning problem. Comparative experiments demonstrate its effectiveness. In addition, DA counterparts that use the local geometry of the data and the kernel trick can likewise be embedded into our framework and solved in the same way.
Funding: This work was supported in part by the National Natural Science Foundation of China (Grant No. 61170151) and the Natural Science Foundation of Jiangsu Province (BK2011728), and was sponsored by the QingLan Project and the Fundamental Research Funds for the Central Universities (NZ2013306).
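The criterion described in the abstract can be illustrated with a small sketch: for a fixed projection matrix W, the WSAC-style objective takes the *minimum* projected distance between any pair of class means (worst-case between-class separation) over the usual average within-class scatter tr(WᵀS_w W). This is only a minimal evaluation of the objective under our own assumed names (`scatter_ratio_worst_between`, etc.), not the authors' optimization algorithm, which relaxes the trace ratio to a distance metric learning problem.

```python
import numpy as np
from itertools import combinations

def scatter_ratio_worst_between(X, y, W):
    """Evaluate a WSAC-style criterion for projection W (illustrative only).

    Numerator: the smallest squared distance between projected class means,
    i.e. the worst-case between-class separation.
    Denominator: tr(W^T S_w W), the average-case within-class scatter,
    where S_w is the pooled within-class scatter matrix.
    """
    classes = np.unique(y)
    mu = {c: X[y == c].mean(axis=0) for c in classes}

    # Worst-case between-class scatter: minimum over all class pairs.
    between = min(
        np.sum((W.T @ (mu[a] - mu[b])) ** 2)
        for a, b in combinations(classes, 2)
    )

    # Pooled within-class scatter matrix S_w.
    Sw = sum(
        (X[y == c] - mu[c]).T @ (X[y == c] - mu[c]) for c in classes
    )
    within = np.trace(W.T @ Sw @ W)
    return between / within
```

Maximizing this ratio over W favors projections in which even the two closest classes stay separated, whereas classical LDA can sacrifice a poorly separated pair as long as the *average* between-class scatter is large.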