Funding: Supported by the National Natural Science Foundation of China (60973097) and the Scientific Research Foundation of Liaocheng University (X0810029).
Abstract: Locality preserving projection (LPP) is a typical and popular dimensionality reduction (DR) method, and it can potentially find discriminative projection directions by preserving the local geometric structure in data. However, LPP is based on a neighborhood graph artificially constructed from the original data, and its performance relies on how well the nearest neighbor criterion works in the original space. To address this issue, a novel DR algorithm, called self-dependent LPP (sdLPP), is proposed. It is based on the observation that the nearest neighbor criterion usually performs better in the LPP-transformed space than in the original space. First, LPP is performed based on a typical neighborhood graph; then, a new neighborhood graph is constructed in the LPP-transformed space and LPP is repeated. Furthermore, a new criterion, called the improved Laplacian score, is developed as an empirical reference for the discriminative power and for terminating the iteration. Finally, the feasibility and effectiveness of the method are verified on several publicly available UCI and face data sets with promising results.
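The iterative procedure described in the abstract can be outlined in a short sketch. The Python code below is only an illustrative outline, not the authors' implementation: it builds a k-nearest-neighbor heat-kernel graph, runs standard LPP, rebuilds the graph in the transformed space, and repeats for a fixed number of iterations in place of the improved Laplacian score stopping rule; the regularization term and the parameter values (k, sigma, n_iter) are assumptions.

import numpy as np
from scipy.linalg import eigh
from sklearn.neighbors import kneighbors_graph

def lpp(X, W, n_components):
    # One LPP step: solve X^T L X a = lambda X^T D X a and keep the
    # eigenvectors with the smallest eigenvalues as projection directions.
    D = np.diag(W.sum(axis=1))
    L = D - W
    A = X.T @ L @ X
    B = X.T @ D @ X + 1e-6 * np.eye(X.shape[1])   # small regularizer (assumption) for stability
    vals, vecs = eigh(A, B)                        # eigenvalues returned in ascending order
    return vecs[:, :n_components]                  # d x n_components projection matrix

def build_graph(Z, k=5, sigma=1.0):
    # Symmetric k-NN heat-kernel affinity built in the (possibly projected) space.
    G = kneighbors_graph(Z, k, mode='distance', include_self=False)
    W = G.toarray()
    W[W > 0] = np.exp(-W[W > 0] ** 2 / (2 * sigma ** 2))
    return np.maximum(W, W.T)                      # symmetrize

def sd_lpp(X, n_components=2, k=5, n_iter=3):
    # Self-dependent LPP sketch: start from the graph in the original space,
    # then rebuild the graph in the LPP-transformed space and repeat LPP.
    W = build_graph(X, k)
    for _ in range(n_iter):                        # fixed iteration count stands in for the
        P = lpp(X, W, n_components)                # improved-Laplacian-score stopping rule
        W = build_graph(X @ P, k)
    return P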
Funding: The National Natural Science Foundation of China (No. 61231002, 61273266) and the Ph.D. Programs Foundation of the Ministry of Education of China (No. 20110092130004).
Abstract: Semi-supervised discriminant analysis (SDA), which uses a combination of multiple embedding graphs, and kernel SDA (KSDA) are adopted in supervised speech emotion recognition. When the emotional factors of the speech signal samples are preprocessed, different categories of features, including pitch, zero-crossing rate, energy, duration, formant, and Mel frequency cepstrum coefficient (MFCC), as well as their statistical parameters, are extracted from the utterances of the samples. In the dimensionality reduction stage, before the feature vectors are sent into the classifiers, parameter-optimized SDA and KSDA are performed to reduce dimensionality. Experiments on the Berlin speech emotion database show that SDA for supervised speech emotion recognition outperforms some other state-of-the-art dimensionality reduction methods based on spectral graph learning, such as linear discriminant analysis (LDA), locality preserving projections (LPP), and marginal Fisher analysis (MFA), when multi-class support vector machine (SVM) classifiers are used. Additionally, KSDA achieves better recognition performance based on kernelized data mapping compared with the above methods, including SDA.
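The recognition pipeline described above (utterance-level feature extraction, dimensionality reduction, multi-class SVM classification) can be sketched as follows. This is not the authors' code: LDA stands in for SDA/KSDA, which are not available in scikit-learn, and the feature matrix X and labels y are random placeholders for the statistics that would be extracted from the Berlin database.

import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 60))      # placeholder utterance-level feature vectors (assumption)
y = rng.integers(0, 7, size=200)    # placeholder labels for the 7 Berlin emotion classes

pipeline = make_pipeline(
    StandardScaler(),                               # normalize features before projection
    LinearDiscriminantAnalysis(n_components=5),     # DR step standing in for SDA/KSDA
    SVC(kernel='rbf'),                              # multi-class SVM classifier
)
print(cross_val_score(pipeline, X, y, cv=5).mean())  # cross-validated accuracy on the placeholders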