Journal Article

Dimensionality reduction via kernel sparse representation (cited by 3)
Abstract: Dimensionality reduction (DR) methods based on sparse representation, as one of the hottest research topics, have achieved remarkable performance in many applications in recent years. However, it is challenging for existing sparse representation based methods to solve nonlinear problems, owing to the limitation of seeking sparse representations of data in the original space. Motivated by kernel tricks, we propose a new framework called empirical kernel sparse representation (EKSR) to solve nonlinear problems. In this framework, nonlinearly separable data are mapped into a kernel space in which nonlinear similarity can be captured; the data in the kernel space are then reconstructed by sparse representation to preserve the sparse structure, which is obtained by minimizing an ℓ1 regularization-related objective function. EKSR provides new insights into dimensionality reduction and extends two models: 1) empirical kernel sparsity preserving projection (EKSPP), a feature extraction method based on sparsity preserving projection (SPP); 2) empirical kernel sparsity score (EKSS), a feature selection method based on sparsity score (SS). Both methods can choose neighborhoods automatically, thanks to the natural discriminative power of sparse representation. Compared with several existing approaches, the proposed framework reduces computational complexity and is more convenient in practice.
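The two-step pipeline the abstract describes can be sketched as follows. This is a minimal illustration, not the authors' exact formulation: the function names, the RBF kernel choice, and the use of scikit-learn's Lasso as the ℓ1 solver are all assumptions made for demonstration; the kernel map is the standard empirical kernel map built from the eigendecomposition of the Gram matrix.

```python
# Hedged sketch of the EKSR idea: (1) map data into an empirical kernel
# space, (2) reconstruct each mapped sample from all the others with an
# l1-regularized (sparse) regression; the coefficients form a sparse graph.
import numpy as np
from sklearn.linear_model import Lasso
from sklearn.metrics.pairwise import rbf_kernel  # illustrative kernel choice

def empirical_kernel_map(X, gamma=1.0):
    """Map X into the empirical kernel space: K = V diag(vals) V^T,
    Phi = V diag(sqrt(vals)), so that Phi @ Phi.T == K."""
    K = rbf_kernel(X, X, gamma=gamma)
    vals, vecs = np.linalg.eigh(K)
    keep = vals > 1e-10                      # drop numerically null directions
    return vecs[:, keep] * np.sqrt(vals[keep])

def sparse_weights(Phi, alpha=0.01):
    """Reconstruct each mapped sample from the remaining samples under an
    l1 penalty; nonzero coefficients pick the 'neighborhood' automatically."""
    n = Phi.shape[0]
    S = np.zeros((n, n))
    for i in range(n):
        rest = np.delete(np.arange(n), i)
        lasso = Lasso(alpha=alpha, fit_intercept=False, max_iter=5000)
        lasso.fit(Phi[rest].T, Phi[i])       # columns = the other samples
        S[i, rest] = lasso.coef_
    return S

rng = np.random.default_rng(0)
X = rng.normal(size=(30, 5))                 # toy data, 30 samples in R^5
Phi = empirical_kernel_map(X)
S = sparse_weights(Phi)                      # sparse affinity graph, 30 x 30
```

The sparse graph `S` is what an SPP-style projection (EKSPP) or a sparsity score (EKSS) would then consume; note that, unlike k-NN graphs, no neighborhood size has to be chosen by hand.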
Source: Frontiers of Computer Science (SCIE, EI, CSCD), 2014, No. 5, pp. 807-815 (9 pages).
Keywords: feature extraction, feature selection, sparse representation, kernel trick

