Feature Selection Based on l0-Norm Gaussian-Approximation Penalized Canonical Correlation Analysis
Abstract: Sparse optimization methods for feature selection perform feature selection on high-dimensional data by computing sparse solutions of an optimization problem, and the sparse penalty term is the key to realizing feature selection. To address the problem that the l0-norm penalty has good sparsity performance but high computational complexity, this paper proposes a feature selection model based on Gaussian-approximation l0-norm penalized canonical correlation analysis. The Gaussian approximation of the l0 norm, which is continuous, piecewise smooth, and close to the l0 norm in sparsity performance, is used as the sparse penalty, and the correlation between the two data sets is taken as the optimization objective. A quadratic approximation function is introduced to overcome the difficulty of solving the non-convex Gaussian-approximation penalty, and the model is solved with the block coordinate descent method to realize feature selection. Experimental results show that the model is realizable and, compared with existing models of the same type, achieves better feature selection.
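Note: this record does not give the paper's exact formulas, so the following is an illustrative sketch only. Under the usual smoothed-l0 construction (an assumption here, with width parameter σ), the "Gaussian approximation l0-norm" named in the abstract is the surrogate

```latex
\|w\|_{0} \;\approx\; G_{\sigma}(w) \;=\; \sum_{i=1}^{p} \Bigl( 1 - e^{-w_i^{2}/(2\sigma^{2})} \Bigr),
\qquad \lim_{\sigma \to 0^{+}} G_{\sigma}(w) \;=\; \|w\|_{0},
```

which, unlike the true l0 penalty, is continuous and differentiable, so gradient-based block updates become possible where l0 would require combinatorial search.

A minimal Python sketch of a block coordinate scheme for a penalized CCA objective of this kind follows. The objective u'Cv − λ(G_σ(u) + G_σ(v)), the plain gradient-ascent steps, and all parameter names (lam, sigma, step) are assumptions for illustration, not the paper's algorithm (which handles the non-convex penalty via a quadratic approximation function):

```python
import numpy as np

def gauss_l0(w, sigma):
    """Smoothed-l0 surrogate: ||w||_0 ~ sum_i (1 - exp(-w_i^2 / (2 sigma^2)))."""
    return np.sum(1.0 - np.exp(-w ** 2 / (2.0 * sigma ** 2)))

def gauss_l0_grad(w, sigma):
    """Gradient of the surrogate; smooth everywhere, unlike the true l0 penalty."""
    return (w / sigma ** 2) * np.exp(-w ** 2 / (2.0 * sigma ** 2))

def sparse_cca(X, Y, lam=0.1, sigma=0.5, step=1e-2, iters=500, seed=0):
    """Toy block coordinate scheme for smoothed-l0 penalized CCA.

    Alternates gradient-ascent steps on u and v for the objective
    u' C v - lam * (G(u) + G(v)), renormalizing each block to unit length.
    Illustrative sketch only, not the paper's exact algorithm.
    """
    rng = np.random.default_rng(seed)
    n = X.shape[0]
    u = rng.standard_normal(X.shape[1]); u /= np.linalg.norm(u)
    v = rng.standard_normal(Y.shape[1]); v /= np.linalg.norm(v)
    C = X.T @ Y / n  # cross-covariance; columns of X, Y assumed standardized
    for _ in range(iters):
        u += step * (C @ v - lam * gauss_l0_grad(u, sigma))    # u-block update
        u /= np.linalg.norm(u)
        v += step * (C.T @ u - lam * gauss_l0_grad(v, sigma))  # v-block update
        v /= np.linalg.norm(v)
    return u, v
```

Features would then be selected as the coordinates of u (for X) and v (for Y) whose magnitudes survive the penalty, e.g. np.where(np.abs(u) > 1e-2)[0]; the remaining coordinates are driven toward zero by the sparse penalty.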
Authors: LU Yi-sha; WANG Kai-ming; XIAO Yu-zhu; SONG Xue-li (School of Science, Chang'an University, Xi'an, Shaanxi 710064, China)
Affiliation: School of Science, Chang'an University
Source: Computer Simulation (《计算机仿真》), Peking University core journal, 2020, No. 4, pp. 234-238 (5 pages)
Funding: Fundamental Research Funds for the Central Universities, Chang'an University (310812163504)
Keywords: feature selection; Gaussian function; sparseness; canonical correlation analysis