Abstract
To address the problem that high-dimensional data are difficult to use directly because of their large number of feature dimensions, a feature selection algorithm was proposed that removes irrelevant and redundant features through feature self-representation. Within a sparse learning framework, feature self-representation was used to model the correlations among features, and locality preserving projection (LPP), a subspace learning method, was employed to keep the local structure of the data unchanged during feature selection. Experimental results on UCI and other real datasets show that the proposed algorithm outperforms four state-of-the-art comparison methods.
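The core of the self-representation idea in the abstract is that each feature is reconstructed as a linear combination of all features, with a row-sparse penalty so that unimportant features receive near-zero coefficients. A minimal sketch of this component (not the paper's full algorithm, which also adds an LPP locality term) is the common formulation min ||X - XW||_F^2 + λ||W||_{2,1}, solved here by iteratively reweighted least squares; all names and the λ value are illustrative assumptions:

```python
import numpy as np

def self_representation_scores(X, lam=1.0, n_iter=30, eps=1e-6):
    """Score features via self-representation.

    Solves min_W ||X - X W||_F^2 + lam * ||W||_{2,1} with a standard
    iteratively reweighted least-squares scheme for the l2,1 norm.
    Features whose rows of W have large l2-norm are considered important.
    """
    n, d = X.shape
    G = X.T @ X                 # d x d Gram matrix of the features
    D = np.eye(d)               # reweighting matrix for the l2,1 penalty
    for _ in range(n_iter):
        # closed-form update: W = (X^T X + lam * D)^{-1} X^T X
        W = np.linalg.solve(G + lam * D, G)
        row_norms = np.sqrt((W ** 2).sum(axis=1)) + eps
        D = np.diag(1.0 / (2.0 * row_norms))   # reweight toward row sparsity
    return np.sqrt((W ** 2).sum(axis=1))       # per-feature importance scores

# toy usage: feature 2 is a near-duplicate of feature 0
rng = np.random.default_rng(0)
X = rng.standard_normal((50, 4))
X[:, 2] = X[:, 0] + 0.01 * rng.standard_normal(50)
scores = self_representation_scores(X, lam=0.1)
top = np.argsort(scores)[::-1][:2]   # indices of the two highest-scoring features
```

In practice the top-ranked features would be kept and the rest discarded; the published method additionally constrains the reconstruction with LPP so that selection preserves the local neighborhood structure of the samples.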
Source
Computer Engineering and Design (《计算机工程与设计》), Peking University Core Journal, 2016, No. 6, pp. 1643-1648 (6 pages)
Funding
National Natural Science Foundation of China (61170131, 61263035, 61363009)
National High-Tech R&D Program of China (863 Program) (2012AA011005)
National Key Basic Research Program of China (973 Program) (2013CB329404)
Natural Science Foundation of Guangxi (2012GXNSFGA060004)
Key Scientific and Technological Research Project of Guangxi Universities (2013ZD041)
Guangxi Graduate Education Innovation Program (YCSZ2015095, YCSZ2015096)
Keywords
feature selection
feature self-representation
subspace learning
dimensionality reduction
sparse learning