Abstract
Traditional feature selection methods are optimized mainly for accuracy and do not adequately account for skewed class distributions, so they perform poorly on imbalanced datasets. A new feature selection method based on modifying the data distribution of imbalanced datasets is proposed. Multiple sample subsets are drawn independently from the majority class by sampling with replacement, with each drawn subset matching the minority class in size; each subset is then combined with the minority-class samples to form a new training set. Feature selection is performed on each new training set, and the final feature subset is assembled, in an ensemble-learning manner through a voting mechanism, from the features that receive votes from more than half of the training sets. Experimental results on imbalanced UCI datasets show that the proposed method performs well and is an effective feature selection approach for imbalance problems.
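The resampling-and-voting scheme described above can be sketched as follows. This is a minimal illustration, not the authors' implementation: the function names (`balanced_subsets`, `vote_features`) and the use of a generic per-round feature selector are assumptions, and the majority-vote threshold follows the "more than half of the votes" rule stated in the abstract.

```python
import random
from collections import Counter

def balanced_subsets(majority, minority, n_rounds, seed=0):
    """Yield n_rounds balanced training sets: each pairs the full
    minority class with an equal-sized sample (drawn with replacement)
    from the majority class."""
    rng = random.Random(seed)
    for _ in range(n_rounds):
        sampled = [rng.choice(majority) for _ in range(len(minority))]
        yield sampled + list(minority)

def vote_features(feature_subsets, n_rounds):
    """Combine per-round feature subsets by majority voting:
    keep features selected in more than half of the rounds."""
    votes = Counter(f for subset in feature_subsets for f in set(subset))
    return {f for f, v in votes.items() if v > n_rounds / 2}
```

In practice, any base feature selection method (e.g. a filter or wrapper) would be run on each balanced training set produced by `balanced_subsets`, and its selected features fed into `vote_features`.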
Source
《山东大学学报(工学版)》
CAS
Peking University Core Journals (北大核心)
2011, Issue 3, pp. 7-11, 22 (6 pages)
Journal of Shandong University(Engineering Science)
Funding
National Natural Science Foundation of China (61070061)
Natural Science Foundation of Guangdong Province (9151026005000002)
Guangdong Province High-level Talent Funding Project
Keywords
imbalanced data
feature selection
ensemble learning
sampling