Abstract
Feature selection is one of the key problems in pattern recognition and data mining. For high-dimensional data, feature selection can both improve classification accuracy and efficiency and uncover informative feature subsets. This paper proposes a feature selection method that combines the filter and wrapper models: features are first grouped and screened according to the information gain between them, and a genetic algorithm then performs a randomized search over the reduced candidate subset, using the classification error rate of a perceptron model as the evaluation criterion. Experimental results show that the algorithm effectively finds feature subsets with good linear separability, achieving dimensionality reduction and improved classification accuracy.
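The two-stage pipeline the abstract describes can be sketched as follows: an information-gain filter stage prunes the feature set, then a genetic algorithm searches bit-masks over the surviving candidates, scored by the training error of a perceptron. This is a minimal illustration only; the toy data, the 0.1 gain threshold, and the GA parameters are assumptions for demonstration, not the paper's actual settings.

```python
# Hedged sketch of a filter (information gain) + wrapper (GA + perceptron)
# feature selector. Dataset, threshold, and GA parameters are illustrative.
import math
import random

random.seed(0)

def entropy(labels):
    n = len(labels)
    counts = {}
    for y in labels:
        counts[y] = counts.get(y, 0) + 1
    return -sum(c / n * math.log2(c / n) for c in counts.values())

def information_gain(column, labels):
    # IG(feature; class) = H(class) - H(class | feature)
    n = len(labels)
    cond = 0.0
    for v in set(column):
        subset = [y for x, y in zip(column, labels) if x == v]
        cond += len(subset) / n * entropy(subset)
    return entropy(labels) - cond

def perceptron_error(X, y, features, epochs=50):
    # Wrapper criterion: training error of a perceptron on the chosen features.
    if not features:
        return 1.0
    w, b = [0.0] * len(features), 0.0
    def score(xi):
        return sum(w[j] * xi[f] for j, f in enumerate(features)) + b
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            t = 1 if yi == 1 else -1
            if t * score(xi) <= 0:  # misclassified -> update weights
                for j, f in enumerate(features):
                    w[j] += t * xi[f]
                b += t
    return sum(1 for xi, yi in zip(X, y)
               if (1 if yi == 1 else -1) * score(xi) <= 0) / len(y)

def ga_select(X, y, candidates, pop=10, gens=15):
    # Tiny genetic algorithm over bit-masks of the filtered candidate features.
    def fitness(mask):
        feats = [f for f, bit in zip(candidates, mask) if bit]
        return perceptron_error(X, y, feats)
    population = [[random.randint(0, 1) for _ in candidates] for _ in range(pop)]
    for _ in range(gens):
        population.sort(key=fitness)
        survivors = population[: pop // 2]                        # elitist selection
        while len(survivors) < pop:
            a, b = random.sample(population[: pop // 2], 2)
            child = [random.choice(bits) for bits in zip(a, b)]   # uniform crossover
            child[random.randrange(len(child))] ^= 1              # point mutation
            survivors.append(child)
        population = survivors
    best = min(population, key=fitness)
    return [f for f, bit in zip(candidates, best) if bit], fitness(best)

# Toy data: feature 0 determines the class, features 1-3 are random noise.
X = [[i % 2] + [random.randint(0, 1) for _ in range(3)] for i in range(40)]
y = [xi[0] for xi in X]

# Filter stage: keep only features whose information gain clears a threshold.
gains = [information_gain([xi[f] for xi in X], y) for f in range(4)]
candidates = [f for f, g in enumerate(gains) if g > 0.1]

# Wrapper stage: GA search evaluated by perceptron error on the candidates.
selected, err = ga_select(X, y, candidates)
print("selected features:", selected, "error:", err)
```

On this separable toy data the filter stage retains the informative feature and the GA converges to a subset containing it, so the perceptron error reaches zero; the paper's actual grouping-by-information-gain step between features is more elaborate than this single-threshold filter.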
Source
Computer Science (《计算机科学》), CSCD, Peking University Core Journal (北大核心)
2006, Issue 10, pp. 193-195 and 251 (4 pages)
Funding
Supported by the National Natural Science Foundation of China (60573097) and the Natural Science Foundation of Guangdong Province (05200302, 04300462).