Abstract
Text categorization is an important technique in the data mining domain. The extremely high dimensionality of feature spaces makes text categorization complex and expensive, so effective dimension reduction methods are highly desirable. Feature selection is widely used to reduce dimensionality, and many feature selection methods have been proposed in recent years, but to the authors' knowledge, none of them performs well on unbalanced datasets. This paper proposes a feature selection framework based on the category distribution of features, named category distribution-based feature selection (CDFS). CDFS uses the distribution information of features to select those with strong discriminative power, while allowing weights to be assigned to categories flexibly; assigning larger weights to rare categories improves classification performance on those categories. The framework is therefore well suited to unbalanced corpora and is highly extensible. In addition, OCFS and the feature filter based on category distribution difference can be viewed as special cases of this framework. Several concrete implementations of CDFS are given. Experiments on two unbalanced corpora, Reuters-21578 and the Fudan University corpus, show that both the MacroF1 and MicroF1 results of these implementations are better than those of IG, CHI, and OCFS.
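The abstract does not give CDFS's exact scoring formula, but the core idea it describes — scoring a feature by how much its distribution differs across categories, with adjustable per-category weights so rare categories can be emphasized — can be sketched as follows. This is an illustrative sketch only, not the paper's method; the scoring function (weighted absolute deviation of each in-category document frequency from the global document frequency) and the function name `cdfs_scores` are assumptions.

```python
from collections import defaultdict

def cdfs_scores(docs, labels, category_weights):
    """Score features by a category-distribution criterion (sketch).

    docs: list of token lists; labels: parallel list of category ids;
    category_weights: dict mapping category -> weight. Larger weights
    on rare categories bias selection toward features that separate
    those categories, mirroring the CDFS idea in the abstract.
    NOTE: this scoring formula is illustrative, not the paper's.
    """
    n_docs = len(docs)
    cat_docs = defaultdict(int)                         # docs per category
    df = defaultdict(int)                               # global doc frequency
    df_in_cat = defaultdict(lambda: defaultdict(int))   # doc frequency per category

    for tokens, cat in zip(docs, labels):
        cat_docs[cat] += 1
        for term in set(tokens):
            df[term] += 1
            df_in_cat[cat][term] += 1

    scores = {}
    for term in df:
        p_global = df[term] / n_docs
        score = 0.0
        for cat, n_c in cat_docs.items():
            p_cat = df_in_cat[cat].get(term, 0) / n_c
            # Weighted deviation of the in-category rate from the global
            # rate: large when the term is distributed very unevenly
            # across categories, i.e. when it is discriminative.
            score += category_weights.get(cat, 1.0) * abs(p_cat - p_global)
        scores[term] = score
    return scores
```

For dimension reduction one would then keep the top-k features by score. Setting a larger weight for a rare category raises the scores of features concentrated in (or absent from) that category, which is how the framework lets rare categories influence the selected feature set.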
Published in
《计算机研究与发展》
EI
CSCD
Peking University Core Journals (北大核心)
2009, Issue 9, pp. 1586-1593 (8 pages)
Journal of Computer Research and Development
Funding
National Basic Research Program of China (973 Program) (2007CB311103)
National Natural Science Foundation of China (60873166, 60603094)
National High-Tech Research and Development Program of China (863 Program) (2006AA010105)
Keywords
feature selection
unbalanced data set
feature dimension reduction
text categorization
data mining