
Text feature selection method based on information entropy and dynamic clustering (Cited by: 3)
Abstract: Based on the structural characteristics of scientific literature, a four-layer mining model is built and a text feature selection method for scientific literature classification is proposed. The method first divides a scientific document into four layers according to its structure, then extracts feature words layer by layer for the first three layers with K-means clustering, and finally applies the Apriori algorithm to find the maximal frequent itemsets of the fourth layer, which serve as that layer's feature-word set. To counter the strong influence of the initial centers on K-means, the clustering objects are weighted by information entropy to correct the inter-object distance function, and the weighting-function values of the initial clusters are then used to select suitable initial centers. In addition, a standard value is set for the termination condition of K-means to cut the number of iterations and hence the learning time, and redundant information produced by dynamically changing data is removed to reduce interference during dynamic clustering, yielding more accurate and more efficient clusters. With these measures the method locates feature words in a literature corpus more accurately than earlier methods and is particularly well suited to scientific literature. Experimental results show that, when the data volume is large, the method combined with the improved K-means algorithm performs well in scientific literature classification.
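As a rough sketch of the improved K-means described in the abstract: objects are weighted by information entropy, the weighted values guide the choice of initial centers, and a fixed threshold serves as the termination "standard value". The snippet below is a minimal illustration under our own assumptions (NumPy, a nonnegative term-frequency matrix, and hypothetical helpers `entropy_weights` / `weighted_kmeans`); the paper's exact weighting and seeding formulas are not reproduced here.

```python
import numpy as np

def entropy_weights(X, eps=1e-12):
    """Per-feature weights from information entropy (assumed scheme).

    X is a nonnegative document-term matrix. Features whose value
    distribution is more concentrated (lower normalized entropy) are
    taken to discriminate better, so they receive larger weights.
    """
    P = X / (X.sum(axis=0, keepdims=True) + eps)              # column-wise distributions
    H = -(P * np.log(P + eps)).sum(axis=0) / np.log(len(X))   # normalized entropy in [0, 1]
    d = 1.0 - H                                               # degree of divergence
    return d / (d.sum() + eps)

def weighted_kmeans(X, k, tol=1e-4, max_iter=100):
    """K-means with an entropy-weighted distance and entropy-guided seeding.

    `tol` plays the role of the paper's 'standard value' for the
    termination condition: iteration stops once no center moves more
    than tol under the weighted metric.
    """
    w = entropy_weights(X)

    def dist(a, b):
        # Entropy-weighted Euclidean distance (the corrected distance function).
        return np.sqrt((w * (a - b) ** 2).sum(axis=-1))

    # Seeding stand-in for the paper's 'weighting function value' criterion:
    # start from the k objects with the largest entropy-weighted mass.
    scores = (w * X).sum(axis=1)
    centers = X[np.argsort(scores)[-k:]].astype(float)

    labels = np.zeros(len(X), dtype=int)
    for _ in range(max_iter):
        D = np.stack([dist(X, c) for c in centers], axis=1)   # (n, k) distances
        labels = D.argmin(axis=1)
        new_centers = np.array([
            X[labels == j].mean(axis=0) if np.any(labels == j) else centers[j]
            for j in range(k)
        ])
        if dist(new_centers, centers).max() < tol:            # termination standard value
            return labels, new_centers
        centers = new_centers
    return labels, centers
```

On a TF or TF-IDF matrix, `weighted_kmeans(X, k)` returns per-document labels and the final centers; the seeding rule above is only one plausible reading of the paper's criterion, not its definitive form.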
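For the fourth layer, the feature words come from maximal frequent itemsets, i.e. frequent itemsets with no frequent proper superset. A compact Apriori sketch (hypothetical helper `apriori_maximal`, operating on term sets; not the paper's implementation) might look like this:

```python
from itertools import combinations

def apriori_maximal(transactions, min_support):
    """Maximal frequent itemsets of a list of transactions.

    transactions: list of term sets (one per fourth-layer text unit).
    min_support: minimum fraction of transactions an itemset must occur in.
    Returns the frequent itemsets that have no frequent proper superset,
    which the method uses as the fourth layer's feature-word collection.
    """
    threshold = min_support * len(transactions)

    def support(itemset):
        return sum(1 for t in transactions if itemset <= t)

    # Level 1: frequent single terms.
    items = {i for t in transactions for i in t}
    level = [s for s in (frozenset([i]) for i in items) if support(s) >= threshold]
    frequent = set(level)

    # Level k: join frequent (k-1)-itemsets, keep those meeting the threshold.
    while level:
        candidates = {a | b for a, b in combinations(level, 2)
                      if len(a | b) == len(a) + 1}
        level = [c for c in candidates if support(c) >= threshold]
        frequent.update(level)

    # Maximal = no frequent proper superset exists.
    return [set(s) for s in frequent if not any(s < t for t in frequent)]

# Example on made-up data: term sets drawn from a document's fourth layer.
docs = [{"cluster", "entropy", "kmeans"},
        {"cluster", "entropy"},
        {"cluster", "entropy", "apriori"},
        {"apriori", "itemset"}]
print(apriori_maximal(docs, min_support=0.5))
# -> [{'apriori'}, {'cluster', 'entropy'}] (order may vary)
```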
Author: 唐立力 (Tang Lili)
Source: Computer Engineering and Applications (《计算机工程与应用》; CSCD; Peking University core journal), 2015, No. 19: 152-157 (6 pages)
Keywords: K-means algorithm; dynamic clustering; feature selection; information entropy
Related literature

References (15)

  • 1 Lee J, Kim D W. Mutual information-based multi-label feature selection using interaction information[J]. Expert Systems with Applications, 2015, 42(4): 2013-2025.
  • 2 Fan Baojie, Cong Yang, Du Yingkui. Discriminative multi-task objects tracking with active feature selection and drift correction[J]. Pattern Recognition, 2014, 47(12): 3828-3840.
  • 3 Wu Xindong. Online feature selection with streaming features[J]. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2013, 35(5): 1178-1192.
  • 4 Seeja K R. Feature selection based on closed frequent itemset mining: A case study on SAGE data classification[J]. Neurocomputing, 2015, 151(3): 1027-1032.
  • 5 Dernoncourt D. Analysis of feature selection stability on high dimension and small sample data[J]. Computational Statistics and Data Analysis, 2014, 71(3): 681-693.
  • 6 Tabakhi S. An unsupervised feature selection algorithm based on ant colony optimization[J]. Engineering Applications of Artificial Intelligence, 2014, 32(2): 112-123.
  • 7 Abdullah S. An exponential Monte-Carlo algorithm for feature selection problems[J]. Computers and Industrial Engineering, 2014, 67(1): 160-167.
  • 8 Boutsidis C, Zouzias A. Randomized dimensionality reduction for k-means clustering[J]. IEEE Transactions on Information Theory, 2015, 61(2): 1045-1062.
  • 9 Sun Jiangyan. An improved k-means clustering algorithm for the community discovery[J]. Journal of Software Engineering, 2015, 9(2): 242-253.
  • 10 Xiang Yaguang. Apriori algorithm for economic data mining in sports industry[J]. Computer Modelling and New Technologies, 2014, 18(12): 451-455.

Secondary references (23)

  • 1 杨打生, 郭延芬. An information-theoretic algorithm for feature selection[J]. Journal of Inner Mongolia University (Natural Science Edition), 2005, 36(3): 341-345. (Cited by: 1)
  • 2 Kira K, Rendell L. The Feature Selection Problem: Traditional Methods and a New Algorithm[C]//Proc. of AAAI'92. San Jose, USA: [s.n.], 1992.
  • 3 John G H, Kohavi R, Pfleger K. Irrelevant Features and the Subset Selection Problem[C]//Proc. of the 11th International Conference on Machine Learning. [S.l.]: Morgan Kaufmann Publishers, 1994: 121-129.
  • 4 Peng Hanchuan, Long Fuhui, Ding C. Feature Selection Based on Mutual Information: Criteria of Max-dependency, Max-relevance, and Min-redundancy[J]. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2005, 27(8): 1226-1238.
  • 5 Koller D, Sahami M. Toward Optimal Feature Selection[C]//Proc. of International Conference on Machine Learning. [S.l.]: Morgan Kaufmann Publishers, 1996: 284-292.
  • 6 Yu Lei, Liu Huan. Feature Selection for High-dimensional Data: A Fast Correlation-based Filter Solution[C]//Proc. of the 20th International Conference on Machine Learning. Washington D.C., USA: AAAI Press, 2003.
  • 7 Sotoca J, Pla F. Supervised Feature Selection by Clustering Using Conditional Mutual Information-based Distances[J]. Pattern Recognition, 2010, 43(6): 2068-2081.
  • 8 Au W, Chan K C C, Wong A K C, et al. Attribute Clustering for Grouping, Selection, and Classification of Gene Expression Data[J]. IEEE/ACM Transactions on Computational Biology and Bioinformatics, 2005, 2(2): 83-101.
  • 9 Kwak N, Choi C H. Input feature selection for classification problems[J]. IEEE Transactions on Neural Networks, 2002, 13(1): 143-159.
  • 10 Estevez P A, Tesmer M, Perez C A, et al. Normalized mutual information feature selection[J]. IEEE Transactions on Neural Networks, 2009, 20(2): 189-201.

Co-cited documents: 8 · Co-citing documents: 23 · Citing documents: 3 · Secondary citing documents: 19
