Abstract
By analyzing the HTML page structure, plain-text content is extracted from Web pages. Building on a study of commonly used feature extraction and text classification methods, a concept feature extraction method based on a concept dictionary is proposed, and the extracted features are classified with a simple vector fuzzy distance matching algorithm. A Web text classification system based on Chinese concepts is designed and implemented. Comparative analysis of the experimental data shows that classification accuracy reached at most 89% before concept features were introduced, while the average classification accuracy exceeds 95% after their introduction, a substantial improvement.
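For illustration only: the paper does not include its implementation, so the sketch below merely shows, under assumed data structures, how a concept-dictionary lookup can map words to concept features and how a simple distance-based fuzzy membership over class centroid vectors can select the closest class. The dictionary entries, class labels, centroids, and membership formula are hypothetical placeholders, not the authors' method.

```python
# Minimal sketch (not the authors' implementation) of concept-feature
# extraction via a concept dictionary plus fuzzy distance matching.
import math

# Hypothetical concept dictionary: surface words mapped to concept features.
CONCEPT_DICT = {
    "股票": "finance", "基金": "finance",
    "球赛": "sports", "比分": "sports",
}

def concept_vector(words, concepts):
    """Count concept-feature occurrences for a tokenized document, then normalize."""
    vec = dict.fromkeys(concepts, 0.0)
    for w in words:
        c = CONCEPT_DICT.get(w)
        if c in vec:
            vec[c] += 1.0
    norm = math.sqrt(sum(v * v for v in vec.values())) or 1.0
    return {c: v / norm for c, v in vec.items()}

def fuzzy_memberships(doc_vec, class_centroids):
    """Turn Euclidean distances to each class centroid into fuzzy memberships."""
    dists = {
        label: math.sqrt(sum((doc_vec[c] - centroid.get(c, 0.0)) ** 2 for c in doc_vec))
        for label, centroid in class_centroids.items()
    }
    # Closer centroid -> larger membership; normalize memberships to sum to 1.
    inv = {label: 1.0 / (d + 1e-9) for label, d in dists.items()}
    total = sum(inv.values())
    return {label: v / total for label, v in inv.items()}

# Usage: assign the document to the class with the highest membership.
concepts = set(CONCEPT_DICT.values())
centroids = {
    "finance": {"finance": 1.0, "sports": 0.0},
    "sports": {"finance": 0.0, "sports": 1.0},
}
doc = ["股票", "基金", "比分"]
memberships = fuzzy_memberships(concept_vector(doc, concepts), centroids)
print(max(memberships, key=memberships.get))  # -> "finance"
```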
Source
Journal of Beijing Information Science and Technology University (Natural Science Edition)
2013, No. 2, pp. 77-81 (5 pages)
Funding
National Natural Science Foundation of China (61070119)
Open Project Fund of the Key Laboratory of Computational Linguistics (Ministry of Education), Peking University (KLCL-1005)
Beijing Municipal Funding Project for Academic Human Resources Development in Municipal Institutions of Higher Learning (PHR201007131)
Special Fund of the Beijing Municipal Education Commission (PXM2012-014224-000020)
Keywords
Web text classification
concept feature
concept dictionary
fuzzy distance matching algorithm