Abstract
Reprinting of content between websites leaves web search results littered with redundant pages whose content is identical. These pages not only waste storage resources but also burden users during retrieval and reading. Drawing on the characteristics of redundant web pages, this paper introduces the idea of fuzzy matching and exploits both the content and the structural information of web page text to propose a fast feature-code-string-based algorithm for removing duplicate Chinese web pages; the algorithm is further optimized for efficiency. Experimental results show that the algorithm is effective: in large-scale open testing, the recall rate for duplicated web pages reaches 97.3%, and the precision of duplicate removal reaches 99.5%.
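The abstract does not spell out the algorithm itself, but the general idea of feature-code-string deduplication with fuzzy matching can be sketched roughly as follows. Everything here is an illustrative assumption rather than the paper's actual method: the anchor punctuation, the feature-code length, and the 0.8 overlap threshold are all invented for the sketch.

```python
# Illustrative sketch only: one plausible reading of feature-code-string
# near-duplicate detection for web pages. Parameters are assumptions,
# not the parameters used in the paper.

def feature_codes(text, anchors="。！？.!?", length=4):
    """Collect the `length` characters that follow each sentence-final
    punctuation mark; these short substrings serve as the page's
    feature codes."""
    codes = set()
    for i, ch in enumerate(text):
        if ch in anchors:
            code = text[i + 1 : i + 1 + length]
            if len(code) == length:
                codes.add(code)
    return codes

def is_duplicate(text_a, text_b, threshold=0.8):
    """Fuzzy match: treat two pages as duplicates when the overlap of
    their feature-code sets exceeds `threshold`."""
    a, b = feature_codes(text_a), feature_codes(text_b)
    if not a or not b:
        return False
    return len(a & b) / min(len(a), len(b)) >= threshold
```

Measuring overlap against the smaller set (rather than the union, as in a Jaccard index) is one simple way to tolerate the boilerplate that a reprinting site prepends or appends around an otherwise identical article body.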
Source
Journal of Chinese Information Processing (《中文信息学报》), indexed in CSCD and the Peking University Core Journal list, 2003, No. 2, pp. 28-35 (8 pages).
Keywords
computer application
Chinese information processing
feature code string
fuzzy matching
duplicate removal algorithm
redundant web pages