Abstract
We propose an efficient algorithm for eliminating duplicate web pages on the Internet. The algorithm uses HTML tags to filter out the noise in a page, then extracts the long sentences that characterize the page as its features. Two pages are judged duplicates based on the number of long sentences they share. The algorithm also indexes the long sentences with a red-black tree, turning duplicate elimination into a search for long sentences and reducing the time complexity. Experimental results show that the algorithm eliminates duplicate web pages efficiently and accurately.
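As a rough illustration of the pipeline described above (not the authors' implementation), the sketch below indexes each page's long sentences in a java.util.TreeMap, which the JDK implements as a red-black tree, and flags a new page as a duplicate when it shares enough long sentences with an already indexed page. The sentence-length cutoff and the shared-sentence ratio are assumed values, not taken from the paper.

```java
import java.util.*;

public class LongSentenceDedup {
    static final int MIN_SENTENCE_LEN = 30;  // assumed "long sentence" cutoff
    static final double DUP_RATIO = 0.8;     // assumed shared-sentence threshold

    // sentence -> ids of pages containing it; TreeMap is a red-black tree,
    // so each lookup during elimination is an O(log n) tree search
    private final TreeMap<String, List<Integer>> index = new TreeMap<>();

    // Extract long sentences from page text already cleaned of HTML noise.
    static List<String> longSentences(String cleanedText) {
        List<String> result = new ArrayList<>();
        for (String s : cleanedText.split("[。！？.!?]")) {
            s = s.trim();
            if (s.length() >= MIN_SENTENCE_LEN) result.add(s);
        }
        return result;
    }

    /** Returns the id of an earlier page judged a duplicate, or -1. */
    int checkAndIndex(int pageId, String cleanedText) {
        List<String> sentences = longSentences(cleanedText);

        // Count how many long sentences this page shares with each earlier page.
        Map<Integer, Integer> shared = new HashMap<>();
        for (String s : sentences) {
            List<Integer> owners = index.get(s);  // red-black tree search
            if (owners != null)
                for (int owner : owners)
                    shared.merge(owner, 1, Integer::sum);
        }

        // Duplicate if the shared fraction reaches the threshold.
        for (Map.Entry<Integer, Integer> e : shared.entrySet())
            if ((double) e.getValue() / sentences.size() >= DUP_RATIO)
                return e.getKey();

        // Otherwise index this page's sentences for future comparisons.
        for (String s : sentences)
            index.computeIfAbsent(s, k -> new ArrayList<>()).add(pageId);
        return -1;
    }
}
```

Calling checkAndIndex once per crawled page reproduces the paper's framing: deduplication becomes a sequence of tree searches rather than pairwise page comparisons.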
Source
Microcomputer Applications (《微型电脑应用》)
2009, No. 8, pp. 30-32, 5 (3 pages)
Keywords
Duplicate web page elimination
Page cleanup
Long sentence
Red-black tree