Abstract
With the rapid development of network technology and the continuing growth of the Internet, the volume of news information available to users has become increasingly rich. Searching with a search engine often returns news webpages with identical or near-identical content; such pages not only waste storage resources but also add to the burden of retrieval and reading. Duplicate-webpage removal is one of the key technologies for improving search engines, so research on detecting and eliminating duplicated webpage content is of real significance. This paper proposes a deduplication algorithm for news webpages based on copyright information. Its main idea is to exploit the fact that most reprinted news webpages indicate their source, and to combine this cue with the textual content of the page when deduplicating. Experimental results show that the method is effective: it removes duplicate news webpages well, achieves high precision and recall, and has good practical value.
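The abstract describes the algorithm only at a high level: reprinted news pages usually name their source, and this cue is combined with the page's textual content. As a minimal sketch of that idea (not the paper's actual implementation), the Python fragment below pairs a declared-source check with a fuzzy text-similarity test; the regex patterns, the extract_source and is_duplicate helpers, and the 0.85 threshold are all illustrative assumptions.

    import re
    from difflib import SequenceMatcher
    from typing import Optional

    # Hypothetical patterns for the "source" line (e.g. "来源:" or "转载自")
    # that reprinted Chinese news pages often carry; the paper's actual
    # extraction rules are not given in the abstract.
    SOURCE_PATTERNS = [
        re.compile(r"来源[::]\s*(\S+)"),
        re.compile(r"转载自[::]?\s*(\S+)"),
    ]

    def extract_source(page_text: str) -> Optional[str]:
        """Return the declared source of a news page, if one is marked."""
        for pattern in SOURCE_PATTERNS:
            match = pattern.search(page_text)
            if match:
                return match.group(1)
        return None

    def is_duplicate(page_a: str, page_b: str, threshold: float = 0.85) -> bool:
        """Treat two pages as duplicates when their declared sources do not
        conflict and their body text is sufficiently similar."""
        src_a, src_b = extract_source(page_a), extract_source(page_b)
        if src_a and src_b and src_a != src_b:
            return False  # different declared origins: unlikely to be reprints
        # Confirm with a fuzzy match over the page text.
        return SequenceMatcher(None, page_a, page_b).ratio() >= threshold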
Source
Computer Knowledge and Technology (《电脑知识与技术(过刊)》)
2012, Issue 9X, pp. 6211-6214, 6227 (5 pages)
Keywords
duplicated webpages
search engine
copyright
news webpages
fuzzy matching