Abstract
As an excellent open-source search engine whose kernel code makes extensive use of the MapReduce programming model, Nutch is being used by a growing number of enterprises and organizations to build distributed search engine products tailored to their own needs. A key prerequisite for a good search engine is crawling as much Web page data as possible to build its indexes. This paper describes the working mechanism of Nutch's Hadoop-based distributed Web crawler, points out its shortcomings, and proposes an improved scheme that lets the crawler use network resources more efficiently when fetching Web data. Experimental results show that the improved scheme is indeed more efficient than the original one.
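To make the abstract's claim about Nutch's MapReduce-based kernel concrete, the sketch below shows a minimal Hadoop job in the style of a Nutch crawl phase: it groups candidate URLs by host so that a per-host fetch quota (a politeness constraint) can be applied in the reducer. The class names and the MAX_PER_HOST quota are assumptions for illustration only; this is not Nutch's actual code.

```java
import java.io.IOException;
import java.net.MalformedURLException;
import java.net.URL;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

/** Hypothetical sketch of the MapReduce style used by Nutch's crawl
 *  phases: group candidate URLs by host, then cap the number of URLs
 *  selected per host. Not Nutch's real implementation. */
public class HostGroupingSketch {

  /** Map: for each input line holding a URL, emit (host, url). */
  public static class HostMapper extends Mapper<Object, Text, Text, Text> {
    @Override
    protected void map(Object key, Text value, Context ctx)
        throws IOException, InterruptedException {
      String url = value.toString().trim();
      if (url.isEmpty()) return;
      try {
        ctx.write(new Text(new URL(url).getHost()), new Text(url));
      } catch (MalformedURLException e) {
        // skip unparsable URLs
      }
    }
  }

  /** Reduce: keep at most MAX_PER_HOST URLs per host (assumed quota). */
  public static class QuotaReducer extends Reducer<Text, Text, Text, Text> {
    private static final int MAX_PER_HOST = 50; // illustrative value
    @Override
    protected void reduce(Text host, Iterable<Text> urls, Context ctx)
        throws IOException, InterruptedException {
      int n = 0;
      for (Text url : urls) {
        if (n++ >= MAX_PER_HOST) break;
        ctx.write(host, url);
      }
    }
  }

  public static void main(String[] args) throws Exception {
    Job job = Job.getInstance(new Configuration(), "host-grouping-sketch");
    job.setJarByClass(HostGroupingSketch.class);
    job.setMapperClass(HostMapper.class);
    job.setReducerClass(QuotaReducer.class);
    job.setOutputKeyClass(Text.class);
    job.setOutputValueClass(Text.class);
    FileInputFormat.addInputPath(job, new Path(args[0]));
    FileOutputFormat.setOutputPath(job, new Path(args[1]));
    System.exit(job.waitForCompletion(true) ? 0 : 1);
  }
}
```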
Source
《计算机科学与探索》 (Journal of Frontiers of Computer Science and Technology)
CSCD
2011, Issue 1, pp. 68-74 (7 pages)
Funding
Hunan Provincial Natural Science Foundation No. 07555084
Guangdong Province Science and Technology Plan Project No. 2009B080701031