Abstract
Social network data is informative and strongly topical, carries significant data-mining value, and is an important component of Internet big data. Because traditional search engines cannot use keyword-retrieval technology to directly index information on social network platforms, this paper designs a social network data collection model based on the crowdsourcing mode and a client/server (C/S) architecture. The model consists of four modules: a server, clients, a storage system, and a topic-focused Deep Web crawler system. Distributed machine nodes running the topic Deep Web crawler automatically request crawl tasks from the server and upload the crawled data, while the Hadoop Distributed File System (HDFS) is used to process the crawled data rapidly and store the result data. Experimental results show that the topic Deep Web crawler system is simple to configure and supports functional extension and direct acquisition of target information, and that the data collection model achieves fast data acquisition and high information retrieval efficiency.
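The crowdsourced dispatch loop described in the abstract — client nodes pulling crawl tasks from the server and uploading the acquired data — can be sketched as below. This is a minimal illustration only; the class and function names (`CrawlServer`, `CrawlClient`, `fetch`) are hypothetical and do not reflect the paper's actual implementation, and real network I/O is replaced by a stub.

```python
from collections import deque

class CrawlServer:
    """Holds a queue of pending crawl tasks and collects uploaded results."""
    def __init__(self, seed_tasks):
        self.tasks = deque(seed_tasks)   # pending crawl tasks
        self.results = {}                # task -> crawled data

    def request_task(self):
        # A client node asks for its next task; None means the queue is drained.
        return self.tasks.popleft() if self.tasks else None

    def upload(self, task, data):
        # A client node uploads the data it crawled for one task.
        self.results[task] = data

class CrawlClient:
    """A distributed node: repeatedly pulls a task, crawls it, uploads the result."""
    def __init__(self, server, fetch):
        self.server = server
        self.fetch = fetch               # injected fetch function (stubbed here)

    def run(self):
        while (task := self.server.request_task()) is not None:
            self.server.upload(task, self.fetch(task))

# Demo with a stubbed fetcher instead of real crawling.
server = CrawlServer(["topic/1", "topic/2"])
CrawlClient(server, fetch=lambda t: f"page data for {t}").run()
print(sorted(server.results))  # → ['topic/1', 'topic/2']
```

In the model described by the paper, the uploaded results would then be handed to the storage subsystem (HDFS) for processing, which this sketch omits.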
Source
Computer Engineering (《计算机工程》)
Indexed in: CAS, CSCD, Peking University Core Journals (北大核心)
2015, No. 4, pp. 36-40 (5 pages)
Funding
National "863" Program funded project "Mass Information Consumption Service Platform and Application Demonstration Based on Media Big Data" (SS2014AA012305)