A duplicate identification model is presented to deal with semi-structured or unstructured data extracted from multiple data sources in the deep web. First, the extracted data are transformed into entity records in the data preprocessing module. Then, the heterogeneous records processing module computes the similarity degree between entity records to identify duplicate records, using the weights calculated in the homogeneous records processing module. Unlike traditional methods, the proposed approach is implemented without schema matching in advance, and multiple estimators with selective algorithms are adopted to achieve better matching efficiency. The experimental results show that the duplicate identification model is feasible and efficient.
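The pipeline above can be sketched as a weighted field-similarity computation. The attribute names, weights, and threshold below are hypothetical stand-ins (in the paper, the weights come from the homogeneous records processing module and multiple estimators are combined); a simple character-level string similarity stands in for those estimators.

```python
from difflib import SequenceMatcher

def field_sim(a, b):
    # Character-level similarity of two field values, in [0, 1]
    return SequenceMatcher(None, a.lower(), b.lower()).ratio()

def record_sim(r1, r2, weights):
    # Weighted average of per-field similarities over shared attributes
    total = sum(weights.values())
    return sum(w * field_sim(r1[k], r2[k]) for k, w in weights.items()) / total

# Hypothetical attribute weights (in the model these are calculated
# by the homogeneous records processing module)
weights = {"title": 0.5, "author": 0.3, "year": 0.2}

a = {"title": "Deep Web Data Integration", "author": "Li Wei", "year": "2008"}
b = {"title": "Deep-Web Data Integration", "author": "W. Li", "year": "2008"}

# Record pairs whose similarity degree exceeds a (hypothetical)
# threshold are flagged as duplicates
is_duplicate = record_sim(a, b, weights) > 0.75
```

Schema matching is avoided here in the same spirit as the model: similarity is computed only over whichever attributes the weight vector covers, not over a pre-aligned global schema.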
Latent semantic analysis is a simple and effective way to further enhance the efficiency of search engines in searching, indexing, and locating information in the deep web. Through latent semantic analysis of the attributes in the query interfaces and the unique entrances of deep web sites, the hidden semantic structure can be retrieved and dimensionality can be reduced to a certain extent. Using this semantic structure, the contents of a site can be inferred and the similarity measures among deep web sites can be revised. Experimental results show that latent semantic analysis revises and improves the semantic understanding of query forms in the deep web, overcoming the shortcomings of keyword-based methods. This approach can be used to effectively find the most similar site to any given site and to obtain a list of sites that conforms to user-specified restrictions.
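A minimal sketch of the LSA step described above: build an attribute-by-site matrix from query-interface attributes, truncate its SVD to k latent dimensions, and compare sites by cosine similarity in the reduced space. The attribute counts and site names are invented for illustration, not taken from the paper's data.

```python
import numpy as np

# Toy attribute-by-site matrix: rows are query-interface attributes,
# columns are deep web sites (counts are illustrative only)
attrs = ["title", "author", "isbn", "price", "departure", "arrival"]
sites = ["bookshop_a", "bookshop_b", "airline_c"]
X = np.array([
    [3.0, 2.0, 0.0],   # title
    [2.0, 3.0, 0.0],   # author
    [1.0, 1.0, 0.0],   # isbn
    [1.0, 1.0, 1.0],   # price
    [0.0, 0.0, 2.0],   # departure
    [0.0, 0.0, 2.0],   # arrival
])

# Truncated SVD keeps k latent semantic dimensions
U, s, Vt = np.linalg.svd(X, full_matrices=False)
k = 2
site_vecs = (np.diag(s[:k]) @ Vt[:k]).T  # one row per site in latent space

def cos(u, v):
    # Cosine similarity between two latent-space vectors
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

sim_ab = cos(site_vecs[0], site_vecs[1])  # two bookshops: high similarity
sim_ac = cos(site_vecs[0], site_vecs[2])  # bookshop vs. airline: low
```

Comparing sites in the latent space rather than on raw attribute keywords is what lets near-synonymous interface attributes contribute to similarity, which is the shortcoming of keyword-based methods the abstract points to.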
Deep web technology makes the large amount of useful information hidden behind query interfaces much easier for users to find. However, as the number of data sources grows, quickly locating suitable results among many sources becomes increasingly important. Ranking data sources with traditional link analysis and relevance evaluation methods can no longer meet high-precision requirements. An algorithm is proposed that judges the quality of data sources through sampling and data quality evaluation. The proposed sampling method improves on stratified sampling and snowball sampling so that a small number of sample points can accurately reflect the overall characteristics of a source. Six main quality criteria that basically reflect the quality of a data source are defined, together with their computation methods; the quality of a data source is then quantified by combining these criteria with a weight vector. In the experiments, data sources are sampled and analyzed, the expected value of each source's score is computed, and the sources are ranked accordingly. The results show that estimating and scoring data source quality by sampling is both accurate and practical.
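The sampling-and-scoring idea can be sketched as follows. Since the abstract does not name the six quality criteria, the two criteria below (field completeness and record distinctness) and their weights are illustrative assumptions, and a simple random sample stands in for the paper's improved stratified/snowball sampling.

```python
import random

def sample_records(source, n, seed=0):
    # Simple random sample -- a stand-in for the improved
    # stratified/snowball sampling described in the paper
    rng = random.Random(seed)
    return rng.sample(source, min(n, len(source)))

def completeness(recs):
    # Fraction of non-empty field values (illustrative criterion)
    cells = [v for r in recs for v in r.values()]
    return sum(1 for v in cells if v) / len(cells)

def distinctness(recs):
    # Fraction of distinct records in the sample (illustrative criterion)
    return len({tuple(sorted(r.items())) for r in recs}) / len(recs)

def quality_score(source, weights, n=4):
    # Combine the criteria with a weight vector into one score;
    # the paper uses six criteria, only two are sketched here
    recs = sample_records(source, n)
    metrics = {"completeness": completeness(recs),
               "distinctness": distinctness(recs)}
    return sum(weights[m] * metrics[m] for m in metrics)

weights = {"completeness": 0.6, "distinctness": 0.4}  # hypothetical weights

clean = [{"title": "A", "year": "2008"}, {"title": "B", "year": "2007"},
         {"title": "C", "year": "2006"}, {"title": "D", "year": "2005"}]
dirty = [{"title": "A", "year": ""}, {"title": "A", "year": ""},
         {"title": "", "year": ""}, {"title": "B", "year": "2001"}]

# Rank sources by their sampled quality score, best first
ranked = sorted([("clean", quality_score(clean, weights)),
                 ("dirty", quality_score(dirty, weights))],
                key=lambda t: t[1], reverse=True)
```

In the paper the ranking uses the expected value of the score over samples; a fixed-seed single sample is used here only to keep the sketch deterministic.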
Fund: The National Natural Science Foundation of China (No. 60673139)
Fund: Supported by the Specialized Research Fund for the Doctoral Program of Higher Education of China under Grant No. 20070422107 and by the Key Science-Technology Project of Shandong Province of China under Grant No. 2007GG10001002
Fund: Supported by the National Natural Science Foundation of China under Grant No. 60573096 and by the NSFC-JST Major International (Regional) Joint Research Project under Grant No. 60720106001