A new mapping approach for automated ontology mapping using web search engines (such as Google) is presented. Based on lexico-syntactic patterns, the hyponymy relationships between ontology concepts can be obtained from the web by search engines, and an initial candidate mapping set consisting of ontology concept pairs is generated. According to the concept hierarchies of the ontologies, a set of production rules is proposed to delete concept pairs inconsistent with the ontology semantics from the initial candidate mapping set and to add concept pairs consistent with the ontology semantics to it. Finally, ontology mappings are chosen from the candidate mapping set automatically with a mapping selection rule based on mutual information. Experimental results show that the F-measure reaches 75% to 100% and that the approach can effectively accomplish mapping between ontologies. (Funded by the National Natural Science Foundation of China, Nos. 60425206 and 90412003, and the Foundation of Excellent Doctoral Dissertation of Southeast University, No. YBJJ0502.)
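As an illustration of the final selection step, the sketch below scores candidate concept pairs with a pointwise-mutual-information-style measure computed from web hit counts. The `hit_count` helper, the assumed web size, and the threshold are hypothetical placeholders, not details taken from the paper.

```python
import math

def hit_count(query: str) -> int:
    """Hypothetical stand-in for the number of web search results for a query."""
    raise NotImplementedError("replace with a real search-engine count lookup")

def pmi_score(concept_a: str, concept_b: str, total_pages: float = 1e12) -> float:
    """PMI-style score for a candidate concept pair.

    Joint and individual hit counts serve as rough probability estimates;
    total_pages is an assumed size of the indexed web.
    """
    n_a = hit_count(concept_a)
    n_b = hit_count(concept_b)
    n_ab = hit_count(f'"{concept_a}" "{concept_b}"')
    if min(n_a, n_b, n_ab) == 0:
        return float("-inf")
    p_a, p_b, p_ab = n_a / total_pages, n_b / total_pages, n_ab / total_pages
    return math.log(p_ab / (p_a * p_b))

def select_mappings(candidates, threshold=0.0):
    """Keep candidate concept pairs whose score exceeds an (assumed) threshold."""
    return [(a, b) for a, b in candidates if pmi_score(a, b) > threshold]
```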
As data grows in size, search engines face new challenges in extracting more relevant content for users' searches. As a result, a number of retrieval and ranking algorithms have been employed to ensure that the results are relevant to the user's requirements. Unfortunately, most existing indexes and ranking algorithms crawl documents and web pages based on a limited set of criteria designed to meet user expectations, making it impossible to deliver exceptionally accurate results. This study therefore investigates and analyses how search engines work, as well as the elements that contribute to higher ranks. The paper addresses the issue of bias by proposing a new ranking algorithm based on PageRank (PR), one of the most widely used page ranking algorithms. We propose weighted PageRank (WPR) algorithms to test the relationship between these various measures. The Weighted PageRank (WPR) model was used in three distinct trials to compare the rankings of documents and pages based on one or more user-preference criteria. The findings of utilizing the Weighted PageRank model showed that using multiple criteria to rank final pages is better than using only one, and that some criteria had a greater impact on ranking results than others.
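To make the idea concrete, here is a minimal sketch of a weighted PageRank iteration in which rank is propagated in proportion to per-link weights; the graph, weights, damping factor, and iteration count are illustrative assumptions, not the paper's experimental setup.

```python
def weighted_pagerank(links, weights, d=0.85, iters=50):
    """Minimal weighted-PageRank iteration.

    links:   dict mapping page -> list of pages it links to
    weights: dict mapping (source, target) -> combined preference weight
    Rank from each source is split among its targets in proportion to the
    edge weights instead of being divided evenly as in plain PageRank.
    """
    pages = set(links) | {t for ts in links.values() for t in ts}
    rank = {p: 1.0 / len(pages) for p in pages}
    for _ in range(iters):
        new_rank = {p: (1.0 - d) / len(pages) for p in pages}
        for src, targets in links.items():
            total_w = sum(weights.get((src, t), 1.0) for t in targets) or 1.0
            for t in targets:
                share = weights.get((src, t), 1.0) / total_w
                new_rank[t] += d * rank[src] * share
        rank = new_rank
    return rank

# Toy example with made-up pages and preference weights:
links = {"A": ["B", "C"], "B": ["C"], "C": ["A"]}
weights = {("A", "B"): 2.0, ("A", "C"): 1.0}
print(weighted_pagerank(links, weights))
```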
To integrate reasoning and text retrieval, the architecture of a semantic search engine supporting several kinds of queries is proposed, and the semantic search engine Smartch is designed and implemented. Based on a logical reasoning process and a graphic user-defined process, Smartch provides four kinds of search services: basic search, concept search, graphic user-defined query, and association relationship search. The experimental results show that, compared with a traditional search engine, the recall and precision of Smartch are improved. Graphic user-defined queries can accurately locate the information users need, and association relationship search can find complicated relationships between concepts. Smartch can perform some intelligent functions based on ontology inference. (Funded by the National Natural Science Foundation of China, No. 60403027.)
Web search engines are important tools for lexicography. This paper takes the translation of the business terms "e-commerce" and "e-business" as an example to illustrate the application of web search engines in English-Chinese dictionary translation, including methods for (1) finding the potential Chinese equivalents of the English business terms, and (2) selecting typical and proper Chinese equivalents according to the frequencies and the meanings of the English business terms, respectively.
The concept of webpage visibility is usually linked to search engine optimization (SEO), and it is based on a global in-link metric [1]. SEO is the process of designing webpages to optimize their potential to rank high on search engines, preferably on the first page of the results. The purpose of this study is to analyze the influence of the local geographical area, in terms of cultural values, and the effect of local-society keywords on increasing website visibility. Websites were analyzed by accessing the source code of their homepages through the Google Chrome browser, and statistical analysis methods were selected to assess the results of the SEO and search engine visibility (SEV) measurements. The results suggest that web indicators should incorporate a local notion of visibility and consider a specific geographical context; the region considered in this research is the Hashemite Kingdom of Jordan (HKJ). The results also suggest that using social-culture keywords increases website visibility in search engines and localizes the search area, for example through google.jo, which restricts the search to HKJ.
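A small sketch of the homepage analysis step is given below: it fetches a homepage and counts locally relevant keywords in the title, meta tags, and visible text. The URL and keyword list are hypothetical placeholders, and the scoring is far simpler than the statistical analysis used in the study.

```python
import requests
from bs4 import BeautifulSoup

def count_local_keywords(url, keywords):
    """Fetch a homepage and count locally relevant keywords in its title,
    description/keywords meta tags, and visible text."""
    html = requests.get(url, timeout=10).text
    soup = BeautifulSoup(html, "html.parser")
    fields = [soup.title.string if soup.title and soup.title.string else ""]
    for meta in soup.find_all("meta", attrs={"name": ["description", "keywords"]}):
        fields.append(meta.get("content", ""))
    fields.append(soup.get_text(" "))
    text = " ".join(fields).lower()
    return {kw: text.count(kw.lower()) for kw in keywords}

# Hypothetical Jordanian site and keyword list (network call commented out):
# print(count_local_keywords("https://example.jo", ["Jordan", "Amman", "عمان"]))
```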
FAN Xiao-wen (School of Pharmacy, Shenyang Pharmaceutical University, Shenyang 110016). The basic types of current search engines, which help users perform laborious information-gathering tasks on the Internet, are described. Basically, search engines for WWW information services can be classified into index engines, directory engines, and agent engines. The key technologies of web mining, automatic document classification, and ordering rules for feedback information are discussed. Finally, the development trend of search engines is pointed out by analyzing their practical application on the World Wide Web.
In this paper, the distribution characteristics of user behaviors are first studied based on log data from a massive web search engine. Analysis shows that the stochastic distribution of user queries accords with the characteristics of a power-law function and exhibits strong similarity, and that users' queries and clicked URLs present dramatic locality, which implies that a query cache and a 'hot click' cache can be employed to improve system performance. Three typical cache replacement policies are then compared: LRU, FIFO, and LFU with attenuation. In addition, the distribution characteristics of web information are analyzed, which demonstrates that the link popularity and replica popularity of a URL have a positive influence on its importance. Finally, the variance between link popularity and user popularity, and between replica popularity and user popularity, is analyzed, which gives some important insight that helps improve the ranking algorithms in a search engine. (Supported by the National Grand Fundamental Research of China, Grant No. G1999032706.)
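As a concrete illustration of the caching idea, the sketch below implements a minimal LRU cache for query results; the capacity and interface are assumptions made for illustration, and the FIFO and attenuated-LFU variants compared in the paper are not shown.

```python
from collections import OrderedDict

class LRUQueryCache:
    """Minimal LRU cache for (query -> result page) pairs, the kind of
    replacement policy compared against FIFO and attenuated LFU."""

    def __init__(self, capacity=1000):
        self.capacity = capacity
        self._store = OrderedDict()

    def get(self, query):
        if query not in self._store:
            return None
        self._store.move_to_end(query)      # mark as most recently used
        return self._store[query]

    def put(self, query, results):
        if query in self._store:
            self._store.move_to_end(query)
        self._store[query] = results
        if len(self._store) > self.capacity:
            self._store.popitem(last=False)  # evict least recently used entry
```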
The volume of publicly available geospatial data on the web is rapidly increasing due to advances in server-based technologies and the ease with which data can now be created. However, challenges remain in connecting individuals searching for geospatial data with the servers and websites where such data exist. The objective of this paper is to present a publicly available Geospatial Search Engine (GSE) that utilizes a web crawler built on top of the Google search engine to search the web for geospatial data. The crawler seeding mechanism combines search terms entered by users with predefined keywords that identify geospatial data services. A procedure runs daily to update map server layers and metadata, and to eliminate servers that go offline. The GSE supports Web Map Services, ArcGIS services, and websites that have geospatial data for download. We applied the GSE to search for all available geospatial services under these formats and provide search results including the spatial distribution of all obtained services. While enhancements to our GSE and to web crawler technology in general lie ahead, our work represents an important step toward realizing the potential of a publicly accessible tool for discovering the global availability of geospatial data.
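A minimal sketch of a seeding mechanism of this kind is shown below; the service-identifying keywords are illustrative guesses rather than the GSE's actual list, and no real search-engine API call is made.

```python
from itertools import product

# Illustrative keywords that tend to identify geospatial data services;
# the paper's actual keyword list is not reproduced here.
SERVICE_KEYWORDS = [
    '"wms" "GetCapabilities"',
    '"arcgis/rest/services"',
    'shapefile download',
]

def build_seed_queries(user_terms):
    """Combine user search terms with service-identifying keywords to
    seed a crawler that runs on top of a general web search engine."""
    return [f"{term} {kw}" for term, kw in product(user_terms, SERVICE_KEYWORDS)]

print(build_seed_queries(["land cover", "elevation"]))
```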
To semantically integrate heterogeneous resources and provide a unified intelligent access interface, semantic web technology is exploited to publish and interlink machine-understandable resources so that intelligent search can be supported. TCMSearch, a deployed intelligent search engine for traditional Chinese medicine (TCM), is presented. The core of the system is an integrated knowledge base that uses a TCM domain ontology to represent the instances and relationships in TCM. Machine-learning techniques are used to generate semantic annotations for texts and semantic mappings for relational databases, and a semantic index is then constructed for these resources. The major benefit of representing the semantic index in RDF/OWL is to support powerful reasoning functions, such as class hierarchies and relation inferences. By combining resource integration with reasoning, the knowledge base can support intelligent search paradigms besides keyword search, such as correlated search, semantic graph navigation, and concept recommendation. (Supported by the Program for Changjiang Scholars and Innovative Research Team in University, No. IRT0652, and the National High Technology Research and Development Program of China (863 Program), No. 2006AA01A123.)
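To illustrate the kind of RDF/OWL index and class-hierarchy reasoning described here, the sketch below builds a tiny graph with rdflib and runs a SPARQL query over a subclass relation; the namespace and terms are made up for illustration and are not drawn from the actual TCM ontology.

```python
from rdflib import Graph, Namespace, RDF, RDFS, Literal

# Made-up namespace and terms illustrating an RDF/OWL-style semantic index.
TCM = Namespace("http://example.org/tcm#")
g = Graph()
g.add((TCM.HerbalFormula, RDFS.subClassOf, TCM.Treatment))
g.add((TCM.LiuWeiDiHuang, RDF.type, TCM.HerbalFormula))
g.add((TCM.LiuWeiDiHuang, RDFS.label, Literal("Liu Wei Di Huang Wan")))

# Query over the class hierarchy: instances of any subclass, with labels.
results = g.query(
    """
    SELECT ?instance ?label WHERE {
        ?cls rdfs:subClassOf ?super .
        ?instance a ?cls ;
                  rdfs:label ?label .
    }
    """
)
for row in results:
    print(row.instance, row.label)
```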
Influenza is an infectious disease that spreads quickly and widely, and its outbreaks have brought huge losses to society. In this paper, four major categories of flu keywords were set: “prevention phase”, “symptom phase”, “treatment phase”, and “commonly-used phrase”. A Python web crawler was used to obtain relevant influenza data from the National Influenza Center’s influenza surveillance weekly report and the Baidu Index. Support vector regression (SVR), least absolute shrinkage and selection operator (LASSO), and convolutional neural network (CNN) prediction models were established through machine learning; taking into account the seasonal characteristics of influenza, a time series model (ARMA) was also established. The results show that it is feasible to predict influenza based on web search data and that machine learning achieves a certain forecasting effect, so it will have reference value for influenza prediction in the future. The ARMA(3,0) model predicts better results and generalizes more strongly. Finally, the limitations of this study and future research directions are given.
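As a sketch of how such models can be trained on search-index features, the code below fits an SVR model and an ARMA(3,0) baseline on synthetic weekly data; the data, feature layout, and hyperparameters are stand-ins, since the paper's Baidu Index and surveillance data are not reproduced here.

```python
import numpy as np
from sklearn.svm import SVR
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline
from statsmodels.tsa.arima.model import ARIMA

# Synthetic stand-in data: weekly search-index values for four flu keyword
# categories (X) and reported influenza cases (y).
rng = np.random.default_rng(0)
X = rng.random((120, 4))
y = 50 + 200 * X[:, 0] + rng.normal(0, 5, 120)

X_train, X_test = X[:100], X[100:]
y_train, y_test = y[:100], y[100:]

# SVR on the search-index features.
model = make_pipeline(StandardScaler(), SVR(kernel="rbf", C=10.0))
model.fit(X_train, y_train)
print("SVR predicted cases:", model.predict(X_test)[:5])

# ARMA(3,0) baseline on the case series itself, for comparison.
arma = ARIMA(y_train, order=(3, 0, 0)).fit()
print("ARMA(3,0) forecast:", arma.forecast(steps=5))
```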
With the flood of information on the Web, it has become increasingly necessary for users to utilize automated tools to find, extract, filter, and evaluate the desired information and to support knowledge discovery. In this research, we present a preliminary discussion of using the dominant meaning technique to improve the Google Image web search engine. The Google search engine analyzes the text on the page adjacent to the image, the image caption, and dozens of other factors to determine the image content. To improve the results, we build a dominant meaning classification model. This paper investigates the influence of using this model to retrieve more relevant images through a sequence of procedures that formulate a suitable query. To build the model, a dataset specific to an application domain was collected; the K-means algorithm was used to cluster the dataset into K clusters, and the dominant meaning technique was used to construct a hierarchy model of these clusters. This hierarchy model is used to reformulate a new query. We perform experiments on Google and validate the effectiveness of the proposed approach, which improves precision, recall, and F1-measure by 57%, 70%, and 61%, respectively.
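A minimal sketch of this pipeline is shown below: documents are clustered with K-means and the strongest centroid terms are used to expand the original query. The toy corpus, cluster count, and the use of top TF-IDF centroid terms as "dominant meanings" are simplifying assumptions rather than the paper's exact procedure.

```python
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import KMeans

# Tiny illustrative corpus standing in for the domain-specific dataset.
docs = [
    "jaguar speed car engine", "jaguar habitat jungle cat",
    "sports car racing engine", "big cat predator jungle",
]

vec = TfidfVectorizer()
X = vec.fit_transform(docs)
km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X)

def dominant_terms(cluster_id, top_n=3):
    """Highest-weight terms of a cluster centroid, used here as a rough
    stand-in for the paper's dominant-meaning hierarchy."""
    centroid = km.cluster_centers_[cluster_id]
    terms = np.array(vec.get_feature_names_out())
    return terms[np.argsort(centroid)[::-1][:top_n]].tolist()

def reformulate(query, cluster_id):
    """Expand the original image query with the cluster's dominant terms."""
    return query + " " + " ".join(dominant_terms(cluster_id))

print(reformulate("jaguar", 0))
```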
Today's Web contains large amounts of missing, inconsistent, and imprecise data, and traditional search engines can only return document snippets for keywords rather than directly retrieving target entities. A new graph-matching-based entity extraction algorithm, GMEE (Graph Matching Based Entity Extraction), is proposed. First, snippets are segmented into words and a preliminary screening of entities is performed; then a "weighted semantic entity association graph" is built according to the structural and semantic relations among the entities; finally, a "maximum common subgraph matching" strategy is used to extract the target entities. Experimental results show that, without requiring extensive parameter training or transfer, the proposed algorithm can effectively prune the extracted entity set, preserving recall and precision while improving the interpretability of the extraction process.
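For illustration, the sketch below builds a small weighted entity association graph with networkx, prunes weak edges, and checks a query pattern with a subgraph isomorphism test. The entities, weights, threshold, and the isomorphism check (used here as a simplified stand-in for maximum common subgraph matching) are all illustrative assumptions, not the GMEE algorithm itself.

```python
import networkx as nx
from networkx.algorithms import isomorphism

# Weighted semantic entity association graph over made-up entities;
# edge weights stand in for combined structural/semantic relatedness.
G = nx.Graph()
G.add_weighted_edges_from([
    ("Paris", "France", 0.9),
    ("Paris", "Eiffel Tower", 0.8),
    ("France", "Europe", 0.7),
    ("Paris Hilton", "celebrity", 0.6),
])

# Drop weak associations before matching (the threshold is an assumption).
strong = nx.Graph([(u, v, d) for u, v, d in G.edges(data=True) if d["weight"] >= 0.7])

# Query pattern graph; a subgraph isomorphism test is a simplified stand-in
# for the maximum-common-subgraph matching step.
pattern = nx.Graph([("capital", "country"), ("country", "continent")])
matcher = isomorphism.GraphMatcher(strong, pattern)
print("pattern found:", matcher.subgraph_is_isomorphic())
for mapping in matcher.subgraph_isomorphisms_iter():
    print("candidate target entities:", mapping)
    break
```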
Web usage mining, content mining, and structure mining comprise the web mining process. Web-Page Recommendation (WPR) development incorporating Data Mining Techniques (DMT) did not involve end-users, and the obtained filtering results showed limited performance; the cluster user-profile-based clustering process is also delayed when its precision rate is low. Markov Chain Monte Carlo-Dynamic Clustering (MC2-DC) is based on the User Behavior Profile (UBP) model, which groups similar user behavior on dynamic updates of the UBP. The Reversible-Jump Concept (RJC) reviews the history with the updated UBP and moves users to appropriate clusters. Hamilton's Filtering Framework (HFF) is designed to filter user data based on personalised information over the automatically updated UBP through the Search Engine (SE). The Hamilton Filtered Regime Switching User Query Probability (HFRSUQP) propagates the updated UBP forward for easy and accurate filtering of users' interests and improves WPR. A Probabilistic User Result Feature Ranking based on Gaussian Distribution (PURFR-GD) has been developed to rank user results in the web mining process. PURFR-GD decreases the delay time in the end-to-end workflow for SE personalization by using the Gaussian Distribution Function (GDF). Theoretical analysis and experimental results show that the proposed MC2-DC method automatically increases the updated UBP accuracy by 18.78%, and HFRSUQP increases the Maximize Log-Likelihood (ML-L) by 15.28% of the User Personalized Information Search Retrieval Rate (UPISRT). For feature ranking, the PURFR-GD model achieves higher Classification Accuracy (CA) and Precision Ratio (PR) while utilising minimum Execution Time (ET). Furthermore, UPISRT ranking performance has improved by 20%. (Supported by Taif University Researchers Supporting Project number TURSP-2020/115, Taif University, Taif, Saudi Arabia.)
As a basic component of search engines and a range of other services on the Web, the web crawler plays an important role. Roughly speaking, a web crawler is a program that automatically traverses the Web by downloading documents and following links from page to page. This article explains in detail the principles and difficulties of web crawlers, discusses several active research directions, and finally looks at new directions for web crawlers.
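Below is a minimal sketch of the core crawling loop described here, assuming the requests and BeautifulSoup libraries are available; it omits politeness delays, robots.txt handling, and de-duplication beyond a simple seen set.

```python
from collections import deque
from urllib.parse import urljoin, urlparse

import requests
from bs4 import BeautifulSoup

def crawl(seed_url, max_pages=50):
    """Minimal breadth-first crawler: download a page, extract its links,
    and follow them page by page."""
    seen, frontier, pages = {seed_url}, deque([seed_url]), {}
    while frontier and len(pages) < max_pages:
        url = frontier.popleft()
        try:
            resp = requests.get(url, timeout=10)
        except requests.RequestException:
            continue
        pages[url] = resp.text
        for a in BeautifulSoup(resp.text, "html.parser").find_all("a", href=True):
            link = urljoin(url, a["href"])
            if urlparse(link).scheme in ("http", "https") and link not in seen:
                seen.add(link)
                frontier.append(link)
    return pages

# Usage (placeholder URL):
# pages = crawl("https://example.com", max_pages=10)
```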