Abstract: As data grows in size, search engines face new challenges in extracting more relevant content for users' searches. As a result, a number of retrieval and ranking algorithms have been employed to ensure that the results are relevant to the user's requirements. Unfortunately, most existing indexes and ranking algorithms crawl documents and web pages based on a limited set of criteria designed to meet user expectations, making it impossible to deliver exceptionally accurate results. This study therefore investigates and analyses how search engines work, as well as the elements that contribute to higher ranks. The paper addresses the issue of bias by proposing a new ranking algorithm based on PageRank (PR), one of the most widely used page ranking algorithms. We propose weighted PageRank (WPR) algorithms to test the relationship between these various measures. The Weighted PageRank (WPR) model was used in three distinct trials to compare the rankings of documents and pages based on one or more user-preference criteria. The findings of using the Weighted PageRank model showed that ranking final pages on multiple criteria is better than using only one, and that some criteria had a greater impact on ranking results than others.
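For illustration, the following is a minimal Python sketch of a weighted-PageRank iteration in the spirit of the WPR model described above. The abstract does not give the paper's exact weighting scheme, so the formula follows the commonly cited variant in which each link's contribution is scaled by the target page's share of in-links and out-links among the linking page's references; the toy graph and damping factor are assumptions.

# Minimal weighted-PageRank sketch (illustrative; the paper's exact weighting
# scheme is not specified in the abstract). Each link v -> u is scaled by u's
# share of in-links and out-links among all pages referenced by v.

def weighted_pagerank(links, d=0.85, iters=50):
    """links: dict mapping each page to the list of pages it links to."""
    pages = set(links) | {u for outs in links.values() for u in outs}
    in_links = {p: set() for p in pages}
    for v, outs in links.items():
        for u in outs:
            in_links[u].add(v)
    out_count = {p: len(links.get(p, [])) for p in pages}

    pr = {p: 1.0 for p in pages}
    for _ in range(iters):
        new = {}
        for u in pages:
            s = 0.0
            for v in in_links[u]:
                refs = links[v]                          # pages referenced by v
                in_total = sum(len(in_links[p]) for p in refs) or 1
                out_total = sum(out_count[p] for p in refs) or 1
                w_in = len(in_links[u]) / in_total       # in-link weight W_in(v, u)
                # a page with no out-links is given a nominal out-degree of 1
                w_out = (out_count[u] or 1) / out_total  # out-link weight W_out(v, u)
                s += pr[v] * w_in * w_out
            new[u] = (1 - d) + d * s
        pr = new
    return pr

# Toy example: three pages with a small link structure.
print(weighted_pagerank({"A": ["B", "C"], "B": ["C"], "C": ["A"]}))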
Abstract: The basic idea behind a personalized web search is to deliver search results that are tailored to meet user needs, which is one of the growing concepts in web technologies. The personalized web search presented in this paper exploits implicit feedback on user satisfaction gathered during her web browsing history to construct a user profile storing the web pages the user is highly interested in. A weight is assigned to each page stored in the user's profile; this weight reflects the user's interest in the page. We call this weight the relative rank of the page, since it depends on the user issuing the query. The ranking algorithm provided in this paper is therefore based on the principle that the rank assigned to a page is the sum of two rank values, R_rank and A_rank. A_rank is an absolute rank: it is fixed for all users issuing the same query, since it depends only on the link structure of the web and on the keywords of the query, and it can therefore be calculated by the PageRank algorithm proposed by Brin and Page in 1998 and used by the Google search engine. R_rank is the relative rank; it is calculated by the methods given in this paper, which depend mainly on recording implicit measures of user satisfaction during her previous browsing history.
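A minimal sketch of the A_rank + R_rank combination described above; the concrete implicit-satisfaction measures (dwell time, clicked links) and their scaling factors are assumptions for illustration, not the paper's own measures.

# Illustrative sketch of the A_rank + R_rank combination described above.
# The implicit-feedback signals (dwell time, clicked links) and their scaling
# are assumptions; the paper's own measures may differ.

from dataclasses import dataclass, field

@dataclass
class UserProfile:
    # page URL -> accumulated interest weight (the "relative rank")
    weights: dict = field(default_factory=dict)

    def record_visit(self, url: str, dwell_seconds: float, clicked_links: int):
        # Simple implicit-satisfaction signal: longer dwell and more clicks
        # on the page are taken as higher interest.
        signal = 0.01 * dwell_seconds + 0.1 * clicked_links
        self.weights[url] = self.weights.get(url, 0.0) + signal

    def r_rank(self, url: str) -> float:
        return self.weights.get(url, 0.0)

def total_rank(url: str, a_rank: float, profile: UserProfile) -> float:
    # Final score = absolute (query/link-structure) rank + relative (personal) rank.
    return a_rank + profile.r_rank(url)

profile = UserProfile()
profile.record_visit("example.org/python", dwell_seconds=120, clicked_links=3)
print(total_rank("example.org/python", a_rank=0.42, profile=profile))
print(total_rank("example.org/other", a_rank=0.42, profile=profile))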
Funding: The Natural Science Foundation of South-Central University for Nationalities (No. YZZ07006)
Abstract: In order to rank search results according to user preferences, a new personalized web page ranking algorithm called PWPR (personalized web page ranking) is proposed, with the idea of adjusting the ranking scores of web pages in accordance with user preferences. PWPR assigns the initial weights based on user interests and creates virtual links and hubs according to those interests. By measuring user click streams, PWPR incrementally reflects users' preferences in the personalized ranking. To improve the accuracy of ranking, PWPR also takes collaborative filtering into consideration when a similar query is submitted by users who have similar interests. Detailed simulation results and comparison with other algorithms show that the proposed PWPR can adaptively provide personalized ranking and information that is truly relevant to user preferences.
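The following is a rough sketch of the two ideas stated in the abstract, incremental score updates from click streams and borrowing scores from users with similar interests; the learning rate, the cosine similarity measure, and the blending factor are all illustrative assumptions rather than the PWPR algorithm itself.

# Sketch of the two PWPR ideas stated above: incrementally updating page scores
# from a user's click stream, and borrowing scores from users with similar
# interests (a simple collaborative-filtering step). The learning rate,
# similarity measure, and blending factor are illustrative assumptions.

from math import sqrt

def cosine(a: dict, b: dict) -> float:
    common = set(a) & set(b)
    num = sum(a[k] * b[k] for k in common)
    den = sqrt(sum(v * v for v in a.values())) * sqrt(sum(v * v for v in b.values()))
    return num / den if den else 0.0

def update_from_click(scores: dict, clicked_page: str, lr: float = 0.1):
    # Each click nudges the clicked page's personalized score upward.
    scores[clicked_page] = scores.get(clicked_page, 0.0) + lr

def collaborative_rank(user_scores: dict, others: list, page: str, blend: float = 0.5):
    # Blend the user's own score with the score of the most similar other user.
    if not others:
        return user_scores.get(page, 0.0)
    most_similar = max(others, key=lambda o: cosine(user_scores, o))
    return (1 - blend) * user_scores.get(page, 0.0) + blend * most_similar.get(page, 0.0)

alice = {"p1": 0.6, "p2": 0.1}
bob = {"p1": 0.5, "p3": 0.9}          # similar to alice via p1
update_from_click(alice, "p2")
print(collaborative_rank(alice, [bob], "p3"))  # alice inherits some interest in p3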
Abstract: INTRODUCTION. Types of Contributions. Full-length/Research Article (Page limit: 20). A complete report on original research, development, or application of control and machine learning methods. All the authors need to provide their biographies and personal photos. Submitted manuscripts should be nominally around 12 pages in Elsevier's double-column format, but no more than 20 pages (including bios and photos).
Funding: 2021 University-Level Undergraduate High-Quality Curriculum Construction Reform Project of Wuyi University: Web Design and Website Construction (Project number: 5071700304C8)
Abstract: “Web Design and Website Construction” is a core professional course for e-commerce majors. This article explores how to integrate ideological and political education into the teaching of the course from three aspects: the necessity of constructing ideological and political education, the construction goals, and the implementation paths. Such integration not only improves students' professional and technical skills, but also guides them to establish a correct outlook on life and values and cultivates their all-round development and comprehensive literacy.
Funding: The National Grand Fundamental Research 973 Program of China (G1998030414)
Abstract: In order to use the data available on the Internet, it is necessary to extract data from web pages. An HTT tree model representing HTML pages is presented. Based on the HTT model, a wrapper generation algorithm, AGW, is proposed. The AGW algorithm uses a comparing-and-correcting technique to generate the wrapper from the native characteristics of the HTT tree structure. The AGW algorithm can not only generate the wrapper automatically, but also rebuild the data schema easily and reduce the computational complexity.
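The AGW algorithm is not detailed in the abstract; the sketch below only illustrates the general comparing idea behind wrapper generation: two pages built from the same template are parsed into (tag-path, text) pairs, and positions where the text differs are treated as data slots.

# Illustrative wrapper-generation sketch (not the AGW algorithm itself, which
# the abstract does not detail): parse two pages from the same template into
# (tag-path, text) pairs and treat positions whose text differs as data slots.

from html.parser import HTMLParser

class PathTextParser(HTMLParser):
    def __init__(self):
        super().__init__()
        self.stack, self.records = [], []   # records: list of (path, text)

    def handle_starttag(self, tag, attrs):
        self.stack.append(tag)

    def handle_endtag(self, tag):
        if tag in self.stack:
            del self.stack[self.stack.index(tag):]

    def handle_data(self, data):
        text = data.strip()
        if text:
            self.records.append(("/".join(self.stack), text))

def parse(html: str):
    p = PathTextParser()
    p.feed(html)
    return p.records

def infer_slots(page_a: str, page_b: str):
    # Positions where the tag path matches but the text differs are data slots.
    a, b = parse(page_a), parse(page_b)
    return [path_a for (path_a, t_a), (path_b, t_b) in zip(a, b)
            if path_a == path_b and t_a != t_b]

page1 = "<html><body><h1>Laptop</h1><span>Price: </span><b>999</b></body></html>"
page2 = "<html><body><h1>Phone</h1><span>Price: </span><b>499</b></body></html>"
print(infer_slots(page1, page2))   # ['html/body/h1', 'html/body/b']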
Abstract: Google's PageRank algorithm is analyzed in detail. Some disadvantages of this algorithm are presented, for instance preferring old pages, ignoring special sites, and judging inaccurately the hyperlinks pointing out from a page. Furthermore, the authors' improved algorithm is described. Experiments show that the authors' way of evaluating the importance of pages yields an improvement over the original algorithm. Based on this improved algorithm, a topic-specific search system has been developed.
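For reference, a compact power-iteration form of the original PageRank formula being analyzed; the improved variant is not specified in the abstract, and the toy graph and the conventional damping factor of 0.85 are illustrative.

# Compact power-iteration form of the original PageRank being analyzed here;
# the paper's improved variant is not specified in the abstract.

import numpy as np

def pagerank(adj: np.ndarray, d: float = 0.85, iters: int = 100) -> np.ndarray:
    n = adj.shape[0]
    out = adj.sum(axis=1, keepdims=True)
    # avoid division by zero for dangling pages (their rank mass is simply
    # dropped in this sketch)
    out[out == 0] = 1
    M = (adj / out).T                      # column-stochastic transition matrix
    r = np.full(n, 1.0 / n)
    for _ in range(iters):
        r = (1 - d) / n + d * M @ r
    return r

# Rows link to columns: page 0 -> 1,2; page 1 -> 2; page 2 -> 0.
A = np.array([[0, 1, 1],
              [0, 0, 1],
              [1, 0, 0]], dtype=float)
print(pagerank(A))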
Abstract: The usability of an interface is a fundamental issue to elucidate. Many researchers have argued that usability results and recommendations often lack empirical and experimental data. In this research, the usability of web pages is evaluated using several carefully selected statistical models. University web pages are chosen as subjects for this work for ease of comparison and ease of collecting data. A series of experiments was conducted to investigate the usability and design of the university web pages. Prototype web pages were developed according to structured methodologies of web page design and usability. The university web pages were evaluated together with the prototype web pages using a questionnaire designed according to Human-Computer Interaction (HCI) heuristics. Nine respondent (user) variables and 14 web page variables (items) were studied. Stringent statistical analysis was adopted to extract the required information from the data acquired, followed by a careful interpretation of the statistical results. The analysis of variance (ANOVA) procedure showed significant differences among the university web pages regarding most of the 23 items studied. The Duncan Multiple Range Test (DMRT) showed that the prototype performed significantly better on most of the items. The correlation analysis showed significant positive and negative correlations between many items. The regression analysis revealed that the most significant factors (items) contributing to the best model of university web page design and usability were: multimedia in the web pages, the organisation and design of the web page icons, and the attractiveness of the graphics. The results also exposed limitations of some heuristics used in conventional interface design and led to proposing some additional heuristics for web page design and usability.
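A minimal sketch of the kind of one-way ANOVA comparison described here, assuming questionnaire ratings grouped by web page; the sample data are invented and the DMRT post-hoc step is not shown.

# Minimal sketch of the one-way ANOVA comparison described above, assuming
# questionnaire ratings (e.g., a 1-5 scale) grouped by web page.
# The sample data are invented; the DMRT post-hoc test is not shown here.

from scipy.stats import f_oneway

ratings = {
    "university_a": [4, 5, 4, 3, 5, 4],
    "university_b": [2, 3, 3, 2, 4, 3],
    "prototype":    [5, 5, 4, 5, 4, 5],
}

f_stat, p_value = f_oneway(*ratings.values())
print(f"F = {f_stat:.2f}, p = {p_value:.4f}")
if p_value < 0.05:
    print("Significant differences among the pages; a post-hoc test "
          "(e.g., DMRT) would identify which pairs differ.")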
Abstract: In this paper, we discuss several issues related to the automated classification of web pages, especially the text classification of web pages. We analyze feature selection and categorization algorithms for web pages and give some suggestions for web page categorization.
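A minimal sketch of a web page text-classification setup of the kind discussed: TF-IDF features over extracted page text plus a Naive Bayes classifier. The tiny training set, the labels, and the choice of scikit-learn are illustrative assumptions, not the paper's own pipeline.

# Minimal text-classification sketch for web pages: TF-IDF features plus a
# Naive Bayes classifier. The training set, labels, and library choice are
# illustrative assumptions.

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

# Page text would normally be extracted from the HTML (tags stripped).
train_texts = [
    "latest football scores and league results",
    "stock market closes higher as tech shares rally",
    "new smartphone review battery camera performance",
    "parliament passes budget amid opposition protest",
]
train_labels = ["sports", "finance", "technology", "politics"]

classifier = make_pipeline(TfidfVectorizer(stop_words="english"), MultinomialNB())
classifier.fit(train_texts, train_labels)

print(classifier.predict(["quarterly earnings beat analyst expectations"]))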