This article presents very original and relatively brief proofs of two famous problems: 1) Are there any odd perfect numbers? and 2) “Fermat’s last theorem: a new proof of the theorem and its generalization”. They are achieved with elementary mathematics, which is why these proofs can be easily understood by any mathematician or anyone who knows basic mathematics. Note that, in both problems, proof by contradiction was used as the method of proof. The first of the two problems has not been resolved to date. Its proof is completely original and was not based on the work of other researchers; rather, it was based on a simple observation: all natural divisors of a positive integer appear in pairs. The aim of the first work is to solve one of the long-unsolved problems of mathematics belonging to the field of number theory. I believe that if the present proof is recognized by the mathematical community, it may signal a different way of solving unsolved problems. For the second problem, it is very important that it is generalized to an arbitrarily large number of variables. This generalization is essentially a new theorem in the field of number theory. For the classical problem, two solutions are given, presented in the chronological order in which they were achieved. <em>Note that the second solution is very short and does not exceed one and a half pages</em>. This leads me to believe that Fermat, as a great mathematician, was not lying and that he had probably solved the problem, as he stated in his historic letter, with a correspondingly brief solution. <em>To win the bet on the question of whether Fermat was telling the truth or lying, go immediately to the end of this article, before the General Conclusions.</em>
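The proofs themselves are not reproduced in the abstract, but the divisor-pairing observation it rests on is easy to illustrate. The following minimal Python sketch (ours, not the article's) pairs each divisor d of n with n/d, and uses that pairing to test perfection; all names are illustrative.

```python
# Every divisor d of n pairs with n // d, so the divisor sum can be
# accumulated by scanning only up to sqrt(n).
import math

def sigma(n: int) -> int:
    """Sum of all positive divisors of n, using the pairing d <-> n // d."""
    total = 0
    for d in range(1, math.isqrt(n) + 1):
        if n % d == 0:
            total += d
            if d != n // d:       # avoid counting a square root twice
                total += n // d
    return total

def is_perfect(n: int) -> bool:
    """n is perfect iff its proper divisors sum to n, i.e. sigma(n) == 2n."""
    return sigma(n) == 2 * n

# The even perfect numbers start 6, 28, 496, 8128; no odd example is known.
print([n for n in range(2, 10_000) if is_perfect(n)])   # [6, 28, 496, 8128]
```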
In order to rank search results according to user preferences, a new personalized web page ranking algorithm called PWPR (personalized web page ranking) is proposed, with the idea of adjusting the ranking scores of web pages in accordance with user preferences. PWPR assigns the initial weights and creates virtual links and hubs according to user interests. By measuring user click streams, PWPR incrementally reflects users' preferences in the personalized ranking. To improve the accuracy of ranking, PWPR also takes collaborative filtering into consideration when similar queries are submitted by users who have similar interests. Detailed simulation results and comparison with other algorithms show that the proposed PWPR can adaptively provide personalized ranking and information truly relevant to user preferences.

Funding: The Natural Science Foundation of South-Central University for Nationalities (No. YZZ07006)
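The abstract does not give PWPR's update rules, but the overall shape it describes -- interest-based initial weights plus incremental adjustment from click streams -- can be sketched as a personalized PageRank whose teleport vector encodes interests. Everything below is an illustrative assumption, not the paper's algorithm.

```python
import numpy as np

def personalized_rank(adj: np.ndarray, interest: np.ndarray,
                      d: float = 0.85, iters: int = 50) -> np.ndarray:
    """PageRank-style iteration whose teleport vector encodes user interests."""
    n = adj.shape[0]
    out = adj.sum(axis=1, keepdims=True)
    out[out == 0] = 1                      # dangling pages: avoid divide-by-zero
    M = adj / out                          # row-stochastic transition matrix
    v = interest / interest.sum()          # interest-weighted teleportation
    r = np.full(n, 1.0 / n)
    for _ in range(iters):
        r = (1 - d) * v + d * (M.T @ r)
    return r

def update_interest(interest: np.ndarray, clicks: np.ndarray,
                    lr: float = 0.1) -> np.ndarray:
    """Incrementally shift interest weights toward pages the user clicks."""
    return (1 - lr) * interest + lr * clicks / max(clicks.sum(), 1)
```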
In order to use the data available on the Internet, it is necessary to extract data from web pages. An HTT tree model representing HTML pages is presented. Based on the HTT model, a wrapper generation algorithm, AGW, is proposed. The AGW algorithm uses a comparing-and-correcting technique to generate the wrapper, exploiting the native characteristics of the HTT tree structure. The AGW algorithm can not only generate the wrapper automatically, but also rebuild the data schema easily and reduce computational complexity.

Funding: The National Grand Fundamental Research 973 Program of China (G1998030414)
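A rough sketch of the comparing step (our illustration, not the AGW algorithm itself): parse two structurally similar HTML pages into tag trees and report the text positions where they differ -- those slots are the data a wrapper should extract.

```python
from html.parser import HTMLParser

class TagTextCollector(HTMLParser):
    """Flatten an HTML page into a list of (tag-path, text) pairs."""
    def __init__(self):
        super().__init__()
        self.path, self.items = [], []
    def handle_starttag(self, tag, attrs):
        self.path.append(tag)
    def handle_endtag(self, tag):
        if self.path and self.path[-1] == tag:
            self.path.pop()
    def handle_data(self, data):
        if data.strip():
            self.items.append(("/".join(self.path), data.strip()))

def varying_slots(page_a: str, page_b: str):
    """Positions whose tag path matches but whose text differs are data slots."""
    a, b = TagTextCollector(), TagTextCollector()
    a.feed(page_a); b.feed(page_b)
    return [(pa, ta, tb) for (pa, ta), (pb, tb)
            in zip(a.items, b.items) if pa == pb and ta != tb]

print(varying_slots("<ul><li>Alice</li><li>30</li></ul>",
                    "<ul><li>Bob</li><li>25</li></ul>"))
# [('ul/li', 'Alice', 'Bob'), ('ul/li', '30', '25')]
```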
As data grows in size, search engines face new challenges in extracting more relevant content for users' searches. As a result, a number of retrieval and ranking algorithms have been employed to ensure that the results are relevant to the user's requirements. Unfortunately, most existing indexes and ranking algorithms crawl documents and web pages based on a limited set of criteria designed to meet user expectations, making it impossible to deliver exceptionally accurate results. This study therefore investigates and analyses how search engines work, as well as the elements that contribute to higher ranks. This paper addresses the issue of bias by proposing a new ranking algorithm based on the PageRank (PR) algorithm, one of the most widely used page ranking algorithms. We propose weighted PageRank (WPR) algorithms to test the relationship between these various measures. The Weighted PageRank (WPR) model was used in three distinct trials to compare the rankings of documents and pages based on one or more user-preference criteria. The findings of using the Weighted PageRank model showed that ranking final pages by multiple criteria is better than using only one, and that some criteria had a greater impact on ranking results than others.
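The paper's specific criteria and weights are not given in the abstract, but the core idea of multi-criteria weighted ranking can be sketched as follows; the criterion names and numbers below are placeholders, not the study's data.

```python
from typing import Dict, List

def weighted_rank(pages: List[str],
                  scores: Dict[str, Dict[str, float]],
                  weights: Dict[str, float]) -> List[str]:
    """Sort pages by a weighted sum of per-criterion scores."""
    def combined(p: str) -> float:
        return sum(weights[c] * scores[p].get(c, 0.0) for c in weights)
    return sorted(pages, key=combined, reverse=True)

scores = {
    "a.html": {"pagerank": 0.40, "freshness": 0.2, "clicks": 0.9},
    "b.html": {"pagerank": 0.55, "freshness": 0.7, "clicks": 0.1},
}
# Emphasizing user clicks flips the order relative to pure PageRank.
print(weighted_rank(["a.html", "b.html"], scores,
                    {"pagerank": 0.3, "freshness": 0.2, "clicks": 0.5}))
```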
Google’s PageRank algorithm is analyzed in detail. Some disadvantages of this algorithm are presented, for instance, preferring old pages, ignoring special sites, and judging inaccurately the hyperlinks pointing out from one page. Furthermore, the author’s improved algorithm is described. Experiments show that the author’s approach to evaluating the importance of pages improves on the original algorithm. Based on this improved algorithm, a topic-specific search system has been developed.
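For reference, the standard PageRank iteration the paper analyzes, in minimal form: each page's rank is a teleport share (1 − d)/N plus the damped sum of incoming ranks divided by the linkers' out-degree. This is the textbook formulation, not the author's improved variant.

```python
def pagerank(links: dict, d: float = 0.85, iters: int = 50) -> dict:
    """links maps each page to the list of pages it links to."""
    pages = list(links)
    n = len(pages)
    r = {p: 1.0 / n for p in pages}
    for _ in range(iters):
        nxt = {p: (1 - d) / n for p in pages}
        for p, outs in links.items():
            if outs:
                share = d * r[p] / len(outs)   # p splits its rank over its out-links
                for q in outs:
                    nxt[q] += share
        r = nxt
    return r

print(pagerank({"a": ["b", "c"], "b": ["c"], "c": ["a"]}))
```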
The usability of an interface is a fundamental issue to elucidate. Many researchers have argued that usability results and recommendations often lack empirical and experimental data. In this research, the usability of web pages is evaluated using several carefully selected statistical models. University web pages were chosen as subjects for this work for ease of comparison and ease of collecting data. A series of experiments was conducted to investigate the usability and design of the university web pages. Prototype web pages were developed according to structured methodologies of web page design and usability. The university web pages were evaluated, together with the prototype pages, using a questionnaire designed according to Human-Computer Interaction (HCI) heuristics. Nine respondent (user) variables and 14 web page variables (items) were studied. Stringent statistical analysis was adopted to extract the required information from the data acquired, followed by an augmented interpretation of the statistical results. The analysis of variance (ANOVA) procedure showed significant differences among the university web pages regarding most of the 23 items studied. Duncan's Multiple Range Test (DMRT) showed that the prototype performed significantly better regarding most of the items. The correlation analysis showed significant positive and negative correlations between many items. The regression analysis revealed that the most significant factors (items) contributing to the best model of university web page design and usability were: multimedia in the web pages, the organisation and design of the web page icons (alone), and graphics attractiveness. The results exposed some of the limitations of heuristics used in conventional interface design and proposed some additional heuristics for web page design and usability.
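A minimal sketch of the kind of one-way ANOVA comparison the study ran across sites: does the mean score for one questionnaire item differ significantly between university pages? The Likert scores below are hypothetical, not the study's data.

```python
from scipy.stats import f_oneway

# Hypothetical 1-5 Likert scores for the item "graphics attractiveness",
# collected from respondents on three different university pages.
site_a = [4, 5, 4, 3, 4, 5]
site_b = [2, 3, 2, 3, 2, 3]
site_c = [3, 4, 3, 3, 4, 3]

f_stat, p_value = f_oneway(site_a, site_b, site_c)
print(f"F = {f_stat:.2f}, p = {p_value:.4f}")  # p < 0.05 => significant difference
```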
In this paper, we discuss several issues related to automated classification of web pages, especially text classification of web pages. We analyze feature selection and categorization algorithms for web pages and give some suggestions for web page categorization.
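The paper's own feature-selection and algorithm choices are not specified in the abstract; TF-IDF features with a naive Bayes categorizer, as sketched below, is just one common instantiation of the pipeline it discusses.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

# Toy page texts and category labels for illustration.
pages = ["cheap flights and hotel deals", "football scores and league tables",
         "book flights to madrid", "champions league match report"]
labels = ["travel", "sport", "travel", "sport"]

clf = make_pipeline(TfidfVectorizer(), MultinomialNB())
clf.fit(pages, labels)
print(clf.predict(["hotel booking for madrid"]))  # expected: ['travel']
```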
The basic idea behind personalized web search is to deliver search results that are tailored to meet user needs, one of the growing concepts in web technologies. The personalized web search presented in this paper exploits the implicit feedback of user satisfaction during the user's web browsing history to construct a user profile storing the web pages the user is highly interested in. A weight is assigned to each page stored in the user's profile; this weight reflects the user's interest in that page. We call this weight the relative rank of the page, since it depends on the user issuing the query. The ranking algorithm provided in this paper is therefore based on the principle that the rank assigned to a page is the sum of two rank values, R_rank and A_rank. A_rank is an absolute rank: it is fixed for all users issuing the same query and depends only on the link structure of the web and on the keywords of the query. Thus, it can be calculated by the PageRank algorithm suggested by Brin and Page in 1998 and used by the Google search engine. R_rank, the relative rank, is calculated by the methods given in this paper, which depend mainly on recording implicit measures of user satisfaction during the user's previous browsing history.
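A minimal sketch of the stated ranking principle, rank(page) = A_rank + R_rank: A_rank is the query-dependent but user-independent score, and R_rank comes from the user's profile of previously visited pages. The profile weights below are hypothetical; how the paper derives them from implicit feedback is not reproduced here.

```python
from typing import Dict

def final_rank(a_rank: Dict[str, float],
               profile: Dict[str, float]) -> Dict[str, float]:
    """Add the user's relative rank (0 if the page is not in the profile)."""
    return {page: score + profile.get(page, 0.0)
            for page, score in a_rank.items()}

a_rank = {"p1": 0.42, "p2": 0.38, "p3": 0.20}   # absolute (PageRank-style) scores
profile = {"p2": 0.30}                           # user showed interest in p2 before
print(sorted(final_rank(a_rank, profile).items(),
             key=lambda kv: kv[1], reverse=True))  # p2 now ranks first
```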